Test Report: KVM_Linux_crio 19616

ead8b21730629246ae204938704f78710656bdeb:2024-09-12:36186

Tests failed (29/320)

Order  Failed test  Duration (s)
33 TestAddons/parallel/Registry 74.06
34 TestAddons/parallel/Ingress 150.55
36 TestAddons/parallel/MetricsServer 346.52
164 TestMultiControlPlane/serial/StopSecondaryNode 141.76
166 TestMultiControlPlane/serial/RestartSecondaryNode 61.18
168 TestMultiControlPlane/serial/RestartClusterKeepsNodes 371.89
171 TestMultiControlPlane/serial/StopCluster 141.62
231 TestMultiNode/serial/RestartKeepsNodes 331.82
233 TestMultiNode/serial/StopMultiNode 141.21
240 TestPreload 272.1
248 TestKubernetesUpgrade 406.5
290 TestStartStop/group/old-k8s-version/serial/FirstStart 280.44
298 TestStartStop/group/default-k8s-diff-port/serial/Stop 139.3
303 TestStartStop/group/embed-certs/serial/Stop 139.23
315 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
317 TestStartStop/group/old-k8s-version/serial/DeployApp 0.54
318 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 88.59
319 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
323 TestStartStop/group/no-preload/serial/Stop 139.07
326 TestStartStop/group/old-k8s-version/serial/SecondStart 690.2
327 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.38
329 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 544.21
330 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 544.34
331 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 544.26
332 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.51
333 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 456.06
334 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 543.18
335 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 336.81
336 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 196.71
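
To re-run one of these cases locally, Go's test filter can target a single integration test in the minikube repository. The sketch below is an assumption based on this job's name (kvm2 driver, crio runtime); the -minikube-start-args flag and its values are not taken from this report and may need adjusting for your checkout and environment.

	# hypothetical local re-run of a single failed test (flags are assumptions)
	go test -v -timeout 30m ./test/integration/... \
	  -run 'TestAddons/parallel/Registry' \
	  -args -minikube-start-args="--driver=kvm2 --container-runtime=crio"
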
TestAddons/parallel/Registry (74.06s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 3.09605ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-7cpwk" [4b56665b-2953-4567-aa4d-49eb198ea1a0] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.004869463s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-ckz5n" [317b8f58-7fa3-4666-be84-9fcc8574a1f8] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.004763905s
addons_test.go:342: (dbg) Run:  kubectl --context addons-694635 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-694635 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context addons-694635 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.091988175s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-694635 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-694635 ip
2024/09/12 21:41:21 [DEBUG] GET http://192.168.39.67:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-694635 addons disable registry --alsologtostderr -v=1
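
The failing step above is the in-cluster probe of the registry Service (expected "HTTP/1.1 200", got a timeout). A manual re-check against the same cluster could look like the sketch below; the profile name, namespace, label selectors, busybox image, and node IP/port are taken from the log above, while the exact command sequence is only an illustration:

	# confirm the registry pods the test waits on are Running
	kubectl --context addons-694635 -n kube-system get pods -l actual-registry=true
	kubectl --context addons-694635 -n kube-system get pods -l registry-proxy=true
	# repeat the in-cluster probe that timed out (same command the test runs)
	kubectl --context addons-694635 run --rm registry-test --restart=Never \
	  --image=gcr.io/k8s-minikube/busybox -it -- \
	  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
	# the test also checks the registry through the node IP on the proxy port
	curl -sI http://192.168.39.67:5000
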
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-694635 -n addons-694635
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-694635 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-694635 logs -n 25: (1.343305115s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube             | jenkins | v1.34.0 | 12 Sep 24 21:29 UTC | 12 Sep 24 21:29 UTC |
	| delete  | -p download-only-618378                                                                     | download-only-618378 | jenkins | v1.34.0 | 12 Sep 24 21:29 UTC | 12 Sep 24 21:29 UTC |
	| start   | -o=json --download-only                                                                     | download-only-976166 | jenkins | v1.34.0 | 12 Sep 24 21:29 UTC |                     |
	|         | -p download-only-976166                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.34.0 | 12 Sep 24 21:29 UTC | 12 Sep 24 21:29 UTC |
	| delete  | -p download-only-976166                                                                     | download-only-976166 | jenkins | v1.34.0 | 12 Sep 24 21:29 UTC | 12 Sep 24 21:29 UTC |
	| delete  | -p download-only-618378                                                                     | download-only-618378 | jenkins | v1.34.0 | 12 Sep 24 21:29 UTC | 12 Sep 24 21:29 UTC |
	| delete  | -p download-only-976166                                                                     | download-only-976166 | jenkins | v1.34.0 | 12 Sep 24 21:29 UTC | 12 Sep 24 21:29 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-318498 | jenkins | v1.34.0 | 12 Sep 24 21:29 UTC |                     |
	|         | binary-mirror-318498                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:39999                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-318498                                                                     | binary-mirror-318498 | jenkins | v1.34.0 | 12 Sep 24 21:29 UTC | 12 Sep 24 21:29 UTC |
	| addons  | disable dashboard -p                                                                        | addons-694635        | jenkins | v1.34.0 | 12 Sep 24 21:29 UTC |                     |
	|         | addons-694635                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-694635        | jenkins | v1.34.0 | 12 Sep 24 21:29 UTC |                     |
	|         | addons-694635                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-694635 --wait=true                                                                | addons-694635        | jenkins | v1.34.0 | 12 Sep 24 21:29 UTC | 12 Sep 24 21:32 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-694635 addons disable                                                                | addons-694635        | jenkins | v1.34.0 | 12 Sep 24 21:40 UTC | 12 Sep 24 21:40 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-694635 ssh cat                                                                       | addons-694635        | jenkins | v1.34.0 | 12 Sep 24 21:40 UTC | 12 Sep 24 21:40 UTC |
	|         | /opt/local-path-provisioner/pvc-ce6ed7db-1ee2-4cee-8aae-8a13248846f5_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-694635 addons disable                                                                | addons-694635        | jenkins | v1.34.0 | 12 Sep 24 21:40 UTC | 12 Sep 24 21:41 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-694635        | jenkins | v1.34.0 | 12 Sep 24 21:40 UTC | 12 Sep 24 21:40 UTC |
	|         | addons-694635                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-694635        | jenkins | v1.34.0 | 12 Sep 24 21:40 UTC | 12 Sep 24 21:40 UTC |
	|         | -p addons-694635                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-694635 addons disable                                                                | addons-694635        | jenkins | v1.34.0 | 12 Sep 24 21:40 UTC | 12 Sep 24 21:40 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-694635 addons                                                                        | addons-694635        | jenkins | v1.34.0 | 12 Sep 24 21:40 UTC | 12 Sep 24 21:40 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-694635 addons                                                                        | addons-694635        | jenkins | v1.34.0 | 12 Sep 24 21:40 UTC | 12 Sep 24 21:40 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-694635 addons disable                                                                | addons-694635        | jenkins | v1.34.0 | 12 Sep 24 21:40 UTC | 12 Sep 24 21:41 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-694635        | jenkins | v1.34.0 | 12 Sep 24 21:41 UTC | 12 Sep 24 21:41 UTC |
	|         | -p addons-694635                                                                            |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-694635        | jenkins | v1.34.0 | 12 Sep 24 21:41 UTC | 12 Sep 24 21:41 UTC |
	|         | addons-694635                                                                               |                      |         |         |                     |                     |
	| ip      | addons-694635 ip                                                                            | addons-694635        | jenkins | v1.34.0 | 12 Sep 24 21:41 UTC | 12 Sep 24 21:41 UTC |
	| addons  | addons-694635 addons disable                                                                | addons-694635        | jenkins | v1.34.0 | 12 Sep 24 21:41 UTC | 12 Sep 24 21:41 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/12 21:29:47
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0912 21:29:47.475866   13842 out.go:345] Setting OutFile to fd 1 ...
	I0912 21:29:47.475993   13842 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 21:29:47.476005   13842 out.go:358] Setting ErrFile to fd 2...
	I0912 21:29:47.476012   13842 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 21:29:47.476186   13842 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-5891/.minikube/bin
	I0912 21:29:47.476836   13842 out.go:352] Setting JSON to false
	I0912 21:29:47.477752   13842 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":729,"bootTime":1726175858,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0912 21:29:47.477818   13842 start.go:139] virtualization: kvm guest
	I0912 21:29:47.479869   13842 out.go:177] * [addons-694635] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0912 21:29:47.481136   13842 out.go:177]   - MINIKUBE_LOCATION=19616
	I0912 21:29:47.481139   13842 notify.go:220] Checking for updates...
	I0912 21:29:47.483542   13842 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 21:29:47.484839   13842 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19616-5891/kubeconfig
	I0912 21:29:47.486133   13842 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19616-5891/.minikube
	I0912 21:29:47.487896   13842 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0912 21:29:47.489241   13842 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 21:29:47.490764   13842 driver.go:394] Setting default libvirt URI to qemu:///system
	I0912 21:29:47.523002   13842 out.go:177] * Using the kvm2 driver based on user configuration
	I0912 21:29:47.524034   13842 start.go:297] selected driver: kvm2
	I0912 21:29:47.524046   13842 start.go:901] validating driver "kvm2" against <nil>
	I0912 21:29:47.524060   13842 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 21:29:47.524980   13842 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 21:29:47.525102   13842 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19616-5891/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0912 21:29:47.540324   13842 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0912 21:29:47.540407   13842 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0912 21:29:47.540684   13842 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 21:29:47.540767   13842 cni.go:84] Creating CNI manager for ""
	I0912 21:29:47.540781   13842 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0912 21:29:47.540792   13842 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0912 21:29:47.540869   13842 start.go:340] cluster config:
	{Name:addons-694635 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-694635 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 21:29:47.540994   13842 iso.go:125] acquiring lock: {Name:mk3ec3c4afd4210b7425f6425f55e7f581d9a5a9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 21:29:47.542738   13842 out.go:177] * Starting "addons-694635" primary control-plane node in "addons-694635" cluster
	I0912 21:29:47.543940   13842 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0912 21:29:47.543977   13842 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0912 21:29:47.543985   13842 cache.go:56] Caching tarball of preloaded images
	I0912 21:29:47.544089   13842 preload.go:172] Found /home/jenkins/minikube-integration/19616-5891/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0912 21:29:47.544102   13842 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0912 21:29:47.544526   13842 profile.go:143] Saving config to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/config.json ...
	I0912 21:29:47.544557   13842 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/config.json: {Name:mk33fa1e209cbe67cd91a1b792a3ca9ac0ed48ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:29:47.544694   13842 start.go:360] acquireMachinesLock for addons-694635: {Name:mkbb0a9e58b1349e86a63b6069c42d4248d92c3b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 21:29:47.544742   13842 start.go:364] duration metric: took 34.718µs to acquireMachinesLock for "addons-694635"
	I0912 21:29:47.544765   13842 start.go:93] Provisioning new machine with config: &{Name:addons-694635 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-694635 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0912 21:29:47.544840   13842 start.go:125] createHost starting for "" (driver="kvm2")
	I0912 21:29:47.546289   13842 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0912 21:29:47.546444   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:29:47.546482   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:29:47.560635   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38163
	I0912 21:29:47.561053   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:29:47.561645   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:29:47.561668   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:29:47.562020   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:29:47.562207   13842 main.go:141] libmachine: (addons-694635) Calling .GetMachineName
	I0912 21:29:47.562346   13842 main.go:141] libmachine: (addons-694635) Calling .DriverName
	I0912 21:29:47.562487   13842 start.go:159] libmachine.API.Create for "addons-694635" (driver="kvm2")
	I0912 21:29:47.562506   13842 client.go:168] LocalClient.Create starting
	I0912 21:29:47.562537   13842 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem
	I0912 21:29:47.644946   13842 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem
	I0912 21:29:47.782363   13842 main.go:141] libmachine: Running pre-create checks...
	I0912 21:29:47.782383   13842 main.go:141] libmachine: (addons-694635) Calling .PreCreateCheck
	I0912 21:29:47.782856   13842 main.go:141] libmachine: (addons-694635) Calling .GetConfigRaw
	I0912 21:29:47.783275   13842 main.go:141] libmachine: Creating machine...
	I0912 21:29:47.783290   13842 main.go:141] libmachine: (addons-694635) Calling .Create
	I0912 21:29:47.783442   13842 main.go:141] libmachine: (addons-694635) Creating KVM machine...
	I0912 21:29:47.784608   13842 main.go:141] libmachine: (addons-694635) DBG | found existing default KVM network
	I0912 21:29:47.785304   13842 main.go:141] libmachine: (addons-694635) DBG | I0912 21:29:47.785155   13864 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ad0}
	I0912 21:29:47.785337   13842 main.go:141] libmachine: (addons-694635) DBG | created network xml: 
	I0912 21:29:47.785348   13842 main.go:141] libmachine: (addons-694635) DBG | <network>
	I0912 21:29:47.785361   13842 main.go:141] libmachine: (addons-694635) DBG |   <name>mk-addons-694635</name>
	I0912 21:29:47.785392   13842 main.go:141] libmachine: (addons-694635) DBG |   <dns enable='no'/>
	I0912 21:29:47.785413   13842 main.go:141] libmachine: (addons-694635) DBG |   
	I0912 21:29:47.785428   13842 main.go:141] libmachine: (addons-694635) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0912 21:29:47.785441   13842 main.go:141] libmachine: (addons-694635) DBG |     <dhcp>
	I0912 21:29:47.785456   13842 main.go:141] libmachine: (addons-694635) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0912 21:29:47.785466   13842 main.go:141] libmachine: (addons-694635) DBG |     </dhcp>
	I0912 21:29:47.785476   13842 main.go:141] libmachine: (addons-694635) DBG |   </ip>
	I0912 21:29:47.785490   13842 main.go:141] libmachine: (addons-694635) DBG |   
	I0912 21:29:47.785501   13842 main.go:141] libmachine: (addons-694635) DBG | </network>
	I0912 21:29:47.785509   13842 main.go:141] libmachine: (addons-694635) DBG | 
	I0912 21:29:47.790883   13842 main.go:141] libmachine: (addons-694635) DBG | trying to create private KVM network mk-addons-694635 192.168.39.0/24...
	I0912 21:29:47.856566   13842 main.go:141] libmachine: (addons-694635) DBG | private KVM network mk-addons-694635 192.168.39.0/24 created
	I0912 21:29:47.856589   13842 main.go:141] libmachine: (addons-694635) DBG | I0912 21:29:47.856546   13864 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19616-5891/.minikube
	I0912 21:29:47.856604   13842 main.go:141] libmachine: (addons-694635) Setting up store path in /home/jenkins/minikube-integration/19616-5891/.minikube/machines/addons-694635 ...
	I0912 21:29:47.856615   13842 main.go:141] libmachine: (addons-694635) Building disk image from file:///home/jenkins/minikube-integration/19616-5891/.minikube/cache/iso/amd64/minikube-v1.34.0-1726156389-19616-amd64.iso
	I0912 21:29:47.856703   13842 main.go:141] libmachine: (addons-694635) Downloading /home/jenkins/minikube-integration/19616-5891/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19616-5891/.minikube/cache/iso/amd64/minikube-v1.34.0-1726156389-19616-amd64.iso...
	I0912 21:29:48.103210   13842 main.go:141] libmachine: (addons-694635) DBG | I0912 21:29:48.103069   13864 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/addons-694635/id_rsa...
	I0912 21:29:48.158267   13842 main.go:141] libmachine: (addons-694635) DBG | I0912 21:29:48.158115   13864 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/addons-694635/addons-694635.rawdisk...
	I0912 21:29:48.158303   13842 main.go:141] libmachine: (addons-694635) DBG | Writing magic tar header
	I0912 21:29:48.158321   13842 main.go:141] libmachine: (addons-694635) DBG | Writing SSH key tar header
	I0912 21:29:48.158334   13842 main.go:141] libmachine: (addons-694635) DBG | I0912 21:29:48.158221   13864 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19616-5891/.minikube/machines/addons-694635 ...
	I0912 21:29:48.158344   13842 main.go:141] libmachine: (addons-694635) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/addons-694635
	I0912 21:29:48.158353   13842 main.go:141] libmachine: (addons-694635) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19616-5891/.minikube/machines
	I0912 21:29:48.158362   13842 main.go:141] libmachine: (addons-694635) Setting executable bit set on /home/jenkins/minikube-integration/19616-5891/.minikube/machines/addons-694635 (perms=drwx------)
	I0912 21:29:48.158376   13842 main.go:141] libmachine: (addons-694635) Setting executable bit set on /home/jenkins/minikube-integration/19616-5891/.minikube/machines (perms=drwxr-xr-x)
	I0912 21:29:48.158397   13842 main.go:141] libmachine: (addons-694635) Setting executable bit set on /home/jenkins/minikube-integration/19616-5891/.minikube (perms=drwxr-xr-x)
	I0912 21:29:48.158411   13842 main.go:141] libmachine: (addons-694635) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19616-5891/.minikube
	I0912 21:29:48.158423   13842 main.go:141] libmachine: (addons-694635) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19616-5891
	I0912 21:29:48.158433   13842 main.go:141] libmachine: (addons-694635) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0912 21:29:48.158450   13842 main.go:141] libmachine: (addons-694635) Setting executable bit set on /home/jenkins/minikube-integration/19616-5891 (perms=drwxrwxr-x)
	I0912 21:29:48.158464   13842 main.go:141] libmachine: (addons-694635) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0912 21:29:48.158476   13842 main.go:141] libmachine: (addons-694635) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0912 21:29:48.158486   13842 main.go:141] libmachine: (addons-694635) DBG | Checking permissions on dir: /home/jenkins
	I0912 21:29:48.158502   13842 main.go:141] libmachine: (addons-694635) Creating domain...
	I0912 21:29:48.158514   13842 main.go:141] libmachine: (addons-694635) DBG | Checking permissions on dir: /home
	I0912 21:29:48.158532   13842 main.go:141] libmachine: (addons-694635) DBG | Skipping /home - not owner
	I0912 21:29:48.159530   13842 main.go:141] libmachine: (addons-694635) define libvirt domain using xml: 
	I0912 21:29:48.159561   13842 main.go:141] libmachine: (addons-694635) <domain type='kvm'>
	I0912 21:29:48.159569   13842 main.go:141] libmachine: (addons-694635)   <name>addons-694635</name>
	I0912 21:29:48.159576   13842 main.go:141] libmachine: (addons-694635)   <memory unit='MiB'>4000</memory>
	I0912 21:29:48.159582   13842 main.go:141] libmachine: (addons-694635)   <vcpu>2</vcpu>
	I0912 21:29:48.159593   13842 main.go:141] libmachine: (addons-694635)   <features>
	I0912 21:29:48.159601   13842 main.go:141] libmachine: (addons-694635)     <acpi/>
	I0912 21:29:48.159611   13842 main.go:141] libmachine: (addons-694635)     <apic/>
	I0912 21:29:48.159621   13842 main.go:141] libmachine: (addons-694635)     <pae/>
	I0912 21:29:48.159629   13842 main.go:141] libmachine: (addons-694635)     
	I0912 21:29:48.159634   13842 main.go:141] libmachine: (addons-694635)   </features>
	I0912 21:29:48.159641   13842 main.go:141] libmachine: (addons-694635)   <cpu mode='host-passthrough'>
	I0912 21:29:48.159688   13842 main.go:141] libmachine: (addons-694635)   
	I0912 21:29:48.159713   13842 main.go:141] libmachine: (addons-694635)   </cpu>
	I0912 21:29:48.159737   13842 main.go:141] libmachine: (addons-694635)   <os>
	I0912 21:29:48.159750   13842 main.go:141] libmachine: (addons-694635)     <type>hvm</type>
	I0912 21:29:48.159770   13842 main.go:141] libmachine: (addons-694635)     <boot dev='cdrom'/>
	I0912 21:29:48.159783   13842 main.go:141] libmachine: (addons-694635)     <boot dev='hd'/>
	I0912 21:29:48.159802   13842 main.go:141] libmachine: (addons-694635)     <bootmenu enable='no'/>
	I0912 21:29:48.159818   13842 main.go:141] libmachine: (addons-694635)   </os>
	I0912 21:29:48.159831   13842 main.go:141] libmachine: (addons-694635)   <devices>
	I0912 21:29:48.159842   13842 main.go:141] libmachine: (addons-694635)     <disk type='file' device='cdrom'>
	I0912 21:29:48.159866   13842 main.go:141] libmachine: (addons-694635)       <source file='/home/jenkins/minikube-integration/19616-5891/.minikube/machines/addons-694635/boot2docker.iso'/>
	I0912 21:29:48.159877   13842 main.go:141] libmachine: (addons-694635)       <target dev='hdc' bus='scsi'/>
	I0912 21:29:48.159885   13842 main.go:141] libmachine: (addons-694635)       <readonly/>
	I0912 21:29:48.159896   13842 main.go:141] libmachine: (addons-694635)     </disk>
	I0912 21:29:48.159907   13842 main.go:141] libmachine: (addons-694635)     <disk type='file' device='disk'>
	I0912 21:29:48.159916   13842 main.go:141] libmachine: (addons-694635)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0912 21:29:48.159932   13842 main.go:141] libmachine: (addons-694635)       <source file='/home/jenkins/minikube-integration/19616-5891/.minikube/machines/addons-694635/addons-694635.rawdisk'/>
	I0912 21:29:48.159943   13842 main.go:141] libmachine: (addons-694635)       <target dev='hda' bus='virtio'/>
	I0912 21:29:48.159953   13842 main.go:141] libmachine: (addons-694635)     </disk>
	I0912 21:29:48.159969   13842 main.go:141] libmachine: (addons-694635)     <interface type='network'>
	I0912 21:29:48.159982   13842 main.go:141] libmachine: (addons-694635)       <source network='mk-addons-694635'/>
	I0912 21:29:48.159992   13842 main.go:141] libmachine: (addons-694635)       <model type='virtio'/>
	I0912 21:29:48.160001   13842 main.go:141] libmachine: (addons-694635)     </interface>
	I0912 21:29:48.160011   13842 main.go:141] libmachine: (addons-694635)     <interface type='network'>
	I0912 21:29:48.160022   13842 main.go:141] libmachine: (addons-694635)       <source network='default'/>
	I0912 21:29:48.160032   13842 main.go:141] libmachine: (addons-694635)       <model type='virtio'/>
	I0912 21:29:48.160043   13842 main.go:141] libmachine: (addons-694635)     </interface>
	I0912 21:29:48.160051   13842 main.go:141] libmachine: (addons-694635)     <serial type='pty'>
	I0912 21:29:48.160066   13842 main.go:141] libmachine: (addons-694635)       <target port='0'/>
	I0912 21:29:48.160077   13842 main.go:141] libmachine: (addons-694635)     </serial>
	I0912 21:29:48.160089   13842 main.go:141] libmachine: (addons-694635)     <console type='pty'>
	I0912 21:29:48.160108   13842 main.go:141] libmachine: (addons-694635)       <target type='serial' port='0'/>
	I0912 21:29:48.160121   13842 main.go:141] libmachine: (addons-694635)     </console>
	I0912 21:29:48.160132   13842 main.go:141] libmachine: (addons-694635)     <rng model='virtio'>
	I0912 21:29:48.160143   13842 main.go:141] libmachine: (addons-694635)       <backend model='random'>/dev/random</backend>
	I0912 21:29:48.160151   13842 main.go:141] libmachine: (addons-694635)     </rng>
	I0912 21:29:48.160157   13842 main.go:141] libmachine: (addons-694635)     
	I0912 21:29:48.160168   13842 main.go:141] libmachine: (addons-694635)     
	I0912 21:29:48.160176   13842 main.go:141] libmachine: (addons-694635)   </devices>
	I0912 21:29:48.160185   13842 main.go:141] libmachine: (addons-694635) </domain>
	I0912 21:29:48.160195   13842 main.go:141] libmachine: (addons-694635) 
	I0912 21:29:48.165998   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:32:e5:de in network default
	I0912 21:29:48.166596   13842 main.go:141] libmachine: (addons-694635) Ensuring networks are active...
	I0912 21:29:48.166616   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:29:48.167233   13842 main.go:141] libmachine: (addons-694635) Ensuring network default is active
	I0912 21:29:48.167509   13842 main.go:141] libmachine: (addons-694635) Ensuring network mk-addons-694635 is active
	I0912 21:29:48.167964   13842 main.go:141] libmachine: (addons-694635) Getting domain xml...
	I0912 21:29:48.168724   13842 main.go:141] libmachine: (addons-694635) Creating domain...
	I0912 21:29:49.564332   13842 main.go:141] libmachine: (addons-694635) Waiting to get IP...
	I0912 21:29:49.565210   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:29:49.565680   13842 main.go:141] libmachine: (addons-694635) DBG | unable to find current IP address of domain addons-694635 in network mk-addons-694635
	I0912 21:29:49.565753   13842 main.go:141] libmachine: (addons-694635) DBG | I0912 21:29:49.565686   13864 retry.go:31] will retry after 259.088458ms: waiting for machine to come up
	I0912 21:29:49.826131   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:29:49.826631   13842 main.go:141] libmachine: (addons-694635) DBG | unable to find current IP address of domain addons-694635 in network mk-addons-694635
	I0912 21:29:49.826660   13842 main.go:141] libmachine: (addons-694635) DBG | I0912 21:29:49.826579   13864 retry.go:31] will retry after 330.128851ms: waiting for machine to come up
	I0912 21:29:50.158148   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:29:50.158574   13842 main.go:141] libmachine: (addons-694635) DBG | unable to find current IP address of domain addons-694635 in network mk-addons-694635
	I0912 21:29:50.158644   13842 main.go:141] libmachine: (addons-694635) DBG | I0912 21:29:50.158552   13864 retry.go:31] will retry after 438.081447ms: waiting for machine to come up
	I0912 21:29:50.598323   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:29:50.598829   13842 main.go:141] libmachine: (addons-694635) DBG | unable to find current IP address of domain addons-694635 in network mk-addons-694635
	I0912 21:29:50.598897   13842 main.go:141] libmachine: (addons-694635) DBG | I0912 21:29:50.598822   13864 retry.go:31] will retry after 407.106138ms: waiting for machine to come up
	I0912 21:29:51.007259   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:29:51.007718   13842 main.go:141] libmachine: (addons-694635) DBG | unable to find current IP address of domain addons-694635 in network mk-addons-694635
	I0912 21:29:51.007758   13842 main.go:141] libmachine: (addons-694635) DBG | I0912 21:29:51.007668   13864 retry.go:31] will retry after 621.06803ms: waiting for machine to come up
	I0912 21:29:51.630684   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:29:51.631143   13842 main.go:141] libmachine: (addons-694635) DBG | unable to find current IP address of domain addons-694635 in network mk-addons-694635
	I0912 21:29:51.631165   13842 main.go:141] libmachine: (addons-694635) DBG | I0912 21:29:51.631112   13864 retry.go:31] will retry after 606.154083ms: waiting for machine to come up
	I0912 21:29:52.238827   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:29:52.239319   13842 main.go:141] libmachine: (addons-694635) DBG | unable to find current IP address of domain addons-694635 in network mk-addons-694635
	I0912 21:29:52.239351   13842 main.go:141] libmachine: (addons-694635) DBG | I0912 21:29:52.239251   13864 retry.go:31] will retry after 1.053486982s: waiting for machine to come up
	I0912 21:29:53.294067   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:29:53.294469   13842 main.go:141] libmachine: (addons-694635) DBG | unable to find current IP address of domain addons-694635 in network mk-addons-694635
	I0912 21:29:53.294496   13842 main.go:141] libmachine: (addons-694635) DBG | I0912 21:29:53.294420   13864 retry.go:31] will retry after 1.050950177s: waiting for machine to come up
	I0912 21:29:54.347197   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:29:54.347603   13842 main.go:141] libmachine: (addons-694635) DBG | unable to find current IP address of domain addons-694635 in network mk-addons-694635
	I0912 21:29:54.347631   13842 main.go:141] libmachine: (addons-694635) DBG | I0912 21:29:54.347539   13864 retry.go:31] will retry after 1.24941056s: waiting for machine to come up
	I0912 21:29:55.598907   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:29:55.599382   13842 main.go:141] libmachine: (addons-694635) DBG | unable to find current IP address of domain addons-694635 in network mk-addons-694635
	I0912 21:29:55.599413   13842 main.go:141] libmachine: (addons-694635) DBG | I0912 21:29:55.599328   13864 retry.go:31] will retry after 2.237205326s: waiting for machine to come up
	I0912 21:29:57.838937   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:29:57.839483   13842 main.go:141] libmachine: (addons-694635) DBG | unable to find current IP address of domain addons-694635 in network mk-addons-694635
	I0912 21:29:57.839506   13842 main.go:141] libmachine: (addons-694635) DBG | I0912 21:29:57.839455   13864 retry.go:31] will retry after 2.152344085s: waiting for machine to come up
	I0912 21:29:59.994815   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:29:59.995133   13842 main.go:141] libmachine: (addons-694635) DBG | unable to find current IP address of domain addons-694635 in network mk-addons-694635
	I0912 21:29:59.995155   13842 main.go:141] libmachine: (addons-694635) DBG | I0912 21:29:59.995091   13864 retry.go:31] will retry after 2.540765126s: waiting for machine to come up
	I0912 21:30:02.536979   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:02.537427   13842 main.go:141] libmachine: (addons-694635) DBG | unable to find current IP address of domain addons-694635 in network mk-addons-694635
	I0912 21:30:02.537453   13842 main.go:141] libmachine: (addons-694635) DBG | I0912 21:30:02.537360   13864 retry.go:31] will retry after 3.772056123s: waiting for machine to come up
	I0912 21:30:06.313642   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:06.314016   13842 main.go:141] libmachine: (addons-694635) DBG | unable to find current IP address of domain addons-694635 in network mk-addons-694635
	I0912 21:30:06.314033   13842 main.go:141] libmachine: (addons-694635) DBG | I0912 21:30:06.313980   13864 retry.go:31] will retry after 4.542886768s: waiting for machine to come up
	I0912 21:30:10.861222   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:10.861712   13842 main.go:141] libmachine: (addons-694635) Found IP for machine: 192.168.39.67
	I0912 21:30:10.861742   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has current primary IP address 192.168.39.67 and MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:10.861751   13842 main.go:141] libmachine: (addons-694635) Reserving static IP address...
	I0912 21:30:10.862048   13842 main.go:141] libmachine: (addons-694635) DBG | unable to find host DHCP lease matching {name: "addons-694635", mac: "52:54:00:6b:43:77", ip: "192.168.39.67"} in network mk-addons-694635
	I0912 21:30:10.932572   13842 main.go:141] libmachine: (addons-694635) Reserved static IP address: 192.168.39.67
	I0912 21:30:10.932602   13842 main.go:141] libmachine: (addons-694635) Waiting for SSH to be available...
	I0912 21:30:10.932612   13842 main.go:141] libmachine: (addons-694635) DBG | Getting to WaitForSSH function...
	I0912 21:30:10.935290   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:10.935838   13842 main.go:141] libmachine: (addons-694635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:43:77", ip: ""} in network mk-addons-694635: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:01 +0000 UTC Type:0 Mac:52:54:00:6b:43:77 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:minikube Clientid:01:52:54:00:6b:43:77}
	I0912 21:30:10.935873   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined IP address 192.168.39.67 and MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:10.935964   13842 main.go:141] libmachine: (addons-694635) DBG | Using SSH client type: external
	I0912 21:30:10.935991   13842 main.go:141] libmachine: (addons-694635) DBG | Using SSH private key: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/addons-694635/id_rsa (-rw-------)
	I0912 21:30:10.936035   13842 main.go:141] libmachine: (addons-694635) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.67 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19616-5891/.minikube/machines/addons-694635/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0912 21:30:10.936049   13842 main.go:141] libmachine: (addons-694635) DBG | About to run SSH command:
	I0912 21:30:10.936084   13842 main.go:141] libmachine: (addons-694635) DBG | exit 0
	I0912 21:30:11.069676   13842 main.go:141] libmachine: (addons-694635) DBG | SSH cmd err, output: <nil>: 
	I0912 21:30:11.070005   13842 main.go:141] libmachine: (addons-694635) KVM machine creation complete!
	I0912 21:30:11.070347   13842 main.go:141] libmachine: (addons-694635) Calling .GetConfigRaw
	I0912 21:30:11.070852   13842 main.go:141] libmachine: (addons-694635) Calling .DriverName
	I0912 21:30:11.071054   13842 main.go:141] libmachine: (addons-694635) Calling .DriverName
	I0912 21:30:11.071193   13842 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0912 21:30:11.071208   13842 main.go:141] libmachine: (addons-694635) Calling .GetState
	I0912 21:30:11.072333   13842 main.go:141] libmachine: Detecting operating system of created instance...
	I0912 21:30:11.072351   13842 main.go:141] libmachine: Waiting for SSH to be available...
	I0912 21:30:11.072359   13842 main.go:141] libmachine: Getting to WaitForSSH function...
	I0912 21:30:11.072367   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHHostname
	I0912 21:30:11.074613   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:11.074932   13842 main.go:141] libmachine: (addons-694635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:43:77", ip: ""} in network mk-addons-694635: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:01 +0000 UTC Type:0 Mac:52:54:00:6b:43:77 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-694635 Clientid:01:52:54:00:6b:43:77}
	I0912 21:30:11.074958   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined IP address 192.168.39.67 and MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:11.075073   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHPort
	I0912 21:30:11.075372   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHKeyPath
	I0912 21:30:11.075564   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHKeyPath
	I0912 21:30:11.075731   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHUsername
	I0912 21:30:11.075904   13842 main.go:141] libmachine: Using SSH client type: native
	I0912 21:30:11.076074   13842 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I0912 21:30:11.076085   13842 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0912 21:30:11.184974   13842 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0912 21:30:11.184996   13842 main.go:141] libmachine: Detecting the provisioner...
	I0912 21:30:11.185003   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHHostname
	I0912 21:30:11.187718   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:11.188031   13842 main.go:141] libmachine: (addons-694635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:43:77", ip: ""} in network mk-addons-694635: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:01 +0000 UTC Type:0 Mac:52:54:00:6b:43:77 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-694635 Clientid:01:52:54:00:6b:43:77}
	I0912 21:30:11.188060   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined IP address 192.168.39.67 and MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:11.188249   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHPort
	I0912 21:30:11.188446   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHKeyPath
	I0912 21:30:11.188574   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHKeyPath
	I0912 21:30:11.188694   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHUsername
	I0912 21:30:11.188821   13842 main.go:141] libmachine: Using SSH client type: native
	I0912 21:30:11.188967   13842 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I0912 21:30:11.188978   13842 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0912 21:30:11.297959   13842 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0912 21:30:11.298022   13842 main.go:141] libmachine: found compatible host: buildroot
	I0912 21:30:11.298032   13842 main.go:141] libmachine: Provisioning with buildroot...
	I0912 21:30:11.298042   13842 main.go:141] libmachine: (addons-694635) Calling .GetMachineName
	I0912 21:30:11.298318   13842 buildroot.go:166] provisioning hostname "addons-694635"
	I0912 21:30:11.298346   13842 main.go:141] libmachine: (addons-694635) Calling .GetMachineName
	I0912 21:30:11.298514   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHHostname
	I0912 21:30:11.301198   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:11.301546   13842 main.go:141] libmachine: (addons-694635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:43:77", ip: ""} in network mk-addons-694635: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:01 +0000 UTC Type:0 Mac:52:54:00:6b:43:77 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-694635 Clientid:01:52:54:00:6b:43:77}
	I0912 21:30:11.301584   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined IP address 192.168.39.67 and MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:11.301725   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHPort
	I0912 21:30:11.301923   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHKeyPath
	I0912 21:30:11.302081   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHKeyPath
	I0912 21:30:11.302369   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHUsername
	I0912 21:30:11.302563   13842 main.go:141] libmachine: Using SSH client type: native
	I0912 21:30:11.302737   13842 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I0912 21:30:11.302753   13842 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-694635 && echo "addons-694635" | sudo tee /etc/hostname
	I0912 21:30:11.426945   13842 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-694635
	
	I0912 21:30:11.426972   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHHostname
	I0912 21:30:11.429942   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:11.430301   13842 main.go:141] libmachine: (addons-694635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:43:77", ip: ""} in network mk-addons-694635: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:01 +0000 UTC Type:0 Mac:52:54:00:6b:43:77 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-694635 Clientid:01:52:54:00:6b:43:77}
	I0912 21:30:11.430333   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined IP address 192.168.39.67 and MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:11.430492   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHPort
	I0912 21:30:11.430677   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHKeyPath
	I0912 21:30:11.430844   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHKeyPath
	I0912 21:30:11.430998   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHUsername
	I0912 21:30:11.431169   13842 main.go:141] libmachine: Using SSH client type: native
	I0912 21:30:11.431330   13842 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I0912 21:30:11.431345   13842 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-694635' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-694635/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-694635' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0912 21:30:11.549812   13842 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0912 21:30:11.549842   13842 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19616-5891/.minikube CaCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19616-5891/.minikube}
	I0912 21:30:11.549859   13842 buildroot.go:174] setting up certificates
	I0912 21:30:11.549868   13842 provision.go:84] configureAuth start
	I0912 21:30:11.549876   13842 main.go:141] libmachine: (addons-694635) Calling .GetMachineName
	I0912 21:30:11.550203   13842 main.go:141] libmachine: (addons-694635) Calling .GetIP
	I0912 21:30:11.552873   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:11.553191   13842 main.go:141] libmachine: (addons-694635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:43:77", ip: ""} in network mk-addons-694635: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:01 +0000 UTC Type:0 Mac:52:54:00:6b:43:77 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-694635 Clientid:01:52:54:00:6b:43:77}
	I0912 21:30:11.553219   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined IP address 192.168.39.67 and MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:11.553451   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHHostname
	I0912 21:30:11.555633   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:11.555953   13842 main.go:141] libmachine: (addons-694635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:43:77", ip: ""} in network mk-addons-694635: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:01 +0000 UTC Type:0 Mac:52:54:00:6b:43:77 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-694635 Clientid:01:52:54:00:6b:43:77}
	I0912 21:30:11.555985   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined IP address 192.168.39.67 and MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:11.556111   13842 provision.go:143] copyHostCerts
	I0912 21:30:11.556205   13842 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem (1082 bytes)
	I0912 21:30:11.556362   13842 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem (1123 bytes)
	I0912 21:30:11.556467   13842 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem (1679 bytes)
	I0912 21:30:11.556548   13842 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem org=jenkins.addons-694635 san=[127.0.0.1 192.168.39.67 addons-694635 localhost minikube]
	I0912 21:30:11.859350   13842 provision.go:177] copyRemoteCerts
	I0912 21:30:11.859407   13842 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0912 21:30:11.859439   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHHostname
	I0912 21:30:11.862041   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:11.862347   13842 main.go:141] libmachine: (addons-694635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:43:77", ip: ""} in network mk-addons-694635: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:01 +0000 UTC Type:0 Mac:52:54:00:6b:43:77 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-694635 Clientid:01:52:54:00:6b:43:77}
	I0912 21:30:11.862395   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined IP address 192.168.39.67 and MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:11.862533   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHPort
	I0912 21:30:11.862736   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHKeyPath
	I0912 21:30:11.862883   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHUsername
	I0912 21:30:11.863033   13842 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/addons-694635/id_rsa Username:docker}
	I0912 21:30:11.947343   13842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0912 21:30:11.971801   13842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0912 21:30:11.994695   13842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0912 21:30:12.016706   13842 provision.go:87] duration metric: took 466.828028ms to configureAuth
	I0912 21:30:12.016730   13842 buildroot.go:189] setting minikube options for container-runtime
	I0912 21:30:12.016881   13842 config.go:182] Loaded profile config "addons-694635": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 21:30:12.016945   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHHostname
	I0912 21:30:12.019830   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:12.020115   13842 main.go:141] libmachine: (addons-694635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:43:77", ip: ""} in network mk-addons-694635: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:01 +0000 UTC Type:0 Mac:52:54:00:6b:43:77 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-694635 Clientid:01:52:54:00:6b:43:77}
	I0912 21:30:12.020139   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined IP address 192.168.39.67 and MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:12.020268   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHPort
	I0912 21:30:12.020572   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHKeyPath
	I0912 21:30:12.020764   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHKeyPath
	I0912 21:30:12.020928   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHUsername
	I0912 21:30:12.021133   13842 main.go:141] libmachine: Using SSH client type: native
	I0912 21:30:12.021291   13842 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I0912 21:30:12.021305   13842 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0912 21:30:12.242709   13842 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0912 21:30:12.242730   13842 main.go:141] libmachine: Checking connection to Docker...
	I0912 21:30:12.242738   13842 main.go:141] libmachine: (addons-694635) Calling .GetURL
	I0912 21:30:12.243884   13842 main.go:141] libmachine: (addons-694635) DBG | Using libvirt version 6000000
	I0912 21:30:12.245945   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:12.246318   13842 main.go:141] libmachine: (addons-694635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:43:77", ip: ""} in network mk-addons-694635: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:01 +0000 UTC Type:0 Mac:52:54:00:6b:43:77 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-694635 Clientid:01:52:54:00:6b:43:77}
	I0912 21:30:12.246350   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined IP address 192.168.39.67 and MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:12.246533   13842 main.go:141] libmachine: Docker is up and running!
	I0912 21:30:12.246556   13842 main.go:141] libmachine: Reticulating splines...
	I0912 21:30:12.246564   13842 client.go:171] duration metric: took 24.684052058s to LocalClient.Create
	I0912 21:30:12.246588   13842 start.go:167] duration metric: took 24.684100435s to libmachine.API.Create "addons-694635"
	I0912 21:30:12.246601   13842 start.go:293] postStartSetup for "addons-694635" (driver="kvm2")
	I0912 21:30:12.246615   13842 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0912 21:30:12.246639   13842 main.go:141] libmachine: (addons-694635) Calling .DriverName
	I0912 21:30:12.246870   13842 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0912 21:30:12.246905   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHHostname
	I0912 21:30:12.249197   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:12.249498   13842 main.go:141] libmachine: (addons-694635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:43:77", ip: ""} in network mk-addons-694635: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:01 +0000 UTC Type:0 Mac:52:54:00:6b:43:77 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-694635 Clientid:01:52:54:00:6b:43:77}
	I0912 21:30:12.249534   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined IP address 192.168.39.67 and MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:12.249694   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHPort
	I0912 21:30:12.249879   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHKeyPath
	I0912 21:30:12.250020   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHUsername
	I0912 21:30:12.250162   13842 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/addons-694635/id_rsa Username:docker}
	I0912 21:30:12.335312   13842 ssh_runner.go:195] Run: cat /etc/os-release
	I0912 21:30:12.339024   13842 info.go:137] Remote host: Buildroot 2023.02.9
	I0912 21:30:12.339044   13842 filesync.go:126] Scanning /home/jenkins/minikube-integration/19616-5891/.minikube/addons for local assets ...
	I0912 21:30:12.339112   13842 filesync.go:126] Scanning /home/jenkins/minikube-integration/19616-5891/.minikube/files for local assets ...
	I0912 21:30:12.339135   13842 start.go:296] duration metric: took 92.526012ms for postStartSetup
	I0912 21:30:12.339176   13842 main.go:141] libmachine: (addons-694635) Calling .GetConfigRaw
	I0912 21:30:12.339703   13842 main.go:141] libmachine: (addons-694635) Calling .GetIP
	I0912 21:30:12.342217   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:12.342565   13842 main.go:141] libmachine: (addons-694635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:43:77", ip: ""} in network mk-addons-694635: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:01 +0000 UTC Type:0 Mac:52:54:00:6b:43:77 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-694635 Clientid:01:52:54:00:6b:43:77}
	I0912 21:30:12.342593   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined IP address 192.168.39.67 and MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:12.342850   13842 profile.go:143] Saving config to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/config.json ...
	I0912 21:30:12.343012   13842 start.go:128] duration metric: took 24.798163033s to createHost
	I0912 21:30:12.343032   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHHostname
	I0912 21:30:12.345464   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:12.345807   13842 main.go:141] libmachine: (addons-694635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:43:77", ip: ""} in network mk-addons-694635: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:01 +0000 UTC Type:0 Mac:52:54:00:6b:43:77 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-694635 Clientid:01:52:54:00:6b:43:77}
	I0912 21:30:12.345844   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined IP address 192.168.39.67 and MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:12.345954   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHPort
	I0912 21:30:12.346123   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHKeyPath
	I0912 21:30:12.346247   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHKeyPath
	I0912 21:30:12.346385   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHUsername
	I0912 21:30:12.346509   13842 main.go:141] libmachine: Using SSH client type: native
	I0912 21:30:12.346686   13842 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I0912 21:30:12.346697   13842 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0912 21:30:12.457929   13842 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726176612.428880125
	
	I0912 21:30:12.457953   13842 fix.go:216] guest clock: 1726176612.428880125
	I0912 21:30:12.457962   13842 fix.go:229] Guest: 2024-09-12 21:30:12.428880125 +0000 UTC Remote: 2024-09-12 21:30:12.34302243 +0000 UTC m=+24.902400367 (delta=85.857695ms)
	I0912 21:30:12.458006   13842 fix.go:200] guest clock delta is within tolerance: 85.857695ms
	I0912 21:30:12.458017   13842 start.go:83] releasing machines lock for "addons-694635", held for 24.913263111s
	I0912 21:30:12.458045   13842 main.go:141] libmachine: (addons-694635) Calling .DriverName
	I0912 21:30:12.458281   13842 main.go:141] libmachine: (addons-694635) Calling .GetIP
	I0912 21:30:12.460843   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:12.461195   13842 main.go:141] libmachine: (addons-694635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:43:77", ip: ""} in network mk-addons-694635: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:01 +0000 UTC Type:0 Mac:52:54:00:6b:43:77 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-694635 Clientid:01:52:54:00:6b:43:77}
	I0912 21:30:12.461214   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined IP address 192.168.39.67 and MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:12.461345   13842 main.go:141] libmachine: (addons-694635) Calling .DriverName
	I0912 21:30:12.461780   13842 main.go:141] libmachine: (addons-694635) Calling .DriverName
	I0912 21:30:12.461924   13842 main.go:141] libmachine: (addons-694635) Calling .DriverName
	I0912 21:30:12.462008   13842 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0912 21:30:12.462054   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHHostname
	I0912 21:30:12.462099   13842 ssh_runner.go:195] Run: cat /version.json
	I0912 21:30:12.462122   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHHostname
	I0912 21:30:12.465318   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:12.466089   13842 main.go:141] libmachine: (addons-694635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:43:77", ip: ""} in network mk-addons-694635: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:01 +0000 UTC Type:0 Mac:52:54:00:6b:43:77 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-694635 Clientid:01:52:54:00:6b:43:77}
	I0912 21:30:12.466118   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined IP address 192.168.39.67 and MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:12.466258   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:12.466291   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHPort
	I0912 21:30:12.466484   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHKeyPath
	I0912 21:30:12.466652   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHUsername
	I0912 21:30:12.466686   13842 main.go:141] libmachine: (addons-694635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:43:77", ip: ""} in network mk-addons-694635: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:01 +0000 UTC Type:0 Mac:52:54:00:6b:43:77 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-694635 Clientid:01:52:54:00:6b:43:77}
	I0912 21:30:12.466711   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined IP address 192.168.39.67 and MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:12.466774   13842 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/addons-694635/id_rsa Username:docker}
	I0912 21:30:12.466851   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHPort
	I0912 21:30:12.466973   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHKeyPath
	I0912 21:30:12.467142   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHUsername
	I0912 21:30:12.467278   13842 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/addons-694635/id_rsa Username:docker}
	I0912 21:30:12.577120   13842 ssh_runner.go:195] Run: systemctl --version
	I0912 21:30:12.582974   13842 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0912 21:30:12.745818   13842 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0912 21:30:12.751421   13842 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0912 21:30:12.751490   13842 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0912 21:30:12.767475   13842 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0912 21:30:12.767505   13842 start.go:495] detecting cgroup driver to use...
	I0912 21:30:12.767618   13842 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0912 21:30:12.783679   13842 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0912 21:30:12.797513   13842 docker.go:217] disabling cri-docker service (if available) ...
	I0912 21:30:12.797586   13842 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0912 21:30:12.810747   13842 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0912 21:30:12.824037   13842 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0912 21:30:12.933703   13842 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0912 21:30:13.069024   13842 docker.go:233] disabling docker service ...
	I0912 21:30:13.069119   13842 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0912 21:30:13.082671   13842 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0912 21:30:13.095050   13842 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0912 21:30:13.233647   13842 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0912 21:30:13.370107   13842 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0912 21:30:13.383851   13842 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0912 21:30:13.402794   13842 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0912 21:30:13.402859   13842 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 21:30:13.413117   13842 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0912 21:30:13.413207   13842 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 21:30:13.424050   13842 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 21:30:13.434819   13842 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 21:30:13.446105   13842 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0912 21:30:13.457702   13842 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 21:30:13.468902   13842 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 21:30:13.486556   13842 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 21:30:13.496994   13842 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0912 21:30:13.506290   13842 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0912 21:30:13.506366   13842 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0912 21:30:13.518440   13842 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0912 21:30:13.528117   13842 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 21:30:13.648177   13842 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0912 21:30:13.743367   13842 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0912 21:30:13.743454   13842 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0912 21:30:13.747977   13842 start.go:563] Will wait 60s for crictl version
	I0912 21:30:13.748061   13842 ssh_runner.go:195] Run: which crictl
	I0912 21:30:13.751466   13842 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0912 21:30:13.795727   13842 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0912 21:30:13.795864   13842 ssh_runner.go:195] Run: crio --version
	I0912 21:30:13.823080   13842 ssh_runner.go:195] Run: crio --version
	I0912 21:30:13.851860   13842 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0912 21:30:13.853473   13842 main.go:141] libmachine: (addons-694635) Calling .GetIP
	I0912 21:30:13.855932   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:13.856224   13842 main.go:141] libmachine: (addons-694635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:43:77", ip: ""} in network mk-addons-694635: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:01 +0000 UTC Type:0 Mac:52:54:00:6b:43:77 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-694635 Clientid:01:52:54:00:6b:43:77}
	I0912 21:30:13.856252   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined IP address 192.168.39.67 and MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:13.856515   13842 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0912 21:30:13.860421   13842 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0912 21:30:13.872141   13842 kubeadm.go:883] updating cluster {Name:addons-694635 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-694635 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0912 21:30:13.872251   13842 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0912 21:30:13.872300   13842 ssh_runner.go:195] Run: sudo crictl images --output json
	I0912 21:30:13.904455   13842 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0912 21:30:13.904513   13842 ssh_runner.go:195] Run: which lz4
	I0912 21:30:13.908020   13842 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0912 21:30:13.912184   13842 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0912 21:30:13.912211   13842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0912 21:30:15.114051   13842 crio.go:462] duration metric: took 1.206056393s to copy over tarball
	I0912 21:30:15.114132   13842 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0912 21:30:17.173858   13842 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.059695045s)
	I0912 21:30:17.173886   13842 crio.go:469] duration metric: took 2.059804143s to extract the tarball
	I0912 21:30:17.173896   13842 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0912 21:30:17.209405   13842 ssh_runner.go:195] Run: sudo crictl images --output json
	I0912 21:30:17.248658   13842 crio.go:514] all images are preloaded for cri-o runtime.
	I0912 21:30:17.248678   13842 cache_images.go:84] Images are preloaded, skipping loading
	I0912 21:30:17.248685   13842 kubeadm.go:934] updating node { 192.168.39.67 8443 v1.31.1 crio true true} ...
	I0912 21:30:17.248808   13842 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-694635 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.67
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-694635 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0912 21:30:17.248877   13842 ssh_runner.go:195] Run: crio config
	I0912 21:30:17.290568   13842 cni.go:84] Creating CNI manager for ""
	I0912 21:30:17.290590   13842 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0912 21:30:17.290601   13842 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0912 21:30:17.290621   13842 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.67 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-694635 NodeName:addons-694635 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.67"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.67 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0912 21:30:17.290786   13842 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.67
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-694635"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.67
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.67"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0912 21:30:17.290849   13842 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0912 21:30:17.300055   13842 binaries.go:44] Found k8s binaries, skipping transfer
	I0912 21:30:17.300152   13842 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0912 21:30:17.308986   13842 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0912 21:30:17.325445   13842 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0912 21:30:17.340762   13842 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0912 21:30:17.356821   13842 ssh_runner.go:195] Run: grep 192.168.39.67	control-plane.minikube.internal$ /etc/hosts
	I0912 21:30:17.360484   13842 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.67	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0912 21:30:17.371412   13842 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 21:30:17.492721   13842 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0912 21:30:17.509813   13842 certs.go:68] Setting up /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635 for IP: 192.168.39.67
	I0912 21:30:17.509838   13842 certs.go:194] generating shared ca certs ...
	I0912 21:30:17.509857   13842 certs.go:226] acquiring lock for ca certs: {Name:mk5e19f2c2757edc3fcb6b43a39efee0885b349c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:30:17.510001   13842 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.key
	I0912 21:30:17.588276   13842 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt ...
	I0912 21:30:17.588302   13842 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt: {Name:mk816935852d33e60449d1c6a4d94ec7ab82ac30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:30:17.588455   13842 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19616-5891/.minikube/ca.key ...
	I0912 21:30:17.588466   13842 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/.minikube/ca.key: {Name:mk9dc9de662fbb5903c290d7926fa7232953ae33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:30:17.588536   13842 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.key
	I0912 21:30:17.693721   13842 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.crt ...
	I0912 21:30:17.693751   13842 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.crt: {Name:mk3263e222fdf8339a04083239eee50b749554b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:30:17.693895   13842 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.key ...
	I0912 21:30:17.693905   13842 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.key: {Name:mk05f7726618d659b90a4327bb74fa26385a63bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:30:17.693978   13842 certs.go:256] generating profile certs ...
	I0912 21:30:17.694024   13842 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/client.key
	I0912 21:30:17.694037   13842 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/client.crt with IP's: []
	I0912 21:30:18.018134   13842 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/client.crt ...
	I0912 21:30:18.018169   13842 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/client.crt: {Name:mk10ce384e125f2b7ec307089833f9de35a73420 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:30:18.018339   13842 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/client.key ...
	I0912 21:30:18.018350   13842 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/client.key: {Name:mk451874420166276937e43f0b93cd8fbad875f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:30:18.018420   13842 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/apiserver.key.0d5d0e54
	I0912 21:30:18.018438   13842 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/apiserver.crt.0d5d0e54 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.67]
	I0912 21:30:18.261062   13842 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/apiserver.crt.0d5d0e54 ...
	I0912 21:30:18.261090   13842 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/apiserver.crt.0d5d0e54: {Name:mkd62b1b67056d42a6c142ee6c71845182d8908d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:30:18.261238   13842 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/apiserver.key.0d5d0e54 ...
	I0912 21:30:18.261252   13842 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/apiserver.key.0d5d0e54: {Name:mk7c82ddc89e4a1cf8c648222b96704d6a1d1dba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:30:18.261330   13842 certs.go:381] copying /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/apiserver.crt.0d5d0e54 -> /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/apiserver.crt
	I0912 21:30:18.261402   13842 certs.go:385] copying /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/apiserver.key.0d5d0e54 -> /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/apiserver.key
	I0912 21:30:18.261446   13842 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/proxy-client.key
	I0912 21:30:18.261463   13842 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/proxy-client.crt with IP's: []
	I0912 21:30:18.451474   13842 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/proxy-client.crt ...
	I0912 21:30:18.451506   13842 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/proxy-client.crt: {Name:mk0f640d1553a36669ab6e6b7b695492f179b963 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:30:18.451692   13842 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/proxy-client.key ...
	I0912 21:30:18.451707   13842 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/proxy-client.key: {Name:mk18108f1bab56e6e4bd321dfe7a25d4858d7cc0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:30:18.451898   13842 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem (1679 bytes)
	I0912 21:30:18.451934   13842 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem (1082 bytes)
	I0912 21:30:18.451961   13842 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem (1123 bytes)
	I0912 21:30:18.451983   13842 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem (1679 bytes)
	I0912 21:30:18.452546   13842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0912 21:30:18.477574   13842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0912 21:30:18.499725   13842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0912 21:30:18.521000   13842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0912 21:30:18.542359   13842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0912 21:30:18.563704   13842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0912 21:30:18.585274   13842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0912 21:30:18.606928   13842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0912 21:30:18.629281   13842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0912 21:30:18.650974   13842 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0912 21:30:18.666875   13842 ssh_runner.go:195] Run: openssl version
	I0912 21:30:18.672260   13842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0912 21:30:18.682723   13842 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0912 21:30:18.686978   13842 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 12 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0912 21:30:18.687042   13842 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0912 21:30:18.692565   13842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0912 21:30:18.702818   13842 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0912 21:30:18.706358   13842 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0912 21:30:18.706403   13842 kubeadm.go:392] StartCluster: {Name:addons-694635 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-694635 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 21:30:18.706469   13842 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0912 21:30:18.706505   13842 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0912 21:30:18.740797   13842 cri.go:89] found id: ""
	I0912 21:30:18.740875   13842 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0912 21:30:18.750323   13842 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0912 21:30:18.760198   13842 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0912 21:30:18.771699   13842 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0912 21:30:18.771722   13842 kubeadm.go:157] found existing configuration files:
	
	I0912 21:30:18.771768   13842 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0912 21:30:18.780639   13842 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0912 21:30:18.780710   13842 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0912 21:30:18.790136   13842 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0912 21:30:18.798881   13842 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0912 21:30:18.798933   13842 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0912 21:30:18.807668   13842 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0912 21:30:18.815937   13842 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0912 21:30:18.815991   13842 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0912 21:30:18.824796   13842 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0912 21:30:18.833290   13842 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0912 21:30:18.833349   13842 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0912 21:30:18.842109   13842 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0912 21:30:18.894082   13842 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0912 21:30:18.894163   13842 kubeadm.go:310] [preflight] Running pre-flight checks
	I0912 21:30:18.987148   13842 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0912 21:30:18.987303   13842 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0912 21:30:18.987452   13842 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0912 21:30:18.997399   13842 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0912 21:30:19.070004   13842 out.go:235]   - Generating certificates and keys ...
	I0912 21:30:19.070107   13842 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0912 21:30:19.070229   13842 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0912 21:30:19.148000   13842 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0912 21:30:19.614691   13842 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0912 21:30:19.901914   13842 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0912 21:30:19.979789   13842 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0912 21:30:20.166978   13842 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0912 21:30:20.167130   13842 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-694635 localhost] and IPs [192.168.39.67 127.0.0.1 ::1]
	I0912 21:30:20.264957   13842 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0912 21:30:20.265097   13842 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-694635 localhost] and IPs [192.168.39.67 127.0.0.1 ::1]
	I0912 21:30:20.466176   13842 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0912 21:30:20.696253   13842 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0912 21:30:20.807177   13842 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0912 21:30:20.807284   13842 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0912 21:30:20.974731   13842 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0912 21:30:21.105184   13842 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0912 21:30:21.174341   13842 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0912 21:30:21.244405   13842 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0912 21:30:21.769255   13842 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0912 21:30:21.769831   13842 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0912 21:30:21.772293   13842 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0912 21:30:21.774278   13842 out.go:235]   - Booting up control plane ...
	I0912 21:30:21.774387   13842 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0912 21:30:21.774523   13842 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0912 21:30:21.774628   13842 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0912 21:30:21.791849   13842 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0912 21:30:21.798525   13842 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0912 21:30:21.798599   13842 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0912 21:30:21.939016   13842 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0912 21:30:21.939132   13842 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0912 21:30:22.439761   13842 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 500.995176ms
	I0912 21:30:22.439860   13842 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0912 21:30:27.939433   13842 kubeadm.go:310] [api-check] The API server is healthy after 5.502232123s
	I0912 21:30:27.957923   13842 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0912 21:30:27.974582   13842 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0912 21:30:28.004043   13842 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0912 21:30:28.004250   13842 kubeadm.go:310] [mark-control-plane] Marking the node addons-694635 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0912 21:30:28.022686   13842 kubeadm.go:310] [bootstrap-token] Using token: v7rbq6.ajeibt3p6xzx9rx5
	I0912 21:30:28.024134   13842 out.go:235]   - Configuring RBAC rules ...
	I0912 21:30:28.024266   13842 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0912 21:30:28.029565   13842 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0912 21:30:28.040289   13842 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0912 21:30:28.043786   13842 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0912 21:30:28.047040   13842 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0912 21:30:28.051390   13842 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0912 21:30:28.352753   13842 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0912 21:30:28.795025   13842 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0912 21:30:29.351438   13842 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0912 21:30:29.352611   13842 kubeadm.go:310] 
	I0912 21:30:29.352681   13842 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0912 21:30:29.352688   13842 kubeadm.go:310] 
	I0912 21:30:29.352768   13842 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0912 21:30:29.352777   13842 kubeadm.go:310] 
	I0912 21:30:29.352807   13842 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0912 21:30:29.352905   13842 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0912 21:30:29.352995   13842 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0912 21:30:29.353009   13842 kubeadm.go:310] 
	I0912 21:30:29.353111   13842 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0912 21:30:29.353127   13842 kubeadm.go:310] 
	I0912 21:30:29.353199   13842 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0912 21:30:29.353208   13842 kubeadm.go:310] 
	I0912 21:30:29.353287   13842 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0912 21:30:29.353390   13842 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0912 21:30:29.353500   13842 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0912 21:30:29.353511   13842 kubeadm.go:310] 
	I0912 21:30:29.353631   13842 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0912 21:30:29.353759   13842 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0912 21:30:29.353776   13842 kubeadm.go:310] 
	I0912 21:30:29.353851   13842 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token v7rbq6.ajeibt3p6xzx9rx5 \
	I0912 21:30:29.353941   13842 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e9285e6e7599a58febe9d174fa57ffa69a9b4bf818d01b703e61fc8c784ff29f \
	I0912 21:30:29.353960   13842 kubeadm.go:310] 	--control-plane 
	I0912 21:30:29.353966   13842 kubeadm.go:310] 
	I0912 21:30:29.354039   13842 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0912 21:30:29.354045   13842 kubeadm.go:310] 
	I0912 21:30:29.354116   13842 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token v7rbq6.ajeibt3p6xzx9rx5 \
	I0912 21:30:29.354200   13842 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e9285e6e7599a58febe9d174fa57ffa69a9b4bf818d01b703e61fc8c784ff29f 
	I0912 21:30:29.355833   13842 kubeadm.go:310] W0912 21:30:18.865667     814 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0912 21:30:29.356162   13842 kubeadm.go:310] W0912 21:30:18.867599     814 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0912 21:30:29.356254   13842 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0912 21:30:29.356325   13842 cni.go:84] Creating CNI manager for ""
	I0912 21:30:29.356345   13842 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0912 21:30:29.358563   13842 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0912 21:30:29.360118   13842 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0912 21:30:29.371250   13842 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0912 21:30:29.390372   13842 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0912 21:30:29.390461   13842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-694635 minikube.k8s.io/updated_at=2024_09_12T21_30_29_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f6bc674a17941874d4e5b792b09c1791d30622b8 minikube.k8s.io/name=addons-694635 minikube.k8s.io/primary=true
	I0912 21:30:29.390464   13842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:30:29.538333   13842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:30:29.538368   13842 ops.go:34] apiserver oom_adj: -16
	I0912 21:30:30.038483   13842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:30:30.539293   13842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:30:31.039133   13842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:30:31.538947   13842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:30:32.038423   13842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:30:32.539286   13842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:30:33.039390   13842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:30:33.127054   13842 kubeadm.go:1113] duration metric: took 3.736657835s to wait for elevateKubeSystemPrivileges
	I0912 21:30:33.127093   13842 kubeadm.go:394] duration metric: took 14.420693245s to StartCluster
	I0912 21:30:33.127114   13842 settings.go:142] acquiring lock: {Name:mk9c957feafb8d7ccd833ad0c106ef81ecfe5ba8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:30:33.127242   13842 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19616-5891/kubeconfig
	I0912 21:30:33.127605   13842 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/kubeconfig: {Name:mkffb46c3e9d2b8baebc7237b48bf41bccf1a52c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:30:33.127771   13842 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0912 21:30:33.127785   13842 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0912 21:30:33.127850   13842 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0912 21:30:33.127956   13842 config.go:182] Loaded profile config "addons-694635": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 21:30:33.127969   13842 addons.go:69] Setting ingress-dns=true in profile "addons-694635"
	I0912 21:30:33.127972   13842 addons.go:69] Setting cloud-spanner=true in profile "addons-694635"
	I0912 21:30:33.127991   13842 addons.go:69] Setting registry=true in profile "addons-694635"
	I0912 21:30:33.127957   13842 addons.go:69] Setting yakd=true in profile "addons-694635"
	I0912 21:30:33.128001   13842 addons.go:234] Setting addon cloud-spanner=true in "addons-694635"
	I0912 21:30:33.128012   13842 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-694635"
	I0912 21:30:33.128021   13842 addons.go:234] Setting addon registry=true in "addons-694635"
	I0912 21:30:33.128027   13842 addons.go:69] Setting metrics-server=true in profile "addons-694635"
	I0912 21:30:33.128032   13842 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-694635"
	I0912 21:30:33.128043   13842 addons.go:234] Setting addon metrics-server=true in "addons-694635"
	I0912 21:30:33.128047   13842 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-694635"
	I0912 21:30:33.128049   13842 host.go:66] Checking if "addons-694635" exists ...
	I0912 21:30:33.128060   13842 host.go:66] Checking if "addons-694635" exists ...
	I0912 21:30:33.128080   13842 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-694635"
	I0912 21:30:33.128102   13842 host.go:66] Checking if "addons-694635" exists ...
	I0912 21:30:33.128386   13842 addons.go:69] Setting volcano=true in profile "addons-694635"
	I0912 21:30:33.128420   13842 addons.go:234] Setting addon volcano=true in "addons-694635"
	I0912 21:30:33.128441   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:33.128450   13842 host.go:66] Checking if "addons-694635" exists ...
	I0912 21:30:33.128451   13842 addons.go:69] Setting inspektor-gadget=true in profile "addons-694635"
	I0912 21:30:33.128460   13842 addons.go:69] Setting volumesnapshots=true in profile "addons-694635"
	I0912 21:30:33.128476   13842 addons.go:234] Setting addon inspektor-gadget=true in "addons-694635"
	I0912 21:30:33.128484   13842 addons.go:69] Setting default-storageclass=true in profile "addons-694635"
	I0912 21:30:33.128494   13842 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-694635"
	I0912 21:30:33.128503   13842 host.go:66] Checking if "addons-694635" exists ...
	I0912 21:30:33.128515   13842 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-694635"
	I0912 21:30:33.128542   13842 addons.go:234] Setting addon volumesnapshots=true in "addons-694635"
	I0912 21:30:33.128571   13842 host.go:66] Checking if "addons-694635" exists ...
	I0912 21:30:33.128475   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:33.128659   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:33.128809   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:33.128816   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:33.128833   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:33.128021   13842 addons.go:234] Setting addon yakd=true in "addons-694635"
	I0912 21:30:33.128846   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:33.128867   13842 host.go:66] Checking if "addons-694635" exists ...
	I0912 21:30:33.128882   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:33.128911   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:33.128927   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:33.128945   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:33.128043   13842 host.go:66] Checking if "addons-694635" exists ...
	I0912 21:30:33.128516   13842 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-694635"
	I0912 21:30:33.128441   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:33.129006   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:33.128004   13842 addons.go:69] Setting storage-provisioner=true in profile "addons-694635"
	I0912 21:30:33.129193   13842 host.go:66] Checking if "addons-694635" exists ...
	I0912 21:30:33.129197   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:33.129236   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:33.127996   13842 addons.go:234] Setting addon ingress-dns=true in "addons-694635"
	I0912 21:30:33.129298   13842 host.go:66] Checking if "addons-694635" exists ...
	I0912 21:30:33.129535   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:33.129586   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:33.128481   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:33.129193   13842 addons.go:234] Setting addon storage-provisioner=true in "addons-694635"
	I0912 21:30:33.128535   13842 addons.go:69] Setting gcp-auth=true in profile "addons-694635"
	I0912 21:30:33.129722   13842 mustload.go:65] Loading cluster: addons-694635
	I0912 21:30:33.129728   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:33.127957   13842 addons.go:69] Setting ingress=true in profile "addons-694635"
	I0912 21:30:33.129751   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:33.129763   13842 addons.go:234] Setting addon ingress=true in "addons-694635"
	I0912 21:30:33.128448   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:33.129798   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:33.129304   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:33.129900   13842 config.go:182] Loaded profile config "addons-694635": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 21:30:33.129910   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:33.128543   13842 addons.go:69] Setting helm-tiller=true in profile "addons-694635"
	I0912 21:30:33.129963   13842 addons.go:234] Setting addon helm-tiller=true in "addons-694635"
	I0912 21:30:33.130031   13842 host.go:66] Checking if "addons-694635" exists ...
	I0912 21:30:33.130100   13842 host.go:66] Checking if "addons-694635" exists ...
	I0912 21:30:33.130255   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:33.130287   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:33.130407   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:33.130440   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:33.130535   13842 out.go:177] * Verifying Kubernetes components...
	I0912 21:30:33.130801   13842 host.go:66] Checking if "addons-694635" exists ...
	I0912 21:30:33.141968   13842 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 21:30:33.150069   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46051
	I0912 21:30:33.150316   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43843
	I0912 21:30:33.150409   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36203
	I0912 21:30:33.150573   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39521
	I0912 21:30:33.150789   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:33.150884   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:33.150941   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43391
	I0912 21:30:33.151478   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:33.151657   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:33.151668   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:33.151789   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:33.151800   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:33.151919   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:33.151928   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:33.151977   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:33.152027   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:33.152074   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:33.152112   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:33.152642   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:33.152664   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:33.152720   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:33.152818   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:33.152827   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:33.152948   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:33.152958   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:33.153389   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35395
	I0912 21:30:33.153693   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:33.153966   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39707
	I0912 21:30:33.157880   13842 main.go:141] libmachine: (addons-694635) Calling .GetState
	I0912 21:30:33.157948   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:33.158145   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:33.158164   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:33.158243   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:33.158260   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:33.158318   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:33.158329   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:33.158341   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:33.158598   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:33.158814   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:33.158844   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:33.158917   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:33.158980   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:33.159098   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:33.159117   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:33.159471   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:33.159522   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:33.159600   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:33.160143   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:33.160171   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:33.160628   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:33.163174   13842 addons.go:234] Setting addon default-storageclass=true in "addons-694635"
	I0912 21:30:33.163237   13842 host.go:66] Checking if "addons-694635" exists ...
	I0912 21:30:33.163679   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:33.163717   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:33.164514   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:33.164547   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:33.186987   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36685
	I0912 21:30:33.187677   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:33.188318   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:33.188338   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:33.188699   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:33.188886   13842 main.go:141] libmachine: (addons-694635) Calling .GetState
	I0912 21:30:33.189751   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43095
	I0912 21:30:33.190453   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:33.191030   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:33.191046   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:33.192477   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35159
	I0912 21:30:33.192988   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:33.193332   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42519
	I0912 21:30:33.193964   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:33.194014   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:33.194400   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:33.194427   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:33.194717   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:33.194732   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:33.194867   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:33.194878   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:33.195204   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:33.195262   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:33.195317   13842 main.go:141] libmachine: (addons-694635) Calling .DriverName
	I0912 21:30:33.195365   13842 main.go:141] libmachine: (addons-694635) Calling .GetState
	I0912 21:30:33.196144   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:33.196183   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:33.196926   13842 main.go:141] libmachine: (addons-694635) Calling .DriverName
	I0912 21:30:33.197418   13842 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0912 21:30:33.198461   13842 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0912 21:30:33.198474   13842 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0912 21:30:33.198481   13842 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0912 21:30:33.198514   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHHostname
	I0912 21:30:33.199826   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44619
	I0912 21:30:33.200469   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:33.200723   13842 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0912 21:30:33.201099   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:33.201116   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:33.201423   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:33.201605   13842 main.go:141] libmachine: (addons-694635) Calling .GetState
	I0912 21:30:33.202354   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:33.203063   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHPort
	I0912 21:30:33.203235   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHKeyPath
	I0912 21:30:33.203301   13842 main.go:141] libmachine: (addons-694635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:43:77", ip: ""} in network mk-addons-694635: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:01 +0000 UTC Type:0 Mac:52:54:00:6b:43:77 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-694635 Clientid:01:52:54:00:6b:43:77}
	I0912 21:30:33.203325   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined IP address 192.168.39.67 and MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:33.203365   13842 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0912 21:30:33.203436   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHUsername
	I0912 21:30:33.203701   13842 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/addons-694635/id_rsa Username:docker}
	I0912 21:30:33.204148   13842 host.go:66] Checking if "addons-694635" exists ...
	I0912 21:30:33.204529   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:33.204565   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:33.205663   13842 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0912 21:30:33.206838   13842 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0912 21:30:33.208115   13842 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0912 21:30:33.209260   13842 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0912 21:30:33.210410   13842 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0912 21:30:33.211388   13842 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0912 21:30:33.211406   13842 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0912 21:30:33.211431   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHHostname
	I0912 21:30:33.213932   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41325
	I0912 21:30:33.214509   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:33.215055   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:33.215079   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:33.215339   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:33.215471   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:33.215750   13842 main.go:141] libmachine: (addons-694635) Calling .GetState
	I0912 21:30:33.215812   13842 main.go:141] libmachine: (addons-694635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:43:77", ip: ""} in network mk-addons-694635: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:01 +0000 UTC Type:0 Mac:52:54:00:6b:43:77 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-694635 Clientid:01:52:54:00:6b:43:77}
	I0912 21:30:33.215831   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined IP address 192.168.39.67 and MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:33.216070   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHPort
	I0912 21:30:33.216227   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHKeyPath
	I0912 21:30:33.216391   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHUsername
	I0912 21:30:33.216522   13842 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/addons-694635/id_rsa Username:docker}
	I0912 21:30:33.218588   13842 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-694635"
	I0912 21:30:33.218632   13842 host.go:66] Checking if "addons-694635" exists ...
	I0912 21:30:33.218984   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:33.219020   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:33.219207   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34339
	I0912 21:30:33.219636   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:33.220056   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:33.220076   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:33.220402   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:33.220894   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:33.220934   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:33.221132   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45121
	I0912 21:30:33.222065   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:33.222569   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:33.222585   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:33.222956   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:33.223007   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40619
	I0912 21:30:33.223665   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:33.223702   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:33.226781   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35935
	I0912 21:30:33.227303   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:33.227791   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:33.227810   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:33.228143   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:33.228324   13842 main.go:141] libmachine: (addons-694635) Calling .GetState
	I0912 21:30:33.230191   13842 main.go:141] libmachine: (addons-694635) Calling .DriverName
	I0912 21:30:33.232445   13842 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0912 21:30:33.233487   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:33.233677   13842 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0912 21:30:33.233695   13842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0912 21:30:33.233715   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHHostname
	I0912 21:30:33.236503   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:33.236518   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:33.236794   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37261
	I0912 21:30:33.237127   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:33.237492   13842 main.go:141] libmachine: (addons-694635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:43:77", ip: ""} in network mk-addons-694635: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:01 +0000 UTC Type:0 Mac:52:54:00:6b:43:77 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-694635 Clientid:01:52:54:00:6b:43:77}
	I0912 21:30:33.237525   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined IP address 192.168.39.67 and MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:33.237561   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:33.237731   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHPort
	I0912 21:30:33.238172   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:33.238205   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:33.238515   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHKeyPath
	I0912 21:30:33.238691   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHUsername
	I0912 21:30:33.238755   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38401
	I0912 21:30:33.239058   13842 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/addons-694635/id_rsa Username:docker}
	I0912 21:30:33.239118   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:33.239258   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33815
	I0912 21:30:33.239484   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43643
	I0912 21:30:33.239603   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:33.239735   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:33.239754   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:33.239756   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:33.240141   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:33.240160   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:33.240167   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:33.240222   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:33.240292   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:33.240315   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:33.240706   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:33.240791   13842 main.go:141] libmachine: (addons-694635) Calling .GetState
	I0912 21:30:33.240952   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:33.240954   13842 main.go:141] libmachine: (addons-694635) Calling .GetState
	I0912 21:30:33.240967   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:33.241651   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:33.241936   13842 main.go:141] libmachine: (addons-694635) Calling .GetState
	I0912 21:30:33.242439   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38441
	I0912 21:30:33.242626   13842 main.go:141] libmachine: (addons-694635) Calling .DriverName
	I0912 21:30:33.243111   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:33.243235   13842 main.go:141] libmachine: (addons-694635) Calling .GetState
	I0912 21:30:33.244232   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:33.244741   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39717
	I0912 21:30:33.244824   13842 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0912 21:30:33.245133   13842 main.go:141] libmachine: (addons-694635) Calling .DriverName
	I0912 21:30:33.245135   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:33.245276   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:33.245293   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:33.245549   13842 main.go:141] libmachine: (addons-694635) Calling .DriverName
	I0912 21:30:33.245632   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:33.246062   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:33.246078   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:33.246574   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:33.246602   13842 main.go:141] libmachine: (addons-694635) Calling .DriverName
	I0912 21:30:33.247038   13842 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0912 21:30:33.247107   13842 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0912 21:30:33.247118   13842 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0912 21:30:33.247136   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHHostname
	I0912 21:30:33.247367   13842 main.go:141] libmachine: (addons-694635) Calling .GetState
	I0912 21:30:33.247571   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33669
	I0912 21:30:33.248105   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:33.248613   13842 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0912 21:30:33.248629   13842 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0912 21:30:33.248646   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHHostname
	I0912 21:30:33.248652   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:33.248667   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:33.249005   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:33.249581   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:33.249722   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41895
	I0912 21:30:33.249729   13842 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0912 21:30:33.249843   13842 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0912 21:30:33.249905   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:33.249947   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:33.249984   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:33.250358   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:33.250824   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:33.250839   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:33.250973   13842 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0912 21:30:33.250992   13842 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0912 21:30:33.251013   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHHostname
	I0912 21:30:33.251167   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:33.251211   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:33.251334   13842 main.go:141] libmachine: (addons-694635) Calling .DriverName
	I0912 21:30:33.251681   13842 main.go:141] libmachine: (addons-694635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:43:77", ip: ""} in network mk-addons-694635: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:01 +0000 UTC Type:0 Mac:52:54:00:6b:43:77 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-694635 Clientid:01:52:54:00:6b:43:77}
	I0912 21:30:33.251704   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined IP address 192.168.39.67 and MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:33.251870   13842 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0912 21:30:33.251886   13842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0912 21:30:33.251904   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHHostname
	I0912 21:30:33.252556   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHPort
	I0912 21:30:33.252912   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHKeyPath
	I0912 21:30:33.253090   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHUsername
	I0912 21:30:33.253335   13842 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/addons-694635/id_rsa Username:docker}
	I0912 21:30:33.253982   13842 main.go:141] libmachine: (addons-694635) Calling .DriverName
	I0912 21:30:33.254189   13842 main.go:141] libmachine: Making call to close driver server
	I0912 21:30:33.254334   13842 main.go:141] libmachine: (addons-694635) Calling .Close
	I0912 21:30:33.254706   13842 main.go:141] libmachine: (addons-694635) DBG | Closing plugin on server side
	I0912 21:30:33.254745   13842 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:30:33.254755   13842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:30:33.254764   13842 main.go:141] libmachine: Making call to close driver server
	I0912 21:30:33.254772   13842 main.go:141] libmachine: (addons-694635) Calling .Close
	I0912 21:30:33.255212   13842 main.go:141] libmachine: (addons-694635) DBG | Closing plugin on server side
	I0912 21:30:33.255240   13842 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:30:33.255249   13842 main.go:141] libmachine: Making call to close connection to plugin binary
	W0912 21:30:33.255329   13842 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0912 21:30:33.256835   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:33.257248   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:33.257354   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:33.257768   13842 main.go:141] libmachine: (addons-694635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:43:77", ip: ""} in network mk-addons-694635: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:01 +0000 UTC Type:0 Mac:52:54:00:6b:43:77 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-694635 Clientid:01:52:54:00:6b:43:77}
	I0912 21:30:33.257790   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined IP address 192.168.39.67 and MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:33.257818   13842 main.go:141] libmachine: (addons-694635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:43:77", ip: ""} in network mk-addons-694635: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:01 +0000 UTC Type:0 Mac:52:54:00:6b:43:77 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-694635 Clientid:01:52:54:00:6b:43:77}
	I0912 21:30:33.257834   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined IP address 192.168.39.67 and MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:33.257862   13842 main.go:141] libmachine: (addons-694635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:43:77", ip: ""} in network mk-addons-694635: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:01 +0000 UTC Type:0 Mac:52:54:00:6b:43:77 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-694635 Clientid:01:52:54:00:6b:43:77}
	I0912 21:30:33.257877   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined IP address 192.168.39.67 and MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:33.258042   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHPort
	I0912 21:30:33.258081   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHPort
	I0912 21:30:33.258312   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHKeyPath
	I0912 21:30:33.258360   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHPort
	I0912 21:30:33.258364   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHKeyPath
	I0912 21:30:33.258463   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHUsername
	I0912 21:30:33.258613   13842 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/addons-694635/id_rsa Username:docker}
	I0912 21:30:33.258645   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHUsername
	I0912 21:30:33.258693   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHKeyPath
	I0912 21:30:33.258799   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHUsername
	I0912 21:30:33.258878   13842 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/addons-694635/id_rsa Username:docker}
	I0912 21:30:33.259401   13842 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/addons-694635/id_rsa Username:docker}
	I0912 21:30:33.261562   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46749
	I0912 21:30:33.261628   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41119
	I0912 21:30:33.261740   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43953
	I0912 21:30:33.262014   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:33.262042   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:33.262120   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:33.262468   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:33.262486   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:33.262561   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:33.262586   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:33.262968   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:33.262988   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:33.262990   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:33.263127   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:33.263521   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:33.263555   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:33.263697   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:33.263722   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:33.263750   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:33.263947   13842 main.go:141] libmachine: (addons-694635) Calling .GetState
	I0912 21:30:33.268234   13842 main.go:141] libmachine: (addons-694635) Calling .DriverName
	I0912 21:30:33.268300   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35633
	I0912 21:30:33.268599   13842 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0912 21:30:33.268615   13842 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0912 21:30:33.268635   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHHostname
	I0912 21:30:33.268729   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39313
	I0912 21:30:33.268912   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:33.269386   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:33.269408   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:33.270003   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:33.270070   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:33.270285   13842 main.go:141] libmachine: (addons-694635) Calling .GetState
	I0912 21:30:33.270670   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:33.270690   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:33.271058   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:33.271281   13842 main.go:141] libmachine: (addons-694635) Calling .GetState
	I0912 21:30:33.272388   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:33.272895   13842 main.go:141] libmachine: (addons-694635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:43:77", ip: ""} in network mk-addons-694635: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:01 +0000 UTC Type:0 Mac:52:54:00:6b:43:77 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-694635 Clientid:01:52:54:00:6b:43:77}
	I0912 21:30:33.272921   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined IP address 192.168.39.67 and MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:33.273067   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHPort
	I0912 21:30:33.273237   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHKeyPath
	I0912 21:30:33.273355   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHUsername
	I0912 21:30:33.273458   13842 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/addons-694635/id_rsa Username:docker}
	I0912 21:30:33.273740   13842 main.go:141] libmachine: (addons-694635) Calling .DriverName
	I0912 21:30:33.274080   13842 main.go:141] libmachine: (addons-694635) Calling .DriverName
	I0912 21:30:33.275548   13842 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0912 21:30:33.275560   13842 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0912 21:30:33.276670   13842 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0912 21:30:33.276700   13842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0912 21:30:33.276722   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHHostname
	I0912 21:30:33.276675   13842 out.go:177]   - Using image docker.io/registry:2.8.3
	I0912 21:30:33.278040   13842 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0912 21:30:33.278062   13842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0912 21:30:33.278081   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHHostname
	I0912 21:30:33.281119   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:33.281589   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHPort
	I0912 21:30:33.281860   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHKeyPath
	I0912 21:30:33.282081   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHUsername
	I0912 21:30:33.282129   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHPort
	I0912 21:30:33.282266   13842 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/addons-694635/id_rsa Username:docker}
	I0912 21:30:33.282598   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHKeyPath
	I0912 21:30:33.281510   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:33.282680   13842 main.go:141] libmachine: (addons-694635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:43:77", ip: ""} in network mk-addons-694635: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:01 +0000 UTC Type:0 Mac:52:54:00:6b:43:77 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-694635 Clientid:01:52:54:00:6b:43:77}
	I0912 21:30:33.282710   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined IP address 192.168.39.67 and MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:33.282742   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHUsername
	I0912 21:30:33.282767   13842 main.go:141] libmachine: (addons-694635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:43:77", ip: ""} in network mk-addons-694635: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:01 +0000 UTC Type:0 Mac:52:54:00:6b:43:77 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-694635 Clientid:01:52:54:00:6b:43:77}
	I0912 21:30:33.282784   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined IP address 192.168.39.67 and MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:33.282963   13842 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/addons-694635/id_rsa Username:docker}
	I0912 21:30:33.284659   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40369
	I0912 21:30:33.285034   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:33.285737   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:33.285767   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:33.286142   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:33.286339   13842 main.go:141] libmachine: (addons-694635) Calling .GetState
	I0912 21:30:33.287706   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37111
	I0912 21:30:33.287900   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43213
	I0912 21:30:33.288046   13842 main.go:141] libmachine: (addons-694635) Calling .DriverName
	I0912 21:30:33.288069   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:33.288168   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:33.288576   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:33.288598   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:33.288743   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:33.288759   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:33.288856   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:33.289114   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:33.289153   13842 main.go:141] libmachine: (addons-694635) Calling .GetState
	I0912 21:30:33.289708   13842 main.go:141] libmachine: (addons-694635) Calling .GetState
	I0912 21:30:33.290010   13842 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0912 21:30:33.290749   13842 main.go:141] libmachine: (addons-694635) Calling .DriverName
	I0912 21:30:33.291355   13842 main.go:141] libmachine: (addons-694635) Calling .DriverName
	I0912 21:30:33.292706   13842 out.go:177]   - Using image docker.io/busybox:stable
	I0912 21:30:33.292711   13842 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0912 21:30:33.292715   13842 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0912 21:30:33.293836   13842 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0912 21:30:33.293847   13842 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0912 21:30:33.293894   13842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0912 21:30:33.293913   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHHostname
	I0912 21:30:33.293847   13842 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0912 21:30:33.293963   13842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0912 21:30:33.293979   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHHostname
	I0912 21:30:33.296001   13842 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0912 21:30:33.297175   13842 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0912 21:30:33.297189   13842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0912 21:30:33.297204   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHHostname
	I0912 21:30:33.297379   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:33.297549   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:33.298027   13842 main.go:141] libmachine: (addons-694635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:43:77", ip: ""} in network mk-addons-694635: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:01 +0000 UTC Type:0 Mac:52:54:00:6b:43:77 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-694635 Clientid:01:52:54:00:6b:43:77}
	I0912 21:30:33.298042   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined IP address 192.168.39.67 and MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:33.298070   13842 main.go:141] libmachine: (addons-694635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:43:77", ip: ""} in network mk-addons-694635: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:01 +0000 UTC Type:0 Mac:52:54:00:6b:43:77 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-694635 Clientid:01:52:54:00:6b:43:77}
	I0912 21:30:33.298082   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined IP address 192.168.39.67 and MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:33.298305   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHPort
	I0912 21:30:33.298341   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHPort
	I0912 21:30:33.298504   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHKeyPath
	I0912 21:30:33.298574   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHKeyPath
	I0912 21:30:33.298639   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHUsername
	I0912 21:30:33.298712   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHUsername
	I0912 21:30:33.298778   13842 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/addons-694635/id_rsa Username:docker}
	I0912 21:30:33.299074   13842 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/addons-694635/id_rsa Username:docker}
	I0912 21:30:33.299967   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:33.300311   13842 main.go:141] libmachine: (addons-694635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:43:77", ip: ""} in network mk-addons-694635: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:01 +0000 UTC Type:0 Mac:52:54:00:6b:43:77 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-694635 Clientid:01:52:54:00:6b:43:77}
	I0912 21:30:33.300338   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined IP address 192.168.39.67 and MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:33.301763   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHPort
	I0912 21:30:33.301987   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHKeyPath
	I0912 21:30:33.302125   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHUsername
	I0912 21:30:33.302244   13842 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/addons-694635/id_rsa Username:docker}
	I0912 21:30:33.306121   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35811
	I0912 21:30:33.306524   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:33.306887   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:33.306904   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:33.307338   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:33.307506   13842 main.go:141] libmachine: (addons-694635) Calling .GetState
	W0912 21:30:33.308193   13842 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:56174->192.168.39.67:22: read: connection reset by peer
	I0912 21:30:33.308214   13842 retry.go:31] will retry after 340.22316ms: ssh: handshake failed: read tcp 192.168.39.1:56174->192.168.39.67:22: read: connection reset by peer
	I0912 21:30:33.309320   13842 main.go:141] libmachine: (addons-694635) Calling .DriverName
	I0912 21:30:33.311143   13842 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 21:30:33.312425   13842 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0912 21:30:33.312441   13842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0912 21:30:33.312456   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHHostname
	I0912 21:30:33.315180   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:33.315769   13842 main.go:141] libmachine: (addons-694635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:43:77", ip: ""} in network mk-addons-694635: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:01 +0000 UTC Type:0 Mac:52:54:00:6b:43:77 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-694635 Clientid:01:52:54:00:6b:43:77}
	I0912 21:30:33.315798   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined IP address 192.168.39.67 and MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:33.315962   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHPort
	I0912 21:30:33.316179   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHKeyPath
	I0912 21:30:33.316377   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHUsername
	I0912 21:30:33.316513   13842 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/addons-694635/id_rsa Username:docker}
	I0912 21:30:33.639453   13842 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0912 21:30:33.639482   13842 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0912 21:30:33.657578   13842 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0912 21:30:33.657597   13842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0912 21:30:33.680952   13842 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0912 21:30:33.680978   13842 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0912 21:30:33.733177   13842 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0912 21:30:33.733181   13842 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0912 21:30:33.743215   13842 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0912 21:30:33.743241   13842 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0912 21:30:33.762069   13842 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0912 21:30:33.762098   13842 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0912 21:30:33.782751   13842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0912 21:30:33.785088   13842 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0912 21:30:33.785111   13842 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0912 21:30:33.792263   13842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0912 21:30:33.836509   13842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0912 21:30:33.868944   13842 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0912 21:30:33.868973   13842 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0912 21:30:33.904688   13842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0912 21:30:33.911394   13842 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0912 21:30:33.911420   13842 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0912 21:30:33.913031   13842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0912 21:30:33.922465   13842 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0912 21:30:33.922491   13842 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0912 21:30:33.927414   13842 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0912 21:30:33.927438   13842 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0912 21:30:33.941076   13842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0912 21:30:33.942361   13842 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0912 21:30:33.942383   13842 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0912 21:30:33.962765   13842 node_ready.go:35] waiting up to 6m0s for node "addons-694635" to be "Ready" ...
	I0912 21:30:33.965689   13842 node_ready.go:49] node "addons-694635" has status "Ready":"True"
	I0912 21:30:33.965712   13842 node_ready.go:38] duration metric: took 2.919714ms for node "addons-694635" to be "Ready" ...
	I0912 21:30:33.965723   13842 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 21:30:33.971996   13842 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-pcjz8" in "kube-system" namespace to be "Ready" ...
	I0912 21:30:33.978042   13842 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0912 21:30:33.978064   13842 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0912 21:30:34.048949   13842 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0912 21:30:34.048968   13842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0912 21:30:34.093153   13842 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0912 21:30:34.093183   13842 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0912 21:30:34.128832   13842 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0912 21:30:34.128859   13842 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0912 21:30:34.163298   13842 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0912 21:30:34.163328   13842 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0912 21:30:34.173254   13842 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0912 21:30:34.173281   13842 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0912 21:30:34.177529   13842 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0912 21:30:34.177559   13842 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0912 21:30:34.215996   13842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0912 21:30:34.285198   13842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0912 21:30:34.287981   13842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0912 21:30:34.309345   13842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0912 21:30:34.315086   13842 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0912 21:30:34.315113   13842 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0912 21:30:34.354466   13842 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0912 21:30:34.354493   13842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0912 21:30:34.374522   13842 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0912 21:30:34.374556   13842 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0912 21:30:34.393891   13842 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0912 21:30:34.393921   13842 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0912 21:30:34.502563   13842 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0912 21:30:34.502588   13842 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0912 21:30:34.584726   13842 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0912 21:30:34.584760   13842 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0912 21:30:34.607498   13842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0912 21:30:34.645255   13842 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0912 21:30:34.645280   13842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0912 21:30:34.718335   13842 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0912 21:30:34.718361   13842 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0912 21:30:34.783759   13842 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0912 21:30:34.783787   13842 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0912 21:30:34.940148   13842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0912 21:30:35.030796   13842 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0912 21:30:35.030824   13842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0912 21:30:35.144522   13842 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0912 21:30:35.144548   13842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0912 21:30:35.191648   13842 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0912 21:30:35.191688   13842 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0912 21:30:35.435800   13842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0912 21:30:35.467895   13842 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0912 21:30:35.467918   13842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0912 21:30:35.684867   13842 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0912 21:30:35.684898   13842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0912 21:30:35.859788   13842 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0912 21:30:35.859822   13842 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0912 21:30:35.932925   13842 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.199703683s)
	I0912 21:30:35.932952   13842 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.150160783s)
	I0912 21:30:35.932956   13842 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
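The configmap edit completed above injects a static host entry into CoreDNS so that host.minikube.internal resolves to the host-side bridge address (192.168.39.1 in this run). Reconstructed from the sed expression in the command, the stanza added to the Corefile looks roughly like this; it is a sketch of the intended result, not output captured from the cluster:

	hosts {
	   192.168.39.1 host.minikube.internal
	   fallthrough
	}

If needed, the live Corefile can be checked with kubectl -n kube-system get configmap coredns -o yaml (assuming a kubeconfig pointed at this cluster).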
	I0912 21:30:35.933005   13842 main.go:141] libmachine: Making call to close driver server
	I0912 21:30:35.933018   13842 main.go:141] libmachine: (addons-694635) Calling .Close
	I0912 21:30:35.933032   13842 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.140722926s)
	I0912 21:30:35.933074   13842 main.go:141] libmachine: Making call to close driver server
	I0912 21:30:35.933089   13842 main.go:141] libmachine: (addons-694635) Calling .Close
	I0912 21:30:35.933413   13842 main.go:141] libmachine: (addons-694635) DBG | Closing plugin on server side
	I0912 21:30:35.933461   13842 main.go:141] libmachine: (addons-694635) DBG | Closing plugin on server side
	I0912 21:30:35.933469   13842 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:30:35.933483   13842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:30:35.933492   13842 main.go:141] libmachine: Making call to close driver server
	I0912 21:30:35.933500   13842 main.go:141] libmachine: (addons-694635) Calling .Close
	I0912 21:30:35.933505   13842 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:30:35.933515   13842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:30:35.933523   13842 main.go:141] libmachine: Making call to close driver server
	I0912 21:30:35.933530   13842 main.go:141] libmachine: (addons-694635) Calling .Close
	I0912 21:30:35.933745   13842 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:30:35.933759   13842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:30:35.934193   13842 main.go:141] libmachine: (addons-694635) DBG | Closing plugin on server side
	I0912 21:30:35.934238   13842 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:30:35.934260   13842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:30:35.956608   13842 main.go:141] libmachine: Making call to close driver server
	I0912 21:30:35.956638   13842 main.go:141] libmachine: (addons-694635) Calling .Close
	I0912 21:30:35.956922   13842 main.go:141] libmachine: (addons-694635) DBG | Closing plugin on server side
	I0912 21:30:35.956968   13842 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:30:35.956988   13842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:30:35.992917   13842 pod_ready.go:103] pod "coredns-7c65d6cfc9-pcjz8" in "kube-system" namespace has status "Ready":"False"
	I0912 21:30:36.227480   13842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0912 21:30:36.438013   13842 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-694635" context rescaled to 1 replicas
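The rescale above trims the coredns Deployment down to a single replica for this single-node cluster. The equivalent manual step would be roughly the following (a sketch; it assumes kubectl is pointed at the same context):

	kubectl --context addons-694635 -n kube-system scale deployment coredns --replicas=1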
	I0912 21:30:37.249809   13842 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.413260898s)
	I0912 21:30:37.249867   13842 main.go:141] libmachine: Making call to close driver server
	I0912 21:30:37.249888   13842 main.go:141] libmachine: (addons-694635) Calling .Close
	I0912 21:30:37.250165   13842 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:30:37.250185   13842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:30:37.250200   13842 main.go:141] libmachine: Making call to close driver server
	I0912 21:30:37.250209   13842 main.go:141] libmachine: (addons-694635) Calling .Close
	I0912 21:30:37.250454   13842 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:30:37.250474   13842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:30:38.021956   13842 pod_ready.go:103] pod "coredns-7c65d6cfc9-pcjz8" in "kube-system" namespace has status "Ready":"False"
	I0912 21:30:38.703385   13842 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.798660977s)
	I0912 21:30:38.703445   13842 main.go:141] libmachine: Making call to close driver server
	I0912 21:30:38.703459   13842 main.go:141] libmachine: (addons-694635) Calling .Close
	I0912 21:30:38.703792   13842 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:30:38.703811   13842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:30:38.703811   13842 main.go:141] libmachine: (addons-694635) DBG | Closing plugin on server side
	I0912 21:30:38.703820   13842 main.go:141] libmachine: Making call to close driver server
	I0912 21:30:38.703827   13842 main.go:141] libmachine: (addons-694635) Calling .Close
	I0912 21:30:38.704152   13842 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:30:38.704197   13842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:30:38.704207   13842 main.go:141] libmachine: (addons-694635) DBG | Closing plugin on server side
	I0912 21:30:39.023100   13842 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.110032197s)
	I0912 21:30:39.023152   13842 main.go:141] libmachine: Making call to close driver server
	I0912 21:30:39.023164   13842 main.go:141] libmachine: (addons-694635) Calling .Close
	I0912 21:30:39.023211   13842 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.082101447s)
	I0912 21:30:39.023263   13842 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (4.807232005s)
	I0912 21:30:39.023297   13842 main.go:141] libmachine: Making call to close driver server
	I0912 21:30:39.023313   13842 main.go:141] libmachine: (addons-694635) Calling .Close
	I0912 21:30:39.023273   13842 main.go:141] libmachine: Making call to close driver server
	I0912 21:30:39.023386   13842 main.go:141] libmachine: (addons-694635) Calling .Close
	I0912 21:30:39.023407   13842 main.go:141] libmachine: (addons-694635) DBG | Closing plugin on server side
	I0912 21:30:39.023426   13842 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:30:39.023454   13842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:30:39.023474   13842 main.go:141] libmachine: Making call to close driver server
	I0912 21:30:39.023498   13842 main.go:141] libmachine: (addons-694635) Calling .Close
	I0912 21:30:39.023509   13842 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:30:39.023525   13842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:30:39.023536   13842 main.go:141] libmachine: Making call to close driver server
	I0912 21:30:39.023545   13842 main.go:141] libmachine: (addons-694635) Calling .Close
	I0912 21:30:39.023642   13842 main.go:141] libmachine: (addons-694635) DBG | Closing plugin on server side
	I0912 21:30:39.023673   13842 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:30:39.023685   13842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:30:39.023689   13842 main.go:141] libmachine: (addons-694635) DBG | Closing plugin on server side
	I0912 21:30:39.023693   13842 main.go:141] libmachine: Making call to close driver server
	I0912 21:30:39.023701   13842 main.go:141] libmachine: (addons-694635) Calling .Close
	I0912 21:30:39.023736   13842 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:30:39.023747   13842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:30:39.025326   13842 main.go:141] libmachine: (addons-694635) DBG | Closing plugin on server side
	I0912 21:30:39.025330   13842 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:30:39.025342   13842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:30:39.025481   13842 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:30:39.025492   13842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:30:39.139026   13842 main.go:141] libmachine: Making call to close driver server
	I0912 21:30:39.139049   13842 main.go:141] libmachine: (addons-694635) Calling .Close
	I0912 21:30:39.139382   13842 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:30:39.139403   13842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:30:39.139432   13842 main.go:141] libmachine: (addons-694635) DBG | Closing plugin on server side
	I0912 21:30:40.261224   13842 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0912 21:30:40.261266   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHHostname
	I0912 21:30:40.264217   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:40.264583   13842 main.go:141] libmachine: (addons-694635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:43:77", ip: ""} in network mk-addons-694635: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:01 +0000 UTC Type:0 Mac:52:54:00:6b:43:77 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-694635 Clientid:01:52:54:00:6b:43:77}
	I0912 21:30:40.264613   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined IP address 192.168.39.67 and MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:40.264808   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHPort
	I0912 21:30:40.265022   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHKeyPath
	I0912 21:30:40.265208   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHUsername
	I0912 21:30:40.265354   13842 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/addons-694635/id_rsa Username:docker}
	I0912 21:30:40.483338   13842 pod_ready.go:103] pod "coredns-7c65d6cfc9-pcjz8" in "kube-system" namespace has status "Ready":"False"
	I0912 21:30:40.539106   13842 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0912 21:30:40.689076   13842 addons.go:234] Setting addon gcp-auth=true in "addons-694635"
	I0912 21:30:40.689138   13842 host.go:66] Checking if "addons-694635" exists ...
	I0912 21:30:40.689446   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:40.689471   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:40.705390   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43577
	I0912 21:30:40.705838   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:40.706274   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:40.706296   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:40.706632   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:40.707109   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:40.707133   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:40.722882   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45603
	I0912 21:30:40.723304   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:40.723787   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:40.723806   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:40.724121   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:40.724311   13842 main.go:141] libmachine: (addons-694635) Calling .GetState
	I0912 21:30:40.725649   13842 main.go:141] libmachine: (addons-694635) Calling .DriverName
	I0912 21:30:40.725862   13842 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0912 21:30:40.725882   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHHostname
	I0912 21:30:40.728400   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:40.728878   13842 main.go:141] libmachine: (addons-694635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:43:77", ip: ""} in network mk-addons-694635: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:01 +0000 UTC Type:0 Mac:52:54:00:6b:43:77 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-694635 Clientid:01:52:54:00:6b:43:77}
	I0912 21:30:40.728898   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined IP address 192.168.39.67 and MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:40.729103   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHPort
	I0912 21:30:40.729271   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHKeyPath
	I0912 21:30:40.729386   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHUsername
	I0912 21:30:40.729528   13842 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/addons-694635/id_rsa Username:docker}
	I0912 21:30:41.942865   13842 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.657623757s)
	I0912 21:30:41.942920   13842 main.go:141] libmachine: Making call to close driver server
	I0912 21:30:41.942926   13842 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.654910047s)
	I0912 21:30:41.942947   13842 main.go:141] libmachine: Making call to close driver server
	I0912 21:30:41.942963   13842 main.go:141] libmachine: (addons-694635) Calling .Close
	I0912 21:30:41.942980   13842 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.633591683s)
	I0912 21:30:41.942931   13842 main.go:141] libmachine: (addons-694635) Calling .Close
	I0912 21:30:41.943026   13842 main.go:141] libmachine: Making call to close driver server
	I0912 21:30:41.943030   13842 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.335497924s)
	I0912 21:30:41.943040   13842 main.go:141] libmachine: (addons-694635) Calling .Close
	I0912 21:30:41.943062   13842 main.go:141] libmachine: Making call to close driver server
	I0912 21:30:41.943074   13842 main.go:141] libmachine: (addons-694635) Calling .Close
	I0912 21:30:41.943136   13842 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.002958423s)
	W0912 21:30:41.943188   13842 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0912 21:30:41.943217   13842 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.50737724s)
	I0912 21:30:41.943330   13842 main.go:141] libmachine: Making call to close driver server
	I0912 21:30:41.943349   13842 main.go:141] libmachine: (addons-694635) Calling .Close
	I0912 21:30:41.943386   13842 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:30:41.943399   13842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:30:41.943401   13842 main.go:141] libmachine: (addons-694635) DBG | Closing plugin on server side
	I0912 21:30:41.943408   13842 main.go:141] libmachine: (addons-694635) DBG | Closing plugin on server side
	I0912 21:30:41.943418   13842 main.go:141] libmachine: Making call to close driver server
	I0912 21:30:41.943425   13842 main.go:141] libmachine: (addons-694635) DBG | Closing plugin on server side
	I0912 21:30:41.943429   13842 main.go:141] libmachine: (addons-694635) Calling .Close
	I0912 21:30:41.943445   13842 main.go:141] libmachine: (addons-694635) DBG | Closing plugin on server side
	I0912 21:30:41.943457   13842 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:30:41.943467   13842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:30:41.943470   13842 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:30:41.943477   13842 main.go:141] libmachine: Making call to close driver server
	I0912 21:30:41.943479   13842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:30:41.943485   13842 main.go:141] libmachine: (addons-694635) Calling .Close
	I0912 21:30:41.943487   13842 main.go:141] libmachine: Making call to close driver server
	I0912 21:30:41.943487   13842 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:30:41.943494   13842 main.go:141] libmachine: (addons-694635) Calling .Close
	I0912 21:30:41.943496   13842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:30:41.943505   13842 main.go:141] libmachine: Making call to close driver server
	I0912 21:30:41.943512   13842 main.go:141] libmachine: (addons-694635) Calling .Close
	I0912 21:30:41.943221   13842 retry.go:31] will retry after 361.478049ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
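The failure above is an ordering race rather than a broken manifest: the VolumeSnapshotClass object is applied in the same batch as the CRDs that define it, and the API server has not finished establishing the new snapshot.storage.k8s.io CRDs by the time the class is submitted, so kubectl cannot resolve the kind and exits with "ensure CRDs are installed first". The addon manager simply retries (the re-apply at 21:30:42 below uses kubectl apply --force), which succeeds once the CRDs are registered. A manual recovery would look roughly like this sketch, assuming kubectl access and the addon manifests under /etc/kubernetes/addons on the node:

	kubectl wait --for=condition=established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml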
	I0912 21:30:41.943575   13842 main.go:141] libmachine: (addons-694635) DBG | Closing plugin on server side
	I0912 21:30:41.943601   13842 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:30:41.943608   13842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:30:41.943616   13842 main.go:141] libmachine: Making call to close driver server
	I0912 21:30:41.943622   13842 main.go:141] libmachine: (addons-694635) Calling .Close
	I0912 21:30:41.945219   13842 main.go:141] libmachine: (addons-694635) DBG | Closing plugin on server side
	I0912 21:30:41.945224   13842 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:30:41.945234   13842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:30:41.945235   13842 main.go:141] libmachine: (addons-694635) DBG | Closing plugin on server side
	I0912 21:30:41.945249   13842 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:30:41.945260   13842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:30:41.945251   13842 addons.go:475] Verifying addon registry=true in "addons-694635"
	I0912 21:30:41.945434   13842 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:30:41.945436   13842 main.go:141] libmachine: (addons-694635) DBG | Closing plugin on server side
	I0912 21:30:41.945446   13842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:30:41.945457   13842 addons.go:475] Verifying addon ingress=true in "addons-694635"
	I0912 21:30:41.945655   13842 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:30:41.945674   13842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:30:41.945683   13842 addons.go:475] Verifying addon metrics-server=true in "addons-694635"
	I0912 21:30:41.945756   13842 main.go:141] libmachine: (addons-694635) DBG | Closing plugin on server side
	I0912 21:30:41.945793   13842 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:30:41.945806   13842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:30:41.946676   13842 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-694635 service yakd-dashboard -n yakd-dashboard
	
	I0912 21:30:41.946688   13842 out.go:177] * Verifying registry addon...
	I0912 21:30:41.948418   13842 out.go:177] * Verifying ingress addon...
	I0912 21:30:41.949076   13842 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0912 21:30:41.950349   13842 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0912 21:30:41.954743   13842 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0912 21:30:41.954774   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:41.960928   13842 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0912 21:30:41.960949   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:42.305973   13842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0912 21:30:42.467232   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:42.477555   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:42.764449   13842 pod_ready.go:103] pod "coredns-7c65d6cfc9-pcjz8" in "kube-system" namespace has status "Ready":"False"
	I0912 21:30:42.797806   13842 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.570260767s)
	I0912 21:30:42.797869   13842 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.071984177s)
	I0912 21:30:42.797869   13842 main.go:141] libmachine: Making call to close driver server
	I0912 21:30:42.797989   13842 main.go:141] libmachine: (addons-694635) Calling .Close
	I0912 21:30:42.798300   13842 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:30:42.798313   13842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:30:42.798323   13842 main.go:141] libmachine: Making call to close driver server
	I0912 21:30:42.798331   13842 main.go:141] libmachine: (addons-694635) Calling .Close
	I0912 21:30:42.798617   13842 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:30:42.798639   13842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:30:42.798649   13842 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-694635"
	I0912 21:30:42.799295   13842 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0912 21:30:42.800145   13842 out.go:177] * Verifying csi-hostpath-driver addon...
	I0912 21:30:42.801601   13842 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0912 21:30:42.802781   13842 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0912 21:30:42.803047   13842 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0912 21:30:42.803064   13842 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0912 21:30:42.817988   13842 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0912 21:30:42.818009   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:42.900221   13842 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0912 21:30:42.900257   13842 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0912 21:30:42.960615   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:42.960989   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:43.009576   13842 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0912 21:30:43.009605   13842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0912 21:30:43.147089   13842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0912 21:30:43.320966   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:43.453136   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:43.454373   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:43.808102   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:43.953362   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:43.958697   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:44.162942   13842 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.856921696s)
	I0912 21:30:44.163000   13842 main.go:141] libmachine: Making call to close driver server
	I0912 21:30:44.163016   13842 main.go:141] libmachine: (addons-694635) Calling .Close
	I0912 21:30:44.163309   13842 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:30:44.163366   13842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:30:44.163381   13842 main.go:141] libmachine: Making call to close driver server
	I0912 21:30:44.163328   13842 main.go:141] libmachine: (addons-694635) DBG | Closing plugin on server side
	I0912 21:30:44.163393   13842 main.go:141] libmachine: (addons-694635) Calling .Close
	I0912 21:30:44.163848   13842 main.go:141] libmachine: (addons-694635) DBG | Closing plugin on server side
	I0912 21:30:44.164957   13842 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:30:44.164983   13842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:30:44.378590   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:44.427113   13842 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.279974028s)
	I0912 21:30:44.427173   13842 main.go:141] libmachine: Making call to close driver server
	I0912 21:30:44.427193   13842 main.go:141] libmachine: (addons-694635) Calling .Close
	I0912 21:30:44.427495   13842 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:30:44.427544   13842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:30:44.427559   13842 main.go:141] libmachine: Making call to close driver server
	I0912 21:30:44.427568   13842 main.go:141] libmachine: (addons-694635) Calling .Close
	I0912 21:30:44.427499   13842 main.go:141] libmachine: (addons-694635) DBG | Closing plugin on server side
	I0912 21:30:44.427772   13842 main.go:141] libmachine: (addons-694635) DBG | Closing plugin on server side
	I0912 21:30:44.427798   13842 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:30:44.427814   13842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:30:44.429338   13842 addons.go:475] Verifying addon gcp-auth=true in "addons-694635"
	I0912 21:30:44.431064   13842 out.go:177] * Verifying gcp-auth addon...
	I0912 21:30:44.432961   13842 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0912 21:30:44.468784   13842 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0912 21:30:44.468806   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:30:44.469261   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:44.469425   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:44.809517   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:44.936881   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:30:44.953105   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:44.954618   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:44.978466   13842 pod_ready.go:103] pod "coredns-7c65d6cfc9-pcjz8" in "kube-system" namespace has status "Ready":"False"
	I0912 21:30:45.312534   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:45.436603   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:30:45.454472   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:45.458065   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:45.478156   13842 pod_ready.go:98] pod "coredns-7c65d6cfc9-pcjz8" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-12 21:30:45 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-12 21:30:33 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-12 21:30:33 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-12 21:30:33 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-12 21:30:33 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.67 HostIPs:[{IP:192.168.39.67}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-09-12 21:30:33 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-12 21:30:38 +0000 UTC,FinishedAt:2024-09-12 21:30:43 +0000 UTC,ContainerID:cri-o://50b8193e0418edb8169cdabdeb19b0c793d761211e7e0547b53bda047e46367d,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://50b8193e0418edb8169cdabdeb19b0c793d761211e7e0547b53bda047e46367d Started:0xc0028a6700 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc0009cbb20} {Name:kube-api-access-r9jtw MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc0009cbb30}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0912 21:30:45.478190   13842 pod_ready.go:82] duration metric: took 11.506167543s for pod "coredns-7c65d6cfc9-pcjz8" in "kube-system" namespace to be "Ready" ...
	E0912 21:30:45.478205   13842 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-7c65d6cfc9-pcjz8" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-12 21:30:45 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-12 21:30:33 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-12 21:30:33 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-12 21:30:33 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-12 21:30:33 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.67 HostIPs:[{IP:192.168.39.67}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-09-12 21:30:33 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-12 21:30:38 +0000 UTC,FinishedAt:2024-09-12 21:30:43 +0000 UTC,ContainerID:cri-o://50b8193e0418edb8169cdabdeb19b0c793d761211e7e0547b53bda047e46367d,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://50b8193e0418edb8169cdabdeb19b0c793d761211e7e0547b53bda047e46367d Started:0xc0028a6700 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc0009cbb20} {Name:kube-api-access-r9jtw MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc0009cbb30}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0912 21:30:45.478217   13842 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-rpsn9" in "kube-system" namespace to be "Ready" ...
	I0912 21:30:45.486926   13842 pod_ready.go:93] pod "coredns-7c65d6cfc9-rpsn9" in "kube-system" namespace has status "Ready":"True"
	I0912 21:30:45.486961   13842 pod_ready.go:82] duration metric: took 8.733099ms for pod "coredns-7c65d6cfc9-rpsn9" in "kube-system" namespace to be "Ready" ...
	I0912 21:30:45.486974   13842 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-694635" in "kube-system" namespace to be "Ready" ...
	I0912 21:30:45.493880   13842 pod_ready.go:93] pod "etcd-addons-694635" in "kube-system" namespace has status "Ready":"True"
	I0912 21:30:45.493917   13842 pod_ready.go:82] duration metric: took 6.934283ms for pod "etcd-addons-694635" in "kube-system" namespace to be "Ready" ...
	I0912 21:30:45.493933   13842 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-694635" in "kube-system" namespace to be "Ready" ...
	I0912 21:30:45.500231   13842 pod_ready.go:93] pod "kube-apiserver-addons-694635" in "kube-system" namespace has status "Ready":"True"
	I0912 21:30:45.500262   13842 pod_ready.go:82] duration metric: took 6.319725ms for pod "kube-apiserver-addons-694635" in "kube-system" namespace to be "Ready" ...
	I0912 21:30:45.500276   13842 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-694635" in "kube-system" namespace to be "Ready" ...
	I0912 21:30:45.508921   13842 pod_ready.go:93] pod "kube-controller-manager-addons-694635" in "kube-system" namespace has status "Ready":"True"
	I0912 21:30:45.508952   13842 pod_ready.go:82] duration metric: took 8.661364ms for pod "kube-controller-manager-addons-694635" in "kube-system" namespace to be "Ready" ...
	I0912 21:30:45.508966   13842 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4hcfx" in "kube-system" namespace to be "Ready" ...
	I0912 21:30:45.807845   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:45.875520   13842 pod_ready.go:93] pod "kube-proxy-4hcfx" in "kube-system" namespace has status "Ready":"True"
	I0912 21:30:45.875543   13842 pod_ready.go:82] duration metric: took 366.569724ms for pod "kube-proxy-4hcfx" in "kube-system" namespace to be "Ready" ...
	I0912 21:30:45.875552   13842 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-694635" in "kube-system" namespace to be "Ready" ...
	I0912 21:30:45.936184   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:30:45.953664   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:45.955104   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:46.275644   13842 pod_ready.go:93] pod "kube-scheduler-addons-694635" in "kube-system" namespace has status "Ready":"True"
	I0912 21:30:46.275666   13842 pod_ready.go:82] duration metric: took 400.107483ms for pod "kube-scheduler-addons-694635" in "kube-system" namespace to be "Ready" ...
	I0912 21:30:46.275674   13842 pod_ready.go:39] duration metric: took 12.309938834s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
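
Note: the pod_ready lines above wait for each system-critical pod to report the Ready condition, and skip pods whose phase is Succeeded (such as the completed coredns replica logged earlier). A minimal sketch of that readiness check, assuming client-go and the kubeconfig path from the log; the namespace and pod name are just examples.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady mirrors the check in the log: Succeeded pods are skipped,
// otherwise the pod counts as ready only when its Ready condition is True.
func podIsReady(pod *corev1.Pod) bool {
	if pod.Status.Phase == corev1.PodSucceeded {
		return false
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7c65d6cfc9-rpsn9", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("ready:", podIsReady(pod))
}
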
	I0912 21:30:46.275689   13842 api_server.go:52] waiting for apiserver process to appear ...
	I0912 21:30:46.275751   13842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 21:30:46.301756   13842 api_server.go:72] duration metric: took 13.173948128s to wait for apiserver process to appear ...
	I0912 21:30:46.301775   13842 api_server.go:88] waiting for apiserver healthz status ...
	I0912 21:30:46.301792   13842 api_server.go:253] Checking apiserver healthz at https://192.168.39.67:8443/healthz ...
	I0912 21:30:46.305735   13842 api_server.go:279] https://192.168.39.67:8443/healthz returned 200:
	ok
	I0912 21:30:46.306725   13842 api_server.go:141] control plane version: v1.31.1
	I0912 21:30:46.306743   13842 api_server.go:131] duration metric: took 4.962021ms to wait for apiserver health ...
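
Note: the apiserver health wait above is a plain HTTPS GET against /healthz that expects a 200 response with the body "ok". A rough, self-contained sketch of that probe; the address is taken from the log, and skipping TLS verification here merely stands in for the client-certificate auth the real harness uses.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Illustrative only: the sketch skips certificate verification
			// so it stays self-contained.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.39.67:8443/healthz")
	if err != nil {
		fmt.Println("healthz unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
}
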
	I0912 21:30:46.306750   13842 system_pods.go:43] waiting for kube-system pods to appear ...
	I0912 21:30:46.309045   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:46.436328   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:30:46.454711   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:46.455101   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:46.480691   13842 system_pods.go:59] 18 kube-system pods found
	I0912 21:30:46.480719   13842 system_pods.go:61] "coredns-7c65d6cfc9-rpsn9" [cb2ce549-2d5c-45ec-a46d-562d4acd82ea] Running
	I0912 21:30:46.480728   13842 system_pods.go:61] "csi-hostpath-attacher-0" [a560e36c-e029-47d5-95b8-be2420d7df22] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0912 21:30:46.480735   13842 system_pods.go:61] "csi-hostpath-resizer-0" [0d9f13f4-8ae3-49fb-91d2-588c2a5103b8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0912 21:30:46.480742   13842 system_pods.go:61] "csi-hostpathplugin-kdtz6" [88fdf5ba-c8ac-455b-ae75-dbdecf76e19b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0912 21:30:46.480746   13842 system_pods.go:61] "etcd-addons-694635" [9a285fb7-743e-4e27-a017-524fb6ed02a4] Running
	I0912 21:30:46.480750   13842 system_pods.go:61] "kube-apiserver-addons-694635" [613a8945-2f24-42d9-b005-2ee3a61d6b63] Running
	I0912 21:30:46.480754   13842 system_pods.go:61] "kube-controller-manager-addons-694635" [a73aee0b-e5db-4bfc-a0d7-526c7a9515b3] Running
	I0912 21:30:46.480761   13842 system_pods.go:61] "kube-ingress-dns-minikube" [22649b3c-8428-4122-bf69-ab76864aaa7e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0912 21:30:46.480765   13842 system_pods.go:61] "kube-proxy-4hcfx" [17176328-abc9-4540-ac4c-c63083724812] Running
	I0912 21:30:46.480770   13842 system_pods.go:61] "kube-scheduler-addons-694635" [69be5c79-853a-4fe4-b43c-c332b6276913] Running
	I0912 21:30:46.480775   13842 system_pods.go:61] "metrics-server-84c5f94fbc-v4b7g" [4922d6c5-c4bb-4ec8-a21f-2ca9ba3c4691] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0912 21:30:46.480784   13842 system_pods.go:61] "nvidia-device-plugin-daemonset-n59wh" [2647ba3c-226b-4e7f-bbb9-442fbceab2f4] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0912 21:30:46.480794   13842 system_pods.go:61] "registry-66c9cd494c-7cpwk" [4b56665b-2953-4567-aa4d-49eb198ea1a0] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0912 21:30:46.480800   13842 system_pods.go:61] "registry-proxy-ckz5n" [317b8f58-7fa3-4666-be84-9fcc8574a1f8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0912 21:30:46.480808   13842 system_pods.go:61] "snapshot-controller-56fcc65765-bnf26" [35975eec-fc25-416d-b56e-107978e82e7d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0912 21:30:46.480814   13842 system_pods.go:61] "snapshot-controller-56fcc65765-hmbfj" [171ee08c-156a-49ae-8f7d-7009bc0ac41c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0912 21:30:46.480818   13842 system_pods.go:61] "storage-provisioner" [8f49f988-6d5b-4cb6-a9a4-f15fec6617ee] Running
	I0912 21:30:46.480823   13842 system_pods.go:61] "tiller-deploy-b48cc5f79-p44jv" [493da69b-8cdb-4ada-9f27-2c322311853b] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0912 21:30:46.480830   13842 system_pods.go:74] duration metric: took 174.075986ms to wait for pod list to return data ...
	I0912 21:30:46.480840   13842 default_sa.go:34] waiting for default service account to be created ...
	I0912 21:30:46.676516   13842 default_sa.go:45] found service account: "default"
	I0912 21:30:46.676544   13842 default_sa.go:55] duration metric: took 195.698229ms for default service account to be created ...
	I0912 21:30:46.676555   13842 system_pods.go:116] waiting for k8s-apps to be running ...
	I0912 21:30:46.808312   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:46.882566   13842 system_pods.go:86] 18 kube-system pods found
	I0912 21:30:46.882593   13842 system_pods.go:89] "coredns-7c65d6cfc9-rpsn9" [cb2ce549-2d5c-45ec-a46d-562d4acd82ea] Running
	I0912 21:30:46.882601   13842 system_pods.go:89] "csi-hostpath-attacher-0" [a560e36c-e029-47d5-95b8-be2420d7df22] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0912 21:30:46.882607   13842 system_pods.go:89] "csi-hostpath-resizer-0" [0d9f13f4-8ae3-49fb-91d2-588c2a5103b8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0912 21:30:46.882615   13842 system_pods.go:89] "csi-hostpathplugin-kdtz6" [88fdf5ba-c8ac-455b-ae75-dbdecf76e19b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0912 21:30:46.882619   13842 system_pods.go:89] "etcd-addons-694635" [9a285fb7-743e-4e27-a017-524fb6ed02a4] Running
	I0912 21:30:46.882624   13842 system_pods.go:89] "kube-apiserver-addons-694635" [613a8945-2f24-42d9-b005-2ee3a61d6b63] Running
	I0912 21:30:46.882627   13842 system_pods.go:89] "kube-controller-manager-addons-694635" [a73aee0b-e5db-4bfc-a0d7-526c7a9515b3] Running
	I0912 21:30:46.882632   13842 system_pods.go:89] "kube-ingress-dns-minikube" [22649b3c-8428-4122-bf69-ab76864aaa7e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0912 21:30:46.882638   13842 system_pods.go:89] "kube-proxy-4hcfx" [17176328-abc9-4540-ac4c-c63083724812] Running
	I0912 21:30:46.882642   13842 system_pods.go:89] "kube-scheduler-addons-694635" [69be5c79-853a-4fe4-b43c-c332b6276913] Running
	I0912 21:30:46.882647   13842 system_pods.go:89] "metrics-server-84c5f94fbc-v4b7g" [4922d6c5-c4bb-4ec8-a21f-2ca9ba3c4691] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0912 21:30:46.882653   13842 system_pods.go:89] "nvidia-device-plugin-daemonset-n59wh" [2647ba3c-226b-4e7f-bbb9-442fbceab2f4] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0912 21:30:46.882659   13842 system_pods.go:89] "registry-66c9cd494c-7cpwk" [4b56665b-2953-4567-aa4d-49eb198ea1a0] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0912 21:30:46.882665   13842 system_pods.go:89] "registry-proxy-ckz5n" [317b8f58-7fa3-4666-be84-9fcc8574a1f8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0912 21:30:46.882670   13842 system_pods.go:89] "snapshot-controller-56fcc65765-bnf26" [35975eec-fc25-416d-b56e-107978e82e7d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0912 21:30:46.882678   13842 system_pods.go:89] "snapshot-controller-56fcc65765-hmbfj" [171ee08c-156a-49ae-8f7d-7009bc0ac41c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0912 21:30:46.882683   13842 system_pods.go:89] "storage-provisioner" [8f49f988-6d5b-4cb6-a9a4-f15fec6617ee] Running
	I0912 21:30:46.882691   13842 system_pods.go:89] "tiller-deploy-b48cc5f79-p44jv" [493da69b-8cdb-4ada-9f27-2c322311853b] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0912 21:30:46.882697   13842 system_pods.go:126] duration metric: took 206.137533ms to wait for k8s-apps to be running ...
	I0912 21:30:46.882703   13842 system_svc.go:44] waiting for kubelet service to be running ....
	I0912 21:30:46.882743   13842 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 21:30:46.925829   13842 system_svc.go:56] duration metric: took 43.114101ms WaitForService to wait for kubelet
	I0912 21:30:46.925861   13842 kubeadm.go:582] duration metric: took 13.798055946s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
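
Note: the kubelet check above shells out to systemd; "systemctl is-active --quiet kubelet" exits with a non-zero status unless the unit is active. A short sketch of the same check from Go:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Exit status 0 means the kubelet unit is active; any error means it is not.
	if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
		fmt.Println("kubelet is not running:", err)
		return
	}
	fmt.Println("kubelet is active")
}
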
	I0912 21:30:46.925881   13842 node_conditions.go:102] verifying NodePressure condition ...
	I0912 21:30:46.936949   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:30:46.954044   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:46.954652   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:47.077031   13842 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0912 21:30:47.077069   13842 node_conditions.go:123] node cpu capacity is 2
	I0912 21:30:47.077086   13842 node_conditions.go:105] duration metric: took 151.197367ms to run NodePressure ...
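
Note: the NodePressure step above reads node capacity (ephemeral storage and CPU, as reported just before this line) and inspects the node's pressure conditions. A minimal sketch of one way to do that inspection, again assuming client-go and the kubeconfig path from the log:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
		for _, c := range n.Status.Conditions {
			// Report the pressure conditions that NodePressure verification cares about.
			if c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure || c.Type == corev1.NodePIDPressure {
				fmt.Printf("  %s=%s\n", c.Type, c.Status)
			}
		}
	}
}
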
	I0912 21:30:47.077102   13842 start.go:241] waiting for startup goroutines ...
	I0912 21:30:47.306659   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:47.436922   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:30:47.454133   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:47.455284   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:47.807878   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:47.936979   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:30:47.954401   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:47.955301   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:48.308026   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:48.436963   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:30:48.456522   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:48.457189   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:48.807641   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:49.086497   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:49.086504   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:30:49.087121   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:49.307899   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:49.436969   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:30:49.452710   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:49.455147   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:49.808000   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:49.940753   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:30:49.971990   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:49.972275   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:50.306737   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:50.436059   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:30:50.452909   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:50.455902   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:50.807091   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:50.935993   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:30:50.953464   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:50.954524   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:51.308257   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:51.436479   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:30:51.452352   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:51.453795   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:51.807739   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:51.936798   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:30:51.953151   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:51.955301   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:52.307184   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:52.436742   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:30:52.452578   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:52.454290   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:52.808168   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:52.936339   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:30:52.953730   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:52.954765   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:53.307714   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:53.438307   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:30:53.454049   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:53.454999   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:53.809141   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:53.937475   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:30:53.953075   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:53.956110   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:54.309453   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:54.437498   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:30:54.452997   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:54.454232   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:54.808290   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:54.937121   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:30:54.953554   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:54.954933   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:55.308403   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:55.436189   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:30:55.453910   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:55.455288   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:55.808688   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:55.936880   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:30:55.953026   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:55.954088   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:56.307678   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:56.438816   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:30:56.453756   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:56.454145   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:56.806670   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:56.938510   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:30:56.953471   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:56.956690   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:57.307668   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:57.436695   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:30:57.456044   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:57.456392   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:57.808216   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:57.936313   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:30:57.953978   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:57.954372   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:58.307798   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:58.437125   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:30:58.454751   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:58.457211   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:58.807968   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:58.937010   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:30:58.953141   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:58.959276   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:59.308291   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:59.436266   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:30:59.453642   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:59.455378   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:59.808750   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:59.937681   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:30:59.955468   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:59.955848   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:00.308635   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:00.436913   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:00.453130   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:00.454282   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:00.807146   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:00.936739   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:00.953015   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:00.954765   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:01.306985   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:01.436195   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:01.453123   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:01.454341   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:01.807013   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:01.936537   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:01.952370   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:01.954597   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:02.307157   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:02.436510   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:02.452446   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:02.454782   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:02.807320   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:02.983700   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:02.983759   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:02.984366   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:03.307411   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:03.436395   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:03.453271   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:03.454447   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:03.807454   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:03.936777   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:03.952668   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:03.955100   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:04.307745   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:04.436831   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:04.452778   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:04.455238   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:04.807569   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:04.936849   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:04.953099   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:04.955331   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:05.307263   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:05.436369   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:05.455274   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:05.455523   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:05.807911   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:05.936890   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:05.953011   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:05.954859   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:06.308088   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:06.436094   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:06.453015   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:06.454185   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:06.807536   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:06.937265   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:07.294221   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:07.294459   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:07.394402   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:07.436598   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:07.452707   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:07.454367   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:07.807204   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:07.936209   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:07.953204   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:07.954372   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:08.307069   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:08.436533   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:08.452844   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:08.456371   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:08.807416   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:08.936870   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:08.952721   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:08.954434   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:09.307128   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:09.436768   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:09.452696   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:09.454244   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:09.806900   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:09.936202   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:09.952947   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:09.954077   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:10.310715   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:10.436442   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:10.453775   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:10.454308   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:10.807926   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:10.936446   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:10.952829   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:10.954777   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:11.307638   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:11.437017   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:11.455266   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:11.455579   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:11.808062   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:11.936788   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:11.953110   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:11.955323   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:12.309018   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:12.437559   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:12.452853   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:12.455591   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:12.807821   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:12.936153   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:12.952946   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:12.955049   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:13.308125   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:13.436685   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:13.453405   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:13.454409   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:13.808343   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:13.936831   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:13.953008   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:13.955615   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:14.307410   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:14.439286   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:14.460392   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:14.461660   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:14.808029   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:14.937360   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:14.953551   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:14.955229   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:15.308853   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:15.802413   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:15.802546   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:15.802929   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:15.806810   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:15.935781   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:15.953409   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:15.954622   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:16.307574   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:16.436906   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:16.454204   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:16.454314   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:16.807151   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:16.936285   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:16.954876   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:16.954961   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:17.308273   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:17.436690   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:17.452851   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:17.454581   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:17.808378   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:17.937233   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:17.953506   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:17.954633   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:18.307978   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:18.438381   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:18.452394   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:18.454983   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:18.808450   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:18.937057   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:18.954873   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:18.954917   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:19.307860   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:19.443523   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:19.451685   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:19.454121   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:19.808677   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:19.942749   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:19.954209   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:19.955400   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:20.308312   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:20.436764   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:20.453650   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:20.455934   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:20.809185   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:20.937034   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:20.953356   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:20.954469   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:21.306918   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:21.436565   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:21.452318   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:21.454075   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:21.807969   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:21.936459   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:21.952911   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:21.954462   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:22.308342   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:22.436293   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:22.454954   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:22.455186   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:22.807592   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:23.028341   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:23.028457   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:23.028520   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:23.307479   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:23.436556   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:23.453994   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:23.454062   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:23.807759   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:23.936678   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:23.953231   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:23.954392   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:24.307358   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:24.436892   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:24.453479   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:24.455733   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:24.807681   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:24.936504   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:24.952491   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:24.955015   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:25.307494   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:25.437838   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:25.454660   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:25.455196   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:25.806376   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:26.169088   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:26.169141   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:26.169576   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:26.308047   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:26.438798   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:26.454085   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:26.454874   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:26.808511   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:26.936179   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:26.953217   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:26.955020   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:27.307867   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:27.436967   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:27.453064   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:27.454221   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:27.808241   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:27.936433   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:27.954010   13842 kapi.go:107] duration metric: took 46.004930815s to wait for kubernetes.io/minikube-addons=registry ...
	I0912 21:31:27.954819   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:28.308179   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:28.436505   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:28.455109   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:28.807480   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:28.936668   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:28.954245   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:29.306669   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:29.436989   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:29.455085   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:29.817843   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:29.937454   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:29.956102   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:30.308652   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:30.437396   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:30.454614   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:30.807604   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:30.936840   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:30.954423   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:31.308447   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:31.437404   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:31.454276   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:31.807324   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:31.936952   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:31.954363   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:32.306415   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:32.437242   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:32.454652   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:32.807329   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:32.936869   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:32.954340   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:33.307184   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:33.436873   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:33.454653   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:33.810231   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:33.937220   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:33.954601   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:34.307392   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:34.958058   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:34.958295   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:34.958411   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:34.961259   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:34.961741   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:35.307464   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:35.437024   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:35.455092   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:35.808111   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:35.937085   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:35.955030   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:36.307832   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:36.438403   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:36.457831   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:36.808182   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:36.939647   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:36.955818   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:37.307778   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:37.436832   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:37.454110   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:37.807859   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:37.936514   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:37.955016   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:38.307838   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:38.436456   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:38.454686   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:38.808567   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:38.941164   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:38.956269   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:39.307122   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:39.437203   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:39.454703   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:40.078488   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:40.079334   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:40.079654   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:40.307212   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:40.436878   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:40.538252   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:40.807485   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:40.938491   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:40.955935   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:41.308214   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:41.436295   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:41.454533   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:41.807705   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:41.943420   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:41.954960   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:42.308025   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:42.439095   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:42.454338   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:42.807582   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:42.937122   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:42.955099   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:43.406903   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:43.436443   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:43.455666   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:43.807519   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:43.937682   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:43.954323   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:44.306738   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:44.436834   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:44.454320   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:44.815595   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:44.938314   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:44.954595   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:45.308036   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:45.437110   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:45.455327   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:45.807991   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:45.962606   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:45.967707   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:46.307128   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:46.436949   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:46.455549   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:46.807608   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:46.937589   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:46.958969   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:47.307738   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:47.436911   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:47.454432   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:47.811530   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:47.936953   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:47.955680   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:48.308202   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:48.437342   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:48.456109   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:48.815410   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:48.936379   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:48.955189   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:49.307918   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:49.436235   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:49.454487   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:49.812324   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:49.936703   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:49.954166   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:50.308053   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:50.437110   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:50.455802   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:50.808329   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:50.936571   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:50.955407   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:51.307733   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:51.438936   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:51.474999   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:51.807267   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:51.937095   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:51.955402   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:52.307348   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:52.436276   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:52.455029   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:52.807657   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:52.937207   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:52.954953   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:53.307507   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:53.437088   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:53.454370   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:53.807469   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:53.937040   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:53.954745   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:54.307579   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:54.437891   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:54.757207   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:54.809668   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:54.937739   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:54.958776   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:55.307785   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:55.436060   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:55.454674   13842 kapi.go:107] duration metric: took 1m13.504323658s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0912 21:31:55.807214   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:55.936450   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:56.308210   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:56.528172   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:56.807634   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:56.936775   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:57.307995   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:57.436434   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:57.817862   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:57.936850   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:58.307245   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:58.436887   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:58.808853   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:58.936774   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:59.307234   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:59.436533   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:59.808299   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:59.935885   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:00.307456   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:00.437156   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:00.964683   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:00.965821   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:01.312456   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:01.436422   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:01.808885   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:01.937181   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:02.318607   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:02.437876   13842 kapi.go:107] duration metric: took 1m18.004909184s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0912 21:32:02.439347   13842 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-694635 cluster.
	I0912 21:32:02.440699   13842 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0912 21:32:02.441821   13842 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0912 21:32:02.807994   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:03.308094   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:03.808683   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:04.307312   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:04.808877   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:05.308455   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:05.808430   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:06.316091   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:06.808681   13842 kapi.go:107] duration metric: took 1m24.005897654s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0912 21:32:06.810775   13842 out.go:177] * Enabled addons: nvidia-device-plugin, default-storageclass, ingress-dns, storage-provisioner, cloud-spanner, helm-tiller, storage-provisioner-rancher, metrics-server, inspektor-gadget, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0912 21:32:06.812317   13842 addons.go:510] duration metric: took 1m33.684465733s for enable addons: enabled=[nvidia-device-plugin default-storageclass ingress-dns storage-provisioner cloud-spanner helm-tiller storage-provisioner-rancher metrics-server inspektor-gadget yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0912 21:32:06.812359   13842 start.go:246] waiting for cluster config update ...
	I0912 21:32:06.812380   13842 start.go:255] writing updated cluster config ...
	I0912 21:32:06.812657   13842 ssh_runner.go:195] Run: rm -f paused
	I0912 21:32:06.863917   13842 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0912 21:32:06.865782   13842 out.go:177] * Done! kubectl is now configured to use "addons-694635" cluster and "default" namespace by default
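	For reference, the gcp-auth messages above say a pod can opt out of the credential mount by carrying the `gcp-auth-skip-secret` label. A minimal sketch of doing that against this cluster, assuming the gcr.io/k8s-minikube/busybox image used elsewhere in this run (the pod name skip-auth-demo and the sleep command are illustrative only):

	    kubectl --context addons-694635 run skip-auth-demo \
	      --image=gcr.io/k8s-minikube/busybox --restart=Never \
	      --labels=gcp-auth-skip-secret=true -- sleep 300

	A pod created with that label should be skipped by the gcp-auth webhook, while pods without it get the credentials mounted as described above; per the same messages, pods that already existed would only pick up the mount after being recreated or after rerunning the addon enable with --refresh.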
	
	
	==> CRI-O <==
	Sep 12 21:41:22 addons-694635 crio[662]: time="2024-09-12 21:41:22.539933774Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=42ac9b8f-a30b-46bb-aba4-1cff02973e4d name=/runtime.v1.RuntimeService/Version
	Sep 12 21:41:22 addons-694635 crio[662]: time="2024-09-12 21:41:22.541159264Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=870f095b-a532-4764-9555-6e9650371f6b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 21:41:22 addons-694635 crio[662]: time="2024-09-12 21:41:22.542257743Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726177282542227791,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:571627,},InodesUsed:&UInt64Value{Value:195,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=870f095b-a532-4764-9555-6e9650371f6b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 21:41:22 addons-694635 crio[662]: time="2024-09-12 21:41:22.542779465Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8f95113b-c3ac-4c31-92d4-15782df3a619 name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 21:41:22 addons-694635 crio[662]: time="2024-09-12 21:41:22.542834612Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8f95113b-c3ac-4c31-92d4-15782df3a619 name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 21:41:22 addons-694635 crio[662]: time="2024-09-12 21:41:22.543200724Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:15f49cc7f3e63d860a0b154ce1d0a027f105c70027b67a50ab5d73a13191309a,PodSandboxId:9d3e688e943f8b1412681f72bcbb2d49d4d9a3e4a04b3cac9a3ab31dca0efc68,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726177277424664218,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6d172e45-acae-4863-b4f1-7cf6c870a3d8,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f85e7a04ed804c63db5416291ded74c5b1ff730eb8b38fdc5afcd02bf3962c0,PodSandboxId:e6676a53e29a74f32152b4f21a44d69224de564af7ba6fb37c675c5cf34d1ea3,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,State:CONTAINER_EXITED,CreatedAt:1726177229275376212,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-delete-pvc-ce6ed7db-1ee2-4cee-8aae-8a13248846f5,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: b77bec0b-6be8-4e74-abdd-41f010f87dee,},Annotations:map[string]string{io.kubernetes.container.
hash: 973dbf55,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:224662c30f37670f4f61f36221a15bb4d6847d38fcb6a9be3d38b6b08f1d6765,PodSandboxId:e71b5d7408e655bb8c96a5d654726777d547179b47272efaaa970adf10a2ee35,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726176721533597537,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-px7q4,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: ec2ec8bf-cb0a-47eb-b117-c3e51f68cafc,},Annotations:map[string]string{io.kubernete
s.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08b47558fe95c85582c7dba39a0d6d3720b7bbfafe1678eac94681c51b92e11d,PodSandboxId:41b5f10d072dc8ce1a63ca1b56ea205df13005006efdcf8800e9a0763f839353,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1726176714899342256,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-lg6xg,io.kubernetes.pod.namespace
: ingress-nginx,io.kubernetes.pod.uid: f65472b6-e81f-4c58-ab81-fccf64b4d231,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:cbfb52a51b0154de55fe552d30a59e9bfc60f381b987e527d0067b5e3efdf493,PodSandboxId:0fc6f924b3914897ccb68df15de8825f3af5357060d2e98ea91e4cac85c89108,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4d
e17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726176700317205582,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-75vhq,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e473a3e1-2d2f-4981-993e-47902c4c573c,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bd29448ea314df05ee9e96a683c055a9f7ce799e6b86e7d531105e4981c5df9,PodSandboxId:d4d9cc832e450785d0e1b4460e85a8a3a592d8778caa1c00cbdaf238b2d5e5e6,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha
256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726176700177295239,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-gf4cr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9f8be3b2-df3b-4d54-9d3f-f37cb358b701,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01c0d8468e1a5daad3c86161040af5d9affffdd5c20705a3f71d2903c6243d96,PodSandboxId:f1b6fca0a1b4a528f24874cf3deb296ed28cf61228310af6f8b71a38b1bc2f1c,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry
.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726176691385084595,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-v4b7g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4922d6c5-c4bb-4ec8-a21f-2ca9ba3c4691,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8e7a9620862f66da335f7169bc5debc30bd88b39a3bfcb0b132ef4bdf427fe1,PodSandboxId:b263479a313d827
1d5f494493f10fbf66f9c4e0ae953d2c467999a339212b5a4,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:08dc5a48792f971b401d3758d4f37fd4af18aa2881668d65fa2c0b3bc61d7af4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38c5e506fa551ba5a1812dff63585e44b6c532dd4984b96f90944730f1c6e5c2,State:CONTAINER_EXITED,CreatedAt:1726176686682766184,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-ckz5n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 317b8f58-7fa3-4666-be84-9fcc8574a1f8,},Annotations:map[string]string{io.kubernetes.container.hash: c90bc829,io.kubernetes.container.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.
pod.terminationGracePeriod: 30,},},&Container{Id:a1bc0d0072155af81018aae93f1bf5c625978cf2c1b79bd1e154639e5c2ed7ce,PodSandboxId:035b934dd4d98ea13610d7abe155351566b81a594df2aae6270c98116336712b,Metadata:&ContainerMetadata{Name:registry,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/registry@sha256:5e8c7f954d64eb89a98a3f84b6dd1e1f4a9cf3d25e41575dd0a96d3e3363cba7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:75ef5b734af47dc41ff2fb442f287ee08c7da31dddb3759616a8f693f0f346a0,State:CONTAINER_EXITED,CreatedAt:1726176673158855173,Labels:map[string]string{io.kubernetes.container.name: registry,io.kubernetes.pod.name: registry-66c9cd494c-7cpwk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b56665b-2953-4567-aa4d-49eb198ea1a0,},Annotations:map[string]string{io.kubernetes.container.hash: 49fa49ac,io.kubernetes.container.ports: [{\"containerPort\":5000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6de976d4d55c054733ae5270b7c84bfb4c238d6df44ac10ca7189e7a208c59b6,PodSandboxId:faf64bbeb7b913f752b7c78321a6066991f8bc23c3f0e09516213ac94f9c5b6e,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1726176649611045902,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22649b3c-8428-4122-bf69-ab76864aaa7e,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\
"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c63491974a86dd1007fc9980bfe0086d0dc3bf4ff8c0c3f310a5cb87fbb4ac38,PodSandboxId:bb6d26e8124017f968cdbd7d1e9d6dc8f51c932a1d588df39950c0a71e8dea66,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726176640283421177,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f49f988-6d5b-4cb6-a9a4-f15fec6617ee,},Annotations:map[string]string{io.kuber
netes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a9fbfbdc25792944bc7f0738f91a9c4ca524f80d4c4ef8065875105ad68d91b,PodSandboxId:52798c65c361b446fc2229d3223995b78422a1931e70180eea1ef814625c958e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726176637213238542,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rpsn9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb2ce549-2d5c-45ec-a46d-562d4acd82ea,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,
io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa4b1b8007598386d5052a12803d3a47809e7be17f0613791526a0fb975078f1,PodSandboxId:00dce38c65e40888f99c4531feab924cf6ecb4c5171d13070c643118572341c8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726176634905138174,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.na
me: kube-proxy-4hcfx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17176328-abc9-4540-ac4c-c63083724812,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:daff578fb9bc43cd709b1e387f2aa19b6c69701a055733a1e7c09f5d3c4ae546,PodSandboxId:af67c2341731309439d1fb9ac03831771a23928c83b1b1bc5a445be50d7b8c93,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726176623547228673,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons
-694635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b876c14c875d4b53e5c61f3bdb6b61f2,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04006273204a6b5b2c2c50eb039597ab1cad77b9f65e3cdcf9ad2cd2bff6a600,PodSandboxId:8f5fcc20744c5a49bd5023165e3ffeed38dc69330f0025dc1df0829da8a54879,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726176623493601030,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-694635,io.kubern
etes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d6a101dce97ee820fc22e8980fa1bd2,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ad45dbfb61b732019b2446eb37b838159475578e53421516d318b1d17d0d863,PodSandboxId:e1566071cac6e7c7300f541dd70faf52b58c8b1f654f49885e6ff61047017313,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726176623462786884,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-694635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: ca0f4581a8ddd13059907f5e64c9ddcf,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c2e331dbfeadd5401ab6aa1159f9097e7db3bf727f83963a786e4a149b7c5ba,PodSandboxId:8ab56f691eeeaa15cc50d49aeca3a855097da9e407580c18dde97d5293281963,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726176623451400362,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-694635,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 9eeb62b2ef7f8ac332344239844358b7,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8f95113b-c3ac-4c31-92d4-15782df3a619 name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 21:41:22 addons-694635 crio[662]: time="2024-09-12 21:41:22.585266500Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=98a73f05-f2fd-4675-8aeb-cc28ac689408 name=/runtime.v1.RuntimeService/Version
	Sep 12 21:41:22 addons-694635 crio[662]: time="2024-09-12 21:41:22.585345707Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=98a73f05-f2fd-4675-8aeb-cc28ac689408 name=/runtime.v1.RuntimeService/Version
	Sep 12 21:41:22 addons-694635 crio[662]: time="2024-09-12 21:41:22.586591539Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0b741f73-2d96-4758-9bd0-b6cb0896d8b1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 21:41:22 addons-694635 crio[662]: time="2024-09-12 21:41:22.588675024Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726177282588641193,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:571627,},InodesUsed:&UInt64Value{Value:195,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0b741f73-2d96-4758-9bd0-b6cb0896d8b1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 21:41:22 addons-694635 crio[662]: time="2024-09-12 21:41:22.589636987Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=735d83b9-0669-43e5-9841-43cfc4b15859 name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 21:41:22 addons-694635 crio[662]: time="2024-09-12 21:41:22.589705978Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=735d83b9-0669-43e5-9841-43cfc4b15859 name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 21:41:22 addons-694635 crio[662]: time="2024-09-12 21:41:22.590065946Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:15f49cc7f3e63d860a0b154ce1d0a027f105c70027b67a50ab5d73a13191309a,PodSandboxId:9d3e688e943f8b1412681f72bcbb2d49d4d9a3e4a04b3cac9a3ab31dca0efc68,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726177277424664218,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6d172e45-acae-4863-b4f1-7cf6c870a3d8,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f85e7a04ed804c63db5416291ded74c5b1ff730eb8b38fdc5afcd02bf3962c0,PodSandboxId:e6676a53e29a74f32152b4f21a44d69224de564af7ba6fb37c675c5cf34d1ea3,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,State:CONTAINER_EXITED,CreatedAt:1726177229275376212,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-delete-pvc-ce6ed7db-1ee2-4cee-8aae-8a13248846f5,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: b77bec0b-6be8-4e74-abdd-41f010f87dee,},Annotations:map[string]string{io.kubernetes.container.
hash: 973dbf55,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:224662c30f37670f4f61f36221a15bb4d6847d38fcb6a9be3d38b6b08f1d6765,PodSandboxId:e71b5d7408e655bb8c96a5d654726777d547179b47272efaaa970adf10a2ee35,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726176721533597537,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-px7q4,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: ec2ec8bf-cb0a-47eb-b117-c3e51f68cafc,},Annotations:map[string]string{io.kubernete
s.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08b47558fe95c85582c7dba39a0d6d3720b7bbfafe1678eac94681c51b92e11d,PodSandboxId:41b5f10d072dc8ce1a63ca1b56ea205df13005006efdcf8800e9a0763f839353,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1726176714899342256,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-lg6xg,io.kubernetes.pod.namespace
: ingress-nginx,io.kubernetes.pod.uid: f65472b6-e81f-4c58-ab81-fccf64b4d231,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:cbfb52a51b0154de55fe552d30a59e9bfc60f381b987e527d0067b5e3efdf493,PodSandboxId:0fc6f924b3914897ccb68df15de8825f3af5357060d2e98ea91e4cac85c89108,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4d
e17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726176700317205582,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-75vhq,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e473a3e1-2d2f-4981-993e-47902c4c573c,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bd29448ea314df05ee9e96a683c055a9f7ce799e6b86e7d531105e4981c5df9,PodSandboxId:d4d9cc832e450785d0e1b4460e85a8a3a592d8778caa1c00cbdaf238b2d5e5e6,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha
256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726176700177295239,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-gf4cr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9f8be3b2-df3b-4d54-9d3f-f37cb358b701,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01c0d8468e1a5daad3c86161040af5d9affffdd5c20705a3f71d2903c6243d96,PodSandboxId:f1b6fca0a1b4a528f24874cf3deb296ed28cf61228310af6f8b71a38b1bc2f1c,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry
.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726176691385084595,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-v4b7g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4922d6c5-c4bb-4ec8-a21f-2ca9ba3c4691,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8e7a9620862f66da335f7169bc5debc30bd88b39a3bfcb0b132ef4bdf427fe1,PodSandboxId:b263479a313d827
1d5f494493f10fbf66f9c4e0ae953d2c467999a339212b5a4,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:08dc5a48792f971b401d3758d4f37fd4af18aa2881668d65fa2c0b3bc61d7af4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38c5e506fa551ba5a1812dff63585e44b6c532dd4984b96f90944730f1c6e5c2,State:CONTAINER_EXITED,CreatedAt:1726176686682766184,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-ckz5n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 317b8f58-7fa3-4666-be84-9fcc8574a1f8,},Annotations:map[string]string{io.kubernetes.container.hash: c90bc829,io.kubernetes.container.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.
pod.terminationGracePeriod: 30,},},&Container{Id:a1bc0d0072155af81018aae93f1bf5c625978cf2c1b79bd1e154639e5c2ed7ce,PodSandboxId:035b934dd4d98ea13610d7abe155351566b81a594df2aae6270c98116336712b,Metadata:&ContainerMetadata{Name:registry,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/registry@sha256:5e8c7f954d64eb89a98a3f84b6dd1e1f4a9cf3d25e41575dd0a96d3e3363cba7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:75ef5b734af47dc41ff2fb442f287ee08c7da31dddb3759616a8f693f0f346a0,State:CONTAINER_EXITED,CreatedAt:1726176673158855173,Labels:map[string]string{io.kubernetes.container.name: registry,io.kubernetes.pod.name: registry-66c9cd494c-7cpwk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b56665b-2953-4567-aa4d-49eb198ea1a0,},Annotations:map[string]string{io.kubernetes.container.hash: 49fa49ac,io.kubernetes.container.ports: [{\"containerPort\":5000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6de976d4d55c054733ae5270b7c84bfb4c238d6df44ac10ca7189e7a208c59b6,PodSandboxId:faf64bbeb7b913f752b7c78321a6066991f8bc23c3f0e09516213ac94f9c5b6e,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1726176649611045902,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22649b3c-8428-4122-bf69-ab76864aaa7e,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\
"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c63491974a86dd1007fc9980bfe0086d0dc3bf4ff8c0c3f310a5cb87fbb4ac38,PodSandboxId:bb6d26e8124017f968cdbd7d1e9d6dc8f51c932a1d588df39950c0a71e8dea66,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726176640283421177,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f49f988-6d5b-4cb6-a9a4-f15fec6617ee,},Annotations:map[string]string{io.kuber
netes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a9fbfbdc25792944bc7f0738f91a9c4ca524f80d4c4ef8065875105ad68d91b,PodSandboxId:52798c65c361b446fc2229d3223995b78422a1931e70180eea1ef814625c958e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726176637213238542,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rpsn9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb2ce549-2d5c-45ec-a46d-562d4acd82ea,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,
io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa4b1b8007598386d5052a12803d3a47809e7be17f0613791526a0fb975078f1,PodSandboxId:00dce38c65e40888f99c4531feab924cf6ecb4c5171d13070c643118572341c8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726176634905138174,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.na
me: kube-proxy-4hcfx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17176328-abc9-4540-ac4c-c63083724812,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:daff578fb9bc43cd709b1e387f2aa19b6c69701a055733a1e7c09f5d3c4ae546,PodSandboxId:af67c2341731309439d1fb9ac03831771a23928c83b1b1bc5a445be50d7b8c93,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726176623547228673,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons
-694635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b876c14c875d4b53e5c61f3bdb6b61f2,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04006273204a6b5b2c2c50eb039597ab1cad77b9f65e3cdcf9ad2cd2bff6a600,PodSandboxId:8f5fcc20744c5a49bd5023165e3ffeed38dc69330f0025dc1df0829da8a54879,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726176623493601030,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-694635,io.kubern
etes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d6a101dce97ee820fc22e8980fa1bd2,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ad45dbfb61b732019b2446eb37b838159475578e53421516d318b1d17d0d863,PodSandboxId:e1566071cac6e7c7300f541dd70faf52b58c8b1f654f49885e6ff61047017313,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726176623462786884,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-694635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: ca0f4581a8ddd13059907f5e64c9ddcf,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c2e331dbfeadd5401ab6aa1159f9097e7db3bf727f83963a786e4a149b7c5ba,PodSandboxId:8ab56f691eeeaa15cc50d49aeca3a855097da9e407580c18dde97d5293281963,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726176623451400362,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-694635,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 9eeb62b2ef7f8ac332344239844358b7,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=735d83b9-0669-43e5-9841-43cfc4b15859 name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 21:41:22 addons-694635 crio[662]: time="2024-09-12 21:41:22.644408629Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5ee0b2eb-6221-4684-baa9-389da945ad63 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 12 21:41:22 addons-694635 crio[662]: time="2024-09-12 21:41:22.644896857Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:9d3e688e943f8b1412681f72bcbb2d49d4d9a3e4a04b3cac9a3ab31dca0efc68,Metadata:&PodSandboxMetadata{Name:nginx,Uid:6d172e45-acae-4863-b4f1-7cf6c870a3d8,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726177273590138313,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6d172e45-acae-4863-b4f1-7cf6c870a3d8,run: nginx,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-12T21:41:13.277142058Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3c8bd65fe7f95c9eea3e10a3b8142381d5c930123fc40e69f7f953e460050b90,Metadata:&PodSandboxMetadata{Name:busybox,Uid:c9b902b9-bf7a-4ee9-8a7f-6a52a67a2b2f,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726176727435777597,Labels:map[string]string{integration-test
: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c9b902b9-bf7a-4ee9-8a7f-6a52a67a2b2f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-12T21:32:07.126624728Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e71b5d7408e655bb8c96a5d654726777d547179b47272efaaa970adf10a2ee35,Metadata:&PodSandboxMetadata{Name:gcp-auth-89d5ffd79-px7q4,Uid:ec2ec8bf-cb0a-47eb-b117-c3e51f68cafc,Namespace:gcp-auth,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726176708678812921,Labels:map[string]string{app: gcp-auth,io.kubernetes.container.name: POD,io.kubernetes.pod.name: gcp-auth-89d5ffd79-px7q4,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: ec2ec8bf-cb0a-47eb-b117-c3e51f68cafc,kubernetes.io/minikube-addons: gcp-auth,pod-template-hash: 89d5ffd79,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-12T21:30:44.402264516Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:
41b5f10d072dc8ce1a63ca1b56ea205df13005006efdcf8800e9a0763f839353,Metadata:&PodSandboxMetadata{Name:ingress-nginx-controller-bc57996ff-lg6xg,Uid:f65472b6-e81f-4c58-ab81-fccf64b4d231,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726176706020642448,Labels:map[string]string{app.kubernetes.io/component: controller,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-lg6xg,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f65472b6-e81f-4c58-ab81-fccf64b4d231,pod-template-hash: bc57996ff,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-12T21:30:41.797149347Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f1b6fca0a1b4a528f24874cf3deb296ed28cf61228310af6f8b71a38b1bc2f1c,Metadata:&PodSandboxMetadata{Name:metrics-server-84c5f94fbc-v4b7g,Uid:4922d6c5-c4bb-4ec8-a21f-2ca9ba3c4691,Namespace:kube-system,At
tempt:0,},State:SANDBOX_READY,CreatedAt:1726176639621927161,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-84c5f94fbc-v4b7g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4922d6c5-c4bb-4ec8-a21f-2ca9ba3c4691,k8s-app: metrics-server,pod-template-hash: 84c5f94fbc,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-12T21:30:39.309059867Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:bb6d26e8124017f968cdbd7d1e9d6dc8f51c932a1d588df39950c0a71e8dea66,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:8f49f988-6d5b-4cb6-a9a4-f15fec6617ee,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726176639373449569,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f49f988-6d5b-4cb6-a9a4-f15fec6617ee,},Annot
ations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-12T21:30:38.718367469Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:faf64bbeb7b913f752b7c78321a6066991f8bc23c3f0e09516213ac94f9c5b6e,Metadata:&PodSandboxMetadata{Name:kube-ingress-dns-minikube,Uid:22649b3c-8428-4122-bf69-ab76864a
aa7e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726176637933990524,Labels:map[string]string{app: minikube-ingress-dns,app.kubernetes.io/part-of: kube-system,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22649b3c-8428-4122-bf69-ab76864aaa7e,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"minikube-ingress-dns\",\"app.kubernetes.io/part-of\":\"kube-system\"},\"name\":\"kube-ingress-dns-minikube\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"DNS_PORT\",\"value\":\"53\"},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}}],\"image\":\"gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"
minikube-ingress-dns\",\"ports\":[{\"containerPort\":53,\"protocol\":\"UDP\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"minikube-ingress-dns\"}}\n,kubernetes.io/config.seen: 2024-09-12T21:30:37.271679989Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:00dce38c65e40888f99c4531feab924cf6ecb4c5171d13070c643118572341c8,Metadata:&PodSandboxMetadata{Name:kube-proxy-4hcfx,Uid:17176328-abc9-4540-ac4c-c63083724812,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726176634214424945,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-4hcfx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17176328-abc9-4540-ac4c-c63083724812,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-12T21:30:33.606899973Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:52798c65c361b446fc2229d3223995b78422a1931e70180eea1ef814625c
958e,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-rpsn9,Uid:cb2ce549-2d5c-45ec-a46d-562d4acd82ea,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726176634059901834,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-rpsn9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb2ce549-2d5c-45ec-a46d-562d4acd82ea,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-12T21:30:33.750866050Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8f5fcc20744c5a49bd5023165e3ffeed38dc69330f0025dc1df0829da8a54879,Metadata:&PodSandboxMetadata{Name:kube-apiserver-addons-694635,Uid:8d6a101dce97ee820fc22e8980fa1bd2,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726176623285125994,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-addons-694635,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 8d6a101dce97ee820fc22e8980fa1bd2,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.67:8443,kubernetes.io/config.hash: 8d6a101dce97ee820fc22e8980fa1bd2,kubernetes.io/config.seen: 2024-09-12T21:30:22.218344929Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:af67c2341731309439d1fb9ac03831771a23928c83b1b1bc5a445be50d7b8c93,Metadata:&PodSandboxMetadata{Name:kube-scheduler-addons-694635,Uid:b876c14c875d4b53e5c61f3bdb6b61f2,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726176623284033069,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-addons-694635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b876c14c875d4b53e5c61f3bdb6b61f2,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b876c14c875d4b53e5c61f3bdb6b61f2,kubernetes.io/config.seen: 2024-09-12T21:
30:22.218351291Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e1566071cac6e7c7300f541dd70faf52b58c8b1f654f49885e6ff61047017313,Metadata:&PodSandboxMetadata{Name:etcd-addons-694635,Uid:ca0f4581a8ddd13059907f5e64c9ddcf,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726176623281908256,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-addons-694635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca0f4581a8ddd13059907f5e64c9ddcf,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.67:2379,kubernetes.io/config.hash: ca0f4581a8ddd13059907f5e64c9ddcf,kubernetes.io/config.seen: 2024-09-12T21:30:22.218340705Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:8ab56f691eeeaa15cc50d49aeca3a855097da9e407580c18dde97d5293281963,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-addons-694635,Uid:9eeb62b2ef7f8ac332344239844
358b7,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726176623280165428,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-addons-694635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9eeb62b2ef7f8ac332344239844358b7,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 9eeb62b2ef7f8ac332344239844358b7,kubernetes.io/config.seen: 2024-09-12T21:30:22.218346231Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=5ee0b2eb-6221-4684-baa9-389da945ad63 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 12 21:41:22 addons-694635 crio[662]: time="2024-09-12 21:41:22.645649121Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e64f15e9-3bf9-4a1c-af92-2fd54d35a292 name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 21:41:22 addons-694635 crio[662]: time="2024-09-12 21:41:22.645720790Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e64f15e9-3bf9-4a1c-af92-2fd54d35a292 name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 21:41:22 addons-694635 crio[662]: time="2024-09-12 21:41:22.645960652Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:15f49cc7f3e63d860a0b154ce1d0a027f105c70027b67a50ab5d73a13191309a,PodSandboxId:9d3e688e943f8b1412681f72bcbb2d49d4d9a3e4a04b3cac9a3ab31dca0efc68,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726177277424664218,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6d172e45-acae-4863-b4f1-7cf6c870a3d8,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:224662c30f37670f4f61f36221a15bb4d6847d38fcb6a9be3d38b6b08f1d6765,PodSandboxId:e71b5d7408e655bb8c96a5d654726777d547179b47272efaaa970adf10a2ee35,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726176721533597537,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-px7q4,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: ec2ec8bf-cb0a-47eb-b117-c3e51f68cafc,},Annotations:map[string]string{io.kubernetes.container.has
h: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08b47558fe95c85582c7dba39a0d6d3720b7bbfafe1678eac94681c51b92e11d,PodSandboxId:41b5f10d072dc8ce1a63ca1b56ea205df13005006efdcf8800e9a0763f839353,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1726176714899342256,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-lg6xg,io.kubernetes.pod.namespace: ingress-nginx
,io.kubernetes.pod.uid: f65472b6-e81f-4c58-ab81-fccf64b4d231,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:01c0d8468e1a5daad3c86161040af5d9affffdd5c20705a3f71d2903c6243d96,PodSandboxId:f1b6fca0a1b4a528f24874cf3deb296ed28cf61228310af6f8b71a38b1bc2f1c,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb
009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726176691385084595,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-v4b7g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4922d6c5-c4bb-4ec8-a21f-2ca9ba3c4691,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6de976d4d55c054733ae5270b7c84bfb4c238d6df44ac10ca7189e7a208c59b6,PodSandboxId:faf64bbeb7b913f752b7c78321a6066991f8bc23c3f0e09516213ac94f9c5b6e,Metadata:&ContainerMetadata{Name:minik
ube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1726176649611045902,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22649b3c-8428-4122-bf69-ab76864aaa7e,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c63491974a86dd1007fc9980bfe0086d0dc3bf4ff8c0
c3f310a5cb87fbb4ac38,PodSandboxId:bb6d26e8124017f968cdbd7d1e9d6dc8f51c932a1d588df39950c0a71e8dea66,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726176640283421177,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f49f988-6d5b-4cb6-a9a4-f15fec6617ee,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a9fbfbdc25792944bc7f0738f91a9c4ca524f80d4c4ef8065875105
ad68d91b,PodSandboxId:52798c65c361b446fc2229d3223995b78422a1931e70180eea1ef814625c958e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726176637213238542,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rpsn9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb2ce549-2d5c-45ec-a46d-562d4acd82ea,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa4b1b8007598386d5052a12803d3a47809e7be17f0613791526a0fb975078f1,PodSandboxId:00dce38c65e40888f99c4531feab924cf6ecb4c5171d13070c643118572341c8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726176634905138174,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4hcfx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17176328-abc9-4540-ac4c-c63083724812,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernete
s.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:daff578fb9bc43cd709b1e387f2aa19b6c69701a055733a1e7c09f5d3c4ae546,PodSandboxId:af67c2341731309439d1fb9ac03831771a23928c83b1b1bc5a445be50d7b8c93,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726176623547228673,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-694635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b876c14c875d4b53e5c61f3bdb6b61f2,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termi
nationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04006273204a6b5b2c2c50eb039597ab1cad77b9f65e3cdcf9ad2cd2bff6a600,PodSandboxId:8f5fcc20744c5a49bd5023165e3ffeed38dc69330f0025dc1df0829da8a54879,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726176623493601030,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-694635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d6a101dce97ee820fc22e8980fa1bd2,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ad45dbfb61b732019b2446eb37b838159475578e53421516d318b1d17d0d863,PodSandboxId:e1566071cac6e7c7300f541dd70faf52b58c8b1f654f49885e6ff61047017313,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726176623462786884,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-694635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca0f4581a8ddd13059907f5e64c9ddcf,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:5c2e331dbfeadd5401ab6aa1159f9097e7db3bf727f83963a786e4a149b7c5ba,PodSandboxId:8ab56f691eeeaa15cc50d49aeca3a855097da9e407580c18dde97d5293281963,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726176623451400362,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-694635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9eeb62b2ef7f8ac332344239844358b7,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminatio
nGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e64f15e9-3bf9-4a1c-af92-2fd54d35a292 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	15f49cc7f3e63       docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56                              5 seconds ago       Running             nginx                     0                   9d3e688e943f8       nginx
	3f85e7a04ed80       a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824                                                             53 seconds ago      Exited              helper-pod                0                   e6676a53e29a7       helper-pod-delete-pvc-ce6ed7db-1ee2-4cee-8aae-8a13248846f5
	224662c30f376       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                 9 minutes ago       Running             gcp-auth                  0                   e71b5d7408e65       gcp-auth-89d5ffd79-px7q4
	08b47558fe95c       registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6             9 minutes ago       Running             controller                0                   41b5f10d072dc       ingress-nginx-controller-bc57996ff-lg6xg
	cbfb52a51b015       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012   9 minutes ago       Exited              patch                     0                   0fc6f924b3914       ingress-nginx-admission-patch-75vhq
	5bd29448ea314       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012   9 minutes ago       Exited              create                    0                   d4d9cc832e450       ingress-nginx-admission-create-gf4cr
	01c0d8468e1a5       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a        9 minutes ago       Running             metrics-server            0                   f1b6fca0a1b4a       metrics-server-84c5f94fbc-v4b7g
	d8e7a9620862f       gcr.io/k8s-minikube/kube-registry-proxy@sha256:08dc5a48792f971b401d3758d4f37fd4af18aa2881668d65fa2c0b3bc61d7af4              9 minutes ago       Exited              registry-proxy            0                   b263479a313d8       registry-proxy-ckz5n
	6de976d4d55c0       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab             10 minutes ago      Running             minikube-ingress-dns      0                   faf64bbeb7b91       kube-ingress-dns-minikube
	c63491974a86d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             10 minutes ago      Running             storage-provisioner       0                   bb6d26e812401       storage-provisioner
	1a9fbfbdc2579       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             10 minutes ago      Running             coredns                   0                   52798c65c361b       coredns-7c65d6cfc9-rpsn9
	aa4b1b8007598       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                             10 minutes ago      Running             kube-proxy                0                   00dce38c65e40       kube-proxy-4hcfx
	daff578fb9bc4       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                             10 minutes ago      Running             kube-scheduler            0                   af67c23417313       kube-scheduler-addons-694635
	04006273204a6       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                             10 minutes ago      Running             kube-apiserver            0                   8f5fcc20744c5       kube-apiserver-addons-694635
	3ad45dbfb61b7       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                             10 minutes ago      Running             etcd                      0                   e1566071cac6e       etcd-addons-694635
	5c2e331dbfead       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                             10 minutes ago      Running             kube-controller-manager   0                   8ab56f691eeea       kube-controller-manager-addons-694635
	
	
	==> coredns [1a9fbfbdc25792944bc7f0738f91a9c4ca524f80d4c4ef8065875105ad68d91b] <==
	[INFO] 127.0.0.1:55335 - 14088 "HINFO IN 1593280896951240425.6479746786649468559. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009751103s
	[INFO] 10.244.0.8:55681 - 3740 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000376198s
	[INFO] 10.244.0.8:55681 - 64158 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000228403s
	[INFO] 10.244.0.8:37781 - 47777 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000252945s
	[INFO] 10.244.0.8:37781 - 7076 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000147556s
	[INFO] 10.244.0.8:41819 - 26826 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000226016s
	[INFO] 10.244.0.8:41819 - 4808 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00010299s
	[INFO] 10.244.0.8:36322 - 25419 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000092438s
	[INFO] 10.244.0.8:36322 - 47689 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000194058s
	[INFO] 10.244.0.8:52027 - 25674 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000158473s
	[INFO] 10.244.0.8:52027 - 28495 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000211396s
	[INFO] 10.244.0.8:60142 - 5226 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000072599s
	[INFO] 10.244.0.8:60142 - 8039 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000122511s
	[INFO] 10.244.0.8:50355 - 29794 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000050766s
	[INFO] 10.244.0.8:50355 - 16480 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000152532s
	[INFO] 10.244.0.8:38422 - 32454 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000054761s
	[INFO] 10.244.0.8:38422 - 36548 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000127267s
	[INFO] 10.244.0.22:60865 - 4263 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000466894s
	[INFO] 10.244.0.22:39371 - 54519 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000098861s
	[INFO] 10.244.0.22:41806 - 53233 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000138737s
	[INFO] 10.244.0.22:36774 - 22315 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000063899s
	[INFO] 10.244.0.22:57836 - 41268 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000128874s
	[INFO] 10.244.0.22:60541 - 59176 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000161626s
	[INFO] 10.244.0.22:53240 - 37260 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.004441249s
	[INFO] 10.244.0.22:51419 - 44769 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.004780269s
	
	
	==> describe nodes <==
	Name:               addons-694635
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-694635
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f6bc674a17941874d4e5b792b09c1791d30622b8
	                    minikube.k8s.io/name=addons-694635
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_12T21_30_29_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-694635
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 12 Sep 2024 21:30:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-694635
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 12 Sep 2024 21:41:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 12 Sep 2024 21:41:00 +0000   Thu, 12 Sep 2024 21:30:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 12 Sep 2024 21:41:00 +0000   Thu, 12 Sep 2024 21:30:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 12 Sep 2024 21:41:00 +0000   Thu, 12 Sep 2024 21:30:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 12 Sep 2024 21:41:00 +0000   Thu, 12 Sep 2024 21:30:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.67
	  Hostname:    addons-694635
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 13b099cf91f8442286dd9014ad34a5eb
	  System UUID:                13b099cf-91f8-4422-86dd-9014ad34a5eb
	  Boot ID:                    e094f473-e531-4253-a8aa-4f2a067e9156
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m15s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  gcp-auth                    gcp-auth-89d5ffd79-px7q4                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  ingress-nginx               ingress-nginx-controller-bc57996ff-lg6xg    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         10m
	  kube-system                 coredns-7c65d6cfc9-rpsn9                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     10m
	  kube-system                 etcd-addons-694635                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         10m
	  kube-system                 kube-apiserver-addons-694635                250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-addons-694635       200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-4hcfx                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-addons-694635                100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 metrics-server-84c5f94fbc-v4b7g             100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             460Mi (12%)  170Mi (4%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 10m   kube-proxy       
	  Normal  Starting                 10m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  10m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m   kubelet          Node addons-694635 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m   kubelet          Node addons-694635 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m   kubelet          Node addons-694635 status is now: NodeHasSufficientPID
	  Normal  NodeReady                10m   kubelet          Node addons-694635 status is now: NodeReady
	  Normal  RegisteredNode           10m   node-controller  Node addons-694635 event: Registered Node addons-694635 in Controller
	
	
	==> dmesg <==
	[  +5.305611] kauditd_printk_skb: 45 callbacks suppressed
	[Sep12 21:31] kauditd_printk_skb: 6 callbacks suppressed
	[  +7.489065] kauditd_printk_skb: 2 callbacks suppressed
	[  +8.929989] kauditd_printk_skb: 27 callbacks suppressed
	[ +10.095844] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.073125] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.683105] kauditd_printk_skb: 81 callbacks suppressed
	[  +7.372236] kauditd_printk_skb: 32 callbacks suppressed
	[Sep12 21:32] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.856647] kauditd_printk_skb: 16 callbacks suppressed
	[ +29.701828] kauditd_printk_skb: 40 callbacks suppressed
	[Sep12 21:33] kauditd_printk_skb: 30 callbacks suppressed
	[Sep12 21:35] kauditd_printk_skb: 28 callbacks suppressed
	[Sep12 21:37] kauditd_printk_skb: 28 callbacks suppressed
	[Sep12 21:40] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.238101] kauditd_printk_skb: 37 callbacks suppressed
	[  +5.551734] kauditd_printk_skb: 9 callbacks suppressed
	[  +5.393117] kauditd_printk_skb: 30 callbacks suppressed
	[  +5.485586] kauditd_printk_skb: 31 callbacks suppressed
	[  +5.071553] kauditd_printk_skb: 25 callbacks suppressed
	[ +10.586398] kauditd_printk_skb: 11 callbacks suppressed
	[  +8.540652] kauditd_printk_skb: 43 callbacks suppressed
	[Sep12 21:41] kauditd_printk_skb: 26 callbacks suppressed
	[ +14.241626] kauditd_printk_skb: 8 callbacks suppressed
	[  +5.193519] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [3ad45dbfb61b732019b2446eb37b838159475578e53421516d318b1d17d0d863] <==
	{"level":"info","ts":"2024-09-12T21:32:27.442536Z","caller":"traceutil/trace.go:171","msg":"trace[1053230552] transaction","detail":"{read_only:false; response_revision:1240; number_of_response:1; }","duration":"376.793189ms","start":"2024-09-12T21:32:27.065736Z","end":"2024-09-12T21:32:27.442529Z","steps":["trace[1053230552] 'process raft request'  (duration: 376.46254ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-12T21:32:27.442634Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-12T21:32:27.065721Z","time spent":"376.837334ms","remote":"127.0.0.1:46902","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1237 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-09-12T21:32:27.442747Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"263.721986ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-12T21:32:27.442779Z","caller":"traceutil/trace.go:171","msg":"trace[1254642931] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1240; }","duration":"263.753575ms","start":"2024-09-12T21:32:27.179019Z","end":"2024-09-12T21:32:27.442773Z","steps":["trace[1254642931] 'agreement among raft nodes before linearized reading'  (duration: 263.705568ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-12T21:32:27.442948Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"262.756935ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" ","response":"range_response_count:1 size:552"}
	{"level":"info","ts":"2024-09-12T21:32:27.442984Z","caller":"traceutil/trace.go:171","msg":"trace[1578547455] range","detail":"{range_begin:/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io; range_end:; response_count:1; response_revision:1240; }","duration":"262.791577ms","start":"2024-09-12T21:32:27.180186Z","end":"2024-09-12T21:32:27.442977Z","steps":["trace[1578547455] 'agreement among raft nodes before linearized reading'  (duration: 262.70651ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-12T21:40:19.746598Z","caller":"traceutil/trace.go:171","msg":"trace[1981957924] linearizableReadLoop","detail":"{readStateIndex:2127; appliedIndex:2126; }","duration":"133.477931ms","start":"2024-09-12T21:40:19.613083Z","end":"2024-09-12T21:40:19.746561Z","steps":["trace[1981957924] 'read index received'  (duration: 133.318567ms)","trace[1981957924] 'applied index is now lower than readState.Index'  (duration: 158.878µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-12T21:40:19.746825Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"133.6822ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-12T21:40:19.746858Z","caller":"traceutil/trace.go:171","msg":"trace[1975095780] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1989; }","duration":"133.772244ms","start":"2024-09-12T21:40:19.613077Z","end":"2024-09-12T21:40:19.746850Z","steps":["trace[1975095780] 'agreement among raft nodes before linearized reading'  (duration: 133.667003ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-12T21:40:19.746680Z","caller":"traceutil/trace.go:171","msg":"trace[784585044] transaction","detail":"{read_only:false; response_revision:1989; number_of_response:1; }","duration":"282.702863ms","start":"2024-09-12T21:40:19.463956Z","end":"2024-09-12T21:40:19.746659Z","steps":["trace[784585044] 'process raft request'  (duration: 282.48487ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-12T21:40:24.366865Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1527}
	{"level":"info","ts":"2024-09-12T21:40:24.408830Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1527,"took":"41.110259ms","hash":3946649684,"current-db-size-bytes":6709248,"current-db-size":"6.7 MB","current-db-size-in-use-bytes":3416064,"current-db-size-in-use":"3.4 MB"}
	{"level":"info","ts":"2024-09-12T21:40:24.408900Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3946649684,"revision":1527,"compact-revision":-1}
	{"level":"info","ts":"2024-09-12T21:40:40.024996Z","caller":"traceutil/trace.go:171","msg":"trace[2045705986] transaction","detail":"{read_only:false; response_revision:2179; number_of_response:1; }","duration":"188.170203ms","start":"2024-09-12T21:40:39.836812Z","end":"2024-09-12T21:40:40.024982Z","steps":["trace[2045705986] 'process raft request'  (duration: 187.576243ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-12T21:40:40.025569Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"184.896897ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ingress\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-12T21:40:40.025770Z","caller":"traceutil/trace.go:171","msg":"trace[1651034224] range","detail":"{range_begin:/registry/ingress; range_end:; response_count:0; response_revision:2179; }","duration":"185.132257ms","start":"2024-09-12T21:40:39.840570Z","end":"2024-09-12T21:40:40.025702Z","steps":["trace[1651034224] 'agreement among raft nodes before linearized reading'  (duration: 184.872808ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-12T21:40:40.027031Z","caller":"traceutil/trace.go:171","msg":"trace[737774189] linearizableReadLoop","detail":"{readStateIndex:2324; appliedIndex:2323; }","duration":"184.07988ms","start":"2024-09-12T21:40:39.840574Z","end":"2024-09-12T21:40:40.024654Z","steps":["trace[737774189] 'read index received'  (duration: 183.713847ms)","trace[737774189] 'applied index is now lower than readState.Index'  (duration: 365.525µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-12T21:40:40.027339Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"161.934654ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1114"}
	{"level":"info","ts":"2024-09-12T21:40:40.027410Z","caller":"traceutil/trace.go:171","msg":"trace[100333331] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:2179; }","duration":"162.010795ms","start":"2024-09-12T21:40:39.865389Z","end":"2024-09-12T21:40:40.027400Z","steps":["trace[100333331] 'agreement among raft nodes before linearized reading'  (duration: 161.762163ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-12T21:40:40.220357Z","caller":"traceutil/trace.go:171","msg":"trace[1115025117] linearizableReadLoop","detail":"{readStateIndex:2325; appliedIndex:2324; }","duration":"186.564755ms","start":"2024-09-12T21:40:40.033761Z","end":"2024-09-12T21:40:40.220326Z","steps":["trace[1115025117] 'read index received'  (duration: 186.518061ms)","trace[1115025117] 'applied index is now lower than readState.Index'  (duration: 45.997µs)"],"step_count":2}
	{"level":"info","ts":"2024-09-12T21:40:40.220626Z","caller":"traceutil/trace.go:171","msg":"trace[1874224401] transaction","detail":"{read_only:false; response_revision:2180; number_of_response:1; }","duration":"187.429481ms","start":"2024-09-12T21:40:40.033184Z","end":"2024-09-12T21:40:40.220614Z","steps":["trace[1874224401] 'process raft request'  (duration: 186.678055ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-12T21:40:40.220786Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"187.086416ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-12T21:40:40.220822Z","caller":"traceutil/trace.go:171","msg":"trace[838300705] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2180; }","duration":"187.131549ms","start":"2024-09-12T21:40:40.033683Z","end":"2024-09-12T21:40:40.220815Z","steps":["trace[838300705] 'agreement among raft nodes before linearized reading'  (duration: 187.072562ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-12T21:40:40.220927Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"186.744825ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/snapshot.storage.k8s.io/volumesnapshots\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-12T21:40:40.220957Z","caller":"traceutil/trace.go:171","msg":"trace[524765721] range","detail":"{range_begin:/registry/snapshot.storage.k8s.io/volumesnapshots; range_end:; response_count:0; response_revision:2180; }","duration":"186.778424ms","start":"2024-09-12T21:40:40.034173Z","end":"2024-09-12T21:40:40.220952Z","steps":["trace[524765721] 'agreement among raft nodes before linearized reading'  (duration: 186.735141ms)"],"step_count":1}
	
	
	==> gcp-auth [224662c30f37670f4f61f36221a15bb4d6847d38fcb6a9be3d38b6b08f1d6765] <==
	2024/09/12 21:32:07 Ready to write response ...
	2024/09/12 21:32:07 Ready to marshal response ...
	2024/09/12 21:32:07 Ready to write response ...
	2024/09/12 21:40:10 Ready to marshal response ...
	2024/09/12 21:40:10 Ready to write response ...
	2024/09/12 21:40:10 Ready to marshal response ...
	2024/09/12 21:40:10 Ready to write response ...
	2024/09/12 21:40:13 Ready to marshal response ...
	2024/09/12 21:40:13 Ready to write response ...
	2024/09/12 21:40:14 Ready to marshal response ...
	2024/09/12 21:40:14 Ready to write response ...
	2024/09/12 21:40:20 Ready to marshal response ...
	2024/09/12 21:40:20 Ready to write response ...
	2024/09/12 21:40:28 Ready to marshal response ...
	2024/09/12 21:40:28 Ready to write response ...
	2024/09/12 21:40:33 Ready to marshal response ...
	2024/09/12 21:40:33 Ready to write response ...
	2024/09/12 21:40:33 Ready to marshal response ...
	2024/09/12 21:40:33 Ready to write response ...
	2024/09/12 21:40:33 Ready to marshal response ...
	2024/09/12 21:40:33 Ready to write response ...
	2024/09/12 21:40:36 Ready to marshal response ...
	2024/09/12 21:40:36 Ready to write response ...
	2024/09/12 21:41:13 Ready to marshal response ...
	2024/09/12 21:41:13 Ready to write response ...
	
	
	==> kernel <==
	 21:41:23 up 11 min,  0 users,  load average: 1.10, 0.66, 0.42
	Linux addons-694635 5.10.207 #1 SMP Thu Sep 12 19:03:33 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [04006273204a6b5b2c2c50eb039597ab1cad77b9f65e3cdcf9ad2cd2bff6a600] <==
	 > logger="UnhandledError"
	E0912 21:32:39.672883       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.168.73:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.110.168.73:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.110.168.73:443: connect: connection refused" logger="UnhandledError"
	E0912 21:32:39.685803       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.168.73:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.110.168.73:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.110.168.73:443: connect: connection refused" logger="UnhandledError"
	E0912 21:32:39.712873       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.168.73:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.110.168.73:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.110.168.73:443: connect: connection refused" logger="UnhandledError"
	I0912 21:32:39.805428       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0912 21:40:26.205439       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0912 21:40:33.330977       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.111.67.92"}
	E0912 21:40:44.623623       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0912 21:40:56.039633       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0912 21:40:56.039692       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0912 21:40:56.072862       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0912 21:40:56.072917       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0912 21:40:56.085872       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0912 21:40:56.085946       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0912 21:40:56.110100       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0912 21:40:56.110148       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0912 21:40:56.134562       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0912 21:40:56.134998       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0912 21:40:57.111095       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0912 21:40:57.135378       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0912 21:40:57.232817       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I0912 21:41:09.586605       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0912 21:41:10.631785       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0912 21:41:13.128356       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0912 21:41:13.316851       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.99.65.172"}
	
	
	==> kube-controller-manager [5c2e331dbfeadd5401ab6aa1159f9097e7db3bf727f83963a786e4a149b7c5ba] <==
	I0912 21:41:03.421663       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0912 21:41:03.421710       1 shared_informer.go:320] Caches are synced for garbage collector
	W0912 21:41:03.829263       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0912 21:41:03.829296       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0912 21:41:06.093645       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0912 21:41:06.093717       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0912 21:41:06.984303       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0912 21:41:06.984388       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0912 21:41:08.905283       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="yakd-dashboard"
	E0912 21:41:10.633119       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0912 21:41:11.518283       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0912 21:41:11.518400       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0912 21:41:11.904109       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0912 21:41:11.904271       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0912 21:41:14.567893       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0912 21:41:14.568365       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0912 21:41:15.630281       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0912 21:41:15.630405       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0912 21:41:16.325900       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0912 21:41:16.325944       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0912 21:41:17.807450       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="local-path-storage"
	W0912 21:41:19.401715       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0912 21:41:19.401767       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0912 21:41:19.723865       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gadget"
	I0912 21:41:21.526445       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="7.903µs"
	
	
	==> kube-proxy [aa4b1b8007598386d5052a12803d3a47809e7be17f0613791526a0fb975078f1] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0912 21:30:36.071774       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0912 21:30:36.082467       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.67"]
	E0912 21:30:36.082639       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0912 21:30:36.149367       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0912 21:30:36.149399       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0912 21:30:36.149432       1 server_linux.go:169] "Using iptables Proxier"
	I0912 21:30:36.161798       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0912 21:30:36.164947       1 server.go:483] "Version info" version="v1.31.1"
	I0912 21:30:36.164965       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0912 21:30:36.177240       1 config.go:199] "Starting service config controller"
	I0912 21:30:36.177256       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0912 21:30:36.177281       1 config.go:105] "Starting endpoint slice config controller"
	I0912 21:30:36.177291       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0912 21:30:36.180184       1 config.go:328] "Starting node config controller"
	I0912 21:30:36.180198       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0912 21:30:36.277929       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0912 21:30:36.278089       1 shared_informer.go:320] Caches are synced for service config
	I0912 21:30:36.286430       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [daff578fb9bc43cd709b1e387f2aa19b6c69701a055733a1e7c09f5d3c4ae546] <==
	W0912 21:30:25.943462       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0912 21:30:25.943544       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0912 21:30:25.943641       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0912 21:30:25.943723       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0912 21:30:26.867246       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0912 21:30:26.867357       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0912 21:30:26.882410       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0912 21:30:26.882590       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0912 21:30:26.937816       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0912 21:30:26.937964       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0912 21:30:26.988234       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0912 21:30:26.988387       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0912 21:30:27.028755       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0912 21:30:27.028982       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0912 21:30:27.065104       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0912 21:30:27.065402       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0912 21:30:27.081373       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0912 21:30:27.081599       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0912 21:30:27.089933       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0912 21:30:27.090023       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0912 21:30:27.106816       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0912 21:30:27.106970       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0912 21:30:27.187917       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0912 21:30:27.188172       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0912 21:30:29.715653       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 12 21:41:13 addons-694635 kubelet[1201]: I0912 21:41:13.365215    1201 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmhzw\" (UniqueName: \"kubernetes.io/projected/6d172e45-acae-4863-b4f1-7cf6c870a3d8-kube-api-access-mmhzw\") pod \"nginx\" (UID: \"6d172e45-acae-4863-b4f1-7cf6c870a3d8\") " pod="default/nginx"
	Sep 12 21:41:13 addons-694635 kubelet[1201]: I0912 21:41:13.365305    1201 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/6d172e45-acae-4863-b4f1-7cf6c870a3d8-gcp-creds\") pod \"nginx\" (UID: \"6d172e45-acae-4863-b4f1-7cf6c870a3d8\") " pod="default/nginx"
	Sep 12 21:41:19 addons-694635 kubelet[1201]: E0912 21:41:19.252748    1201 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726177279252068205,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:571627,},InodesUsed:&UInt64Value{Value:195,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 21:41:19 addons-694635 kubelet[1201]: E0912 21:41:19.252792    1201 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726177279252068205,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:571627,},InodesUsed:&UInt64Value{Value:195,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 21:41:19 addons-694635 kubelet[1201]: I0912 21:41:19.642455    1201 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66c9cd494c-7cpwk" secret="" err="secret \"gcp-auth\" not found"
	Sep 12 21:41:20 addons-694635 kubelet[1201]: E0912 21:41:20.644067    1201 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-test\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox\\\"\"" pod="default/registry-test" podUID="105c9c8e-baac-40b4-a7aa-58f86b27de83"
	Sep 12 21:41:20 addons-694635 kubelet[1201]: I0912 21:41:20.657379    1201 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx" podStartSLOduration=4.046346099 podStartE2EDuration="7.657361141s" podCreationTimestamp="2024-09-12 21:41:13 +0000 UTC" firstStartedPulling="2024-09-12 21:41:13.801063286 +0000 UTC m=+645.287270119" lastFinishedPulling="2024-09-12 21:41:17.412078328 +0000 UTC m=+648.898285161" observedRunningTime="2024-09-12 21:41:17.683553065 +0000 UTC m=+649.169759916" watchObservedRunningTime="2024-09-12 21:41:20.657361141 +0000 UTC m=+652.143567993"
	Sep 12 21:41:21 addons-694635 kubelet[1201]: I0912 21:41:21.119962    1201 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/105c9c8e-baac-40b4-a7aa-58f86b27de83-gcp-creds\") pod \"105c9c8e-baac-40b4-a7aa-58f86b27de83\" (UID: \"105c9c8e-baac-40b4-a7aa-58f86b27de83\") "
	Sep 12 21:41:21 addons-694635 kubelet[1201]: I0912 21:41:21.120012    1201 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rckd5\" (UniqueName: \"kubernetes.io/projected/105c9c8e-baac-40b4-a7aa-58f86b27de83-kube-api-access-rckd5\") pod \"105c9c8e-baac-40b4-a7aa-58f86b27de83\" (UID: \"105c9c8e-baac-40b4-a7aa-58f86b27de83\") "
	Sep 12 21:41:21 addons-694635 kubelet[1201]: I0912 21:41:21.120236    1201 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/105c9c8e-baac-40b4-a7aa-58f86b27de83-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "105c9c8e-baac-40b4-a7aa-58f86b27de83" (UID: "105c9c8e-baac-40b4-a7aa-58f86b27de83"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 12 21:41:21 addons-694635 kubelet[1201]: I0912 21:41:21.123403    1201 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/105c9c8e-baac-40b4-a7aa-58f86b27de83-kube-api-access-rckd5" (OuterVolumeSpecName: "kube-api-access-rckd5") pod "105c9c8e-baac-40b4-a7aa-58f86b27de83" (UID: "105c9c8e-baac-40b4-a7aa-58f86b27de83"). InnerVolumeSpecName "kube-api-access-rckd5". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 12 21:41:21 addons-694635 kubelet[1201]: I0912 21:41:21.220295    1201 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/105c9c8e-baac-40b4-a7aa-58f86b27de83-gcp-creds\") on node \"addons-694635\" DevicePath \"\""
	Sep 12 21:41:21 addons-694635 kubelet[1201]: I0912 21:41:21.220373    1201 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-rckd5\" (UniqueName: \"kubernetes.io/projected/105c9c8e-baac-40b4-a7aa-58f86b27de83-kube-api-access-rckd5\") on node \"addons-694635\" DevicePath \"\""
	Sep 12 21:41:21 addons-694635 kubelet[1201]: I0912 21:41:21.932594    1201 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfmfw\" (UniqueName: \"kubernetes.io/projected/4b56665b-2953-4567-aa4d-49eb198ea1a0-kube-api-access-xfmfw\") pod \"4b56665b-2953-4567-aa4d-49eb198ea1a0\" (UID: \"4b56665b-2953-4567-aa4d-49eb198ea1a0\") "
	Sep 12 21:41:21 addons-694635 kubelet[1201]: I0912 21:41:21.934404    1201 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b56665b-2953-4567-aa4d-49eb198ea1a0-kube-api-access-xfmfw" (OuterVolumeSpecName: "kube-api-access-xfmfw") pod "4b56665b-2953-4567-aa4d-49eb198ea1a0" (UID: "4b56665b-2953-4567-aa4d-49eb198ea1a0"). InnerVolumeSpecName "kube-api-access-xfmfw". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 12 21:41:22 addons-694635 kubelet[1201]: I0912 21:41:22.033272    1201 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxwwj\" (UniqueName: \"kubernetes.io/projected/317b8f58-7fa3-4666-be84-9fcc8574a1f8-kube-api-access-mxwwj\") pod \"317b8f58-7fa3-4666-be84-9fcc8574a1f8\" (UID: \"317b8f58-7fa3-4666-be84-9fcc8574a1f8\") "
	Sep 12 21:41:22 addons-694635 kubelet[1201]: I0912 21:41:22.033412    1201 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-xfmfw\" (UniqueName: \"kubernetes.io/projected/4b56665b-2953-4567-aa4d-49eb198ea1a0-kube-api-access-xfmfw\") on node \"addons-694635\" DevicePath \"\""
	Sep 12 21:41:22 addons-694635 kubelet[1201]: I0912 21:41:22.039450    1201 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/317b8f58-7fa3-4666-be84-9fcc8574a1f8-kube-api-access-mxwwj" (OuterVolumeSpecName: "kube-api-access-mxwwj") pod "317b8f58-7fa3-4666-be84-9fcc8574a1f8" (UID: "317b8f58-7fa3-4666-be84-9fcc8574a1f8"). InnerVolumeSpecName "kube-api-access-mxwwj". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 12 21:41:22 addons-694635 kubelet[1201]: I0912 21:41:22.134007    1201 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-mxwwj\" (UniqueName: \"kubernetes.io/projected/317b8f58-7fa3-4666-be84-9fcc8574a1f8-kube-api-access-mxwwj\") on node \"addons-694635\" DevicePath \"\""
	Sep 12 21:41:22 addons-694635 kubelet[1201]: I0912 21:41:22.646768    1201 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="105c9c8e-baac-40b4-a7aa-58f86b27de83" path="/var/lib/kubelet/pods/105c9c8e-baac-40b4-a7aa-58f86b27de83/volumes"
	Sep 12 21:41:22 addons-694635 kubelet[1201]: I0912 21:41:22.707899    1201 scope.go:117] "RemoveContainer" containerID="a1bc0d0072155af81018aae93f1bf5c625978cf2c1b79bd1e154639e5c2ed7ce"
	Sep 12 21:41:22 addons-694635 kubelet[1201]: I0912 21:41:22.743632    1201 scope.go:117] "RemoveContainer" containerID="d8e7a9620862f66da335f7169bc5debc30bd88b39a3bfcb0b132ef4bdf427fe1"
	Sep 12 21:41:22 addons-694635 kubelet[1201]: I0912 21:41:22.764516    1201 scope.go:117] "RemoveContainer" containerID="d8e7a9620862f66da335f7169bc5debc30bd88b39a3bfcb0b132ef4bdf427fe1"
	Sep 12 21:41:22 addons-694635 kubelet[1201]: E0912 21:41:22.765879    1201 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8e7a9620862f66da335f7169bc5debc30bd88b39a3bfcb0b132ef4bdf427fe1\": container with ID starting with d8e7a9620862f66da335f7169bc5debc30bd88b39a3bfcb0b132ef4bdf427fe1 not found: ID does not exist" containerID="d8e7a9620862f66da335f7169bc5debc30bd88b39a3bfcb0b132ef4bdf427fe1"
	Sep 12 21:41:22 addons-694635 kubelet[1201]: I0912 21:41:22.765918    1201 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8e7a9620862f66da335f7169bc5debc30bd88b39a3bfcb0b132ef4bdf427fe1"} err="failed to get container status \"d8e7a9620862f66da335f7169bc5debc30bd88b39a3bfcb0b132ef4bdf427fe1\": rpc error: code = NotFound desc = could not find container \"d8e7a9620862f66da335f7169bc5debc30bd88b39a3bfcb0b132ef4bdf427fe1\": container with ID starting with d8e7a9620862f66da335f7169bc5debc30bd88b39a3bfcb0b132ef4bdf427fe1 not found: ID does not exist"
	
	
	==> storage-provisioner [c63491974a86dd1007fc9980bfe0086d0dc3bf4ff8c0c3f310a5cb87fbb4ac38] <==
	I0912 21:30:40.634278       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0912 21:30:40.654230       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0912 21:30:40.654289       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0912 21:30:40.672312       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0912 21:30:40.672455       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-694635_0129df7b-bc38-4de1-88d1-b14901b396c2!
	I0912 21:30:40.672557       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ad54721b-5319-42a0-af50-593f2d28e853", APIVersion:"v1", ResourceVersion:"616", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-694635_0129df7b-bc38-4de1-88d1-b14901b396c2 became leader
	I0912 21:30:40.772629       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-694635_0129df7b-bc38-4de1-88d1-b14901b396c2!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-694635 -n addons-694635
helpers_test.go:261: (dbg) Run:  kubectl --context addons-694635 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox ingress-nginx-admission-create-gf4cr ingress-nginx-admission-patch-75vhq
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-694635 describe pod busybox ingress-nginx-admission-create-gf4cr ingress-nginx-admission-patch-75vhq
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-694635 describe pod busybox ingress-nginx-admission-create-gf4cr ingress-nginx-admission-patch-75vhq: exit status 1 (69.528447ms)

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-694635/192.168.39.67
	Start Time:       Thu, 12 Sep 2024 21:32:07 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.23
	IPs:
	  IP:  10.244.0.23
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-c9mw2 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-c9mw2:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  9m16s                  default-scheduler  Successfully assigned default/busybox to addons-694635
	  Normal   Pulling    7m41s (x4 over 9m16s)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m41s (x4 over 9m16s)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     7m41s (x4 over 9m16s)  kubelet            Error: ErrImagePull
	  Warning  Failed     7m26s (x6 over 9m15s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m6s (x21 over 9m15s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-gf4cr" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-75vhq" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-694635 describe pod busybox ingress-nginx-admission-create-gf4cr ingress-nginx-admission-patch-75vhq: exit status 1
--- FAIL: TestAddons/parallel/Registry (74.06s)
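The describe output above shows the registry pods themselves became healthy; the failure comes from the helper busybox image, which the kubelet cannot pull ("unable to retrieve auth token: invalid username/password") once the gcp-auth addon has injected its placeholder credentials. A quick way to separate an image-pull problem from a registry-addon problem is to pull the image straight through the container runtime on the node and to read the pod's recorded events. A minimal sketch, assuming the addons-694635 profile is still running and that crictl is available on the node as it normally is with the crio runtime:

# Pull the test image directly via CRI-O, bypassing kubelet pull secrets
out/minikube-linux-amd64 -p addons-694635 ssh "sudo crictl pull gcr.io/k8s-minikube/busybox:1.28.4-glibc"

# Compare with the events the kubelet recorded for the stuck pod
kubectl --context addons-694635 get events --field-selector involvedObject.name=busybox --sort-by=.lastTimestamp

If the direct pull succeeds while the pod keeps failing, the injected gcp-auth credentials are the likelier culprit than the registry addon under test.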

TestAddons/parallel/Ingress (150.55s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-694635 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-694635 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-694635 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [6d172e45-acae-4863-b4f1-7cf6c870a3d8] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [6d172e45-acae-4863-b4f1-7cf6c870a3d8] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.006181368s
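The testdata manifests replaced above are not reproduced in this report; what they amount to is an nginx pod and service plus an Ingress routing Host: nginx.example.com to it. A minimal equivalent is sketched below; the object names, image tag, and exact fields are assumptions and the real testdata/nginx-ingress-v1.yaml and testdata/nginx-pod-svc.yaml may differ.

kubectl --context addons-694635 apply -f - <<'EOF'
# Backend pod carrying the run=nginx label the test waits on
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    run: nginx
spec:
  containers:
  - name: nginx
    image: docker.io/library/nginx:alpine
    ports:
    - containerPort: 80
---
# Service fronting the pod
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    run: nginx
  ports:
  - port: 80
    targetPort: 80
---
# Ingress routing nginx.example.com through the ingress-nginx controller
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: nginx.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx
            port:
              number: 80
EOF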
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-694635 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-694635 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m8.633582004s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
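Status 28 surfacing through minikube ssh is curl's operation-timeout exit code, so the probe reached the node but never received an HTTP response from the ingress controller within the test's window. A minimal way to re-check the path by hand, assuming the profile is still up and the ingress addon has not yet been disabled by the cleanup steps below:

# Confirm the controller and the test objects exist
kubectl --context addons-694635 -n ingress-nginx get pods,svc
kubectl --context addons-694635 get ingress,pods,svc -n default

# Repeat the probe with verbose output and a short explicit timeout
out/minikube-linux-amd64 -p addons-694635 ssh "curl -v --max-time 10 -H 'Host: nginx.example.com' http://127.0.0.1/"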
addons_test.go:288: (dbg) Run:  kubectl --context addons-694635 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-694635 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.67
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-694635 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-amd64 -p addons-694635 addons disable ingress-dns --alsologtostderr -v=1: (1.100507765s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-694635 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-694635 addons disable ingress --alsologtostderr -v=1: (7.731821532s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-694635 -n addons-694635
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-694635 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-694635 logs -n 25: (1.224410372s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-976166                                                                     | download-only-976166 | jenkins | v1.34.0 | 12 Sep 24 21:29 UTC | 12 Sep 24 21:29 UTC |
	| delete  | -p download-only-618378                                                                     | download-only-618378 | jenkins | v1.34.0 | 12 Sep 24 21:29 UTC | 12 Sep 24 21:29 UTC |
	| delete  | -p download-only-976166                                                                     | download-only-976166 | jenkins | v1.34.0 | 12 Sep 24 21:29 UTC | 12 Sep 24 21:29 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-318498 | jenkins | v1.34.0 | 12 Sep 24 21:29 UTC |                     |
	|         | binary-mirror-318498                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:39999                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-318498                                                                     | binary-mirror-318498 | jenkins | v1.34.0 | 12 Sep 24 21:29 UTC | 12 Sep 24 21:29 UTC |
	| addons  | disable dashboard -p                                                                        | addons-694635        | jenkins | v1.34.0 | 12 Sep 24 21:29 UTC |                     |
	|         | addons-694635                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-694635        | jenkins | v1.34.0 | 12 Sep 24 21:29 UTC |                     |
	|         | addons-694635                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-694635 --wait=true                                                                | addons-694635        | jenkins | v1.34.0 | 12 Sep 24 21:29 UTC | 12 Sep 24 21:32 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-694635 addons disable                                                                | addons-694635        | jenkins | v1.34.0 | 12 Sep 24 21:40 UTC | 12 Sep 24 21:40 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-694635 ssh cat                                                                       | addons-694635        | jenkins | v1.34.0 | 12 Sep 24 21:40 UTC | 12 Sep 24 21:40 UTC |
	|         | /opt/local-path-provisioner/pvc-ce6ed7db-1ee2-4cee-8aae-8a13248846f5_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-694635 addons disable                                                                | addons-694635        | jenkins | v1.34.0 | 12 Sep 24 21:40 UTC | 12 Sep 24 21:41 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-694635        | jenkins | v1.34.0 | 12 Sep 24 21:40 UTC | 12 Sep 24 21:40 UTC |
	|         | addons-694635                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-694635        | jenkins | v1.34.0 | 12 Sep 24 21:40 UTC | 12 Sep 24 21:40 UTC |
	|         | -p addons-694635                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-694635 addons disable                                                                | addons-694635        | jenkins | v1.34.0 | 12 Sep 24 21:40 UTC | 12 Sep 24 21:40 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-694635 addons                                                                        | addons-694635        | jenkins | v1.34.0 | 12 Sep 24 21:40 UTC | 12 Sep 24 21:40 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-694635 addons                                                                        | addons-694635        | jenkins | v1.34.0 | 12 Sep 24 21:40 UTC | 12 Sep 24 21:40 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-694635 addons disable                                                                | addons-694635        | jenkins | v1.34.0 | 12 Sep 24 21:40 UTC | 12 Sep 24 21:41 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-694635        | jenkins | v1.34.0 | 12 Sep 24 21:41 UTC | 12 Sep 24 21:41 UTC |
	|         | -p addons-694635                                                                            |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-694635        | jenkins | v1.34.0 | 12 Sep 24 21:41 UTC | 12 Sep 24 21:41 UTC |
	|         | addons-694635                                                                               |                      |         |         |                     |                     |
	| ip      | addons-694635 ip                                                                            | addons-694635        | jenkins | v1.34.0 | 12 Sep 24 21:41 UTC | 12 Sep 24 21:41 UTC |
	| addons  | addons-694635 addons disable                                                                | addons-694635        | jenkins | v1.34.0 | 12 Sep 24 21:41 UTC | 12 Sep 24 21:41 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-694635 ssh curl -s                                                                   | addons-694635        | jenkins | v1.34.0 | 12 Sep 24 21:41 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ip      | addons-694635 ip                                                                            | addons-694635        | jenkins | v1.34.0 | 12 Sep 24 21:43 UTC | 12 Sep 24 21:43 UTC |
	| addons  | addons-694635 addons disable                                                                | addons-694635        | jenkins | v1.34.0 | 12 Sep 24 21:43 UTC | 12 Sep 24 21:43 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-694635 addons disable                                                                | addons-694635        | jenkins | v1.34.0 | 12 Sep 24 21:43 UTC | 12 Sep 24 21:43 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/12 21:29:47
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0912 21:29:47.475866   13842 out.go:345] Setting OutFile to fd 1 ...
	I0912 21:29:47.475993   13842 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 21:29:47.476005   13842 out.go:358] Setting ErrFile to fd 2...
	I0912 21:29:47.476012   13842 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 21:29:47.476186   13842 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-5891/.minikube/bin
	I0912 21:29:47.476836   13842 out.go:352] Setting JSON to false
	I0912 21:29:47.477752   13842 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":729,"bootTime":1726175858,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0912 21:29:47.477818   13842 start.go:139] virtualization: kvm guest
	I0912 21:29:47.479869   13842 out.go:177] * [addons-694635] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0912 21:29:47.481136   13842 out.go:177]   - MINIKUBE_LOCATION=19616
	I0912 21:29:47.481139   13842 notify.go:220] Checking for updates...
	I0912 21:29:47.483542   13842 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 21:29:47.484839   13842 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19616-5891/kubeconfig
	I0912 21:29:47.486133   13842 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19616-5891/.minikube
	I0912 21:29:47.487896   13842 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0912 21:29:47.489241   13842 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 21:29:47.490764   13842 driver.go:394] Setting default libvirt URI to qemu:///system
	I0912 21:29:47.523002   13842 out.go:177] * Using the kvm2 driver based on user configuration
	I0912 21:29:47.524034   13842 start.go:297] selected driver: kvm2
	I0912 21:29:47.524046   13842 start.go:901] validating driver "kvm2" against <nil>
	I0912 21:29:47.524060   13842 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 21:29:47.524980   13842 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 21:29:47.525102   13842 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19616-5891/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0912 21:29:47.540324   13842 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0912 21:29:47.540407   13842 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0912 21:29:47.540684   13842 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 21:29:47.540767   13842 cni.go:84] Creating CNI manager for ""
	I0912 21:29:47.540781   13842 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0912 21:29:47.540792   13842 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0912 21:29:47.540869   13842 start.go:340] cluster config:
	{Name:addons-694635 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-694635 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 21:29:47.540994   13842 iso.go:125] acquiring lock: {Name:mk3ec3c4afd4210b7425f6425f55e7f581d9a5a9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 21:29:47.542738   13842 out.go:177] * Starting "addons-694635" primary control-plane node in "addons-694635" cluster
	I0912 21:29:47.543940   13842 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0912 21:29:47.543977   13842 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0912 21:29:47.543985   13842 cache.go:56] Caching tarball of preloaded images
	I0912 21:29:47.544089   13842 preload.go:172] Found /home/jenkins/minikube-integration/19616-5891/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0912 21:29:47.544102   13842 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0912 21:29:47.544526   13842 profile.go:143] Saving config to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/config.json ...
	I0912 21:29:47.544557   13842 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/config.json: {Name:mk33fa1e209cbe67cd91a1b792a3ca9ac0ed48ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:29:47.544694   13842 start.go:360] acquireMachinesLock for addons-694635: {Name:mkbb0a9e58b1349e86a63b6069c42d4248d92c3b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 21:29:47.544742   13842 start.go:364] duration metric: took 34.718µs to acquireMachinesLock for "addons-694635"
	I0912 21:29:47.544765   13842 start.go:93] Provisioning new machine with config: &{Name:addons-694635 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:addons-694635 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0912 21:29:47.544840   13842 start.go:125] createHost starting for "" (driver="kvm2")
	I0912 21:29:47.546289   13842 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0912 21:29:47.546444   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:29:47.546482   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:29:47.560635   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38163
	I0912 21:29:47.561053   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:29:47.561645   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:29:47.561668   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:29:47.562020   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:29:47.562207   13842 main.go:141] libmachine: (addons-694635) Calling .GetMachineName
	I0912 21:29:47.562346   13842 main.go:141] libmachine: (addons-694635) Calling .DriverName
	I0912 21:29:47.562487   13842 start.go:159] libmachine.API.Create for "addons-694635" (driver="kvm2")
	I0912 21:29:47.562506   13842 client.go:168] LocalClient.Create starting
	I0912 21:29:47.562537   13842 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem
	I0912 21:29:47.644946   13842 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem
	I0912 21:29:47.782363   13842 main.go:141] libmachine: Running pre-create checks...
	I0912 21:29:47.782383   13842 main.go:141] libmachine: (addons-694635) Calling .PreCreateCheck
	I0912 21:29:47.782856   13842 main.go:141] libmachine: (addons-694635) Calling .GetConfigRaw
	I0912 21:29:47.783275   13842 main.go:141] libmachine: Creating machine...
	I0912 21:29:47.783290   13842 main.go:141] libmachine: (addons-694635) Calling .Create
	I0912 21:29:47.783442   13842 main.go:141] libmachine: (addons-694635) Creating KVM machine...
	I0912 21:29:47.784608   13842 main.go:141] libmachine: (addons-694635) DBG | found existing default KVM network
	I0912 21:29:47.785304   13842 main.go:141] libmachine: (addons-694635) DBG | I0912 21:29:47.785155   13864 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ad0}
	I0912 21:29:47.785337   13842 main.go:141] libmachine: (addons-694635) DBG | created network xml: 
	I0912 21:29:47.785348   13842 main.go:141] libmachine: (addons-694635) DBG | <network>
	I0912 21:29:47.785361   13842 main.go:141] libmachine: (addons-694635) DBG |   <name>mk-addons-694635</name>
	I0912 21:29:47.785392   13842 main.go:141] libmachine: (addons-694635) DBG |   <dns enable='no'/>
	I0912 21:29:47.785413   13842 main.go:141] libmachine: (addons-694635) DBG |   
	I0912 21:29:47.785428   13842 main.go:141] libmachine: (addons-694635) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0912 21:29:47.785441   13842 main.go:141] libmachine: (addons-694635) DBG |     <dhcp>
	I0912 21:29:47.785456   13842 main.go:141] libmachine: (addons-694635) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0912 21:29:47.785466   13842 main.go:141] libmachine: (addons-694635) DBG |     </dhcp>
	I0912 21:29:47.785476   13842 main.go:141] libmachine: (addons-694635) DBG |   </ip>
	I0912 21:29:47.785490   13842 main.go:141] libmachine: (addons-694635) DBG |   
	I0912 21:29:47.785501   13842 main.go:141] libmachine: (addons-694635) DBG | </network>
	I0912 21:29:47.785509   13842 main.go:141] libmachine: (addons-694635) DBG | 
	I0912 21:29:47.790883   13842 main.go:141] libmachine: (addons-694635) DBG | trying to create private KVM network mk-addons-694635 192.168.39.0/24...
	I0912 21:29:47.856566   13842 main.go:141] libmachine: (addons-694635) DBG | private KVM network mk-addons-694635 192.168.39.0/24 created
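
The XML dump above is the complete definition libvirt needs for the private mk-addons-694635 NAT network: a /24 with a DHCP range and DNS disabled. As a rough, self-contained reproduction of that step, the Go sketch below writes an equivalent definition to a temp file and hands it to virsh. This is illustrative only; minikube's kvm2 driver talks to libvirt directly rather than shelling out.

// network_sketch.go - define and start a libvirt network like the one above.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// networkXML mirrors the definition printed in the log.
const networkXML = `<network>
  <name>mk-addons-694635</name>
  <dns enable='no'/>
  <ip address='192.168.39.1' netmask='255.255.255.0'>
    <dhcp><range start='192.168.39.2' end='192.168.39.253'/></dhcp>
  </ip>
</network>`

func main() {
	// Write the definition to a temp file so virsh can read it.
	f, err := os.CreateTemp("", "mk-net-*.xml")
	if err != nil {
		panic(err)
	}
	defer os.Remove(f.Name())
	if _, err := f.WriteString(networkXML); err != nil {
		panic(err)
	}
	f.Close()

	// Define, then start, the network; both are standard virsh subcommands.
	for _, args := range [][]string{{"net-define", f.Name()}, {"net-start", "mk-addons-694635"}} {
		out, err := exec.Command("virsh", args...).CombinedOutput()
		fmt.Printf("virsh %v:\n%s", args, out)
		if err != nil {
			panic(err)
		}
	}
}
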
	I0912 21:29:47.856589   13842 main.go:141] libmachine: (addons-694635) DBG | I0912 21:29:47.856546   13864 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19616-5891/.minikube
	I0912 21:29:47.856604   13842 main.go:141] libmachine: (addons-694635) Setting up store path in /home/jenkins/minikube-integration/19616-5891/.minikube/machines/addons-694635 ...
	I0912 21:29:47.856615   13842 main.go:141] libmachine: (addons-694635) Building disk image from file:///home/jenkins/minikube-integration/19616-5891/.minikube/cache/iso/amd64/minikube-v1.34.0-1726156389-19616-amd64.iso
	I0912 21:29:47.856703   13842 main.go:141] libmachine: (addons-694635) Downloading /home/jenkins/minikube-integration/19616-5891/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19616-5891/.minikube/cache/iso/amd64/minikube-v1.34.0-1726156389-19616-amd64.iso...
	I0912 21:29:48.103210   13842 main.go:141] libmachine: (addons-694635) DBG | I0912 21:29:48.103069   13864 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/addons-694635/id_rsa...
	I0912 21:29:48.158267   13842 main.go:141] libmachine: (addons-694635) DBG | I0912 21:29:48.158115   13864 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/addons-694635/addons-694635.rawdisk...
	I0912 21:29:48.158303   13842 main.go:141] libmachine: (addons-694635) DBG | Writing magic tar header
	I0912 21:29:48.158321   13842 main.go:141] libmachine: (addons-694635) DBG | Writing SSH key tar header
	I0912 21:29:48.158334   13842 main.go:141] libmachine: (addons-694635) DBG | I0912 21:29:48.158221   13864 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19616-5891/.minikube/machines/addons-694635 ...
	I0912 21:29:48.158344   13842 main.go:141] libmachine: (addons-694635) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/addons-694635
	I0912 21:29:48.158353   13842 main.go:141] libmachine: (addons-694635) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19616-5891/.minikube/machines
	I0912 21:29:48.158362   13842 main.go:141] libmachine: (addons-694635) Setting executable bit set on /home/jenkins/minikube-integration/19616-5891/.minikube/machines/addons-694635 (perms=drwx------)
	I0912 21:29:48.158376   13842 main.go:141] libmachine: (addons-694635) Setting executable bit set on /home/jenkins/minikube-integration/19616-5891/.minikube/machines (perms=drwxr-xr-x)
	I0912 21:29:48.158397   13842 main.go:141] libmachine: (addons-694635) Setting executable bit set on /home/jenkins/minikube-integration/19616-5891/.minikube (perms=drwxr-xr-x)
	I0912 21:29:48.158411   13842 main.go:141] libmachine: (addons-694635) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19616-5891/.minikube
	I0912 21:29:48.158423   13842 main.go:141] libmachine: (addons-694635) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19616-5891
	I0912 21:29:48.158433   13842 main.go:141] libmachine: (addons-694635) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0912 21:29:48.158450   13842 main.go:141] libmachine: (addons-694635) Setting executable bit set on /home/jenkins/minikube-integration/19616-5891 (perms=drwxrwxr-x)
	I0912 21:29:48.158464   13842 main.go:141] libmachine: (addons-694635) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0912 21:29:48.158476   13842 main.go:141] libmachine: (addons-694635) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0912 21:29:48.158486   13842 main.go:141] libmachine: (addons-694635) DBG | Checking permissions on dir: /home/jenkins
	I0912 21:29:48.158502   13842 main.go:141] libmachine: (addons-694635) Creating domain...
	I0912 21:29:48.158514   13842 main.go:141] libmachine: (addons-694635) DBG | Checking permissions on dir: /home
	I0912 21:29:48.158532   13842 main.go:141] libmachine: (addons-694635) DBG | Skipping /home - not owner
	I0912 21:29:48.159530   13842 main.go:141] libmachine: (addons-694635) define libvirt domain using xml: 
	I0912 21:29:48.159561   13842 main.go:141] libmachine: (addons-694635) <domain type='kvm'>
	I0912 21:29:48.159569   13842 main.go:141] libmachine: (addons-694635)   <name>addons-694635</name>
	I0912 21:29:48.159576   13842 main.go:141] libmachine: (addons-694635)   <memory unit='MiB'>4000</memory>
	I0912 21:29:48.159582   13842 main.go:141] libmachine: (addons-694635)   <vcpu>2</vcpu>
	I0912 21:29:48.159593   13842 main.go:141] libmachine: (addons-694635)   <features>
	I0912 21:29:48.159601   13842 main.go:141] libmachine: (addons-694635)     <acpi/>
	I0912 21:29:48.159611   13842 main.go:141] libmachine: (addons-694635)     <apic/>
	I0912 21:29:48.159621   13842 main.go:141] libmachine: (addons-694635)     <pae/>
	I0912 21:29:48.159629   13842 main.go:141] libmachine: (addons-694635)     
	I0912 21:29:48.159634   13842 main.go:141] libmachine: (addons-694635)   </features>
	I0912 21:29:48.159641   13842 main.go:141] libmachine: (addons-694635)   <cpu mode='host-passthrough'>
	I0912 21:29:48.159688   13842 main.go:141] libmachine: (addons-694635)   
	I0912 21:29:48.159713   13842 main.go:141] libmachine: (addons-694635)   </cpu>
	I0912 21:29:48.159737   13842 main.go:141] libmachine: (addons-694635)   <os>
	I0912 21:29:48.159750   13842 main.go:141] libmachine: (addons-694635)     <type>hvm</type>
	I0912 21:29:48.159770   13842 main.go:141] libmachine: (addons-694635)     <boot dev='cdrom'/>
	I0912 21:29:48.159783   13842 main.go:141] libmachine: (addons-694635)     <boot dev='hd'/>
	I0912 21:29:48.159802   13842 main.go:141] libmachine: (addons-694635)     <bootmenu enable='no'/>
	I0912 21:29:48.159818   13842 main.go:141] libmachine: (addons-694635)   </os>
	I0912 21:29:48.159831   13842 main.go:141] libmachine: (addons-694635)   <devices>
	I0912 21:29:48.159842   13842 main.go:141] libmachine: (addons-694635)     <disk type='file' device='cdrom'>
	I0912 21:29:48.159866   13842 main.go:141] libmachine: (addons-694635)       <source file='/home/jenkins/minikube-integration/19616-5891/.minikube/machines/addons-694635/boot2docker.iso'/>
	I0912 21:29:48.159877   13842 main.go:141] libmachine: (addons-694635)       <target dev='hdc' bus='scsi'/>
	I0912 21:29:48.159885   13842 main.go:141] libmachine: (addons-694635)       <readonly/>
	I0912 21:29:48.159896   13842 main.go:141] libmachine: (addons-694635)     </disk>
	I0912 21:29:48.159907   13842 main.go:141] libmachine: (addons-694635)     <disk type='file' device='disk'>
	I0912 21:29:48.159916   13842 main.go:141] libmachine: (addons-694635)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0912 21:29:48.159932   13842 main.go:141] libmachine: (addons-694635)       <source file='/home/jenkins/minikube-integration/19616-5891/.minikube/machines/addons-694635/addons-694635.rawdisk'/>
	I0912 21:29:48.159943   13842 main.go:141] libmachine: (addons-694635)       <target dev='hda' bus='virtio'/>
	I0912 21:29:48.159953   13842 main.go:141] libmachine: (addons-694635)     </disk>
	I0912 21:29:48.159969   13842 main.go:141] libmachine: (addons-694635)     <interface type='network'>
	I0912 21:29:48.159982   13842 main.go:141] libmachine: (addons-694635)       <source network='mk-addons-694635'/>
	I0912 21:29:48.159992   13842 main.go:141] libmachine: (addons-694635)       <model type='virtio'/>
	I0912 21:29:48.160001   13842 main.go:141] libmachine: (addons-694635)     </interface>
	I0912 21:29:48.160011   13842 main.go:141] libmachine: (addons-694635)     <interface type='network'>
	I0912 21:29:48.160022   13842 main.go:141] libmachine: (addons-694635)       <source network='default'/>
	I0912 21:29:48.160032   13842 main.go:141] libmachine: (addons-694635)       <model type='virtio'/>
	I0912 21:29:48.160043   13842 main.go:141] libmachine: (addons-694635)     </interface>
	I0912 21:29:48.160051   13842 main.go:141] libmachine: (addons-694635)     <serial type='pty'>
	I0912 21:29:48.160066   13842 main.go:141] libmachine: (addons-694635)       <target port='0'/>
	I0912 21:29:48.160077   13842 main.go:141] libmachine: (addons-694635)     </serial>
	I0912 21:29:48.160089   13842 main.go:141] libmachine: (addons-694635)     <console type='pty'>
	I0912 21:29:48.160108   13842 main.go:141] libmachine: (addons-694635)       <target type='serial' port='0'/>
	I0912 21:29:48.160121   13842 main.go:141] libmachine: (addons-694635)     </console>
	I0912 21:29:48.160132   13842 main.go:141] libmachine: (addons-694635)     <rng model='virtio'>
	I0912 21:29:48.160143   13842 main.go:141] libmachine: (addons-694635)       <backend model='random'>/dev/random</backend>
	I0912 21:29:48.160151   13842 main.go:141] libmachine: (addons-694635)     </rng>
	I0912 21:29:48.160157   13842 main.go:141] libmachine: (addons-694635)     
	I0912 21:29:48.160168   13842 main.go:141] libmachine: (addons-694635)     
	I0912 21:29:48.160176   13842 main.go:141] libmachine: (addons-694635)   </devices>
	I0912 21:29:48.160185   13842 main.go:141] libmachine: (addons-694635) </domain>
	I0912 21:29:48.160195   13842 main.go:141] libmachine: (addons-694635) 
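
The domain XML printed above is what gets defined for the VM: 2 vCPUs, 4000 MiB of RAM, the boot2docker ISO attached as a CD-ROM, the raw disk, and two virtio NICs (one on the private mk-addons-694635 network, one on libvirt's default network). Below is a minimal sketch of rendering such a definition from a parameter struct with text/template; the struct and template here are hypothetical simplifications, not minikube's own.

// domain_template_sketch.go - render a trimmed-down libvirt domain definition.
package main

import (
	"os"
	"text/template"
)

type domainParams struct {
	Name      string
	MemoryMiB int
	CPUs      int
	ISO       string
	Disk      string
	Network   string
}

const domainTmpl = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMiB}}</memory>
  <vcpu>{{.CPUs}}</vcpu>
  <os><type>hvm</type><boot dev='cdrom'/><boot dev='hd'/></os>
  <devices>
    <disk type='file' device='cdrom'><source file='{{.ISO}}'/><target dev='hdc' bus='scsi'/><readonly/></disk>
    <disk type='file' device='disk'><driver name='qemu' type='raw'/><source file='{{.Disk}}'/><target dev='hda' bus='virtio'/></disk>
    <interface type='network'><source network='{{.Network}}'/><model type='virtio'/></interface>
  </devices>
</domain>
`

func main() {
	t := template.Must(template.New("domain").Parse(domainTmpl))
	// Values taken from the log; the paths are placeholders.
	_ = t.Execute(os.Stdout, domainParams{
		Name: "addons-694635", MemoryMiB: 4000, CPUs: 2,
		ISO: "/path/to/boot2docker.iso", Disk: "/path/to/addons-694635.rawdisk",
		Network: "mk-addons-694635",
	})
}
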
	I0912 21:29:48.165998   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:32:e5:de in network default
	I0912 21:29:48.166596   13842 main.go:141] libmachine: (addons-694635) Ensuring networks are active...
	I0912 21:29:48.166616   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:29:48.167233   13842 main.go:141] libmachine: (addons-694635) Ensuring network default is active
	I0912 21:29:48.167509   13842 main.go:141] libmachine: (addons-694635) Ensuring network mk-addons-694635 is active
	I0912 21:29:48.167964   13842 main.go:141] libmachine: (addons-694635) Getting domain xml...
	I0912 21:29:48.168724   13842 main.go:141] libmachine: (addons-694635) Creating domain...
	I0912 21:29:49.564332   13842 main.go:141] libmachine: (addons-694635) Waiting to get IP...
	I0912 21:29:49.565210   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:29:49.565680   13842 main.go:141] libmachine: (addons-694635) DBG | unable to find current IP address of domain addons-694635 in network mk-addons-694635
	I0912 21:29:49.565753   13842 main.go:141] libmachine: (addons-694635) DBG | I0912 21:29:49.565686   13864 retry.go:31] will retry after 259.088458ms: waiting for machine to come up
	I0912 21:29:49.826131   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:29:49.826631   13842 main.go:141] libmachine: (addons-694635) DBG | unable to find current IP address of domain addons-694635 in network mk-addons-694635
	I0912 21:29:49.826660   13842 main.go:141] libmachine: (addons-694635) DBG | I0912 21:29:49.826579   13864 retry.go:31] will retry after 330.128851ms: waiting for machine to come up
	I0912 21:29:50.158148   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:29:50.158574   13842 main.go:141] libmachine: (addons-694635) DBG | unable to find current IP address of domain addons-694635 in network mk-addons-694635
	I0912 21:29:50.158644   13842 main.go:141] libmachine: (addons-694635) DBG | I0912 21:29:50.158552   13864 retry.go:31] will retry after 438.081447ms: waiting for machine to come up
	I0912 21:29:50.598323   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:29:50.598829   13842 main.go:141] libmachine: (addons-694635) DBG | unable to find current IP address of domain addons-694635 in network mk-addons-694635
	I0912 21:29:50.598897   13842 main.go:141] libmachine: (addons-694635) DBG | I0912 21:29:50.598822   13864 retry.go:31] will retry after 407.106138ms: waiting for machine to come up
	I0912 21:29:51.007259   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:29:51.007718   13842 main.go:141] libmachine: (addons-694635) DBG | unable to find current IP address of domain addons-694635 in network mk-addons-694635
	I0912 21:29:51.007758   13842 main.go:141] libmachine: (addons-694635) DBG | I0912 21:29:51.007668   13864 retry.go:31] will retry after 621.06803ms: waiting for machine to come up
	I0912 21:29:51.630684   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:29:51.631143   13842 main.go:141] libmachine: (addons-694635) DBG | unable to find current IP address of domain addons-694635 in network mk-addons-694635
	I0912 21:29:51.631165   13842 main.go:141] libmachine: (addons-694635) DBG | I0912 21:29:51.631112   13864 retry.go:31] will retry after 606.154083ms: waiting for machine to come up
	I0912 21:29:52.238827   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:29:52.239319   13842 main.go:141] libmachine: (addons-694635) DBG | unable to find current IP address of domain addons-694635 in network mk-addons-694635
	I0912 21:29:52.239351   13842 main.go:141] libmachine: (addons-694635) DBG | I0912 21:29:52.239251   13864 retry.go:31] will retry after 1.053486982s: waiting for machine to come up
	I0912 21:29:53.294067   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:29:53.294469   13842 main.go:141] libmachine: (addons-694635) DBG | unable to find current IP address of domain addons-694635 in network mk-addons-694635
	I0912 21:29:53.294496   13842 main.go:141] libmachine: (addons-694635) DBG | I0912 21:29:53.294420   13864 retry.go:31] will retry after 1.050950177s: waiting for machine to come up
	I0912 21:29:54.347197   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:29:54.347603   13842 main.go:141] libmachine: (addons-694635) DBG | unable to find current IP address of domain addons-694635 in network mk-addons-694635
	I0912 21:29:54.347631   13842 main.go:141] libmachine: (addons-694635) DBG | I0912 21:29:54.347539   13864 retry.go:31] will retry after 1.24941056s: waiting for machine to come up
	I0912 21:29:55.598907   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:29:55.599382   13842 main.go:141] libmachine: (addons-694635) DBG | unable to find current IP address of domain addons-694635 in network mk-addons-694635
	I0912 21:29:55.599413   13842 main.go:141] libmachine: (addons-694635) DBG | I0912 21:29:55.599328   13864 retry.go:31] will retry after 2.237205326s: waiting for machine to come up
	I0912 21:29:57.838937   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:29:57.839483   13842 main.go:141] libmachine: (addons-694635) DBG | unable to find current IP address of domain addons-694635 in network mk-addons-694635
	I0912 21:29:57.839506   13842 main.go:141] libmachine: (addons-694635) DBG | I0912 21:29:57.839455   13864 retry.go:31] will retry after 2.152344085s: waiting for machine to come up
	I0912 21:29:59.994815   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:29:59.995133   13842 main.go:141] libmachine: (addons-694635) DBG | unable to find current IP address of domain addons-694635 in network mk-addons-694635
	I0912 21:29:59.995155   13842 main.go:141] libmachine: (addons-694635) DBG | I0912 21:29:59.995091   13864 retry.go:31] will retry after 2.540765126s: waiting for machine to come up
	I0912 21:30:02.536979   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:02.537427   13842 main.go:141] libmachine: (addons-694635) DBG | unable to find current IP address of domain addons-694635 in network mk-addons-694635
	I0912 21:30:02.537453   13842 main.go:141] libmachine: (addons-694635) DBG | I0912 21:30:02.537360   13864 retry.go:31] will retry after 3.772056123s: waiting for machine to come up
	I0912 21:30:06.313642   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:06.314016   13842 main.go:141] libmachine: (addons-694635) DBG | unable to find current IP address of domain addons-694635 in network mk-addons-694635
	I0912 21:30:06.314033   13842 main.go:141] libmachine: (addons-694635) DBG | I0912 21:30:06.313980   13864 retry.go:31] will retry after 4.542886768s: waiting for machine to come up
	I0912 21:30:10.861222   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:10.861712   13842 main.go:141] libmachine: (addons-694635) Found IP for machine: 192.168.39.67
	I0912 21:30:10.861742   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has current primary IP address 192.168.39.67 and MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:10.861751   13842 main.go:141] libmachine: (addons-694635) Reserving static IP address...
	I0912 21:30:10.862048   13842 main.go:141] libmachine: (addons-694635) DBG | unable to find host DHCP lease matching {name: "addons-694635", mac: "52:54:00:6b:43:77", ip: "192.168.39.67"} in network mk-addons-694635
	I0912 21:30:10.932572   13842 main.go:141] libmachine: (addons-694635) Reserved static IP address: 192.168.39.67
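
Reaching this point took a little over 20 seconds of polling: the driver repeatedly asks libvirt for a DHCP lease matching the domain's MAC and, as the retry.go lines above show, backs off with growing, jittered delays until the lease appears. A compact sketch of that pattern, with a stand-in lookup function in place of the real lease query:

// retry_sketch.go - poll with growing, jittered delays until a result appears.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP stands in for querying libvirt's DHCP leases for the domain's MAC.
func lookupIP(attempt int) (string, error) {
	if attempt < 5 { // pretend the lease shows up on the 5th try
		return "", errors.New("no lease yet")
	}
	return "192.168.39.67", nil
}

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	delay := 250 * time.Millisecond
	for attempt := 1; ; attempt++ {
		ip, err := lookupIP(attempt)
		if err == nil {
			fmt.Println("found IP:", ip)
			return
		}
		if time.Now().After(deadline) {
			panic("timed out waiting for machine to come up")
		}
		// Add jitter on top of the base delay, then grow the base for next time,
		// which produces the roughly exponential intervals seen in the log.
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
		delay *= 2
	}
}
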
	I0912 21:30:10.932602   13842 main.go:141] libmachine: (addons-694635) Waiting for SSH to be available...
	I0912 21:30:10.932612   13842 main.go:141] libmachine: (addons-694635) DBG | Getting to WaitForSSH function...
	I0912 21:30:10.935290   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:10.935838   13842 main.go:141] libmachine: (addons-694635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:43:77", ip: ""} in network mk-addons-694635: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:01 +0000 UTC Type:0 Mac:52:54:00:6b:43:77 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:minikube Clientid:01:52:54:00:6b:43:77}
	I0912 21:30:10.935873   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined IP address 192.168.39.67 and MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:10.935964   13842 main.go:141] libmachine: (addons-694635) DBG | Using SSH client type: external
	I0912 21:30:10.935991   13842 main.go:141] libmachine: (addons-694635) DBG | Using SSH private key: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/addons-694635/id_rsa (-rw-------)
	I0912 21:30:10.936035   13842 main.go:141] libmachine: (addons-694635) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.67 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19616-5891/.minikube/machines/addons-694635/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0912 21:30:10.936049   13842 main.go:141] libmachine: (addons-694635) DBG | About to run SSH command:
	I0912 21:30:10.936084   13842 main.go:141] libmachine: (addons-694635) DBG | exit 0
	I0912 21:30:11.069676   13842 main.go:141] libmachine: (addons-694635) DBG | SSH cmd err, output: <nil>: 
	I0912 21:30:11.070005   13842 main.go:141] libmachine: (addons-694635) KVM machine creation complete!
	I0912 21:30:11.070347   13842 main.go:141] libmachine: (addons-694635) Calling .GetConfigRaw
	I0912 21:30:11.070852   13842 main.go:141] libmachine: (addons-694635) Calling .DriverName
	I0912 21:30:11.071054   13842 main.go:141] libmachine: (addons-694635) Calling .DriverName
	I0912 21:30:11.071193   13842 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0912 21:30:11.071208   13842 main.go:141] libmachine: (addons-694635) Calling .GetState
	I0912 21:30:11.072333   13842 main.go:141] libmachine: Detecting operating system of created instance...
	I0912 21:30:11.072351   13842 main.go:141] libmachine: Waiting for SSH to be available...
	I0912 21:30:11.072359   13842 main.go:141] libmachine: Getting to WaitForSSH function...
	I0912 21:30:11.072367   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHHostname
	I0912 21:30:11.074613   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:11.074932   13842 main.go:141] libmachine: (addons-694635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:43:77", ip: ""} in network mk-addons-694635: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:01 +0000 UTC Type:0 Mac:52:54:00:6b:43:77 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-694635 Clientid:01:52:54:00:6b:43:77}
	I0912 21:30:11.074958   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined IP address 192.168.39.67 and MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:11.075073   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHPort
	I0912 21:30:11.075372   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHKeyPath
	I0912 21:30:11.075564   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHKeyPath
	I0912 21:30:11.075731   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHUsername
	I0912 21:30:11.075904   13842 main.go:141] libmachine: Using SSH client type: native
	I0912 21:30:11.076074   13842 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I0912 21:30:11.076085   13842 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0912 21:30:11.184974   13842 main.go:141] libmachine: SSH cmd err, output: <nil>: 
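
The reachability probe above simply runs `exit 0` over SSH and treats a zero exit status as "the guest is up". Here is a hedged sketch of the same probe using golang.org/x/crypto/ssh; the address and key path are placeholders, and host-key verification is skipped only for brevity (a real provisioner should verify host keys).

// ssh_probe_sketch.go - check SSH reachability by running `exit 0` on the guest.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func sshReachable(addr, user, keyPath string) error {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // demo only
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	return sess.Run("exit 0") // a zero exit status means the guest answered
}

func main() {
	if err := sshReachable("192.168.39.67:22", "docker", "/path/to/id_rsa"); err != nil {
		fmt.Println("not reachable yet:", err)
		return
	}
	fmt.Println("SSH is available")
}
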
	I0912 21:30:11.184996   13842 main.go:141] libmachine: Detecting the provisioner...
	I0912 21:30:11.185003   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHHostname
	I0912 21:30:11.187718   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:11.188031   13842 main.go:141] libmachine: (addons-694635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:43:77", ip: ""} in network mk-addons-694635: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:01 +0000 UTC Type:0 Mac:52:54:00:6b:43:77 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-694635 Clientid:01:52:54:00:6b:43:77}
	I0912 21:30:11.188060   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined IP address 192.168.39.67 and MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:11.188249   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHPort
	I0912 21:30:11.188446   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHKeyPath
	I0912 21:30:11.188574   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHKeyPath
	I0912 21:30:11.188694   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHUsername
	I0912 21:30:11.188821   13842 main.go:141] libmachine: Using SSH client type: native
	I0912 21:30:11.188967   13842 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I0912 21:30:11.188978   13842 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0912 21:30:11.297959   13842 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0912 21:30:11.298022   13842 main.go:141] libmachine: found compatible host: buildroot
	I0912 21:30:11.298032   13842 main.go:141] libmachine: Provisioning with buildroot...
	I0912 21:30:11.298042   13842 main.go:141] libmachine: (addons-694635) Calling .GetMachineName
	I0912 21:30:11.298318   13842 buildroot.go:166] provisioning hostname "addons-694635"
	I0912 21:30:11.298346   13842 main.go:141] libmachine: (addons-694635) Calling .GetMachineName
	I0912 21:30:11.298514   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHHostname
	I0912 21:30:11.301198   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:11.301546   13842 main.go:141] libmachine: (addons-694635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:43:77", ip: ""} in network mk-addons-694635: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:01 +0000 UTC Type:0 Mac:52:54:00:6b:43:77 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-694635 Clientid:01:52:54:00:6b:43:77}
	I0912 21:30:11.301584   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined IP address 192.168.39.67 and MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:11.301725   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHPort
	I0912 21:30:11.301923   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHKeyPath
	I0912 21:30:11.302081   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHKeyPath
	I0912 21:30:11.302369   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHUsername
	I0912 21:30:11.302563   13842 main.go:141] libmachine: Using SSH client type: native
	I0912 21:30:11.302737   13842 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I0912 21:30:11.302753   13842 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-694635 && echo "addons-694635" | sudo tee /etc/hostname
	I0912 21:30:11.426945   13842 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-694635
	
	I0912 21:30:11.426972   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHHostname
	I0912 21:30:11.429942   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:11.430301   13842 main.go:141] libmachine: (addons-694635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:43:77", ip: ""} in network mk-addons-694635: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:01 +0000 UTC Type:0 Mac:52:54:00:6b:43:77 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-694635 Clientid:01:52:54:00:6b:43:77}
	I0912 21:30:11.430333   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined IP address 192.168.39.67 and MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:11.430492   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHPort
	I0912 21:30:11.430677   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHKeyPath
	I0912 21:30:11.430844   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHKeyPath
	I0912 21:30:11.430998   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHUsername
	I0912 21:30:11.431169   13842 main.go:141] libmachine: Using SSH client type: native
	I0912 21:30:11.431330   13842 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I0912 21:30:11.431345   13842 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-694635' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-694635/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-694635' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0912 21:30:11.549812   13842 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0912 21:30:11.549842   13842 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19616-5891/.minikube CaCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19616-5891/.minikube}
	I0912 21:30:11.549859   13842 buildroot.go:174] setting up certificates
	I0912 21:30:11.549868   13842 provision.go:84] configureAuth start
	I0912 21:30:11.549876   13842 main.go:141] libmachine: (addons-694635) Calling .GetMachineName
	I0912 21:30:11.550203   13842 main.go:141] libmachine: (addons-694635) Calling .GetIP
	I0912 21:30:11.552873   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:11.553191   13842 main.go:141] libmachine: (addons-694635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:43:77", ip: ""} in network mk-addons-694635: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:01 +0000 UTC Type:0 Mac:52:54:00:6b:43:77 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-694635 Clientid:01:52:54:00:6b:43:77}
	I0912 21:30:11.553219   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined IP address 192.168.39.67 and MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:11.553451   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHHostname
	I0912 21:30:11.555633   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:11.555953   13842 main.go:141] libmachine: (addons-694635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:43:77", ip: ""} in network mk-addons-694635: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:01 +0000 UTC Type:0 Mac:52:54:00:6b:43:77 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-694635 Clientid:01:52:54:00:6b:43:77}
	I0912 21:30:11.555985   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined IP address 192.168.39.67 and MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:11.556111   13842 provision.go:143] copyHostCerts
	I0912 21:30:11.556205   13842 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem (1082 bytes)
	I0912 21:30:11.556362   13842 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem (1123 bytes)
	I0912 21:30:11.556467   13842 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem (1679 bytes)
	I0912 21:30:11.556548   13842 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem org=jenkins.addons-694635 san=[127.0.0.1 192.168.39.67 addons-694635 localhost minikube]
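
The server certificate generated above is signed with the local minikube CA key and carries both IP and DNS subject alternative names (127.0.0.1, 192.168.39.67, addons-694635, localhost, minikube), so the machine is reachable under any of those identities. A compact sketch of issuing a certificate with that SAN list via crypto/x509 follows; for brevity it self-signs rather than signing with a separate CA key.

// san_cert_sketch.go - issue a server certificate with IP and DNS SANs.
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-694635"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// Subject Alternative Names: both IPs and DNS names, as in the log line above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.67")},
		DNSNames:    []string{"addons-694635", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
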
	I0912 21:30:11.859350   13842 provision.go:177] copyRemoteCerts
	I0912 21:30:11.859407   13842 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0912 21:30:11.859439   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHHostname
	I0912 21:30:11.862041   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:11.862347   13842 main.go:141] libmachine: (addons-694635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:43:77", ip: ""} in network mk-addons-694635: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:01 +0000 UTC Type:0 Mac:52:54:00:6b:43:77 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-694635 Clientid:01:52:54:00:6b:43:77}
	I0912 21:30:11.862395   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined IP address 192.168.39.67 and MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:11.862533   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHPort
	I0912 21:30:11.862736   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHKeyPath
	I0912 21:30:11.862883   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHUsername
	I0912 21:30:11.863033   13842 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/addons-694635/id_rsa Username:docker}
	I0912 21:30:11.947343   13842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0912 21:30:11.971801   13842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0912 21:30:11.994695   13842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0912 21:30:12.016706   13842 provision.go:87] duration metric: took 466.828028ms to configureAuth
	I0912 21:30:12.016730   13842 buildroot.go:189] setting minikube options for container-runtime
	I0912 21:30:12.016881   13842 config.go:182] Loaded profile config "addons-694635": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 21:30:12.016945   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHHostname
	I0912 21:30:12.019830   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:12.020115   13842 main.go:141] libmachine: (addons-694635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:43:77", ip: ""} in network mk-addons-694635: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:01 +0000 UTC Type:0 Mac:52:54:00:6b:43:77 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-694635 Clientid:01:52:54:00:6b:43:77}
	I0912 21:30:12.020139   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined IP address 192.168.39.67 and MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:12.020268   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHPort
	I0912 21:30:12.020572   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHKeyPath
	I0912 21:30:12.020764   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHKeyPath
	I0912 21:30:12.020928   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHUsername
	I0912 21:30:12.021133   13842 main.go:141] libmachine: Using SSH client type: native
	I0912 21:30:12.021291   13842 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I0912 21:30:12.021305   13842 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0912 21:30:12.242709   13842 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0912 21:30:12.242730   13842 main.go:141] libmachine: Checking connection to Docker...
	I0912 21:30:12.242738   13842 main.go:141] libmachine: (addons-694635) Calling .GetURL
	I0912 21:30:12.243884   13842 main.go:141] libmachine: (addons-694635) DBG | Using libvirt version 6000000
	I0912 21:30:12.245945   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:12.246318   13842 main.go:141] libmachine: (addons-694635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:43:77", ip: ""} in network mk-addons-694635: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:01 +0000 UTC Type:0 Mac:52:54:00:6b:43:77 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-694635 Clientid:01:52:54:00:6b:43:77}
	I0912 21:30:12.246350   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined IP address 192.168.39.67 and MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:12.246533   13842 main.go:141] libmachine: Docker is up and running!
	I0912 21:30:12.246556   13842 main.go:141] libmachine: Reticulating splines...
	I0912 21:30:12.246564   13842 client.go:171] duration metric: took 24.684052058s to LocalClient.Create
	I0912 21:30:12.246588   13842 start.go:167] duration metric: took 24.684100435s to libmachine.API.Create "addons-694635"
	I0912 21:30:12.246601   13842 start.go:293] postStartSetup for "addons-694635" (driver="kvm2")
	I0912 21:30:12.246615   13842 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0912 21:30:12.246639   13842 main.go:141] libmachine: (addons-694635) Calling .DriverName
	I0912 21:30:12.246870   13842 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0912 21:30:12.246905   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHHostname
	I0912 21:30:12.249197   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:12.249498   13842 main.go:141] libmachine: (addons-694635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:43:77", ip: ""} in network mk-addons-694635: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:01 +0000 UTC Type:0 Mac:52:54:00:6b:43:77 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-694635 Clientid:01:52:54:00:6b:43:77}
	I0912 21:30:12.249534   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined IP address 192.168.39.67 and MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:12.249694   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHPort
	I0912 21:30:12.249879   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHKeyPath
	I0912 21:30:12.250020   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHUsername
	I0912 21:30:12.250162   13842 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/addons-694635/id_rsa Username:docker}
	I0912 21:30:12.335312   13842 ssh_runner.go:195] Run: cat /etc/os-release
	I0912 21:30:12.339024   13842 info.go:137] Remote host: Buildroot 2023.02.9
	I0912 21:30:12.339044   13842 filesync.go:126] Scanning /home/jenkins/minikube-integration/19616-5891/.minikube/addons for local assets ...
	I0912 21:30:12.339112   13842 filesync.go:126] Scanning /home/jenkins/minikube-integration/19616-5891/.minikube/files for local assets ...
	I0912 21:30:12.339135   13842 start.go:296] duration metric: took 92.526012ms for postStartSetup
	I0912 21:30:12.339176   13842 main.go:141] libmachine: (addons-694635) Calling .GetConfigRaw
	I0912 21:30:12.339703   13842 main.go:141] libmachine: (addons-694635) Calling .GetIP
	I0912 21:30:12.342217   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:12.342565   13842 main.go:141] libmachine: (addons-694635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:43:77", ip: ""} in network mk-addons-694635: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:01 +0000 UTC Type:0 Mac:52:54:00:6b:43:77 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-694635 Clientid:01:52:54:00:6b:43:77}
	I0912 21:30:12.342593   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined IP address 192.168.39.67 and MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:12.342850   13842 profile.go:143] Saving config to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/config.json ...
	I0912 21:30:12.343012   13842 start.go:128] duration metric: took 24.798163033s to createHost
	I0912 21:30:12.343032   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHHostname
	I0912 21:30:12.345464   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:12.345807   13842 main.go:141] libmachine: (addons-694635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:43:77", ip: ""} in network mk-addons-694635: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:01 +0000 UTC Type:0 Mac:52:54:00:6b:43:77 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-694635 Clientid:01:52:54:00:6b:43:77}
	I0912 21:30:12.345844   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined IP address 192.168.39.67 and MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:12.345954   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHPort
	I0912 21:30:12.346123   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHKeyPath
	I0912 21:30:12.346247   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHKeyPath
	I0912 21:30:12.346385   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHUsername
	I0912 21:30:12.346509   13842 main.go:141] libmachine: Using SSH client type: native
	I0912 21:30:12.346686   13842 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I0912 21:30:12.346697   13842 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0912 21:30:12.457929   13842 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726176612.428880125
	
	I0912 21:30:12.457953   13842 fix.go:216] guest clock: 1726176612.428880125
	I0912 21:30:12.457962   13842 fix.go:229] Guest: 2024-09-12 21:30:12.428880125 +0000 UTC Remote: 2024-09-12 21:30:12.34302243 +0000 UTC m=+24.902400367 (delta=85.857695ms)
	I0912 21:30:12.458006   13842 fix.go:200] guest clock delta is within tolerance: 85.857695ms
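
The clock check above runs `date +%s.%N` in the guest, parses the result, and accepts the machine when the host/guest delta stays inside a tolerance (85.9 ms here). A small sketch of that comparison, with a hypothetical 2-second threshold:

// clock_delta_sketch.go - parse the guest's `date +%s.%N` output and check drift.
package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		// Right-pad the fraction to nine digits so it reads as nanoseconds.
		frac := (parts[1] + "000000000")[:9]
		nsec, err = strconv.ParseInt(frac, 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1726176612.428880125") // value from the log
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	const tolerance = 2 * time.Second // hypothetical threshold
	if math.Abs(float64(delta)) <= float64(tolerance) {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	}
}
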
	I0912 21:30:12.458017   13842 start.go:83] releasing machines lock for "addons-694635", held for 24.913263111s
	I0912 21:30:12.458045   13842 main.go:141] libmachine: (addons-694635) Calling .DriverName
	I0912 21:30:12.458281   13842 main.go:141] libmachine: (addons-694635) Calling .GetIP
	I0912 21:30:12.460843   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:12.461195   13842 main.go:141] libmachine: (addons-694635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:43:77", ip: ""} in network mk-addons-694635: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:01 +0000 UTC Type:0 Mac:52:54:00:6b:43:77 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-694635 Clientid:01:52:54:00:6b:43:77}
	I0912 21:30:12.461214   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined IP address 192.168.39.67 and MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:12.461345   13842 main.go:141] libmachine: (addons-694635) Calling .DriverName
	I0912 21:30:12.461780   13842 main.go:141] libmachine: (addons-694635) Calling .DriverName
	I0912 21:30:12.461924   13842 main.go:141] libmachine: (addons-694635) Calling .DriverName
	I0912 21:30:12.462008   13842 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0912 21:30:12.462054   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHHostname
	I0912 21:30:12.462099   13842 ssh_runner.go:195] Run: cat /version.json
	I0912 21:30:12.462122   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHHostname
	I0912 21:30:12.465318   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:12.466089   13842 main.go:141] libmachine: (addons-694635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:43:77", ip: ""} in network mk-addons-694635: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:01 +0000 UTC Type:0 Mac:52:54:00:6b:43:77 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-694635 Clientid:01:52:54:00:6b:43:77}
	I0912 21:30:12.466118   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined IP address 192.168.39.67 and MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:12.466258   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:12.466291   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHPort
	I0912 21:30:12.466484   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHKeyPath
	I0912 21:30:12.466652   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHUsername
	I0912 21:30:12.466686   13842 main.go:141] libmachine: (addons-694635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:43:77", ip: ""} in network mk-addons-694635: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:01 +0000 UTC Type:0 Mac:52:54:00:6b:43:77 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-694635 Clientid:01:52:54:00:6b:43:77}
	I0912 21:30:12.466711   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined IP address 192.168.39.67 and MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:12.466774   13842 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/addons-694635/id_rsa Username:docker}
	I0912 21:30:12.466851   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHPort
	I0912 21:30:12.466973   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHKeyPath
	I0912 21:30:12.467142   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHUsername
	I0912 21:30:12.467278   13842 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/addons-694635/id_rsa Username:docker}
	I0912 21:30:12.577120   13842 ssh_runner.go:195] Run: systemctl --version
	I0912 21:30:12.582974   13842 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0912 21:30:12.745818   13842 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0912 21:30:12.751421   13842 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0912 21:30:12.751490   13842 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0912 21:30:12.767475   13842 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0912 21:30:12.767505   13842 start.go:495] detecting cgroup driver to use...
	I0912 21:30:12.767618   13842 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0912 21:30:12.783679   13842 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0912 21:30:12.797513   13842 docker.go:217] disabling cri-docker service (if available) ...
	I0912 21:30:12.797586   13842 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0912 21:30:12.810747   13842 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0912 21:30:12.824037   13842 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0912 21:30:12.933703   13842 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0912 21:30:13.069024   13842 docker.go:233] disabling docker service ...
	I0912 21:30:13.069119   13842 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0912 21:30:13.082671   13842 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0912 21:30:13.095050   13842 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0912 21:30:13.233647   13842 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0912 21:30:13.370107   13842 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0912 21:30:13.383851   13842 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0912 21:30:13.402794   13842 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0912 21:30:13.402859   13842 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 21:30:13.413117   13842 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0912 21:30:13.413207   13842 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 21:30:13.424050   13842 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 21:30:13.434819   13842 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 21:30:13.446105   13842 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0912 21:30:13.457702   13842 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 21:30:13.468902   13842 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 21:30:13.486556   13842 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
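
The sequence of sed invocations above rewrites the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf: it pins pause_image to registry.k8s.io/pause:3.10, switches cgroup_manager to cgroupfs, sets conmon_cgroup to "pod", and injects net.ipv4.ip_unprivileged_port_start=0 into default_sysctls. The sketch below applies the first two of those edits with Go regexps instead of sed, which is easier to dry-run; point it at a copy of the file when experimenting.

// crio_conf_sketch.go - rewrite pause_image and cgroup_manager in a CRI-O drop-in.
package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf" // path from the log; use a copy
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	edits := []struct{ pattern, repl string }{
		{`(?m)^.*pause_image = .*$`, `pause_image = "registry.k8s.io/pause:3.10"`},
		{`(?m)^.*cgroup_manager = .*$`, `cgroup_manager = "cgroupfs"`},
	}
	for _, e := range edits {
		data = regexp.MustCompile(e.pattern).ReplaceAll(data, []byte(e.repl))
	}
	if err := os.WriteFile(path, data, 0o644); err != nil {
		panic(err)
	}
	fmt.Println("updated", path)
}
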
	I0912 21:30:13.496994   13842 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0912 21:30:13.506290   13842 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0912 21:30:13.506366   13842 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0912 21:30:13.518440   13842 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0912 21:30:13.528117   13842 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 21:30:13.648177   13842 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0912 21:30:13.743367   13842 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0912 21:30:13.743454   13842 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0912 21:30:13.747977   13842 start.go:563] Will wait 60s for crictl version
	I0912 21:30:13.748061   13842 ssh_runner.go:195] Run: which crictl
	I0912 21:30:13.751466   13842 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0912 21:30:13.795727   13842 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0912 21:30:13.795864   13842 ssh_runner.go:195] Run: crio --version
	I0912 21:30:13.823080   13842 ssh_runner.go:195] Run: crio --version
	I0912 21:30:13.851860   13842 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0912 21:30:13.853473   13842 main.go:141] libmachine: (addons-694635) Calling .GetIP
	I0912 21:30:13.855932   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:13.856224   13842 main.go:141] libmachine: (addons-694635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:43:77", ip: ""} in network mk-addons-694635: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:01 +0000 UTC Type:0 Mac:52:54:00:6b:43:77 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-694635 Clientid:01:52:54:00:6b:43:77}
	I0912 21:30:13.856252   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined IP address 192.168.39.67 and MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:13.856515   13842 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0912 21:30:13.860421   13842 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
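
	[editor's illustration] Here (for host.minikube.internal) and again later for control-plane.minikube.internal, the hosts entry is refreshed with a grep -v / append / cp pipeline so repeated starts never accumulate duplicate lines. A hedged Go sketch of that idempotent rewrite; minikube itself runs the bash one-liner shown in the log:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// upsertHost rewrites hostsPath so that exactly one line maps name to ip,
	// mirroring the `grep -v ... ; echo ...` pipeline in the log above.
	func upsertHost(hostsPath, ip, name string) error {
		data, err := os.ReadFile(hostsPath)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			// Drop any previous mapping for this name (the grep -v step).
			if strings.HasSuffix(strings.TrimRight(line, " \t"), "\t"+name) {
				continue
			}
			kept = append(kept, line)
		}
		kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
		return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
	}

	func main() {
		if err := upsertHost("/etc/hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
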
	I0912 21:30:13.872141   13842 kubeadm.go:883] updating cluster {Name:addons-694635 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
1 ClusterName:addons-694635 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountTy
pe:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0912 21:30:13.872251   13842 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0912 21:30:13.872300   13842 ssh_runner.go:195] Run: sudo crictl images --output json
	I0912 21:30:13.904455   13842 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0912 21:30:13.904513   13842 ssh_runner.go:195] Run: which lz4
	I0912 21:30:13.908020   13842 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0912 21:30:13.912184   13842 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0912 21:30:13.912211   13842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0912 21:30:15.114051   13842 crio.go:462] duration metric: took 1.206056393s to copy over tarball
	I0912 21:30:15.114132   13842 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0912 21:30:17.173858   13842 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.059695045s)
	I0912 21:30:17.173886   13842 crio.go:469] duration metric: took 2.059804143s to extract the tarball
	I0912 21:30:17.173896   13842 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0912 21:30:17.209405   13842 ssh_runner.go:195] Run: sudo crictl images --output json
	I0912 21:30:17.248658   13842 crio.go:514] all images are preloaded for cri-o runtime.
	I0912 21:30:17.248678   13842 cache_images.go:84] Images are preloaded, skipping loading
	I0912 21:30:17.248685   13842 kubeadm.go:934] updating node { 192.168.39.67 8443 v1.31.1 crio true true} ...
	I0912 21:30:17.248808   13842 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-694635 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.67
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-694635 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0912 21:30:17.248877   13842 ssh_runner.go:195] Run: crio config
	I0912 21:30:17.290568   13842 cni.go:84] Creating CNI manager for ""
	I0912 21:30:17.290590   13842 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0912 21:30:17.290601   13842 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0912 21:30:17.290621   13842 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.67 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-694635 NodeName:addons-694635 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.67"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.67 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kube
rnetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0912 21:30:17.290786   13842 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.67
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-694635"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.67
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.67"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0912 21:30:17.290849   13842 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0912 21:30:17.300055   13842 binaries.go:44] Found k8s binaries, skipping transfer
	I0912 21:30:17.300152   13842 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0912 21:30:17.308986   13842 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0912 21:30:17.325445   13842 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0912 21:30:17.340762   13842 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
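
	[editor's illustration] The kubeadm config dumped above is written to /var/tmp/minikube/kubeadm.yaml.new as a multi-document YAML file; its KubeletConfiguration section has to agree with the CRI-O settings applied earlier (cgroupfs driver, unix:///var/run/crio/crio.sock endpoint). A small Go sketch, under those assumptions, of how one might cross-check the generated file when debugging a mismatch (uses gopkg.in/yaml.v3; not part of minikube):

	package main

	import (
		"bytes"
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	// kubeletConfig captures only the fields worth cross-checking against the
	// CRI-O configuration from earlier in the log.
	type kubeletConfig struct {
		Kind                     string `yaml:"kind"`
		CgroupDriver             string `yaml:"cgroupDriver"`
		ContainerRuntimeEndpoint string `yaml:"containerRuntimeEndpoint"`
	}

	func main() {
		data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml") // path taken from the log
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		dec := yaml.NewDecoder(bytes.NewReader(data))
		for {
			var doc kubeletConfig
			if err := dec.Decode(&doc); err == io.EOF {
				break
			} else if err != nil {
				fmt.Fprintln(os.Stderr, err)
				os.Exit(1)
			}
			if doc.Kind == "KubeletConfiguration" {
				fmt.Printf("cgroupDriver=%s endpoint=%s\n", doc.CgroupDriver, doc.ContainerRuntimeEndpoint)
			}
		}
	}
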
	I0912 21:30:17.356821   13842 ssh_runner.go:195] Run: grep 192.168.39.67	control-plane.minikube.internal$ /etc/hosts
	I0912 21:30:17.360484   13842 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.67	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0912 21:30:17.371412   13842 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 21:30:17.492721   13842 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0912 21:30:17.509813   13842 certs.go:68] Setting up /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635 for IP: 192.168.39.67
	I0912 21:30:17.509838   13842 certs.go:194] generating shared ca certs ...
	I0912 21:30:17.509857   13842 certs.go:226] acquiring lock for ca certs: {Name:mk5e19f2c2757edc3fcb6b43a39efee0885b349c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:30:17.510001   13842 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.key
	I0912 21:30:17.588276   13842 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt ...
	I0912 21:30:17.588302   13842 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt: {Name:mk816935852d33e60449d1c6a4d94ec7ab82ac30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:30:17.588455   13842 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19616-5891/.minikube/ca.key ...
	I0912 21:30:17.588466   13842 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/.minikube/ca.key: {Name:mk9dc9de662fbb5903c290d7926fa7232953ae33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:30:17.588536   13842 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.key
	I0912 21:30:17.693721   13842 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.crt ...
	I0912 21:30:17.693751   13842 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.crt: {Name:mk3263e222fdf8339a04083239eee50b749554b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:30:17.693895   13842 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.key ...
	I0912 21:30:17.693905   13842 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.key: {Name:mk05f7726618d659b90a4327bb74fa26385a63bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:30:17.693978   13842 certs.go:256] generating profile certs ...
	I0912 21:30:17.694024   13842 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/client.key
	I0912 21:30:17.694037   13842 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/client.crt with IP's: []
	I0912 21:30:18.018134   13842 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/client.crt ...
	I0912 21:30:18.018169   13842 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/client.crt: {Name:mk10ce384e125f2b7ec307089833f9de35a73420 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:30:18.018339   13842 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/client.key ...
	I0912 21:30:18.018350   13842 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/client.key: {Name:mk451874420166276937e43f0b93cd8fbad875f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:30:18.018420   13842 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/apiserver.key.0d5d0e54
	I0912 21:30:18.018438   13842 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/apiserver.crt.0d5d0e54 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.67]
	I0912 21:30:18.261062   13842 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/apiserver.crt.0d5d0e54 ...
	I0912 21:30:18.261090   13842 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/apiserver.crt.0d5d0e54: {Name:mkd62b1b67056d42a6c142ee6c71845182d8908d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:30:18.261238   13842 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/apiserver.key.0d5d0e54 ...
	I0912 21:30:18.261252   13842 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/apiserver.key.0d5d0e54: {Name:mk7c82ddc89e4a1cf8c648222b96704d6a1d1dba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:30:18.261330   13842 certs.go:381] copying /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/apiserver.crt.0d5d0e54 -> /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/apiserver.crt
	I0912 21:30:18.261402   13842 certs.go:385] copying /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/apiserver.key.0d5d0e54 -> /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/apiserver.key
	I0912 21:30:18.261446   13842 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/proxy-client.key
	I0912 21:30:18.261463   13842 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/proxy-client.crt with IP's: []
	I0912 21:30:18.451474   13842 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/proxy-client.crt ...
	I0912 21:30:18.451506   13842 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/proxy-client.crt: {Name:mk0f640d1553a36669ab6e6b7b695492f179b963 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:30:18.451692   13842 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/proxy-client.key ...
	I0912 21:30:18.451707   13842 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/proxy-client.key: {Name:mk18108f1bab56e6e4bd321dfe7a25d4858d7cc0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:30:18.451898   13842 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem (1679 bytes)
	I0912 21:30:18.451934   13842 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem (1082 bytes)
	I0912 21:30:18.451961   13842 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem (1123 bytes)
	I0912 21:30:18.451983   13842 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem (1679 bytes)
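
	[editor's illustration] The certs.go steps above build a local CA, then sign per-profile certificates; the apiserver cert carries the service IP, loopback, and node IP SANs listed in the log (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.39.67). A compact crypto/x509 sketch of that CA-plus-leaf flow; key size and one-year validity are assumptions, and the real helper in minikube's crypto.go differs in detail:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"math/big"
		"net"
		"time"
	)

	// newCAAndServerCert creates a self-signed CA and a server certificate whose
	// IP SANs mirror the apiserver cert in the log. PEM encoding and on-disk
	// locking are elided.
	func newCAAndServerCert() (caDER, leafDER []byte, err error) {
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		ca := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(365 * 24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
			BasicConstraintsValid: true,
		}
		caDER, err = x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey)
		if err != nil {
			return nil, nil, err
		}
		leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		leaf := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(365 * 24 * time.Hour),
			// SANs copied from the apiserver cert generated in the log above.
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"), net.ParseIP("192.168.39.67"),
			},
			ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		}
		leafDER, err = x509.CreateCertificate(rand.Reader, leaf, ca, &leafKey.PublicKey, caKey)
		return caDER, leafDER, err
	}

	func main() { _, _, _ = newCAAndServerCert() }
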
	I0912 21:30:18.452546   13842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0912 21:30:18.477574   13842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0912 21:30:18.499725   13842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0912 21:30:18.521000   13842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0912 21:30:18.542359   13842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0912 21:30:18.563704   13842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0912 21:30:18.585274   13842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0912 21:30:18.606928   13842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0912 21:30:18.629281   13842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0912 21:30:18.650974   13842 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0912 21:30:18.666875   13842 ssh_runner.go:195] Run: openssl version
	I0912 21:30:18.672260   13842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0912 21:30:18.682723   13842 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0912 21:30:18.686978   13842 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 12 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0912 21:30:18.687042   13842 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0912 21:30:18.692565   13842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0912 21:30:18.702818   13842 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0912 21:30:18.706358   13842 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0912 21:30:18.706403   13842 kubeadm.go:392] StartCluster: {Name:addons-694635 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 C
lusterName:addons-694635 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 21:30:18.706469   13842 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0912 21:30:18.706505   13842 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0912 21:30:18.740797   13842 cri.go:89] found id: ""
	I0912 21:30:18.740875   13842 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0912 21:30:18.750323   13842 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0912 21:30:18.760198   13842 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0912 21:30:18.771699   13842 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0912 21:30:18.771722   13842 kubeadm.go:157] found existing configuration files:
	
	I0912 21:30:18.771768   13842 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0912 21:30:18.780639   13842 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0912 21:30:18.780710   13842 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0912 21:30:18.790136   13842 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0912 21:30:18.798881   13842 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0912 21:30:18.798933   13842 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0912 21:30:18.807668   13842 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0912 21:30:18.815937   13842 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0912 21:30:18.815991   13842 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0912 21:30:18.824796   13842 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0912 21:30:18.833290   13842 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0912 21:30:18.833349   13842 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0912 21:30:18.842109   13842 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0912 21:30:18.894082   13842 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0912 21:30:18.894163   13842 kubeadm.go:310] [preflight] Running pre-flight checks
	I0912 21:30:18.987148   13842 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0912 21:30:18.987303   13842 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0912 21:30:18.987452   13842 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0912 21:30:18.997399   13842 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0912 21:30:19.070004   13842 out.go:235]   - Generating certificates and keys ...
	I0912 21:30:19.070107   13842 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0912 21:30:19.070229   13842 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0912 21:30:19.148000   13842 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0912 21:30:19.614691   13842 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0912 21:30:19.901914   13842 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0912 21:30:19.979789   13842 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0912 21:30:20.166978   13842 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0912 21:30:20.167130   13842 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-694635 localhost] and IPs [192.168.39.67 127.0.0.1 ::1]
	I0912 21:30:20.264957   13842 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0912 21:30:20.265097   13842 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-694635 localhost] and IPs [192.168.39.67 127.0.0.1 ::1]
	I0912 21:30:20.466176   13842 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0912 21:30:20.696253   13842 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0912 21:30:20.807177   13842 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0912 21:30:20.807284   13842 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0912 21:30:20.974731   13842 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0912 21:30:21.105184   13842 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0912 21:30:21.174341   13842 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0912 21:30:21.244405   13842 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0912 21:30:21.769255   13842 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0912 21:30:21.769831   13842 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0912 21:30:21.772293   13842 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0912 21:30:21.774278   13842 out.go:235]   - Booting up control plane ...
	I0912 21:30:21.774387   13842 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0912 21:30:21.774523   13842 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0912 21:30:21.774628   13842 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0912 21:30:21.791849   13842 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0912 21:30:21.798525   13842 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0912 21:30:21.798599   13842 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0912 21:30:21.939016   13842 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0912 21:30:21.939132   13842 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0912 21:30:22.439761   13842 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 500.995176ms
	I0912 21:30:22.439860   13842 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0912 21:30:27.939433   13842 kubeadm.go:310] [api-check] The API server is healthy after 5.502232123s
	I0912 21:30:27.957923   13842 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0912 21:30:27.974582   13842 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0912 21:30:28.004043   13842 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0912 21:30:28.004250   13842 kubeadm.go:310] [mark-control-plane] Marking the node addons-694635 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0912 21:30:28.022686   13842 kubeadm.go:310] [bootstrap-token] Using token: v7rbq6.ajeibt3p6xzx9rx5
	I0912 21:30:28.024134   13842 out.go:235]   - Configuring RBAC rules ...
	I0912 21:30:28.024266   13842 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0912 21:30:28.029565   13842 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0912 21:30:28.040289   13842 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0912 21:30:28.043786   13842 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0912 21:30:28.047040   13842 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0912 21:30:28.051390   13842 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0912 21:30:28.352753   13842 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0912 21:30:28.795025   13842 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0912 21:30:29.351438   13842 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0912 21:30:29.352611   13842 kubeadm.go:310] 
	I0912 21:30:29.352681   13842 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0912 21:30:29.352688   13842 kubeadm.go:310] 
	I0912 21:30:29.352768   13842 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0912 21:30:29.352777   13842 kubeadm.go:310] 
	I0912 21:30:29.352807   13842 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0912 21:30:29.352905   13842 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0912 21:30:29.352995   13842 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0912 21:30:29.353009   13842 kubeadm.go:310] 
	I0912 21:30:29.353111   13842 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0912 21:30:29.353127   13842 kubeadm.go:310] 
	I0912 21:30:29.353199   13842 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0912 21:30:29.353208   13842 kubeadm.go:310] 
	I0912 21:30:29.353287   13842 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0912 21:30:29.353390   13842 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0912 21:30:29.353500   13842 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0912 21:30:29.353511   13842 kubeadm.go:310] 
	I0912 21:30:29.353631   13842 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0912 21:30:29.353759   13842 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0912 21:30:29.353776   13842 kubeadm.go:310] 
	I0912 21:30:29.353851   13842 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token v7rbq6.ajeibt3p6xzx9rx5 \
	I0912 21:30:29.353941   13842 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e9285e6e7599a58febe9d174fa57ffa69a9b4bf818d01b703e61fc8c784ff29f \
	I0912 21:30:29.353960   13842 kubeadm.go:310] 	--control-plane 
	I0912 21:30:29.353966   13842 kubeadm.go:310] 
	I0912 21:30:29.354039   13842 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0912 21:30:29.354045   13842 kubeadm.go:310] 
	I0912 21:30:29.354116   13842 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token v7rbq6.ajeibt3p6xzx9rx5 \
	I0912 21:30:29.354200   13842 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e9285e6e7599a58febe9d174fa57ffa69a9b4bf818d01b703e61fc8c784ff29f 
	I0912 21:30:29.355833   13842 kubeadm.go:310] W0912 21:30:18.865667     814 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0912 21:30:29.356162   13842 kubeadm.go:310] W0912 21:30:18.867599     814 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0912 21:30:29.356254   13842 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0912 21:30:29.356325   13842 cni.go:84] Creating CNI manager for ""
	I0912 21:30:29.356345   13842 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0912 21:30:29.358563   13842 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0912 21:30:29.360118   13842 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0912 21:30:29.371250   13842 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
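
	[editor's illustration] The bridge CNI step above writes a 496-byte conflist to /etc/cni/net.d/1-k8s.conflist. A minimal bridge + host-local conflist of the general shape that plugin chain expects, embedded in Go for consistency with the other sketches; the field values are illustrative assumptions (only the 10.244.0.0/16 pod subnet is taken from the log), not the exact bytes minikube wrote:

	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	// A minimal bridge + host-local conflist of the kind installed above.
	const bridgeConflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": {
	        "type": "host-local",
	        "subnet": "10.244.0.0/16"
	      }
	    },
	    {
	      "type": "portmap",
	      "capabilities": { "portMappings": true }
	    }
	  ]
	}`

	func main() {
		// Sanity-check that the conflist is well-formed JSON before installing it.
		var v map[string]any
		if err := json.Unmarshal([]byte(bridgeConflist), &v); err != nil {
			fmt.Fprintln(os.Stderr, "invalid conflist:", err)
			os.Exit(1)
		}
		_ = os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644)
	}
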
	I0912 21:30:29.390372   13842 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0912 21:30:29.390461   13842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-694635 minikube.k8s.io/updated_at=2024_09_12T21_30_29_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f6bc674a17941874d4e5b792b09c1791d30622b8 minikube.k8s.io/name=addons-694635 minikube.k8s.io/primary=true
	I0912 21:30:29.390464   13842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:30:29.538333   13842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:30:29.538368   13842 ops.go:34] apiserver oom_adj: -16
	I0912 21:30:30.038483   13842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:30:30.539293   13842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:30:31.039133   13842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:30:31.538947   13842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:30:32.038423   13842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:30:32.539286   13842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:30:33.039390   13842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:30:33.127054   13842 kubeadm.go:1113] duration metric: took 3.736657835s to wait for elevateKubeSystemPrivileges
	I0912 21:30:33.127093   13842 kubeadm.go:394] duration metric: took 14.420693245s to StartCluster
	I0912 21:30:33.127114   13842 settings.go:142] acquiring lock: {Name:mk9c957feafb8d7ccd833ad0c106ef81ecfe5ba8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:30:33.127242   13842 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19616-5891/kubeconfig
	I0912 21:30:33.127605   13842 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/kubeconfig: {Name:mkffb46c3e9d2b8baebc7237b48bf41bccf1a52c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:30:33.127771   13842 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0912 21:30:33.127785   13842 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0912 21:30:33.127850   13842 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0912 21:30:33.127956   13842 config.go:182] Loaded profile config "addons-694635": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 21:30:33.127969   13842 addons.go:69] Setting ingress-dns=true in profile "addons-694635"
	I0912 21:30:33.127972   13842 addons.go:69] Setting cloud-spanner=true in profile "addons-694635"
	I0912 21:30:33.127991   13842 addons.go:69] Setting registry=true in profile "addons-694635"
	I0912 21:30:33.127957   13842 addons.go:69] Setting yakd=true in profile "addons-694635"
	I0912 21:30:33.128001   13842 addons.go:234] Setting addon cloud-spanner=true in "addons-694635"
	I0912 21:30:33.128012   13842 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-694635"
	I0912 21:30:33.128021   13842 addons.go:234] Setting addon registry=true in "addons-694635"
	I0912 21:30:33.128027   13842 addons.go:69] Setting metrics-server=true in profile "addons-694635"
	I0912 21:30:33.128032   13842 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-694635"
	I0912 21:30:33.128043   13842 addons.go:234] Setting addon metrics-server=true in "addons-694635"
	I0912 21:30:33.128047   13842 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-694635"
	I0912 21:30:33.128049   13842 host.go:66] Checking if "addons-694635" exists ...
	I0912 21:30:33.128060   13842 host.go:66] Checking if "addons-694635" exists ...
	I0912 21:30:33.128080   13842 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-694635"
	I0912 21:30:33.128102   13842 host.go:66] Checking if "addons-694635" exists ...
	I0912 21:30:33.128386   13842 addons.go:69] Setting volcano=true in profile "addons-694635"
	I0912 21:30:33.128420   13842 addons.go:234] Setting addon volcano=true in "addons-694635"
	I0912 21:30:33.128441   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:33.128450   13842 host.go:66] Checking if "addons-694635" exists ...
	I0912 21:30:33.128451   13842 addons.go:69] Setting inspektor-gadget=true in profile "addons-694635"
	I0912 21:30:33.128460   13842 addons.go:69] Setting volumesnapshots=true in profile "addons-694635"
	I0912 21:30:33.128476   13842 addons.go:234] Setting addon inspektor-gadget=true in "addons-694635"
	I0912 21:30:33.128484   13842 addons.go:69] Setting default-storageclass=true in profile "addons-694635"
	I0912 21:30:33.128494   13842 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-694635"
	I0912 21:30:33.128503   13842 host.go:66] Checking if "addons-694635" exists ...
	I0912 21:30:33.128515   13842 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-694635"
	I0912 21:30:33.128542   13842 addons.go:234] Setting addon volumesnapshots=true in "addons-694635"
	I0912 21:30:33.128571   13842 host.go:66] Checking if "addons-694635" exists ...
	I0912 21:30:33.128475   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:33.128659   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:33.128809   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:33.128816   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:33.128833   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:33.128021   13842 addons.go:234] Setting addon yakd=true in "addons-694635"
	I0912 21:30:33.128846   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:33.128867   13842 host.go:66] Checking if "addons-694635" exists ...
	I0912 21:30:33.128882   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:33.128911   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:33.128927   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:33.128945   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:33.128043   13842 host.go:66] Checking if "addons-694635" exists ...
	I0912 21:30:33.128516   13842 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-694635"
	I0912 21:30:33.128441   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:33.129006   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:33.128004   13842 addons.go:69] Setting storage-provisioner=true in profile "addons-694635"
	I0912 21:30:33.129193   13842 host.go:66] Checking if "addons-694635" exists ...
	I0912 21:30:33.129197   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:33.129236   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:33.127996   13842 addons.go:234] Setting addon ingress-dns=true in "addons-694635"
	I0912 21:30:33.129298   13842 host.go:66] Checking if "addons-694635" exists ...
	I0912 21:30:33.129535   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:33.129586   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:33.128481   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:33.129193   13842 addons.go:234] Setting addon storage-provisioner=true in "addons-694635"
	I0912 21:30:33.128535   13842 addons.go:69] Setting gcp-auth=true in profile "addons-694635"
	I0912 21:30:33.129722   13842 mustload.go:65] Loading cluster: addons-694635
	I0912 21:30:33.129728   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:33.127957   13842 addons.go:69] Setting ingress=true in profile "addons-694635"
	I0912 21:30:33.129751   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:33.129763   13842 addons.go:234] Setting addon ingress=true in "addons-694635"
	I0912 21:30:33.128448   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:33.129798   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:33.129304   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:33.129900   13842 config.go:182] Loaded profile config "addons-694635": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 21:30:33.129910   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:33.128543   13842 addons.go:69] Setting helm-tiller=true in profile "addons-694635"
	I0912 21:30:33.129963   13842 addons.go:234] Setting addon helm-tiller=true in "addons-694635"
	I0912 21:30:33.130031   13842 host.go:66] Checking if "addons-694635" exists ...
	I0912 21:30:33.130100   13842 host.go:66] Checking if "addons-694635" exists ...
	I0912 21:30:33.130255   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:33.130287   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:33.130407   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:33.130440   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:33.130535   13842 out.go:177] * Verifying Kubernetes components...
	I0912 21:30:33.130801   13842 host.go:66] Checking if "addons-694635" exists ...
	I0912 21:30:33.141968   13842 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 21:30:33.150069   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46051
	I0912 21:30:33.150316   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43843
	I0912 21:30:33.150409   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36203
	I0912 21:30:33.150573   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39521
	I0912 21:30:33.150789   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:33.150884   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:33.150941   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43391
	I0912 21:30:33.151478   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:33.151657   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:33.151668   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:33.151789   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:33.151800   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:33.151919   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:33.151928   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:33.151977   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:33.152027   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:33.152074   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:33.152112   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:33.152642   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:33.152664   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:33.152720   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:33.152818   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:33.152827   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:33.152948   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:33.152958   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:33.153389   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35395
	I0912 21:30:33.153693   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:33.153966   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39707
	I0912 21:30:33.157880   13842 main.go:141] libmachine: (addons-694635) Calling .GetState
	I0912 21:30:33.157948   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:33.158145   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:33.158164   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:33.158243   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:33.158260   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:33.158318   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:33.158329   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:33.158341   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:33.158598   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:33.158814   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:33.158844   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:33.158917   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:33.158980   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:33.159098   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:33.159117   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:33.159471   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:33.159522   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:33.159600   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:33.160143   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:33.160171   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:33.160628   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:33.163174   13842 addons.go:234] Setting addon default-storageclass=true in "addons-694635"
	I0912 21:30:33.163237   13842 host.go:66] Checking if "addons-694635" exists ...
	I0912 21:30:33.163679   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:33.163717   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:33.164514   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:33.164547   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:33.186987   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36685
	I0912 21:30:33.187677   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:33.188318   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:33.188338   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:33.188699   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:33.188886   13842 main.go:141] libmachine: (addons-694635) Calling .GetState
	I0912 21:30:33.189751   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43095
	I0912 21:30:33.190453   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:33.191030   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:33.191046   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:33.192477   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35159
	I0912 21:30:33.192988   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:33.193332   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42519
	I0912 21:30:33.193964   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:33.194014   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:33.194400   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:33.194427   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:33.194717   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:33.194732   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:33.194867   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:33.194878   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:33.195204   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:33.195262   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:33.195317   13842 main.go:141] libmachine: (addons-694635) Calling .DriverName
	I0912 21:30:33.195365   13842 main.go:141] libmachine: (addons-694635) Calling .GetState
	I0912 21:30:33.196144   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:33.196183   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:33.196926   13842 main.go:141] libmachine: (addons-694635) Calling .DriverName
	I0912 21:30:33.197418   13842 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0912 21:30:33.198461   13842 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0912 21:30:33.198474   13842 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0912 21:30:33.198481   13842 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0912 21:30:33.198514   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHHostname
	I0912 21:30:33.199826   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44619
	I0912 21:30:33.200469   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:33.200723   13842 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0912 21:30:33.201099   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:33.201116   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:33.201423   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:33.201605   13842 main.go:141] libmachine: (addons-694635) Calling .GetState
	I0912 21:30:33.202354   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:33.203063   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHPort
	I0912 21:30:33.203235   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHKeyPath
	I0912 21:30:33.203301   13842 main.go:141] libmachine: (addons-694635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:43:77", ip: ""} in network mk-addons-694635: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:01 +0000 UTC Type:0 Mac:52:54:00:6b:43:77 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-694635 Clientid:01:52:54:00:6b:43:77}
	I0912 21:30:33.203325   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined IP address 192.168.39.67 and MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:33.203365   13842 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0912 21:30:33.203436   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHUsername
	I0912 21:30:33.203701   13842 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/addons-694635/id_rsa Username:docker}
	I0912 21:30:33.204148   13842 host.go:66] Checking if "addons-694635" exists ...
	I0912 21:30:33.204529   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:33.204565   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:33.205663   13842 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0912 21:30:33.206838   13842 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0912 21:30:33.208115   13842 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0912 21:30:33.209260   13842 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0912 21:30:33.210410   13842 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0912 21:30:33.211388   13842 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0912 21:30:33.211406   13842 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0912 21:30:33.211431   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHHostname
	I0912 21:30:33.213932   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41325
	I0912 21:30:33.214509   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:33.215055   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:33.215079   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:33.215339   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:33.215471   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:33.215750   13842 main.go:141] libmachine: (addons-694635) Calling .GetState
	I0912 21:30:33.215812   13842 main.go:141] libmachine: (addons-694635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:43:77", ip: ""} in network mk-addons-694635: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:01 +0000 UTC Type:0 Mac:52:54:00:6b:43:77 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-694635 Clientid:01:52:54:00:6b:43:77}
	I0912 21:30:33.215831   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined IP address 192.168.39.67 and MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:33.216070   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHPort
	I0912 21:30:33.216227   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHKeyPath
	I0912 21:30:33.216391   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHUsername
	I0912 21:30:33.216522   13842 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/addons-694635/id_rsa Username:docker}
	I0912 21:30:33.218588   13842 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-694635"
	I0912 21:30:33.218632   13842 host.go:66] Checking if "addons-694635" exists ...
	I0912 21:30:33.218984   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:33.219020   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:33.219207   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34339
	I0912 21:30:33.219636   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:33.220056   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:33.220076   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:33.220402   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:33.220894   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:33.220934   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:33.221132   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45121
	I0912 21:30:33.222065   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:33.222569   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:33.222585   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:33.222956   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:33.223007   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40619
	I0912 21:30:33.223665   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:33.223702   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:33.226781   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35935
	I0912 21:30:33.227303   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:33.227791   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:33.227810   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:33.228143   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:33.228324   13842 main.go:141] libmachine: (addons-694635) Calling .GetState
	I0912 21:30:33.230191   13842 main.go:141] libmachine: (addons-694635) Calling .DriverName
	I0912 21:30:33.232445   13842 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0912 21:30:33.233487   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:33.233677   13842 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0912 21:30:33.233695   13842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0912 21:30:33.233715   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHHostname
	I0912 21:30:33.236503   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:33.236518   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:33.236794   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37261
	I0912 21:30:33.237127   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:33.237492   13842 main.go:141] libmachine: (addons-694635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:43:77", ip: ""} in network mk-addons-694635: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:01 +0000 UTC Type:0 Mac:52:54:00:6b:43:77 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-694635 Clientid:01:52:54:00:6b:43:77}
	I0912 21:30:33.237525   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined IP address 192.168.39.67 and MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:33.237561   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:33.237731   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHPort
	I0912 21:30:33.238172   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:33.238205   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:33.238515   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHKeyPath
	I0912 21:30:33.238691   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHUsername
	I0912 21:30:33.238755   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38401
	I0912 21:30:33.239058   13842 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/addons-694635/id_rsa Username:docker}
	I0912 21:30:33.239118   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:33.239258   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33815
	I0912 21:30:33.239484   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43643
	I0912 21:30:33.239603   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:33.239735   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:33.239754   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:33.239756   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:33.240141   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:33.240160   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:33.240167   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:33.240222   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:33.240292   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:33.240315   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:33.240706   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:33.240791   13842 main.go:141] libmachine: (addons-694635) Calling .GetState
	I0912 21:30:33.240952   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:33.240954   13842 main.go:141] libmachine: (addons-694635) Calling .GetState
	I0912 21:30:33.240967   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:33.241651   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:33.241936   13842 main.go:141] libmachine: (addons-694635) Calling .GetState
	I0912 21:30:33.242439   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38441
	I0912 21:30:33.242626   13842 main.go:141] libmachine: (addons-694635) Calling .DriverName
	I0912 21:30:33.243111   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:33.243235   13842 main.go:141] libmachine: (addons-694635) Calling .GetState
	I0912 21:30:33.244232   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:33.244741   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39717
	I0912 21:30:33.244824   13842 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0912 21:30:33.245133   13842 main.go:141] libmachine: (addons-694635) Calling .DriverName
	I0912 21:30:33.245135   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:33.245276   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:33.245293   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:33.245549   13842 main.go:141] libmachine: (addons-694635) Calling .DriverName
	I0912 21:30:33.245632   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:33.246062   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:33.246078   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:33.246574   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:33.246602   13842 main.go:141] libmachine: (addons-694635) Calling .DriverName
	I0912 21:30:33.247038   13842 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0912 21:30:33.247107   13842 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0912 21:30:33.247118   13842 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0912 21:30:33.247136   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHHostname
	I0912 21:30:33.247367   13842 main.go:141] libmachine: (addons-694635) Calling .GetState
	I0912 21:30:33.247571   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33669
	I0912 21:30:33.248105   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:33.248613   13842 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0912 21:30:33.248629   13842 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0912 21:30:33.248646   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHHostname
	I0912 21:30:33.248652   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:33.248667   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:33.249005   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:33.249581   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:33.249722   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41895
	I0912 21:30:33.249729   13842 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0912 21:30:33.249843   13842 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0912 21:30:33.249905   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:33.249947   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:33.249984   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:33.250358   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:33.250824   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:33.250839   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:33.250973   13842 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0912 21:30:33.250992   13842 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0912 21:30:33.251013   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHHostname
	I0912 21:30:33.251167   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:33.251211   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:33.251334   13842 main.go:141] libmachine: (addons-694635) Calling .DriverName
	I0912 21:30:33.251681   13842 main.go:141] libmachine: (addons-694635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:43:77", ip: ""} in network mk-addons-694635: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:01 +0000 UTC Type:0 Mac:52:54:00:6b:43:77 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-694635 Clientid:01:52:54:00:6b:43:77}
	I0912 21:30:33.251704   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined IP address 192.168.39.67 and MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:33.251870   13842 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0912 21:30:33.251886   13842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0912 21:30:33.251904   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHHostname
	I0912 21:30:33.252556   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHPort
	I0912 21:30:33.252912   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHKeyPath
	I0912 21:30:33.253090   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHUsername
	I0912 21:30:33.253335   13842 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/addons-694635/id_rsa Username:docker}
	I0912 21:30:33.253982   13842 main.go:141] libmachine: (addons-694635) Calling .DriverName
	I0912 21:30:33.254189   13842 main.go:141] libmachine: Making call to close driver server
	I0912 21:30:33.254334   13842 main.go:141] libmachine: (addons-694635) Calling .Close
	I0912 21:30:33.254706   13842 main.go:141] libmachine: (addons-694635) DBG | Closing plugin on server side
	I0912 21:30:33.254745   13842 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:30:33.254755   13842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:30:33.254764   13842 main.go:141] libmachine: Making call to close driver server
	I0912 21:30:33.254772   13842 main.go:141] libmachine: (addons-694635) Calling .Close
	I0912 21:30:33.255212   13842 main.go:141] libmachine: (addons-694635) DBG | Closing plugin on server side
	I0912 21:30:33.255240   13842 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:30:33.255249   13842 main.go:141] libmachine: Making call to close connection to plugin binary
	W0912 21:30:33.255329   13842 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0912 21:30:33.256835   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:33.257248   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:33.257354   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:33.257768   13842 main.go:141] libmachine: (addons-694635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:43:77", ip: ""} in network mk-addons-694635: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:01 +0000 UTC Type:0 Mac:52:54:00:6b:43:77 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-694635 Clientid:01:52:54:00:6b:43:77}
	I0912 21:30:33.257790   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined IP address 192.168.39.67 and MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:33.257818   13842 main.go:141] libmachine: (addons-694635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:43:77", ip: ""} in network mk-addons-694635: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:01 +0000 UTC Type:0 Mac:52:54:00:6b:43:77 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-694635 Clientid:01:52:54:00:6b:43:77}
	I0912 21:30:33.257834   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined IP address 192.168.39.67 and MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:33.257862   13842 main.go:141] libmachine: (addons-694635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:43:77", ip: ""} in network mk-addons-694635: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:01 +0000 UTC Type:0 Mac:52:54:00:6b:43:77 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-694635 Clientid:01:52:54:00:6b:43:77}
	I0912 21:30:33.257877   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined IP address 192.168.39.67 and MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:33.258042   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHPort
	I0912 21:30:33.258081   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHPort
	I0912 21:30:33.258312   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHKeyPath
	I0912 21:30:33.258360   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHPort
	I0912 21:30:33.258364   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHKeyPath
	I0912 21:30:33.258463   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHUsername
	I0912 21:30:33.258613   13842 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/addons-694635/id_rsa Username:docker}
	I0912 21:30:33.258645   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHUsername
	I0912 21:30:33.258693   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHKeyPath
	I0912 21:30:33.258799   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHUsername
	I0912 21:30:33.258878   13842 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/addons-694635/id_rsa Username:docker}
	I0912 21:30:33.259401   13842 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/addons-694635/id_rsa Username:docker}
	I0912 21:30:33.261562   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46749
	I0912 21:30:33.261628   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41119
	I0912 21:30:33.261740   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43953
	I0912 21:30:33.262014   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:33.262042   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:33.262120   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:33.262468   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:33.262486   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:33.262561   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:33.262586   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:33.262968   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:33.262988   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:33.262990   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:33.263127   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:33.263521   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:33.263555   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:33.263697   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:33.263722   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:33.263750   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:33.263947   13842 main.go:141] libmachine: (addons-694635) Calling .GetState
	I0912 21:30:33.268234   13842 main.go:141] libmachine: (addons-694635) Calling .DriverName
	I0912 21:30:33.268300   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35633
	I0912 21:30:33.268599   13842 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0912 21:30:33.268615   13842 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0912 21:30:33.268635   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHHostname
	I0912 21:30:33.268729   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39313
	I0912 21:30:33.268912   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:33.269386   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:33.269408   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:33.270003   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:33.270070   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:33.270285   13842 main.go:141] libmachine: (addons-694635) Calling .GetState
	I0912 21:30:33.270670   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:33.270690   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:33.271058   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:33.271281   13842 main.go:141] libmachine: (addons-694635) Calling .GetState
	I0912 21:30:33.272388   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:33.272895   13842 main.go:141] libmachine: (addons-694635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:43:77", ip: ""} in network mk-addons-694635: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:01 +0000 UTC Type:0 Mac:52:54:00:6b:43:77 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-694635 Clientid:01:52:54:00:6b:43:77}
	I0912 21:30:33.272921   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined IP address 192.168.39.67 and MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:33.273067   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHPort
	I0912 21:30:33.273237   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHKeyPath
	I0912 21:30:33.273355   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHUsername
	I0912 21:30:33.273458   13842 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/addons-694635/id_rsa Username:docker}
	I0912 21:30:33.273740   13842 main.go:141] libmachine: (addons-694635) Calling .DriverName
	I0912 21:30:33.274080   13842 main.go:141] libmachine: (addons-694635) Calling .DriverName
	I0912 21:30:33.275548   13842 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0912 21:30:33.275560   13842 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0912 21:30:33.276670   13842 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0912 21:30:33.276700   13842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0912 21:30:33.276722   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHHostname
	I0912 21:30:33.276675   13842 out.go:177]   - Using image docker.io/registry:2.8.3
	I0912 21:30:33.278040   13842 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0912 21:30:33.278062   13842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0912 21:30:33.278081   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHHostname
	I0912 21:30:33.281119   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:33.281589   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHPort
	I0912 21:30:33.281860   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHKeyPath
	I0912 21:30:33.282081   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHUsername
	I0912 21:30:33.282129   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHPort
	I0912 21:30:33.282266   13842 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/addons-694635/id_rsa Username:docker}
	I0912 21:30:33.282598   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHKeyPath
	I0912 21:30:33.281510   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:33.282680   13842 main.go:141] libmachine: (addons-694635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:43:77", ip: ""} in network mk-addons-694635: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:01 +0000 UTC Type:0 Mac:52:54:00:6b:43:77 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-694635 Clientid:01:52:54:00:6b:43:77}
	I0912 21:30:33.282710   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined IP address 192.168.39.67 and MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:33.282742   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHUsername
	I0912 21:30:33.282767   13842 main.go:141] libmachine: (addons-694635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:43:77", ip: ""} in network mk-addons-694635: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:01 +0000 UTC Type:0 Mac:52:54:00:6b:43:77 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-694635 Clientid:01:52:54:00:6b:43:77}
	I0912 21:30:33.282784   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined IP address 192.168.39.67 and MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:33.282963   13842 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/addons-694635/id_rsa Username:docker}
	I0912 21:30:33.284659   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40369
	I0912 21:30:33.285034   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:33.285737   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:33.285767   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:33.286142   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:33.286339   13842 main.go:141] libmachine: (addons-694635) Calling .GetState
	I0912 21:30:33.287706   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37111
	I0912 21:30:33.287900   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43213
	I0912 21:30:33.288046   13842 main.go:141] libmachine: (addons-694635) Calling .DriverName
	I0912 21:30:33.288069   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:33.288168   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:33.288576   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:33.288598   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:33.288743   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:33.288759   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:33.288856   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:33.289114   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:33.289153   13842 main.go:141] libmachine: (addons-694635) Calling .GetState
	I0912 21:30:33.289708   13842 main.go:141] libmachine: (addons-694635) Calling .GetState
	I0912 21:30:33.290010   13842 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0912 21:30:33.290749   13842 main.go:141] libmachine: (addons-694635) Calling .DriverName
	I0912 21:30:33.291355   13842 main.go:141] libmachine: (addons-694635) Calling .DriverName
	I0912 21:30:33.292706   13842 out.go:177]   - Using image docker.io/busybox:stable
	I0912 21:30:33.292711   13842 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0912 21:30:33.292715   13842 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0912 21:30:33.293836   13842 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0912 21:30:33.293847   13842 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0912 21:30:33.293894   13842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0912 21:30:33.293913   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHHostname
	I0912 21:30:33.293847   13842 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0912 21:30:33.293963   13842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0912 21:30:33.293979   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHHostname
	I0912 21:30:33.296001   13842 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0912 21:30:33.297175   13842 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0912 21:30:33.297189   13842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0912 21:30:33.297204   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHHostname
	I0912 21:30:33.297379   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:33.297549   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:33.298027   13842 main.go:141] libmachine: (addons-694635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:43:77", ip: ""} in network mk-addons-694635: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:01 +0000 UTC Type:0 Mac:52:54:00:6b:43:77 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-694635 Clientid:01:52:54:00:6b:43:77}
	I0912 21:30:33.298042   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined IP address 192.168.39.67 and MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:33.298070   13842 main.go:141] libmachine: (addons-694635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:43:77", ip: ""} in network mk-addons-694635: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:01 +0000 UTC Type:0 Mac:52:54:00:6b:43:77 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-694635 Clientid:01:52:54:00:6b:43:77}
	I0912 21:30:33.298082   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined IP address 192.168.39.67 and MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:33.298305   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHPort
	I0912 21:30:33.298341   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHPort
	I0912 21:30:33.298504   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHKeyPath
	I0912 21:30:33.298574   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHKeyPath
	I0912 21:30:33.298639   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHUsername
	I0912 21:30:33.298712   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHUsername
	I0912 21:30:33.298778   13842 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/addons-694635/id_rsa Username:docker}
	I0912 21:30:33.299074   13842 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/addons-694635/id_rsa Username:docker}
	I0912 21:30:33.299967   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:33.300311   13842 main.go:141] libmachine: (addons-694635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:43:77", ip: ""} in network mk-addons-694635: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:01 +0000 UTC Type:0 Mac:52:54:00:6b:43:77 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-694635 Clientid:01:52:54:00:6b:43:77}
	I0912 21:30:33.300338   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined IP address 192.168.39.67 and MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:33.301763   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHPort
	I0912 21:30:33.301987   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHKeyPath
	I0912 21:30:33.302125   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHUsername
	I0912 21:30:33.302244   13842 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/addons-694635/id_rsa Username:docker}
	I0912 21:30:33.306121   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35811
	I0912 21:30:33.306524   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:33.306887   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:33.306904   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:33.307338   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:33.307506   13842 main.go:141] libmachine: (addons-694635) Calling .GetState
	W0912 21:30:33.308193   13842 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:56174->192.168.39.67:22: read: connection reset by peer
	I0912 21:30:33.308214   13842 retry.go:31] will retry after 340.22316ms: ssh: handshake failed: read tcp 192.168.39.1:56174->192.168.39.67:22: read: connection reset by peer
	I0912 21:30:33.309320   13842 main.go:141] libmachine: (addons-694635) Calling .DriverName
	I0912 21:30:33.311143   13842 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 21:30:33.312425   13842 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0912 21:30:33.312441   13842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0912 21:30:33.312456   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHHostname
	I0912 21:30:33.315180   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:33.315769   13842 main.go:141] libmachine: (addons-694635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:43:77", ip: ""} in network mk-addons-694635: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:01 +0000 UTC Type:0 Mac:52:54:00:6b:43:77 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-694635 Clientid:01:52:54:00:6b:43:77}
	I0912 21:30:33.315798   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined IP address 192.168.39.67 and MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:33.315962   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHPort
	I0912 21:30:33.316179   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHKeyPath
	I0912 21:30:33.316377   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHUsername
	I0912 21:30:33.316513   13842 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/addons-694635/id_rsa Username:docker}
	I0912 21:30:33.639453   13842 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0912 21:30:33.639482   13842 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0912 21:30:33.657578   13842 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0912 21:30:33.657597   13842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0912 21:30:33.680952   13842 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0912 21:30:33.680978   13842 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0912 21:30:33.733177   13842 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0912 21:30:33.733181   13842 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0912 21:30:33.743215   13842 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0912 21:30:33.743241   13842 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0912 21:30:33.762069   13842 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0912 21:30:33.762098   13842 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0912 21:30:33.782751   13842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0912 21:30:33.785088   13842 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0912 21:30:33.785111   13842 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0912 21:30:33.792263   13842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0912 21:30:33.836509   13842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0912 21:30:33.868944   13842 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0912 21:30:33.868973   13842 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0912 21:30:33.904688   13842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0912 21:30:33.911394   13842 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0912 21:30:33.911420   13842 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0912 21:30:33.913031   13842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0912 21:30:33.922465   13842 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0912 21:30:33.922491   13842 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0912 21:30:33.927414   13842 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0912 21:30:33.927438   13842 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0912 21:30:33.941076   13842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0912 21:30:33.942361   13842 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0912 21:30:33.942383   13842 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0912 21:30:33.962765   13842 node_ready.go:35] waiting up to 6m0s for node "addons-694635" to be "Ready" ...
	I0912 21:30:33.965689   13842 node_ready.go:49] node "addons-694635" has status "Ready":"True"
	I0912 21:30:33.965712   13842 node_ready.go:38] duration metric: took 2.919714ms for node "addons-694635" to be "Ready" ...
	I0912 21:30:33.965723   13842 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 21:30:33.971996   13842 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-pcjz8" in "kube-system" namespace to be "Ready" ...
	I0912 21:30:33.978042   13842 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0912 21:30:33.978064   13842 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0912 21:30:34.048949   13842 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0912 21:30:34.048968   13842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0912 21:30:34.093153   13842 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0912 21:30:34.093183   13842 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0912 21:30:34.128832   13842 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0912 21:30:34.128859   13842 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0912 21:30:34.163298   13842 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0912 21:30:34.163328   13842 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0912 21:30:34.173254   13842 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0912 21:30:34.173281   13842 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0912 21:30:34.177529   13842 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0912 21:30:34.177559   13842 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0912 21:30:34.215996   13842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0912 21:30:34.285198   13842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0912 21:30:34.287981   13842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0912 21:30:34.309345   13842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0912 21:30:34.315086   13842 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0912 21:30:34.315113   13842 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0912 21:30:34.354466   13842 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0912 21:30:34.354493   13842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0912 21:30:34.374522   13842 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0912 21:30:34.374556   13842 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0912 21:30:34.393891   13842 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0912 21:30:34.393921   13842 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0912 21:30:34.502563   13842 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0912 21:30:34.502588   13842 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0912 21:30:34.584726   13842 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0912 21:30:34.584760   13842 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0912 21:30:34.607498   13842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0912 21:30:34.645255   13842 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0912 21:30:34.645280   13842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0912 21:30:34.718335   13842 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0912 21:30:34.718361   13842 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0912 21:30:34.783759   13842 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0912 21:30:34.783787   13842 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0912 21:30:34.940148   13842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0912 21:30:35.030796   13842 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0912 21:30:35.030824   13842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0912 21:30:35.144522   13842 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0912 21:30:35.144548   13842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0912 21:30:35.191648   13842 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0912 21:30:35.191688   13842 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0912 21:30:35.435800   13842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0912 21:30:35.467895   13842 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0912 21:30:35.467918   13842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0912 21:30:35.684867   13842 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0912 21:30:35.684898   13842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0912 21:30:35.859788   13842 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0912 21:30:35.859822   13842 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0912 21:30:35.932925   13842 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.199703683s)
	I0912 21:30:35.932952   13842 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.150160783s)
	I0912 21:30:35.932956   13842 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
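The two completions above show how that host record lands in CoreDNS: the coredns ConfigMap is piped through sed to splice a hosts block in front of the forward plugin and the result is fed back with kubectl replace. Below is a minimal client-go sketch of the same edit, illustrative only and not minikube's actual code; it assumes kubeconfig access, the stock kubeadm Corefile layout, and the 192.168.39.1 host IP reported in the log.

    // Sketch only: splice a hosts{} stanza for host.minikube.internal into the
    // CoreDNS Corefile, approximating the sed/replace pipeline logged above.
    package main

    import (
        "context"
        "fmt"
        "log"
        "strings"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // injectHostRecord inserts the hosts plugin ahead of the forward plugin so
    // host.minikube.internal resolves to the given host IP inside the cluster.
    func injectHostRecord(ctx context.Context, cs *kubernetes.Clientset, hostIP string) error {
        cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
        if err != nil {
            return err
        }
        stanza := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
        cm.Data["Corefile"] = strings.Replace(cm.Data["Corefile"],
            "        forward . /etc/resolv.conf",
            stanza+"        forward . /etc/resolv.conf", 1)
        _, err = cs.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{})
        return err
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        if err := injectHostRecord(context.Background(), cs, "192.168.39.1"); err != nil {
            log.Fatal(err)
        }
    }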
	I0912 21:30:35.933005   13842 main.go:141] libmachine: Making call to close driver server
	I0912 21:30:35.933018   13842 main.go:141] libmachine: (addons-694635) Calling .Close
	I0912 21:30:35.933032   13842 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.140722926s)
	I0912 21:30:35.933074   13842 main.go:141] libmachine: Making call to close driver server
	I0912 21:30:35.933089   13842 main.go:141] libmachine: (addons-694635) Calling .Close
	I0912 21:30:35.933413   13842 main.go:141] libmachine: (addons-694635) DBG | Closing plugin on server side
	I0912 21:30:35.933461   13842 main.go:141] libmachine: (addons-694635) DBG | Closing plugin on server side
	I0912 21:30:35.933469   13842 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:30:35.933483   13842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:30:35.933492   13842 main.go:141] libmachine: Making call to close driver server
	I0912 21:30:35.933500   13842 main.go:141] libmachine: (addons-694635) Calling .Close
	I0912 21:30:35.933505   13842 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:30:35.933515   13842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:30:35.933523   13842 main.go:141] libmachine: Making call to close driver server
	I0912 21:30:35.933530   13842 main.go:141] libmachine: (addons-694635) Calling .Close
	I0912 21:30:35.933745   13842 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:30:35.933759   13842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:30:35.934193   13842 main.go:141] libmachine: (addons-694635) DBG | Closing plugin on server side
	I0912 21:30:35.934238   13842 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:30:35.934260   13842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:30:35.956608   13842 main.go:141] libmachine: Making call to close driver server
	I0912 21:30:35.956638   13842 main.go:141] libmachine: (addons-694635) Calling .Close
	I0912 21:30:35.956922   13842 main.go:141] libmachine: (addons-694635) DBG | Closing plugin on server side
	I0912 21:30:35.956968   13842 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:30:35.956988   13842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:30:35.992917   13842 pod_ready.go:103] pod "coredns-7c65d6cfc9-pcjz8" in "kube-system" namespace has status "Ready":"False"
	I0912 21:30:36.227480   13842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0912 21:30:36.438013   13842 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-694635" context rescaled to 1 replicas
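The kapi.go:214 line above reports the coredns Deployment being rescaled to one replica. In client-go terms this is an update of the Deployment's scale subresource; the function below is a hedged sketch of that step, not minikube's implementation, and the clientset argument is assumed to exist.

    // Sketch only: rescale the kube-system/coredns Deployment via its scale
    // subresource, which is what the "rescaled to 1 replicas" line reflects.
    package kapisketch

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // ScaleCoreDNS sets the coredns Deployment to the given replica count.
    func ScaleCoreDNS(ctx context.Context, cs *kubernetes.Clientset, replicas int32) error {
        scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
        if err != nil {
            return err
        }
        scale.Spec.Replicas = replicas
        _, err = cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
        return err
    }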
	I0912 21:30:37.249809   13842 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.413260898s)
	I0912 21:30:37.249867   13842 main.go:141] libmachine: Making call to close driver server
	I0912 21:30:37.249888   13842 main.go:141] libmachine: (addons-694635) Calling .Close
	I0912 21:30:37.250165   13842 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:30:37.250185   13842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:30:37.250200   13842 main.go:141] libmachine: Making call to close driver server
	I0912 21:30:37.250209   13842 main.go:141] libmachine: (addons-694635) Calling .Close
	I0912 21:30:37.250454   13842 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:30:37.250474   13842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:30:38.021956   13842 pod_ready.go:103] pod "coredns-7c65d6cfc9-pcjz8" in "kube-system" namespace has status "Ready":"False"
	I0912 21:30:38.703385   13842 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.798660977s)
	I0912 21:30:38.703445   13842 main.go:141] libmachine: Making call to close driver server
	I0912 21:30:38.703459   13842 main.go:141] libmachine: (addons-694635) Calling .Close
	I0912 21:30:38.703792   13842 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:30:38.703811   13842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:30:38.703811   13842 main.go:141] libmachine: (addons-694635) DBG | Closing plugin on server side
	I0912 21:30:38.703820   13842 main.go:141] libmachine: Making call to close driver server
	I0912 21:30:38.703827   13842 main.go:141] libmachine: (addons-694635) Calling .Close
	I0912 21:30:38.704152   13842 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:30:38.704197   13842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:30:38.704207   13842 main.go:141] libmachine: (addons-694635) DBG | Closing plugin on server side
	I0912 21:30:39.023100   13842 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.110032197s)
	I0912 21:30:39.023152   13842 main.go:141] libmachine: Making call to close driver server
	I0912 21:30:39.023164   13842 main.go:141] libmachine: (addons-694635) Calling .Close
	I0912 21:30:39.023211   13842 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.082101447s)
	I0912 21:30:39.023263   13842 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (4.807232005s)
	I0912 21:30:39.023297   13842 main.go:141] libmachine: Making call to close driver server
	I0912 21:30:39.023313   13842 main.go:141] libmachine: (addons-694635) Calling .Close
	I0912 21:30:39.023273   13842 main.go:141] libmachine: Making call to close driver server
	I0912 21:30:39.023386   13842 main.go:141] libmachine: (addons-694635) Calling .Close
	I0912 21:30:39.023407   13842 main.go:141] libmachine: (addons-694635) DBG | Closing plugin on server side
	I0912 21:30:39.023426   13842 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:30:39.023454   13842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:30:39.023474   13842 main.go:141] libmachine: Making call to close driver server
	I0912 21:30:39.023498   13842 main.go:141] libmachine: (addons-694635) Calling .Close
	I0912 21:30:39.023509   13842 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:30:39.023525   13842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:30:39.023536   13842 main.go:141] libmachine: Making call to close driver server
	I0912 21:30:39.023545   13842 main.go:141] libmachine: (addons-694635) Calling .Close
	I0912 21:30:39.023642   13842 main.go:141] libmachine: (addons-694635) DBG | Closing plugin on server side
	I0912 21:30:39.023673   13842 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:30:39.023685   13842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:30:39.023689   13842 main.go:141] libmachine: (addons-694635) DBG | Closing plugin on server side
	I0912 21:30:39.023693   13842 main.go:141] libmachine: Making call to close driver server
	I0912 21:30:39.023701   13842 main.go:141] libmachine: (addons-694635) Calling .Close
	I0912 21:30:39.023736   13842 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:30:39.023747   13842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:30:39.025326   13842 main.go:141] libmachine: (addons-694635) DBG | Closing plugin on server side
	I0912 21:30:39.025330   13842 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:30:39.025342   13842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:30:39.025481   13842 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:30:39.025492   13842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:30:39.139026   13842 main.go:141] libmachine: Making call to close driver server
	I0912 21:30:39.139049   13842 main.go:141] libmachine: (addons-694635) Calling .Close
	I0912 21:30:39.139382   13842 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:30:39.139403   13842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:30:39.139432   13842 main.go:141] libmachine: (addons-694635) DBG | Closing plugin on server side
	I0912 21:30:40.261224   13842 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0912 21:30:40.261266   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHHostname
	I0912 21:30:40.264217   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:40.264583   13842 main.go:141] libmachine: (addons-694635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:43:77", ip: ""} in network mk-addons-694635: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:01 +0000 UTC Type:0 Mac:52:54:00:6b:43:77 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-694635 Clientid:01:52:54:00:6b:43:77}
	I0912 21:30:40.264613   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined IP address 192.168.39.67 and MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:40.264808   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHPort
	I0912 21:30:40.265022   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHKeyPath
	I0912 21:30:40.265208   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHUsername
	I0912 21:30:40.265354   13842 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/addons-694635/id_rsa Username:docker}
	I0912 21:30:40.483338   13842 pod_ready.go:103] pod "coredns-7c65d6cfc9-pcjz8" in "kube-system" namespace has status "Ready":"False"
	I0912 21:30:40.539106   13842 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0912 21:30:40.689076   13842 addons.go:234] Setting addon gcp-auth=true in "addons-694635"
	I0912 21:30:40.689138   13842 host.go:66] Checking if "addons-694635" exists ...
	I0912 21:30:40.689446   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:40.689471   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:40.705390   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43577
	I0912 21:30:40.705838   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:40.706274   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:40.706296   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:40.706632   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:40.707109   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:40.707133   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:40.722882   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45603
	I0912 21:30:40.723304   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:40.723787   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:40.723806   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:40.724121   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:40.724311   13842 main.go:141] libmachine: (addons-694635) Calling .GetState
	I0912 21:30:40.725649   13842 main.go:141] libmachine: (addons-694635) Calling .DriverName
	I0912 21:30:40.725862   13842 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0912 21:30:40.725882   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHHostname
	I0912 21:30:40.728400   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:40.728878   13842 main.go:141] libmachine: (addons-694635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:43:77", ip: ""} in network mk-addons-694635: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:01 +0000 UTC Type:0 Mac:52:54:00:6b:43:77 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-694635 Clientid:01:52:54:00:6b:43:77}
	I0912 21:30:40.728898   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined IP address 192.168.39.67 and MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:40.729103   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHPort
	I0912 21:30:40.729271   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHKeyPath
	I0912 21:30:40.729386   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHUsername
	I0912 21:30:40.729528   13842 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/addons-694635/id_rsa Username:docker}
	I0912 21:30:41.942865   13842 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.657623757s)
	I0912 21:30:41.942920   13842 main.go:141] libmachine: Making call to close driver server
	I0912 21:30:41.942926   13842 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.654910047s)
	I0912 21:30:41.942947   13842 main.go:141] libmachine: Making call to close driver server
	I0912 21:30:41.942963   13842 main.go:141] libmachine: (addons-694635) Calling .Close
	I0912 21:30:41.942980   13842 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.633591683s)
	I0912 21:30:41.942931   13842 main.go:141] libmachine: (addons-694635) Calling .Close
	I0912 21:30:41.943026   13842 main.go:141] libmachine: Making call to close driver server
	I0912 21:30:41.943030   13842 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.335497924s)
	I0912 21:30:41.943040   13842 main.go:141] libmachine: (addons-694635) Calling .Close
	I0912 21:30:41.943062   13842 main.go:141] libmachine: Making call to close driver server
	I0912 21:30:41.943074   13842 main.go:141] libmachine: (addons-694635) Calling .Close
	I0912 21:30:41.943136   13842 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.002958423s)
	W0912 21:30:41.943188   13842 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0912 21:30:41.943217   13842 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.50737724s)
	I0912 21:30:41.943330   13842 main.go:141] libmachine: Making call to close driver server
	I0912 21:30:41.943349   13842 main.go:141] libmachine: (addons-694635) Calling .Close
	I0912 21:30:41.943386   13842 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:30:41.943399   13842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:30:41.943401   13842 main.go:141] libmachine: (addons-694635) DBG | Closing plugin on server side
	I0912 21:30:41.943408   13842 main.go:141] libmachine: (addons-694635) DBG | Closing plugin on server side
	I0912 21:30:41.943418   13842 main.go:141] libmachine: Making call to close driver server
	I0912 21:30:41.943425   13842 main.go:141] libmachine: (addons-694635) DBG | Closing plugin on server side
	I0912 21:30:41.943429   13842 main.go:141] libmachine: (addons-694635) Calling .Close
	I0912 21:30:41.943445   13842 main.go:141] libmachine: (addons-694635) DBG | Closing plugin on server side
	I0912 21:30:41.943457   13842 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:30:41.943467   13842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:30:41.943470   13842 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:30:41.943477   13842 main.go:141] libmachine: Making call to close driver server
	I0912 21:30:41.943479   13842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:30:41.943485   13842 main.go:141] libmachine: (addons-694635) Calling .Close
	I0912 21:30:41.943487   13842 main.go:141] libmachine: Making call to close driver server
	I0912 21:30:41.943487   13842 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:30:41.943494   13842 main.go:141] libmachine: (addons-694635) Calling .Close
	I0912 21:30:41.943496   13842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:30:41.943505   13842 main.go:141] libmachine: Making call to close driver server
	I0912 21:30:41.943512   13842 main.go:141] libmachine: (addons-694635) Calling .Close
	I0912 21:30:41.943221   13842 retry.go:31] will retry after 361.478049ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
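The failure and retry above are a CRD establishment race: the VolumeSnapshotClass object is applied in the same batch as the snapshot.storage.k8s.io CRDs, and the apiserver has not yet established the new kind when the class is submitted, hence "ensure CRDs are installed first". minikube handles this by retrying (and shortly afterwards re-applying with --force). An alternative, sketched below with the apiextensions clientset, is to wait for the CRD's Established condition before creating instances; the function name, polling interval, and timeout are illustrative, not minikube's code.

    // Sketch only: block until a CustomResourceDefinition reports
    // Established=True, so that custom resources of that kind can be applied.
    package crdwait

    import (
        "context"
        "time"

        apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
        apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/rest"
    )

    // WaitForCRDEstablished polls the named CRD until it is established or the timeout expires.
    func WaitForCRDEstablished(ctx context.Context, cfg *rest.Config, name string, timeout time.Duration) error {
        cs, err := apiextclient.NewForConfig(cfg)
        if err != nil {
            return err
        }
        return wait.PollUntilContextTimeout(ctx, time.Second, timeout, true, func(ctx context.Context) (bool, error) {
            crd, err := cs.ApiextensionsV1().CustomResourceDefinitions().Get(ctx, name, metav1.GetOptions{})
            if err != nil {
                return false, nil // keep polling on transient errors
            }
            for _, cond := range crd.Status.Conditions {
                if cond.Type == apiextv1.Established && cond.Status == apiextv1.ConditionTrue {
                    return true, nil
                }
            }
            return false, nil
        })
    }

For example, WaitForCRDEstablished(ctx, cfg, "volumesnapshotclasses.snapshot.storage.k8s.io", 30*time.Second) would gate the snapshot-class apply that fails here.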
	I0912 21:30:41.943575   13842 main.go:141] libmachine: (addons-694635) DBG | Closing plugin on server side
	I0912 21:30:41.943601   13842 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:30:41.943608   13842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:30:41.943616   13842 main.go:141] libmachine: Making call to close driver server
	I0912 21:30:41.943622   13842 main.go:141] libmachine: (addons-694635) Calling .Close
	I0912 21:30:41.945219   13842 main.go:141] libmachine: (addons-694635) DBG | Closing plugin on server side
	I0912 21:30:41.945224   13842 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:30:41.945234   13842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:30:41.945235   13842 main.go:141] libmachine: (addons-694635) DBG | Closing plugin on server side
	I0912 21:30:41.945249   13842 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:30:41.945260   13842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:30:41.945251   13842 addons.go:475] Verifying addon registry=true in "addons-694635"
	I0912 21:30:41.945434   13842 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:30:41.945436   13842 main.go:141] libmachine: (addons-694635) DBG | Closing plugin on server side
	I0912 21:30:41.945446   13842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:30:41.945457   13842 addons.go:475] Verifying addon ingress=true in "addons-694635"
	I0912 21:30:41.945655   13842 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:30:41.945674   13842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:30:41.945683   13842 addons.go:475] Verifying addon metrics-server=true in "addons-694635"
	I0912 21:30:41.945756   13842 main.go:141] libmachine: (addons-694635) DBG | Closing plugin on server side
	I0912 21:30:41.945793   13842 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:30:41.945806   13842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:30:41.946676   13842 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-694635 service yakd-dashboard -n yakd-dashboard
	
	I0912 21:30:41.946688   13842 out.go:177] * Verifying registry addon...
	I0912 21:30:41.948418   13842 out.go:177] * Verifying ingress addon...
	I0912 21:30:41.949076   13842 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0912 21:30:41.950349   13842 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0912 21:30:41.954743   13842 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0912 21:30:41.954774   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:41.960928   13842 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0912 21:30:41.960949   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
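The kapi.go:75/86/96 lines above and below poll pods by label selector until they leave Pending. A rough client-go equivalent of that wait loop is sketched here, illustrative only; the namespace, selector, timeout, and clientset are all caller-supplied assumptions.

    // Sketch only: poll pods matching a label selector until one is Running and Ready,
    // roughly what the "Waiting for pod with label ..." loop does.
    package kapisketch

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // WaitForLabeledPod blocks until a pod matching selector in ns is Running and Ready.
    func WaitForLabeledPod(ctx context.Context, cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true, func(ctx context.Context) (bool, error) {
            pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
            if err != nil {
                return false, nil // retry on transient API errors
            }
            for _, p := range pods.Items {
                if p.Status.Phase != corev1.PodRunning {
                    continue
                }
                for _, c := range p.Status.Conditions {
                    if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                        return true, nil
                    }
                }
            }
            return false, nil
        })
    }

For the registry addon, for example, the call would look like WaitForLabeledPod(ctx, cs, "kube-system", "kubernetes.io/minikube-addons=registry", 6*time.Minute).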
	I0912 21:30:42.305973   13842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0912 21:30:42.467232   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:42.477555   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:42.764449   13842 pod_ready.go:103] pod "coredns-7c65d6cfc9-pcjz8" in "kube-system" namespace has status "Ready":"False"
	I0912 21:30:42.797806   13842 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.570260767s)
	I0912 21:30:42.797869   13842 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.071984177s)
	I0912 21:30:42.797869   13842 main.go:141] libmachine: Making call to close driver server
	I0912 21:30:42.797989   13842 main.go:141] libmachine: (addons-694635) Calling .Close
	I0912 21:30:42.798300   13842 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:30:42.798313   13842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:30:42.798323   13842 main.go:141] libmachine: Making call to close driver server
	I0912 21:30:42.798331   13842 main.go:141] libmachine: (addons-694635) Calling .Close
	I0912 21:30:42.798617   13842 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:30:42.798639   13842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:30:42.798649   13842 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-694635"
	I0912 21:30:42.799295   13842 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0912 21:30:42.800145   13842 out.go:177] * Verifying csi-hostpath-driver addon...
	I0912 21:30:42.801601   13842 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0912 21:30:42.802781   13842 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0912 21:30:42.803047   13842 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0912 21:30:42.803064   13842 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0912 21:30:42.817988   13842 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0912 21:30:42.818009   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:42.900221   13842 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0912 21:30:42.900257   13842 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0912 21:30:42.960615   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:42.960989   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:43.009576   13842 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0912 21:30:43.009605   13842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0912 21:30:43.147089   13842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0912 21:30:43.320966   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:43.453136   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:43.454373   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:43.808102   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:43.953362   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:43.958697   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:44.162942   13842 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.856921696s)
	I0912 21:30:44.163000   13842 main.go:141] libmachine: Making call to close driver server
	I0912 21:30:44.163016   13842 main.go:141] libmachine: (addons-694635) Calling .Close
	I0912 21:30:44.163309   13842 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:30:44.163366   13842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:30:44.163381   13842 main.go:141] libmachine: Making call to close driver server
	I0912 21:30:44.163328   13842 main.go:141] libmachine: (addons-694635) DBG | Closing plugin on server side
	I0912 21:30:44.163393   13842 main.go:141] libmachine: (addons-694635) Calling .Close
	I0912 21:30:44.163848   13842 main.go:141] libmachine: (addons-694635) DBG | Closing plugin on server side
	I0912 21:30:44.164957   13842 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:30:44.164983   13842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:30:44.378590   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:44.427113   13842 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.279974028s)
	I0912 21:30:44.427173   13842 main.go:141] libmachine: Making call to close driver server
	I0912 21:30:44.427193   13842 main.go:141] libmachine: (addons-694635) Calling .Close
	I0912 21:30:44.427495   13842 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:30:44.427544   13842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:30:44.427559   13842 main.go:141] libmachine: Making call to close driver server
	I0912 21:30:44.427568   13842 main.go:141] libmachine: (addons-694635) Calling .Close
	I0912 21:30:44.427499   13842 main.go:141] libmachine: (addons-694635) DBG | Closing plugin on server side
	I0912 21:30:44.427772   13842 main.go:141] libmachine: (addons-694635) DBG | Closing plugin on server side
	I0912 21:30:44.427798   13842 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:30:44.427814   13842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:30:44.429338   13842 addons.go:475] Verifying addon gcp-auth=true in "addons-694635"
	I0912 21:30:44.431064   13842 out.go:177] * Verifying gcp-auth addon...
	I0912 21:30:44.432961   13842 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0912 21:30:44.468784   13842 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0912 21:30:44.468806   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:30:44.469261   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:44.469425   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:44.809517   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:44.936881   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:30:44.953105   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:44.954618   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:44.978466   13842 pod_ready.go:103] pod "coredns-7c65d6cfc9-pcjz8" in "kube-system" namespace has status "Ready":"False"
	I0912 21:30:45.312534   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:45.436603   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:30:45.454472   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:45.458065   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:45.478156   13842 pod_ready.go:98] pod "coredns-7c65d6cfc9-pcjz8" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-12 21:30:45 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-12 21:30:33 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-12 21:30:33 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-12 21:30:33 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-12 21:30:33 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.67 HostIPs:[{IP:192.168.39.67}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-09-12 21:30:33 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-12 21:30:38 +0000 UTC,FinishedAt:2024-09-12 21:30:43 +0000 UTC,ContainerID:cri-o://50b8193e0418edb8169cdabdeb19b0c793d761211e7e0547b53bda047e46367d,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://50b8193e0418edb8169cdabdeb19b0c793d761211e7e0547b53bda047e46367d Started:0xc0028a6700 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc0009cbb20} {Name:kube-api-access-r9jtw MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc0009cbb30}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0912 21:30:45.478190   13842 pod_ready.go:82] duration metric: took 11.506167543s for pod "coredns-7c65d6cfc9-pcjz8" in "kube-system" namespace to be "Ready" ...
	E0912 21:30:45.478205   13842 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-7c65d6cfc9-pcjz8" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-12 21:30:45 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-12 21:30:33 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-12 21:30:33 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-12 21:30:33 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-12 21:30:33 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.67 HostIPs:[{IP:192.168.39.67}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-09-12 21:30:33 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-12 21:30:38 +0000 UTC,FinishedAt:2024-09-12 21:30:43 +0000 UTC,ContainerID:cri-o://50b8193e0418edb8169cdabdeb19b0c793d761211e7e0547b53bda047e46367d,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://50b8193e0418edb8169cdabdeb19b0c793d761211e7e0547b53bda047e46367d Started:0xc0028a6700 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc0009cbb20} {Name:kube-api-access-r9jtw MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc0009cbb30}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0912 21:30:45.478217   13842 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-rpsn9" in "kube-system" namespace to be "Ready" ...
	I0912 21:30:45.486926   13842 pod_ready.go:93] pod "coredns-7c65d6cfc9-rpsn9" in "kube-system" namespace has status "Ready":"True"
	I0912 21:30:45.486961   13842 pod_ready.go:82] duration metric: took 8.733099ms for pod "coredns-7c65d6cfc9-rpsn9" in "kube-system" namespace to be "Ready" ...
	I0912 21:30:45.486974   13842 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-694635" in "kube-system" namespace to be "Ready" ...
	I0912 21:30:45.493880   13842 pod_ready.go:93] pod "etcd-addons-694635" in "kube-system" namespace has status "Ready":"True"
	I0912 21:30:45.493917   13842 pod_ready.go:82] duration metric: took 6.934283ms for pod "etcd-addons-694635" in "kube-system" namespace to be "Ready" ...
	I0912 21:30:45.493933   13842 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-694635" in "kube-system" namespace to be "Ready" ...
	I0912 21:30:45.500231   13842 pod_ready.go:93] pod "kube-apiserver-addons-694635" in "kube-system" namespace has status "Ready":"True"
	I0912 21:30:45.500262   13842 pod_ready.go:82] duration metric: took 6.319725ms for pod "kube-apiserver-addons-694635" in "kube-system" namespace to be "Ready" ...
	I0912 21:30:45.500276   13842 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-694635" in "kube-system" namespace to be "Ready" ...
	I0912 21:30:45.508921   13842 pod_ready.go:93] pod "kube-controller-manager-addons-694635" in "kube-system" namespace has status "Ready":"True"
	I0912 21:30:45.508952   13842 pod_ready.go:82] duration metric: took 8.661364ms for pod "kube-controller-manager-addons-694635" in "kube-system" namespace to be "Ready" ...
	I0912 21:30:45.508966   13842 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4hcfx" in "kube-system" namespace to be "Ready" ...
	I0912 21:30:45.807845   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:45.875520   13842 pod_ready.go:93] pod "kube-proxy-4hcfx" in "kube-system" namespace has status "Ready":"True"
	I0912 21:30:45.875543   13842 pod_ready.go:82] duration metric: took 366.569724ms for pod "kube-proxy-4hcfx" in "kube-system" namespace to be "Ready" ...
	I0912 21:30:45.875552   13842 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-694635" in "kube-system" namespace to be "Ready" ...
	I0912 21:30:45.936184   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:30:45.953664   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:45.955104   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:46.275644   13842 pod_ready.go:93] pod "kube-scheduler-addons-694635" in "kube-system" namespace has status "Ready":"True"
	I0912 21:30:46.275666   13842 pod_ready.go:82] duration metric: took 400.107483ms for pod "kube-scheduler-addons-694635" in "kube-system" namespace to be "Ready" ...
	I0912 21:30:46.275674   13842 pod_ready.go:39] duration metric: took 12.309938834s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 21:30:46.275689   13842 api_server.go:52] waiting for apiserver process to appear ...
	I0912 21:30:46.275751   13842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 21:30:46.301756   13842 api_server.go:72] duration metric: took 13.173948128s to wait for apiserver process to appear ...
	I0912 21:30:46.301775   13842 api_server.go:88] waiting for apiserver healthz status ...
	I0912 21:30:46.301792   13842 api_server.go:253] Checking apiserver healthz at https://192.168.39.67:8443/healthz ...
	I0912 21:30:46.305735   13842 api_server.go:279] https://192.168.39.67:8443/healthz returned 200:
	ok
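The healthz probe above hits https://192.168.39.67:8443/healthz and treats an HTTP 200 with body "ok" as healthy. With client-go the same request can be issued through the discovery REST client; the function below is a minimal sketch, with the clientset assumed to be configured for the cluster.

    // Sketch only: fetch the apiserver /healthz endpoint through client-go.
    package healthsketch

    import (
        "context"

        "k8s.io/client-go/kubernetes"
    )

    // APIServerHealthz returns the raw /healthz response body (normally "ok").
    func APIServerHealthz(ctx context.Context, cs *kubernetes.Clientset) (string, error) {
        body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
        if err != nil {
            return "", err
        }
        return string(body), nil
    }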
	I0912 21:30:46.306725   13842 api_server.go:141] control plane version: v1.31.1
	I0912 21:30:46.306743   13842 api_server.go:131] duration metric: took 4.962021ms to wait for apiserver health ...
	I0912 21:30:46.306750   13842 system_pods.go:43] waiting for kube-system pods to appear ...
	I0912 21:30:46.309045   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:46.436328   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:30:46.454711   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:46.455101   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:46.480691   13842 system_pods.go:59] 18 kube-system pods found
	I0912 21:30:46.480719   13842 system_pods.go:61] "coredns-7c65d6cfc9-rpsn9" [cb2ce549-2d5c-45ec-a46d-562d4acd82ea] Running
	I0912 21:30:46.480728   13842 system_pods.go:61] "csi-hostpath-attacher-0" [a560e36c-e029-47d5-95b8-be2420d7df22] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0912 21:30:46.480735   13842 system_pods.go:61] "csi-hostpath-resizer-0" [0d9f13f4-8ae3-49fb-91d2-588c2a5103b8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0912 21:30:46.480742   13842 system_pods.go:61] "csi-hostpathplugin-kdtz6" [88fdf5ba-c8ac-455b-ae75-dbdecf76e19b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0912 21:30:46.480746   13842 system_pods.go:61] "etcd-addons-694635" [9a285fb7-743e-4e27-a017-524fb6ed02a4] Running
	I0912 21:30:46.480750   13842 system_pods.go:61] "kube-apiserver-addons-694635" [613a8945-2f24-42d9-b005-2ee3a61d6b63] Running
	I0912 21:30:46.480754   13842 system_pods.go:61] "kube-controller-manager-addons-694635" [a73aee0b-e5db-4bfc-a0d7-526c7a9515b3] Running
	I0912 21:30:46.480761   13842 system_pods.go:61] "kube-ingress-dns-minikube" [22649b3c-8428-4122-bf69-ab76864aaa7e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0912 21:30:46.480765   13842 system_pods.go:61] "kube-proxy-4hcfx" [17176328-abc9-4540-ac4c-c63083724812] Running
	I0912 21:30:46.480770   13842 system_pods.go:61] "kube-scheduler-addons-694635" [69be5c79-853a-4fe4-b43c-c332b6276913] Running
	I0912 21:30:46.480775   13842 system_pods.go:61] "metrics-server-84c5f94fbc-v4b7g" [4922d6c5-c4bb-4ec8-a21f-2ca9ba3c4691] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0912 21:30:46.480784   13842 system_pods.go:61] "nvidia-device-plugin-daemonset-n59wh" [2647ba3c-226b-4e7f-bbb9-442fbceab2f4] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0912 21:30:46.480794   13842 system_pods.go:61] "registry-66c9cd494c-7cpwk" [4b56665b-2953-4567-aa4d-49eb198ea1a0] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0912 21:30:46.480800   13842 system_pods.go:61] "registry-proxy-ckz5n" [317b8f58-7fa3-4666-be84-9fcc8574a1f8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0912 21:30:46.480808   13842 system_pods.go:61] "snapshot-controller-56fcc65765-bnf26" [35975eec-fc25-416d-b56e-107978e82e7d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0912 21:30:46.480814   13842 system_pods.go:61] "snapshot-controller-56fcc65765-hmbfj" [171ee08c-156a-49ae-8f7d-7009bc0ac41c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0912 21:30:46.480818   13842 system_pods.go:61] "storage-provisioner" [8f49f988-6d5b-4cb6-a9a4-f15fec6617ee] Running
	I0912 21:30:46.480823   13842 system_pods.go:61] "tiller-deploy-b48cc5f79-p44jv" [493da69b-8cdb-4ada-9f27-2c322311853b] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0912 21:30:46.480830   13842 system_pods.go:74] duration metric: took 174.075986ms to wait for pod list to return data ...
	I0912 21:30:46.480840   13842 default_sa.go:34] waiting for default service account to be created ...
	I0912 21:30:46.676516   13842 default_sa.go:45] found service account: "default"
	I0912 21:30:46.676544   13842 default_sa.go:55] duration metric: took 195.698229ms for default service account to be created ...
	I0912 21:30:46.676555   13842 system_pods.go:116] waiting for k8s-apps to be running ...
	I0912 21:30:46.808312   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:46.882566   13842 system_pods.go:86] 18 kube-system pods found
	I0912 21:30:46.882593   13842 system_pods.go:89] "coredns-7c65d6cfc9-rpsn9" [cb2ce549-2d5c-45ec-a46d-562d4acd82ea] Running
	I0912 21:30:46.882601   13842 system_pods.go:89] "csi-hostpath-attacher-0" [a560e36c-e029-47d5-95b8-be2420d7df22] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0912 21:30:46.882607   13842 system_pods.go:89] "csi-hostpath-resizer-0" [0d9f13f4-8ae3-49fb-91d2-588c2a5103b8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0912 21:30:46.882615   13842 system_pods.go:89] "csi-hostpathplugin-kdtz6" [88fdf5ba-c8ac-455b-ae75-dbdecf76e19b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0912 21:30:46.882619   13842 system_pods.go:89] "etcd-addons-694635" [9a285fb7-743e-4e27-a017-524fb6ed02a4] Running
	I0912 21:30:46.882624   13842 system_pods.go:89] "kube-apiserver-addons-694635" [613a8945-2f24-42d9-b005-2ee3a61d6b63] Running
	I0912 21:30:46.882627   13842 system_pods.go:89] "kube-controller-manager-addons-694635" [a73aee0b-e5db-4bfc-a0d7-526c7a9515b3] Running
	I0912 21:30:46.882632   13842 system_pods.go:89] "kube-ingress-dns-minikube" [22649b3c-8428-4122-bf69-ab76864aaa7e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0912 21:30:46.882638   13842 system_pods.go:89] "kube-proxy-4hcfx" [17176328-abc9-4540-ac4c-c63083724812] Running
	I0912 21:30:46.882642   13842 system_pods.go:89] "kube-scheduler-addons-694635" [69be5c79-853a-4fe4-b43c-c332b6276913] Running
	I0912 21:30:46.882647   13842 system_pods.go:89] "metrics-server-84c5f94fbc-v4b7g" [4922d6c5-c4bb-4ec8-a21f-2ca9ba3c4691] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0912 21:30:46.882653   13842 system_pods.go:89] "nvidia-device-plugin-daemonset-n59wh" [2647ba3c-226b-4e7f-bbb9-442fbceab2f4] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0912 21:30:46.882659   13842 system_pods.go:89] "registry-66c9cd494c-7cpwk" [4b56665b-2953-4567-aa4d-49eb198ea1a0] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0912 21:30:46.882665   13842 system_pods.go:89] "registry-proxy-ckz5n" [317b8f58-7fa3-4666-be84-9fcc8574a1f8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0912 21:30:46.882670   13842 system_pods.go:89] "snapshot-controller-56fcc65765-bnf26" [35975eec-fc25-416d-b56e-107978e82e7d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0912 21:30:46.882678   13842 system_pods.go:89] "snapshot-controller-56fcc65765-hmbfj" [171ee08c-156a-49ae-8f7d-7009bc0ac41c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0912 21:30:46.882683   13842 system_pods.go:89] "storage-provisioner" [8f49f988-6d5b-4cb6-a9a4-f15fec6617ee] Running
	I0912 21:30:46.882691   13842 system_pods.go:89] "tiller-deploy-b48cc5f79-p44jv" [493da69b-8cdb-4ada-9f27-2c322311853b] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0912 21:30:46.882697   13842 system_pods.go:126] duration metric: took 206.137533ms to wait for k8s-apps to be running ...
	I0912 21:30:46.882703   13842 system_svc.go:44] waiting for kubelet service to be running ....
	I0912 21:30:46.882743   13842 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 21:30:46.925829   13842 system_svc.go:56] duration metric: took 43.114101ms WaitForService to wait for kubelet
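The kubelet check above runs systemctl with sudo over SSH inside the VM. Reduced to its essentials, the test is just the unit's exit status; the sketch below runs on the current host rather than over SSH and drops sudo, so it is only an approximation of what the log shows.

    // Sketch only: report whether the kubelet systemd unit is active,
    // mirroring the "systemctl is-active --quiet" check in the log.
    package svcsketch

    import "os/exec"

    // KubeletActive returns true when "systemctl is-active --quiet kubelet" exits 0.
    func KubeletActive() bool {
        // --quiet suppresses output; the exit code alone carries the answer.
        return exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run() == nil
    }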
	I0912 21:30:46.925861   13842 kubeadm.go:582] duration metric: took 13.798055946s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 21:30:46.925881   13842 node_conditions.go:102] verifying NodePressure condition ...
	I0912 21:30:46.936949   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:30:46.954044   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:46.954652   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:47.077031   13842 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0912 21:30:47.077069   13842 node_conditions.go:123] node cpu capacity is 2
	I0912 21:30:47.077086   13842 node_conditions.go:105] duration metric: took 151.197367ms to run NodePressure ...
	I0912 21:30:47.077102   13842 start.go:241] waiting for startup goroutines ...
	I0912 21:30:47.306659   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:47.436922   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:30:47.454133   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:47.455284   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:47.807878   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:47.936979   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:30:47.954401   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:47.955301   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:48.308026   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:48.436963   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:30:48.456522   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:48.457189   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:48.807641   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:49.086497   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:49.086504   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:30:49.087121   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:49.307899   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:49.436969   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:30:49.452710   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:49.455147   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:49.808000   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:49.940753   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:30:49.971990   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:49.972275   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:50.306737   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:50.436059   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:30:50.452909   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:50.455902   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:50.807091   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:50.935993   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:30:50.953464   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:50.954524   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:51.308257   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:51.436479   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:30:51.452352   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:51.453795   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:51.807739   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:51.936798   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:30:51.953151   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:51.955301   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:52.307184   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:52.436742   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:30:52.452578   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:52.454290   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:52.808168   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:52.936339   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:30:52.953730   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:52.954765   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:53.307714   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:53.438307   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:30:53.454049   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:53.454999   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:53.809141   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:53.937475   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:30:53.953075   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:53.956110   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:54.309453   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:54.437498   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:30:54.452997   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:54.454232   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:54.808290   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:54.937121   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:30:54.953554   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:54.954933   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:55.308403   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:55.436189   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:30:55.453910   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:55.455288   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:55.808688   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:55.936880   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:30:55.953026   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:55.954088   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:56.307678   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:56.438816   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:30:56.453756   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:56.454145   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:56.806670   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:56.938510   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:30:56.953471   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:56.956690   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:57.307668   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:57.436695   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:30:57.456044   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:57.456392   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:57.808216   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:57.936313   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:30:57.953978   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:57.954372   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:58.307798   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:58.437125   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:30:58.454751   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:58.457211   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:58.807968   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:58.937010   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:30:58.953141   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:58.959276   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:59.308291   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:59.436266   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:30:59.453642   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:59.455378   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:59.808750   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:59.937681   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:30:59.955468   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:59.955848   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:00.308635   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:00.436913   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:00.453130   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:00.454282   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:00.807146   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:00.936739   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:00.953015   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:00.954765   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:01.306985   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:01.436195   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:01.453123   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:01.454341   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:01.807013   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:01.936537   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:01.952370   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:01.954597   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:02.307157   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:02.436510   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:02.452446   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:02.454782   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:02.807320   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:02.983700   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:02.983759   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:02.984366   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:03.307411   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:03.436395   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:03.453271   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:03.454447   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:03.807454   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:03.936777   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:03.952668   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:03.955100   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:04.307745   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:04.436831   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:04.452778   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:04.455238   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:04.807569   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:04.936849   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:04.953099   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:04.955331   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:05.307263   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:05.436369   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:05.455274   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:05.455523   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:05.807911   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:05.936890   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:05.953011   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:05.954859   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:06.308088   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:06.436094   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:06.453015   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:06.454185   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:06.807536   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:06.937265   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:07.294221   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:07.294459   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:07.394402   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:07.436598   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:07.452707   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:07.454367   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:07.807204   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:07.936209   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:07.953204   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:07.954372   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:08.307069   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:08.436533   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:08.452844   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:08.456371   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:08.807416   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:08.936870   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:08.952721   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:08.954434   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:09.307128   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:09.436768   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:09.452696   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:09.454244   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:09.806900   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:09.936202   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:09.952947   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:09.954077   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:10.310715   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:10.436442   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:10.453775   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:10.454308   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:10.807926   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:10.936446   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:10.952829   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:10.954777   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:11.307638   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:11.437017   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:11.455266   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:11.455579   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:11.808062   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:11.936788   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:11.953110   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:11.955323   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:12.309018   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:12.437559   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:12.452853   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:12.455591   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:12.807821   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:12.936153   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:12.952946   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:12.955049   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:13.308125   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:13.436685   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:13.453405   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:13.454409   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:13.808343   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:13.936831   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:13.953008   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:13.955615   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:14.307410   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:14.439286   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:14.460392   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:14.461660   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:14.808029   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:14.937360   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:14.953551   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:14.955229   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:15.308853   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:15.802413   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:15.802546   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:15.802929   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:15.806810   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:15.935781   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:15.953409   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:15.954622   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:16.307574   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:16.436906   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:16.454204   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:16.454314   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:16.807151   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:16.936285   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:16.954876   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:16.954961   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:17.308273   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:17.436690   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:17.452851   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:17.454581   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:17.808378   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:17.937233   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:17.953506   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:17.954633   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:18.307978   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:18.438381   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:18.452394   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:18.454983   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:18.808450   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:18.937057   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:18.954873   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:18.954917   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:19.307860   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:19.443523   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:19.451685   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:19.454121   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:19.808677   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:19.942749   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:19.954209   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:19.955400   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:20.308312   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:20.436764   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:20.453650   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:20.455934   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:20.809185   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:20.937034   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:20.953356   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:20.954469   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:21.306918   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:21.436565   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:21.452318   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:21.454075   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:21.807969   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:21.936459   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:21.952911   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:21.954462   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:22.308342   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:22.436293   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:22.454954   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:22.455186   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:22.807592   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:23.028341   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:23.028457   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:23.028520   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:23.307479   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:23.436556   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:23.453994   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:23.454062   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:23.807759   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:23.936678   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:23.953231   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:23.954392   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:24.307358   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:24.436892   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:24.453479   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:24.455733   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:24.807681   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:24.936504   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:24.952491   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:24.955015   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:25.307494   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:25.437838   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:25.454660   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:25.455196   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:25.806376   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:26.169088   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:26.169141   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:26.169576   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:26.308047   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:26.438798   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:26.454085   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:26.454874   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:26.808511   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:26.936179   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:26.953217   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:26.955020   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:27.307867   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:27.436967   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:27.453064   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:27.454221   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:27.808241   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:27.936433   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:27.954010   13842 kapi.go:107] duration metric: took 46.004930815s to wait for kubernetes.io/minikube-addons=registry ...
	I0912 21:31:27.954819   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:28.308179   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:28.436505   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:28.455109   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:28.807480   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:28.936668   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:28.954245   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:29.306669   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:29.436989   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:29.455085   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:29.817843   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:29.937454   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:29.956102   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:30.308652   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:30.437396   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:30.454614   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:30.807604   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:30.936840   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:30.954423   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:31.308447   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:31.437404   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:31.454276   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:31.807324   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:31.936952   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:31.954363   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:32.306415   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:32.437242   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:32.454652   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:32.807329   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:32.936869   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:32.954340   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:33.307184   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:33.436873   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:33.454653   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:33.810231   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:33.937220   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:33.954601   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:34.307392   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:34.958058   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:34.958295   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:34.958411   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:34.961259   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:34.961741   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:35.307464   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:35.437024   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:35.455092   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:35.808111   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:35.937085   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:35.955030   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:36.307832   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:36.438403   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:36.457831   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:36.808182   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:36.939647   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:36.955818   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:37.307778   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:37.436832   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:37.454110   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:37.807859   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:37.936514   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:37.955016   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:38.307838   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:38.436456   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:38.454686   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:38.808567   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:38.941164   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:38.956269   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:39.307122   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:39.437203   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:39.454703   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:40.078488   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:40.079334   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:40.079654   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:40.307212   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:40.436878   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:40.538252   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:40.807485   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:40.938491   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:40.955935   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:41.308214   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:41.436295   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:41.454533   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:41.807705   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:41.943420   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:41.954960   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:42.308025   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:42.439095   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:42.454338   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:42.807582   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:42.937122   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:42.955099   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:43.406903   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:43.436443   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:43.455666   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:43.807519   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:43.937682   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:43.954323   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:44.306738   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:44.436834   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:44.454320   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:44.815595   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:44.938314   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:44.954595   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:45.308036   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:45.437110   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:45.455327   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:45.807991   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:45.962606   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:45.967707   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:46.307128   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:46.436949   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:46.455549   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:46.807608   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:46.937589   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:46.958969   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:47.307738   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:47.436911   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:47.454432   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:47.811530   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:47.936953   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:47.955680   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:48.308202   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:48.437342   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:48.456109   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:48.815410   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:48.936379   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:48.955189   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:49.307918   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:49.436235   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:49.454487   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:49.812324   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:49.936703   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:49.954166   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:50.308053   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:50.437110   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:50.455802   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:50.808329   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:50.936571   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:50.955407   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:51.307733   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:51.438936   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:51.474999   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:51.807267   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:51.937095   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:51.955402   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:52.307348   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:52.436276   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:52.455029   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:52.807657   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:52.937207   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:52.954953   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:53.307507   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:53.437088   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:53.454370   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:53.807469   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:53.937040   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:53.954745   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:54.307579   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:54.437891   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:54.757207   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:54.809668   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:54.937739   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:54.958776   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:55.307785   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:55.436060   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:55.454674   13842 kapi.go:107] duration metric: took 1m13.504323658s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0912 21:31:55.807214   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:55.936450   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:56.308210   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:56.528172   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:56.807634   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:56.936775   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:57.307995   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:57.436434   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:57.817862   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:57.936850   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:58.307245   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:58.436887   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:58.808853   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:58.936774   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:59.307234   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:59.436533   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:59.808299   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:59.935885   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:00.307456   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:00.437156   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:00.964683   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:00.965821   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:01.312456   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:01.436422   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:01.808885   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:01.937181   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:02.318607   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:02.437876   13842 kapi.go:107] duration metric: took 1m18.004909184s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0912 21:32:02.439347   13842 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-694635 cluster.
	I0912 21:32:02.440699   13842 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0912 21:32:02.441821   13842 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
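The three messages above describe the gcp-auth addon's behavior: newly created pods get the GCP credentials mounted, a `gcp-auth-skip-secret` label opts a specific pod out, and rerunning `addons enable` with `--refresh` re-processes pods that already existed. A minimal sketch of exercising both knobs against this cluster, assuming the webhook only keys off the label at admission time; the pod name skip-demo is hypothetical and the exact label value it expects is not confirmed by this log:

  # Create a pod carrying the opt-out label from the start, so no credentials are mounted into it
  kubectl --context addons-694635 run skip-demo --image=busybox --restart=Never \
    --labels=gcp-auth-skip-secret=true -- sleep 300

  # Rerun the addon with --refresh so pods created before gcp-auth came up pick up the mount
  minikube -p addons-694635 addons enable gcp-auth --refresh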
	I0912 21:32:02.807994   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:03.308094   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:03.808683   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:04.307312   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:04.808877   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:05.308455   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:05.808430   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:06.316091   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:06.808681   13842 kapi.go:107] duration metric: took 1m24.005897654s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0912 21:32:06.810775   13842 out.go:177] * Enabled addons: nvidia-device-plugin, default-storageclass, ingress-dns, storage-provisioner, cloud-spanner, helm-tiller, storage-provisioner-rancher, metrics-server, inspektor-gadget, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0912 21:32:06.812317   13842 addons.go:510] duration metric: took 1m33.684465733s for enable addons: enabled=[nvidia-device-plugin default-storageclass ingress-dns storage-provisioner cloud-spanner helm-tiller storage-provisioner-rancher metrics-server inspektor-gadget yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0912 21:32:06.812359   13842 start.go:246] waiting for cluster config update ...
	I0912 21:32:06.812380   13842 start.go:255] writing updated cluster config ...
	I0912 21:32:06.812657   13842 ssh_runner.go:195] Run: rm -f paused
	I0912 21:32:06.863917   13842 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0912 21:32:06.865782   13842 out.go:177] * Done! kubectl is now configured to use "addons-694635" cluster and "default" namespace by default
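The final message means kubectl's current context now points at the new cluster, and the preceding line shows client 1.31.0 against server 1.31.1, so only the patch level differs. One way to confirm the same facts from a shell, assuming kubectl is on the PATH of the machine that ran this test:

  # Show which context kubectl will use by default
  kubectl config current-context

  # Print client and server versions; the minor versions should match the skew line above
  kubectl --context addons-694635 version

  # Sanity-check that the control-plane pods listed later in the CRI-O dump are visible
  kubectl --context addons-694635 get pods -n kube-system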
	
	
	==> CRI-O <==
	Sep 12 21:43:42 addons-694635 crio[662]: time="2024-09-12 21:43:42.130765963Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726177422130736198,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:580233,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fcad5dc6-eb09-4963-8f5f-859a030b4a65 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 21:43:42 addons-694635 crio[662]: time="2024-09-12 21:43:42.131506415Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e3b4e241-7d73-47d7-934d-7ad93c15748e name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 21:43:42 addons-694635 crio[662]: time="2024-09-12 21:43:42.131752583Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e3b4e241-7d73-47d7-934d-7ad93c15748e name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 21:43:42 addons-694635 crio[662]: time="2024-09-12 21:43:42.132327118Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:68df5018ee9b9c8b040980f7b13e5f8cd660087c416d49062434ac1567d9ff1b,PodSandboxId:1ae8f2e321f0f9eadaba61d67d63cc3cb8c715a45a4ebedc12f1b6516e36b891,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1726177414971816754,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-8wzs4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c11e9909-be91-42a2-973f-3ec56c134bed,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15f49cc7f3e63d860a0b154ce1d0a027f105c70027b67a50ab5d73a13191309a,PodSandboxId:9d3e688e943f8b1412681f72bcbb2d49d4d9a3e4a04b3cac9a3ab31dca0efc68,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726177277424664218,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6d172e45-acae-4863-b4f1-7cf6c870a3d8,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:224662c30f37670f4f61f36221a15bb4d6847d38fcb6a9be3d38b6b08f1d6765,PodSandboxId:e71b5d7408e655bb8c96a5d654726777d547179b47272efaaa970adf10a2ee35,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726176721533597537,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-px7q4,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: ec2ec8bf-cb0a-47eb-b117-c3e51f68cafc,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbfb52a51b0154de55fe552d30a59e9bfc60f381b987e527d0067b5e3efdf493,PodSandboxId:0fc6f924b3914897ccb68df15de8825f3af5357060d2e98ea91e4cac85c89108,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726176700317205582,Labels:map[string]string{io.kubernetes.container.name: patch,io.
kubernetes.pod.name: ingress-nginx-admission-patch-75vhq,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e473a3e1-2d2f-4981-993e-47902c4c573c,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bd29448ea314df05ee9e96a683c055a9f7ce799e6b86e7d531105e4981c5df9,PodSandboxId:d4d9cc832e450785d0e1b4460e85a8a3a592d8778caa1c00cbdaf238b2d5e5e6,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726176700177295239,Labels:map[string]string{io.
kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-gf4cr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9f8be3b2-df3b-4d54-9d3f-f37cb358b701,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01c0d8468e1a5daad3c86161040af5d9affffdd5c20705a3f71d2903c6243d96,PodSandboxId:f1b6fca0a1b4a528f24874cf3deb296ed28cf61228310af6f8b71a38b1bc2f1c,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:172617
6691385084595,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-v4b7g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4922d6c5-c4bb-4ec8-a21f-2ca9ba3c4691,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c63491974a86dd1007fc9980bfe0086d0dc3bf4ff8c0c3f310a5cb87fbb4ac38,PodSandboxId:bb6d26e8124017f968cdbd7d1e9d6dc8f51c932a1d588df39950c0a71e8dea66,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628d
b3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726176640283421177,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f49f988-6d5b-4cb6-a9a4-f15fec6617ee,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a9fbfbdc25792944bc7f0738f91a9c4ca524f80d4c4ef8065875105ad68d91b,PodSandboxId:52798c65c361b446fc2229d3223995b78422a1931e70180eea1ef814625c958e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724
c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726176637213238542,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rpsn9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb2ce549-2d5c-45ec-a46d-562d4acd82ea,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa4b1b8007598386d5052a12803d3a47809e7be17f0613791526a0fb975078f1,PodSandboxId:00dce38c65e40888f99c4531feab924cf6ecb4c5171d13070c643118572341c8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image
:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726176634905138174,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4hcfx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17176328-abc9-4540-ac4c-c63083724812,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:daff578fb9bc43cd709b1e387f2aa19b6c69701a055733a1e7c09f5d3c4ae546,PodSandboxId:af67c2341731309439d1fb9ac03831771a23928c83b1b1bc5a445be50d7b8c93,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad9415
75eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726176623547228673,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-694635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b876c14c875d4b53e5c61f3bdb6b61f2,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04006273204a6b5b2c2c50eb039597ab1cad77b9f65e3cdcf9ad2cd2bff6a600,PodSandboxId:8f5fcc20744c5a49bd5023165e3ffeed38dc69330f0025dc1df0829da8a54879,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decf
a1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726176623493601030,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-694635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d6a101dce97ee820fc22e8980fa1bd2,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ad45dbfb61b732019b2446eb37b838159475578e53421516d318b1d17d0d863,PodSandboxId:e1566071cac6e7c7300f541dd70faf52b58c8b1f654f49885e6ff61047017313,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792c
bf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726176623462786884,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-694635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca0f4581a8ddd13059907f5e64c9ddcf,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c2e331dbfeadd5401ab6aa1159f9097e7db3bf727f83963a786e4a149b7c5ba,PodSandboxId:8ab56f691eeeaa15cc50d49aeca3a855097da9e407580c18dde97d5293281963,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[st
ring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726176623451400362,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-694635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9eeb62b2ef7f8ac332344239844358b7,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e3b4e241-7d73-47d7-934d-7ad93c15748e name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 21:43:42 addons-694635 crio[662]: time="2024-09-12 21:43:42.177238787Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=87e1f756-f6b6-478c-ba96-baf28d210850 name=/runtime.v1.RuntimeService/Version
	Sep 12 21:43:42 addons-694635 crio[662]: time="2024-09-12 21:43:42.177315455Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=87e1f756-f6b6-478c-ba96-baf28d210850 name=/runtime.v1.RuntimeService/Version
	Sep 12 21:43:42 addons-694635 crio[662]: time="2024-09-12 21:43:42.178366286Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4fae3cf7-be64-4a0f-8567-2fa165f93633 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 21:43:42 addons-694635 crio[662]: time="2024-09-12 21:43:42.179860980Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726177422179833763,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:580233,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4fae3cf7-be64-4a0f-8567-2fa165f93633 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 21:43:42 addons-694635 crio[662]: time="2024-09-12 21:43:42.180469109Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=907f910a-84ed-46ac-89b3-3b0889499f4c name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 21:43:42 addons-694635 crio[662]: time="2024-09-12 21:43:42.180557177Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=907f910a-84ed-46ac-89b3-3b0889499f4c name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 21:43:42 addons-694635 crio[662]: time="2024-09-12 21:43:42.180837473Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:68df5018ee9b9c8b040980f7b13e5f8cd660087c416d49062434ac1567d9ff1b,PodSandboxId:1ae8f2e321f0f9eadaba61d67d63cc3cb8c715a45a4ebedc12f1b6516e36b891,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1726177414971816754,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-8wzs4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c11e9909-be91-42a2-973f-3ec56c134bed,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15f49cc7f3e63d860a0b154ce1d0a027f105c70027b67a50ab5d73a13191309a,PodSandboxId:9d3e688e943f8b1412681f72bcbb2d49d4d9a3e4a04b3cac9a3ab31dca0efc68,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726177277424664218,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6d172e45-acae-4863-b4f1-7cf6c870a3d8,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:224662c30f37670f4f61f36221a15bb4d6847d38fcb6a9be3d38b6b08f1d6765,PodSandboxId:e71b5d7408e655bb8c96a5d654726777d547179b47272efaaa970adf10a2ee35,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726176721533597537,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-px7q4,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: ec2ec8bf-cb0a-47eb-b117-c3e51f68cafc,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbfb52a51b0154de55fe552d30a59e9bfc60f381b987e527d0067b5e3efdf493,PodSandboxId:0fc6f924b3914897ccb68df15de8825f3af5357060d2e98ea91e4cac85c89108,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726176700317205582,Labels:map[string]string{io.kubernetes.container.name: patch,io.
kubernetes.pod.name: ingress-nginx-admission-patch-75vhq,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e473a3e1-2d2f-4981-993e-47902c4c573c,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bd29448ea314df05ee9e96a683c055a9f7ce799e6b86e7d531105e4981c5df9,PodSandboxId:d4d9cc832e450785d0e1b4460e85a8a3a592d8778caa1c00cbdaf238b2d5e5e6,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726176700177295239,Labels:map[string]string{io.
kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-gf4cr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9f8be3b2-df3b-4d54-9d3f-f37cb358b701,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01c0d8468e1a5daad3c86161040af5d9affffdd5c20705a3f71d2903c6243d96,PodSandboxId:f1b6fca0a1b4a528f24874cf3deb296ed28cf61228310af6f8b71a38b1bc2f1c,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:172617
6691385084595,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-v4b7g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4922d6c5-c4bb-4ec8-a21f-2ca9ba3c4691,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c63491974a86dd1007fc9980bfe0086d0dc3bf4ff8c0c3f310a5cb87fbb4ac38,PodSandboxId:bb6d26e8124017f968cdbd7d1e9d6dc8f51c932a1d588df39950c0a71e8dea66,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628d
b3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726176640283421177,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f49f988-6d5b-4cb6-a9a4-f15fec6617ee,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a9fbfbdc25792944bc7f0738f91a9c4ca524f80d4c4ef8065875105ad68d91b,PodSandboxId:52798c65c361b446fc2229d3223995b78422a1931e70180eea1ef814625c958e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724
c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726176637213238542,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rpsn9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb2ce549-2d5c-45ec-a46d-562d4acd82ea,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa4b1b8007598386d5052a12803d3a47809e7be17f0613791526a0fb975078f1,PodSandboxId:00dce38c65e40888f99c4531feab924cf6ecb4c5171d13070c643118572341c8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image
:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726176634905138174,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4hcfx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17176328-abc9-4540-ac4c-c63083724812,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:daff578fb9bc43cd709b1e387f2aa19b6c69701a055733a1e7c09f5d3c4ae546,PodSandboxId:af67c2341731309439d1fb9ac03831771a23928c83b1b1bc5a445be50d7b8c93,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad9415
75eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726176623547228673,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-694635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b876c14c875d4b53e5c61f3bdb6b61f2,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04006273204a6b5b2c2c50eb039597ab1cad77b9f65e3cdcf9ad2cd2bff6a600,PodSandboxId:8f5fcc20744c5a49bd5023165e3ffeed38dc69330f0025dc1df0829da8a54879,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decf
a1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726176623493601030,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-694635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d6a101dce97ee820fc22e8980fa1bd2,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ad45dbfb61b732019b2446eb37b838159475578e53421516d318b1d17d0d863,PodSandboxId:e1566071cac6e7c7300f541dd70faf52b58c8b1f654f49885e6ff61047017313,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792c
bf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726176623462786884,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-694635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca0f4581a8ddd13059907f5e64c9ddcf,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c2e331dbfeadd5401ab6aa1159f9097e7db3bf727f83963a786e4a149b7c5ba,PodSandboxId:8ab56f691eeeaa15cc50d49aeca3a855097da9e407580c18dde97d5293281963,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[st
ring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726176623451400362,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-694635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9eeb62b2ef7f8ac332344239844358b7,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=907f910a-84ed-46ac-89b3-3b0889499f4c name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 21:43:42 addons-694635 crio[662]: time="2024-09-12 21:43:42.214730565Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ae132a6d-684d-4943-a728-685b62e6f2e0 name=/runtime.v1.RuntimeService/Version
	Sep 12 21:43:42 addons-694635 crio[662]: time="2024-09-12 21:43:42.214811822Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ae132a6d-684d-4943-a728-685b62e6f2e0 name=/runtime.v1.RuntimeService/Version
	Sep 12 21:43:42 addons-694635 crio[662]: time="2024-09-12 21:43:42.216235063Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5924f918-1a42-43cb-9aea-e66a54c38d23 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 21:43:42 addons-694635 crio[662]: time="2024-09-12 21:43:42.217379938Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726177422217353349,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:580233,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5924f918-1a42-43cb-9aea-e66a54c38d23 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 21:43:42 addons-694635 crio[662]: time="2024-09-12 21:43:42.217930463Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0ae625b9-7a22-496f-89fb-d75c95dafc8e name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 21:43:42 addons-694635 crio[662]: time="2024-09-12 21:43:42.217988453Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0ae625b9-7a22-496f-89fb-d75c95dafc8e name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 21:43:42 addons-694635 crio[662]: time="2024-09-12 21:43:42.218250129Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:68df5018ee9b9c8b040980f7b13e5f8cd660087c416d49062434ac1567d9ff1b,PodSandboxId:1ae8f2e321f0f9eadaba61d67d63cc3cb8c715a45a4ebedc12f1b6516e36b891,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1726177414971816754,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-8wzs4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c11e9909-be91-42a2-973f-3ec56c134bed,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15f49cc7f3e63d860a0b154ce1d0a027f105c70027b67a50ab5d73a13191309a,PodSandboxId:9d3e688e943f8b1412681f72bcbb2d49d4d9a3e4a04b3cac9a3ab31dca0efc68,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726177277424664218,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6d172e45-acae-4863-b4f1-7cf6c870a3d8,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:224662c30f37670f4f61f36221a15bb4d6847d38fcb6a9be3d38b6b08f1d6765,PodSandboxId:e71b5d7408e655bb8c96a5d654726777d547179b47272efaaa970adf10a2ee35,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726176721533597537,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-px7q4,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: ec2ec8bf-cb0a-47eb-b117-c3e51f68cafc,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbfb52a51b0154de55fe552d30a59e9bfc60f381b987e527d0067b5e3efdf493,PodSandboxId:0fc6f924b3914897ccb68df15de8825f3af5357060d2e98ea91e4cac85c89108,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726176700317205582,Labels:map[string]string{io.kubernetes.container.name: patch,io.
kubernetes.pod.name: ingress-nginx-admission-patch-75vhq,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e473a3e1-2d2f-4981-993e-47902c4c573c,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bd29448ea314df05ee9e96a683c055a9f7ce799e6b86e7d531105e4981c5df9,PodSandboxId:d4d9cc832e450785d0e1b4460e85a8a3a592d8778caa1c00cbdaf238b2d5e5e6,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726176700177295239,Labels:map[string]string{io.
kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-gf4cr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9f8be3b2-df3b-4d54-9d3f-f37cb358b701,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01c0d8468e1a5daad3c86161040af5d9affffdd5c20705a3f71d2903c6243d96,PodSandboxId:f1b6fca0a1b4a528f24874cf3deb296ed28cf61228310af6f8b71a38b1bc2f1c,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:172617
6691385084595,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-v4b7g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4922d6c5-c4bb-4ec8-a21f-2ca9ba3c4691,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c63491974a86dd1007fc9980bfe0086d0dc3bf4ff8c0c3f310a5cb87fbb4ac38,PodSandboxId:bb6d26e8124017f968cdbd7d1e9d6dc8f51c932a1d588df39950c0a71e8dea66,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628d
b3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726176640283421177,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f49f988-6d5b-4cb6-a9a4-f15fec6617ee,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a9fbfbdc25792944bc7f0738f91a9c4ca524f80d4c4ef8065875105ad68d91b,PodSandboxId:52798c65c361b446fc2229d3223995b78422a1931e70180eea1ef814625c958e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724
c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726176637213238542,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rpsn9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb2ce549-2d5c-45ec-a46d-562d4acd82ea,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa4b1b8007598386d5052a12803d3a47809e7be17f0613791526a0fb975078f1,PodSandboxId:00dce38c65e40888f99c4531feab924cf6ecb4c5171d13070c643118572341c8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image
:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726176634905138174,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4hcfx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17176328-abc9-4540-ac4c-c63083724812,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:daff578fb9bc43cd709b1e387f2aa19b6c69701a055733a1e7c09f5d3c4ae546,PodSandboxId:af67c2341731309439d1fb9ac03831771a23928c83b1b1bc5a445be50d7b8c93,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad9415
75eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726176623547228673,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-694635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b876c14c875d4b53e5c61f3bdb6b61f2,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04006273204a6b5b2c2c50eb039597ab1cad77b9f65e3cdcf9ad2cd2bff6a600,PodSandboxId:8f5fcc20744c5a49bd5023165e3ffeed38dc69330f0025dc1df0829da8a54879,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decf
a1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726176623493601030,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-694635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d6a101dce97ee820fc22e8980fa1bd2,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ad45dbfb61b732019b2446eb37b838159475578e53421516d318b1d17d0d863,PodSandboxId:e1566071cac6e7c7300f541dd70faf52b58c8b1f654f49885e6ff61047017313,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792c
bf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726176623462786884,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-694635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca0f4581a8ddd13059907f5e64c9ddcf,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c2e331dbfeadd5401ab6aa1159f9097e7db3bf727f83963a786e4a149b7c5ba,PodSandboxId:8ab56f691eeeaa15cc50d49aeca3a855097da9e407580c18dde97d5293281963,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[st
ring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726176623451400362,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-694635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9eeb62b2ef7f8ac332344239844358b7,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0ae625b9-7a22-496f-89fb-d75c95dafc8e name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 21:43:42 addons-694635 crio[662]: time="2024-09-12 21:43:42.259100863Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6607ab04-006a-453d-b9eb-20d52af181fc name=/runtime.v1.RuntimeService/Version
	Sep 12 21:43:42 addons-694635 crio[662]: time="2024-09-12 21:43:42.259176408Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6607ab04-006a-453d-b9eb-20d52af181fc name=/runtime.v1.RuntimeService/Version
	Sep 12 21:43:42 addons-694635 crio[662]: time="2024-09-12 21:43:42.260397832Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=772bda88-2cf0-4b4f-99f1-ca4ef08687a9 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 21:43:42 addons-694635 crio[662]: time="2024-09-12 21:43:42.261779817Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726177422261753064,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:580233,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=772bda88-2cf0-4b4f-99f1-ca4ef08687a9 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 21:43:42 addons-694635 crio[662]: time="2024-09-12 21:43:42.262421021Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5152f7bf-8953-45ca-bbe7-6182b4de7c9d name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 21:43:42 addons-694635 crio[662]: time="2024-09-12 21:43:42.262530104Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5152f7bf-8953-45ca-bbe7-6182b4de7c9d name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 21:43:42 addons-694635 crio[662]: time="2024-09-12 21:43:42.262802030Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:68df5018ee9b9c8b040980f7b13e5f8cd660087c416d49062434ac1567d9ff1b,PodSandboxId:1ae8f2e321f0f9eadaba61d67d63cc3cb8c715a45a4ebedc12f1b6516e36b891,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1726177414971816754,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-8wzs4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c11e9909-be91-42a2-973f-3ec56c134bed,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15f49cc7f3e63d860a0b154ce1d0a027f105c70027b67a50ab5d73a13191309a,PodSandboxId:9d3e688e943f8b1412681f72bcbb2d49d4d9a3e4a04b3cac9a3ab31dca0efc68,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726177277424664218,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6d172e45-acae-4863-b4f1-7cf6c870a3d8,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:224662c30f37670f4f61f36221a15bb4d6847d38fcb6a9be3d38b6b08f1d6765,PodSandboxId:e71b5d7408e655bb8c96a5d654726777d547179b47272efaaa970adf10a2ee35,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726176721533597537,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-px7q4,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: ec2ec8bf-cb0a-47eb-b117-c3e51f68cafc,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbfb52a51b0154de55fe552d30a59e9bfc60f381b987e527d0067b5e3efdf493,PodSandboxId:0fc6f924b3914897ccb68df15de8825f3af5357060d2e98ea91e4cac85c89108,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726176700317205582,Labels:map[string]string{io.kubernetes.container.name: patch,io.
kubernetes.pod.name: ingress-nginx-admission-patch-75vhq,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e473a3e1-2d2f-4981-993e-47902c4c573c,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bd29448ea314df05ee9e96a683c055a9f7ce799e6b86e7d531105e4981c5df9,PodSandboxId:d4d9cc832e450785d0e1b4460e85a8a3a592d8778caa1c00cbdaf238b2d5e5e6,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726176700177295239,Labels:map[string]string{io.
kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-gf4cr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9f8be3b2-df3b-4d54-9d3f-f37cb358b701,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01c0d8468e1a5daad3c86161040af5d9affffdd5c20705a3f71d2903c6243d96,PodSandboxId:f1b6fca0a1b4a528f24874cf3deb296ed28cf61228310af6f8b71a38b1bc2f1c,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:172617
6691385084595,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-v4b7g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4922d6c5-c4bb-4ec8-a21f-2ca9ba3c4691,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c63491974a86dd1007fc9980bfe0086d0dc3bf4ff8c0c3f310a5cb87fbb4ac38,PodSandboxId:bb6d26e8124017f968cdbd7d1e9d6dc8f51c932a1d588df39950c0a71e8dea66,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628d
b3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726176640283421177,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f49f988-6d5b-4cb6-a9a4-f15fec6617ee,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a9fbfbdc25792944bc7f0738f91a9c4ca524f80d4c4ef8065875105ad68d91b,PodSandboxId:52798c65c361b446fc2229d3223995b78422a1931e70180eea1ef814625c958e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724
c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726176637213238542,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rpsn9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb2ce549-2d5c-45ec-a46d-562d4acd82ea,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa4b1b8007598386d5052a12803d3a47809e7be17f0613791526a0fb975078f1,PodSandboxId:00dce38c65e40888f99c4531feab924cf6ecb4c5171d13070c643118572341c8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image
:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726176634905138174,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4hcfx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17176328-abc9-4540-ac4c-c63083724812,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:daff578fb9bc43cd709b1e387f2aa19b6c69701a055733a1e7c09f5d3c4ae546,PodSandboxId:af67c2341731309439d1fb9ac03831771a23928c83b1b1bc5a445be50d7b8c93,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad9415
75eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726176623547228673,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-694635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b876c14c875d4b53e5c61f3bdb6b61f2,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04006273204a6b5b2c2c50eb039597ab1cad77b9f65e3cdcf9ad2cd2bff6a600,PodSandboxId:8f5fcc20744c5a49bd5023165e3ffeed38dc69330f0025dc1df0829da8a54879,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decf
a1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726176623493601030,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-694635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d6a101dce97ee820fc22e8980fa1bd2,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ad45dbfb61b732019b2446eb37b838159475578e53421516d318b1d17d0d863,PodSandboxId:e1566071cac6e7c7300f541dd70faf52b58c8b1f654f49885e6ff61047017313,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792c
bf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726176623462786884,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-694635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca0f4581a8ddd13059907f5e64c9ddcf,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c2e331dbfeadd5401ab6aa1159f9097e7db3bf727f83963a786e4a149b7c5ba,PodSandboxId:8ab56f691eeeaa15cc50d49aeca3a855097da9e407580c18dde97d5293281963,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[st
ring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726176623451400362,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-694635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9eeb62b2ef7f8ac332344239844358b7,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5152f7bf-8953-45ca-bbe7-6182b4de7c9d name=/runtime.v1.RuntimeService/ListContainers
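	(Aside, not part of the captured log: the RuntimeService/Version and RuntimeService/ListContainers traffic traced above can be reproduced against the same CRI-O endpoint with a small CRI client. The Go sketch below is illustrative only — it is not part of the minikube test harness — and assumes k8s.io/cri-api and grpc-go are available in go.mod and that the socket path matches the kubeadm cri-socket annotation shown in the node description further down.)

// Minimal, hypothetical sketch: list containers over CRI, mirroring the
// RuntimeService/ListContainers calls visible in the CRI-O log above.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Socket path assumed from the cri-socket annotation in this report.
	conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial CRI socket: %v", err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)

	// An empty filter corresponds to the logged message
	// "No filters were applied, returning full container list".
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{
		Filter: &runtimeapi.ContainerFilter{},
	})
	if err != nil {
		log.Fatalf("ListContainers: %v", err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%.13s  %-25s  %s\n", c.Id, c.Metadata.GetName(), c.State.String())
	}
}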
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	68df5018ee9b9       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        7 seconds ago       Running             hello-world-app           0                   1ae8f2e321f0f       hello-world-app-55bf9c44b4-8wzs4
	15f49cc7f3e63       docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56                              2 minutes ago       Running             nginx                     0                   9d3e688e943f8       nginx
	224662c30f376       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                 11 minutes ago      Running             gcp-auth                  0                   e71b5d7408e65       gcp-auth-89d5ffd79-px7q4
	cbfb52a51b015       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012   12 minutes ago      Exited              patch                     0                   0fc6f924b3914       ingress-nginx-admission-patch-75vhq
	5bd29448ea314       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012   12 minutes ago      Exited              create                    0                   d4d9cc832e450       ingress-nginx-admission-create-gf4cr
	01c0d8468e1a5       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a        12 minutes ago      Running             metrics-server            0                   f1b6fca0a1b4a       metrics-server-84c5f94fbc-v4b7g
	c63491974a86d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             13 minutes ago      Running             storage-provisioner       0                   bb6d26e812401       storage-provisioner
	1a9fbfbdc2579       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             13 minutes ago      Running             coredns                   0                   52798c65c361b       coredns-7c65d6cfc9-rpsn9
	aa4b1b8007598       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                             13 minutes ago      Running             kube-proxy                0                   00dce38c65e40       kube-proxy-4hcfx
	daff578fb9bc4       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                             13 minutes ago      Running             kube-scheduler            0                   af67c23417313       kube-scheduler-addons-694635
	04006273204a6       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                             13 minutes ago      Running             kube-apiserver            0                   8f5fcc20744c5       kube-apiserver-addons-694635
	3ad45dbfb61b7       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                             13 minutes ago      Running             etcd                      0                   e1566071cac6e       etcd-addons-694635
	5c2e331dbfead       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                             13 minutes ago      Running             kube-controller-manager   0                   8ab56f691eeea       kube-controller-manager-addons-694635
	
	
	==> coredns [1a9fbfbdc25792944bc7f0738f91a9c4ca524f80d4c4ef8065875105ad68d91b] <==
	[INFO] 127.0.0.1:55335 - 14088 "HINFO IN 1593280896951240425.6479746786649468559. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009751103s
	[INFO] 10.244.0.8:55681 - 3740 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000376198s
	[INFO] 10.244.0.8:55681 - 64158 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000228403s
	[INFO] 10.244.0.8:37781 - 47777 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000252945s
	[INFO] 10.244.0.8:37781 - 7076 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000147556s
	[INFO] 10.244.0.8:41819 - 26826 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000226016s
	[INFO] 10.244.0.8:41819 - 4808 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00010299s
	[INFO] 10.244.0.8:36322 - 25419 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000092438s
	[INFO] 10.244.0.8:36322 - 47689 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000194058s
	[INFO] 10.244.0.8:52027 - 25674 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000158473s
	[INFO] 10.244.0.8:52027 - 28495 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000211396s
	[INFO] 10.244.0.8:60142 - 5226 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000072599s
	[INFO] 10.244.0.8:60142 - 8039 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000122511s
	[INFO] 10.244.0.8:50355 - 29794 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000050766s
	[INFO] 10.244.0.8:50355 - 16480 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000152532s
	[INFO] 10.244.0.8:38422 - 32454 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000054761s
	[INFO] 10.244.0.8:38422 - 36548 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000127267s
	[INFO] 10.244.0.22:60865 - 4263 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000466894s
	[INFO] 10.244.0.22:39371 - 54519 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000098861s
	[INFO] 10.244.0.22:41806 - 53233 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000138737s
	[INFO] 10.244.0.22:36774 - 22315 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000063899s
	[INFO] 10.244.0.22:57836 - 41268 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000128874s
	[INFO] 10.244.0.22:60541 - 59176 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000161626s
	[INFO] 10.244.0.22:53240 - 37260 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.004441249s
	[INFO] 10.244.0.22:51419 - 44769 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.004780269s
	
	
	==> describe nodes <==
	Name:               addons-694635
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-694635
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f6bc674a17941874d4e5b792b09c1791d30622b8
	                    minikube.k8s.io/name=addons-694635
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_12T21_30_29_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-694635
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 12 Sep 2024 21:30:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-694635
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 12 Sep 2024 21:43:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 12 Sep 2024 21:41:31 +0000   Thu, 12 Sep 2024 21:30:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 12 Sep 2024 21:41:31 +0000   Thu, 12 Sep 2024 21:30:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 12 Sep 2024 21:41:31 +0000   Thu, 12 Sep 2024 21:30:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 12 Sep 2024 21:41:31 +0000   Thu, 12 Sep 2024 21:30:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.67
	  Hostname:    addons-694635
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 13b099cf91f8442286dd9014ad34a5eb
	  System UUID:                13b099cf-91f8-4422-86dd-9014ad34a5eb
	  Boot ID:                    e094f473-e531-4253-a8aa-4f2a067e9156
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  default                     hello-world-app-55bf9c44b4-8wzs4         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m29s
	  gcp-auth                    gcp-auth-89d5ffd79-px7q4                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-7c65d6cfc9-rpsn9                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     13m
	  kube-system                 etcd-addons-694635                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         13m
	  kube-system                 kube-apiserver-addons-694635             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-addons-694635    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-4hcfx                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-addons-694635             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 metrics-server-84c5f94fbc-v4b7g          100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         13m
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 13m   kube-proxy       
	  Normal  Starting                 13m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  13m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m   kubelet          Node addons-694635 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m   kubelet          Node addons-694635 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m   kubelet          Node addons-694635 status is now: NodeHasSufficientPID
	  Normal  NodeReady                13m   kubelet          Node addons-694635 status is now: NodeReady
	  Normal  RegisteredNode           13m   node-controller  Node addons-694635 event: Registered Node addons-694635 in Controller
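	(Aside, not part of the captured output: the node conditions and events listed above come from the Kubernetes API. The client-go sketch below is illustrative only, assuming a reachable kubeconfig at the default location and the node name reported here; it reads the same Ready condition programmatically.)

// Hypothetical sketch: fetch the Ready condition of the node described above.
package main

import (
	"context"
	"fmt"
	"log"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	// Kubeconfig location is an assumption for this sketch.
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	node, err := clientset.CoreV1().Nodes().Get(context.Background(), "addons-694635", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			fmt.Printf("Ready=%s reason=%s since=%s\n", cond.Status, cond.Reason, cond.LastTransitionTime)
		}
	}
}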
	
	
	==> dmesg <==
	[Sep12 21:31] kauditd_printk_skb: 6 callbacks suppressed
	[  +7.489065] kauditd_printk_skb: 2 callbacks suppressed
	[  +8.929989] kauditd_printk_skb: 27 callbacks suppressed
	[ +10.095844] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.073125] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.683105] kauditd_printk_skb: 81 callbacks suppressed
	[  +7.372236] kauditd_printk_skb: 32 callbacks suppressed
	[Sep12 21:32] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.856647] kauditd_printk_skb: 16 callbacks suppressed
	[ +29.701828] kauditd_printk_skb: 40 callbacks suppressed
	[Sep12 21:33] kauditd_printk_skb: 30 callbacks suppressed
	[Sep12 21:35] kauditd_printk_skb: 28 callbacks suppressed
	[Sep12 21:37] kauditd_printk_skb: 28 callbacks suppressed
	[Sep12 21:40] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.238101] kauditd_printk_skb: 37 callbacks suppressed
	[  +5.551734] kauditd_printk_skb: 9 callbacks suppressed
	[  +5.393117] kauditd_printk_skb: 30 callbacks suppressed
	[  +5.485586] kauditd_printk_skb: 31 callbacks suppressed
	[  +5.071553] kauditd_printk_skb: 25 callbacks suppressed
	[ +10.586398] kauditd_printk_skb: 11 callbacks suppressed
	[  +8.540652] kauditd_printk_skb: 43 callbacks suppressed
	[Sep12 21:41] kauditd_printk_skb: 26 callbacks suppressed
	[ +14.241626] kauditd_printk_skb: 8 callbacks suppressed
	[  +5.193519] kauditd_printk_skb: 21 callbacks suppressed
	[Sep12 21:43] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [3ad45dbfb61b732019b2446eb37b838159475578e53421516d318b1d17d0d863] <==
	{"level":"info","ts":"2024-09-12T21:32:27.442536Z","caller":"traceutil/trace.go:171","msg":"trace[1053230552] transaction","detail":"{read_only:false; response_revision:1240; number_of_response:1; }","duration":"376.793189ms","start":"2024-09-12T21:32:27.065736Z","end":"2024-09-12T21:32:27.442529Z","steps":["trace[1053230552] 'process raft request'  (duration: 376.46254ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-12T21:32:27.442634Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-12T21:32:27.065721Z","time spent":"376.837334ms","remote":"127.0.0.1:46902","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1237 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-09-12T21:32:27.442747Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"263.721986ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-12T21:32:27.442779Z","caller":"traceutil/trace.go:171","msg":"trace[1254642931] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1240; }","duration":"263.753575ms","start":"2024-09-12T21:32:27.179019Z","end":"2024-09-12T21:32:27.442773Z","steps":["trace[1254642931] 'agreement among raft nodes before linearized reading'  (duration: 263.705568ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-12T21:32:27.442948Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"262.756935ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" ","response":"range_response_count:1 size:552"}
	{"level":"info","ts":"2024-09-12T21:32:27.442984Z","caller":"traceutil/trace.go:171","msg":"trace[1578547455] range","detail":"{range_begin:/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io; range_end:; response_count:1; response_revision:1240; }","duration":"262.791577ms","start":"2024-09-12T21:32:27.180186Z","end":"2024-09-12T21:32:27.442977Z","steps":["trace[1578547455] 'agreement among raft nodes before linearized reading'  (duration: 262.70651ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-12T21:40:19.746598Z","caller":"traceutil/trace.go:171","msg":"trace[1981957924] linearizableReadLoop","detail":"{readStateIndex:2127; appliedIndex:2126; }","duration":"133.477931ms","start":"2024-09-12T21:40:19.613083Z","end":"2024-09-12T21:40:19.746561Z","steps":["trace[1981957924] 'read index received'  (duration: 133.318567ms)","trace[1981957924] 'applied index is now lower than readState.Index'  (duration: 158.878µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-12T21:40:19.746825Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"133.6822ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-12T21:40:19.746858Z","caller":"traceutil/trace.go:171","msg":"trace[1975095780] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1989; }","duration":"133.772244ms","start":"2024-09-12T21:40:19.613077Z","end":"2024-09-12T21:40:19.746850Z","steps":["trace[1975095780] 'agreement among raft nodes before linearized reading'  (duration: 133.667003ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-12T21:40:19.746680Z","caller":"traceutil/trace.go:171","msg":"trace[784585044] transaction","detail":"{read_only:false; response_revision:1989; number_of_response:1; }","duration":"282.702863ms","start":"2024-09-12T21:40:19.463956Z","end":"2024-09-12T21:40:19.746659Z","steps":["trace[784585044] 'process raft request'  (duration: 282.48487ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-12T21:40:24.366865Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1527}
	{"level":"info","ts":"2024-09-12T21:40:24.408830Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1527,"took":"41.110259ms","hash":3946649684,"current-db-size-bytes":6709248,"current-db-size":"6.7 MB","current-db-size-in-use-bytes":3416064,"current-db-size-in-use":"3.4 MB"}
	{"level":"info","ts":"2024-09-12T21:40:24.408900Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3946649684,"revision":1527,"compact-revision":-1}
	{"level":"info","ts":"2024-09-12T21:40:40.024996Z","caller":"traceutil/trace.go:171","msg":"trace[2045705986] transaction","detail":"{read_only:false; response_revision:2179; number_of_response:1; }","duration":"188.170203ms","start":"2024-09-12T21:40:39.836812Z","end":"2024-09-12T21:40:40.024982Z","steps":["trace[2045705986] 'process raft request'  (duration: 187.576243ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-12T21:40:40.025569Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"184.896897ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ingress\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-12T21:40:40.025770Z","caller":"traceutil/trace.go:171","msg":"trace[1651034224] range","detail":"{range_begin:/registry/ingress; range_end:; response_count:0; response_revision:2179; }","duration":"185.132257ms","start":"2024-09-12T21:40:39.840570Z","end":"2024-09-12T21:40:40.025702Z","steps":["trace[1651034224] 'agreement among raft nodes before linearized reading'  (duration: 184.872808ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-12T21:40:40.027031Z","caller":"traceutil/trace.go:171","msg":"trace[737774189] linearizableReadLoop","detail":"{readStateIndex:2324; appliedIndex:2323; }","duration":"184.07988ms","start":"2024-09-12T21:40:39.840574Z","end":"2024-09-12T21:40:40.024654Z","steps":["trace[737774189] 'read index received'  (duration: 183.713847ms)","trace[737774189] 'applied index is now lower than readState.Index'  (duration: 365.525µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-12T21:40:40.027339Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"161.934654ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1114"}
	{"level":"info","ts":"2024-09-12T21:40:40.027410Z","caller":"traceutil/trace.go:171","msg":"trace[100333331] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:2179; }","duration":"162.010795ms","start":"2024-09-12T21:40:39.865389Z","end":"2024-09-12T21:40:40.027400Z","steps":["trace[100333331] 'agreement among raft nodes before linearized reading'  (duration: 161.762163ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-12T21:40:40.220357Z","caller":"traceutil/trace.go:171","msg":"trace[1115025117] linearizableReadLoop","detail":"{readStateIndex:2325; appliedIndex:2324; }","duration":"186.564755ms","start":"2024-09-12T21:40:40.033761Z","end":"2024-09-12T21:40:40.220326Z","steps":["trace[1115025117] 'read index received'  (duration: 186.518061ms)","trace[1115025117] 'applied index is now lower than readState.Index'  (duration: 45.997µs)"],"step_count":2}
	{"level":"info","ts":"2024-09-12T21:40:40.220626Z","caller":"traceutil/trace.go:171","msg":"trace[1874224401] transaction","detail":"{read_only:false; response_revision:2180; number_of_response:1; }","duration":"187.429481ms","start":"2024-09-12T21:40:40.033184Z","end":"2024-09-12T21:40:40.220614Z","steps":["trace[1874224401] 'process raft request'  (duration: 186.678055ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-12T21:40:40.220786Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"187.086416ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-12T21:40:40.220822Z","caller":"traceutil/trace.go:171","msg":"trace[838300705] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2180; }","duration":"187.131549ms","start":"2024-09-12T21:40:40.033683Z","end":"2024-09-12T21:40:40.220815Z","steps":["trace[838300705] 'agreement among raft nodes before linearized reading'  (duration: 187.072562ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-12T21:40:40.220927Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"186.744825ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/snapshot.storage.k8s.io/volumesnapshots\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-12T21:40:40.220957Z","caller":"traceutil/trace.go:171","msg":"trace[524765721] range","detail":"{range_begin:/registry/snapshot.storage.k8s.io/volumesnapshots; range_end:; response_count:0; response_revision:2180; }","duration":"186.778424ms","start":"2024-09-12T21:40:40.034173Z","end":"2024-09-12T21:40:40.220952Z","steps":["trace[524765721] 'agreement among raft nodes before linearized reading'  (duration: 186.735141ms)"],"step_count":1}
	
	
	==> gcp-auth [224662c30f37670f4f61f36221a15bb4d6847d38fcb6a9be3d38b6b08f1d6765] <==
	2024/09/12 21:32:07 Ready to write response ...
	2024/09/12 21:40:10 Ready to marshal response ...
	2024/09/12 21:40:10 Ready to write response ...
	2024/09/12 21:40:10 Ready to marshal response ...
	2024/09/12 21:40:10 Ready to write response ...
	2024/09/12 21:40:13 Ready to marshal response ...
	2024/09/12 21:40:13 Ready to write response ...
	2024/09/12 21:40:14 Ready to marshal response ...
	2024/09/12 21:40:14 Ready to write response ...
	2024/09/12 21:40:20 Ready to marshal response ...
	2024/09/12 21:40:20 Ready to write response ...
	2024/09/12 21:40:28 Ready to marshal response ...
	2024/09/12 21:40:28 Ready to write response ...
	2024/09/12 21:40:33 Ready to marshal response ...
	2024/09/12 21:40:33 Ready to write response ...
	2024/09/12 21:40:33 Ready to marshal response ...
	2024/09/12 21:40:33 Ready to write response ...
	2024/09/12 21:40:33 Ready to marshal response ...
	2024/09/12 21:40:33 Ready to write response ...
	2024/09/12 21:40:36 Ready to marshal response ...
	2024/09/12 21:40:36 Ready to write response ...
	2024/09/12 21:41:13 Ready to marshal response ...
	2024/09/12 21:41:13 Ready to write response ...
	2024/09/12 21:43:32 Ready to marshal response ...
	2024/09/12 21:43:32 Ready to write response ...
	
	
	==> kernel <==
	 21:43:42 up 13 min,  0 users,  load average: 0.21, 0.45, 0.37
	Linux addons-694635 5.10.207 #1 SMP Thu Sep 12 19:03:33 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [04006273204a6b5b2c2c50eb039597ab1cad77b9f65e3cdcf9ad2cd2bff6a600] <==
	E0912 21:32:39.672883       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.168.73:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.110.168.73:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.110.168.73:443: connect: connection refused" logger="UnhandledError"
	E0912 21:32:39.685803       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.168.73:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.110.168.73:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.110.168.73:443: connect: connection refused" logger="UnhandledError"
	E0912 21:32:39.712873       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.168.73:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.110.168.73:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.110.168.73:443: connect: connection refused" logger="UnhandledError"
	I0912 21:32:39.805428       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0912 21:40:26.205439       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0912 21:40:33.330977       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.111.67.92"}
	E0912 21:40:44.623623       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0912 21:40:56.039633       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0912 21:40:56.039692       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0912 21:40:56.072862       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0912 21:40:56.072917       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0912 21:40:56.085872       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0912 21:40:56.085946       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0912 21:40:56.110100       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0912 21:40:56.110148       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0912 21:40:56.134562       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0912 21:40:56.134998       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0912 21:40:57.111095       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0912 21:40:57.135378       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0912 21:40:57.232817       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I0912 21:41:09.586605       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0912 21:41:10.631785       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0912 21:41:13.128356       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0912 21:41:13.316851       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.99.65.172"}
	I0912 21:43:32.312526       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.101.215.31"}
	
	
	==> kube-controller-manager [5c2e331dbfeadd5401ab6aa1159f9097e7db3bf727f83963a786e4a149b7c5ba] <==
	E0912 21:42:09.762661       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0912 21:42:18.274609       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0912 21:42:18.274823       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0912 21:42:33.705860       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0912 21:42:33.705918       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0912 21:42:40.650869       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0912 21:42:40.651061       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0912 21:42:59.388765       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0912 21:42:59.388834       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0912 21:43:03.576935       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0912 21:43:03.577084       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0912 21:43:18.645998       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0912 21:43:18.646055       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0912 21:43:28.069529       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0912 21:43:28.069643       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0912 21:43:32.147653       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="55.711708ms"
	I0912 21:43:32.165761       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="16.170236ms"
	I0912 21:43:32.167652       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="82.829µs"
	I0912 21:43:34.272666       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" delay="0s"
	I0912 21:43:34.278382       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="5.858µs"
	I0912 21:43:34.285055       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="0s"
	I0912 21:43:35.335953       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="11.246798ms"
	I0912 21:43:35.336082       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="90.288µs"
	W0912 21:43:39.422700       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0912 21:43:39.422827       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [aa4b1b8007598386d5052a12803d3a47809e7be17f0613791526a0fb975078f1] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0912 21:30:36.071774       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0912 21:30:36.082467       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.67"]
	E0912 21:30:36.082639       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0912 21:30:36.149367       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0912 21:30:36.149399       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0912 21:30:36.149432       1 server_linux.go:169] "Using iptables Proxier"
	I0912 21:30:36.161798       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0912 21:30:36.164947       1 server.go:483] "Version info" version="v1.31.1"
	I0912 21:30:36.164965       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0912 21:30:36.177240       1 config.go:199] "Starting service config controller"
	I0912 21:30:36.177256       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0912 21:30:36.177281       1 config.go:105] "Starting endpoint slice config controller"
	I0912 21:30:36.177291       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0912 21:30:36.180184       1 config.go:328] "Starting node config controller"
	I0912 21:30:36.180198       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0912 21:30:36.277929       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0912 21:30:36.278089       1 shared_informer.go:320] Caches are synced for service config
	I0912 21:30:36.286430       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [daff578fb9bc43cd709b1e387f2aa19b6c69701a055733a1e7c09f5d3c4ae546] <==
	W0912 21:30:25.943462       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0912 21:30:25.943544       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0912 21:30:25.943641       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0912 21:30:25.943723       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0912 21:30:26.867246       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0912 21:30:26.867357       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0912 21:30:26.882410       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0912 21:30:26.882590       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0912 21:30:26.937816       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0912 21:30:26.937964       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0912 21:30:26.988234       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0912 21:30:26.988387       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0912 21:30:27.028755       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0912 21:30:27.028982       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0912 21:30:27.065104       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0912 21:30:27.065402       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0912 21:30:27.081373       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0912 21:30:27.081599       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0912 21:30:27.089933       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0912 21:30:27.090023       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0912 21:30:27.106816       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0912 21:30:27.106970       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0912 21:30:27.187917       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0912 21:30:27.188172       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0912 21:30:29.715653       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 12 21:43:32 addons-694635 kubelet[1201]: I0912 21:43:32.213367    1201 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfnpl\" (UniqueName: \"kubernetes.io/projected/c11e9909-be91-42a2-973f-3ec56c134bed-kube-api-access-hfnpl\") pod \"hello-world-app-55bf9c44b4-8wzs4\" (UID: \"c11e9909-be91-42a2-973f-3ec56c134bed\") " pod="default/hello-world-app-55bf9c44b4-8wzs4"
	Sep 12 21:43:33 addons-694635 kubelet[1201]: I0912 21:43:33.296167    1201 scope.go:117] "RemoveContainer" containerID="6de976d4d55c054733ae5270b7c84bfb4c238d6df44ac10ca7189e7a208c59b6"
	Sep 12 21:43:33 addons-694635 kubelet[1201]: I0912 21:43:33.314760    1201 scope.go:117] "RemoveContainer" containerID="6de976d4d55c054733ae5270b7c84bfb4c238d6df44ac10ca7189e7a208c59b6"
	Sep 12 21:43:33 addons-694635 kubelet[1201]: E0912 21:43:33.315256    1201 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6de976d4d55c054733ae5270b7c84bfb4c238d6df44ac10ca7189e7a208c59b6\": container with ID starting with 6de976d4d55c054733ae5270b7c84bfb4c238d6df44ac10ca7189e7a208c59b6 not found: ID does not exist" containerID="6de976d4d55c054733ae5270b7c84bfb4c238d6df44ac10ca7189e7a208c59b6"
	Sep 12 21:43:33 addons-694635 kubelet[1201]: I0912 21:43:33.315294    1201 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6de976d4d55c054733ae5270b7c84bfb4c238d6df44ac10ca7189e7a208c59b6"} err="failed to get container status \"6de976d4d55c054733ae5270b7c84bfb4c238d6df44ac10ca7189e7a208c59b6\": rpc error: code = NotFound desc = could not find container \"6de976d4d55c054733ae5270b7c84bfb4c238d6df44ac10ca7189e7a208c59b6\": container with ID starting with 6de976d4d55c054733ae5270b7c84bfb4c238d6df44ac10ca7189e7a208c59b6 not found: ID does not exist"
	Sep 12 21:43:33 addons-694635 kubelet[1201]: I0912 21:43:33.322738    1201 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kpn4k\" (UniqueName: \"kubernetes.io/projected/22649b3c-8428-4122-bf69-ab76864aaa7e-kube-api-access-kpn4k\") pod \"22649b3c-8428-4122-bf69-ab76864aaa7e\" (UID: \"22649b3c-8428-4122-bf69-ab76864aaa7e\") "
	Sep 12 21:43:33 addons-694635 kubelet[1201]: I0912 21:43:33.324599    1201 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22649b3c-8428-4122-bf69-ab76864aaa7e-kube-api-access-kpn4k" (OuterVolumeSpecName: "kube-api-access-kpn4k") pod "22649b3c-8428-4122-bf69-ab76864aaa7e" (UID: "22649b3c-8428-4122-bf69-ab76864aaa7e"). InnerVolumeSpecName "kube-api-access-kpn4k". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 12 21:43:33 addons-694635 kubelet[1201]: I0912 21:43:33.423970    1201 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-kpn4k\" (UniqueName: \"kubernetes.io/projected/22649b3c-8428-4122-bf69-ab76864aaa7e-kube-api-access-kpn4k\") on node \"addons-694635\" DevicePath \"\""
	Sep 12 21:43:34 addons-694635 kubelet[1201]: I0912 21:43:34.645956    1201 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22649b3c-8428-4122-bf69-ab76864aaa7e" path="/var/lib/kubelet/pods/22649b3c-8428-4122-bf69-ab76864aaa7e/volumes"
	Sep 12 21:43:34 addons-694635 kubelet[1201]: I0912 21:43:34.646363    1201 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f8be3b2-df3b-4d54-9d3f-f37cb358b701" path="/var/lib/kubelet/pods/9f8be3b2-df3b-4d54-9d3f-f37cb358b701/volumes"
	Sep 12 21:43:34 addons-694635 kubelet[1201]: I0912 21:43:34.646800    1201 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e473a3e1-2d2f-4981-993e-47902c4c573c" path="/var/lib/kubelet/pods/e473a3e1-2d2f-4981-993e-47902c4c573c/volumes"
	Sep 12 21:43:37 addons-694635 kubelet[1201]: I0912 21:43:37.558264    1201 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f65472b6-e81f-4c58-ab81-fccf64b4d231-webhook-cert\") pod \"f65472b6-e81f-4c58-ab81-fccf64b4d231\" (UID: \"f65472b6-e81f-4c58-ab81-fccf64b4d231\") "
	Sep 12 21:43:37 addons-694635 kubelet[1201]: I0912 21:43:37.558311    1201 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6x2r\" (UniqueName: \"kubernetes.io/projected/f65472b6-e81f-4c58-ab81-fccf64b4d231-kube-api-access-w6x2r\") pod \"f65472b6-e81f-4c58-ab81-fccf64b4d231\" (UID: \"f65472b6-e81f-4c58-ab81-fccf64b4d231\") "
	Sep 12 21:43:37 addons-694635 kubelet[1201]: I0912 21:43:37.560887    1201 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f65472b6-e81f-4c58-ab81-fccf64b4d231-kube-api-access-w6x2r" (OuterVolumeSpecName: "kube-api-access-w6x2r") pod "f65472b6-e81f-4c58-ab81-fccf64b4d231" (UID: "f65472b6-e81f-4c58-ab81-fccf64b4d231"). InnerVolumeSpecName "kube-api-access-w6x2r". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 12 21:43:37 addons-694635 kubelet[1201]: I0912 21:43:37.561605    1201 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f65472b6-e81f-4c58-ab81-fccf64b4d231-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "f65472b6-e81f-4c58-ab81-fccf64b4d231" (UID: "f65472b6-e81f-4c58-ab81-fccf64b4d231"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 12 21:43:37 addons-694635 kubelet[1201]: I0912 21:43:37.658685    1201 reconciler_common.go:288] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f65472b6-e81f-4c58-ab81-fccf64b4d231-webhook-cert\") on node \"addons-694635\" DevicePath \"\""
	Sep 12 21:43:37 addons-694635 kubelet[1201]: I0912 21:43:37.658740    1201 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-w6x2r\" (UniqueName: \"kubernetes.io/projected/f65472b6-e81f-4c58-ab81-fccf64b4d231-kube-api-access-w6x2r\") on node \"addons-694635\" DevicePath \"\""
	Sep 12 21:43:38 addons-694635 kubelet[1201]: I0912 21:43:38.328600    1201 scope.go:117] "RemoveContainer" containerID="08b47558fe95c85582c7dba39a0d6d3720b7bbfafe1678eac94681c51b92e11d"
	Sep 12 21:43:38 addons-694635 kubelet[1201]: I0912 21:43:38.346965    1201 scope.go:117] "RemoveContainer" containerID="08b47558fe95c85582c7dba39a0d6d3720b7bbfafe1678eac94681c51b92e11d"
	Sep 12 21:43:38 addons-694635 kubelet[1201]: E0912 21:43:38.347448    1201 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08b47558fe95c85582c7dba39a0d6d3720b7bbfafe1678eac94681c51b92e11d\": container with ID starting with 08b47558fe95c85582c7dba39a0d6d3720b7bbfafe1678eac94681c51b92e11d not found: ID does not exist" containerID="08b47558fe95c85582c7dba39a0d6d3720b7bbfafe1678eac94681c51b92e11d"
	Sep 12 21:43:38 addons-694635 kubelet[1201]: I0912 21:43:38.347595    1201 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08b47558fe95c85582c7dba39a0d6d3720b7bbfafe1678eac94681c51b92e11d"} err="failed to get container status \"08b47558fe95c85582c7dba39a0d6d3720b7bbfafe1678eac94681c51b92e11d\": rpc error: code = NotFound desc = could not find container \"08b47558fe95c85582c7dba39a0d6d3720b7bbfafe1678eac94681c51b92e11d\": container with ID starting with 08b47558fe95c85582c7dba39a0d6d3720b7bbfafe1678eac94681c51b92e11d not found: ID does not exist"
	Sep 12 21:43:38 addons-694635 kubelet[1201]: I0912 21:43:38.645892    1201 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f65472b6-e81f-4c58-ab81-fccf64b4d231" path="/var/lib/kubelet/pods/f65472b6-e81f-4c58-ab81-fccf64b4d231/volumes"
	Sep 12 21:43:38 addons-694635 kubelet[1201]: E0912 21:43:38.646754    1201 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="c9b902b9-bf7a-4ee9-8a7f-6a52a67a2b2f"
	Sep 12 21:43:39 addons-694635 kubelet[1201]: E0912 21:43:39.295551    1201 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726177419294951749,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:580233,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 21:43:39 addons-694635 kubelet[1201]: E0912 21:43:39.295755    1201 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726177419294951749,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:580233,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [c63491974a86dd1007fc9980bfe0086d0dc3bf4ff8c0c3f310a5cb87fbb4ac38] <==
	I0912 21:30:40.634278       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0912 21:30:40.654230       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0912 21:30:40.654289       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0912 21:30:40.672312       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0912 21:30:40.672455       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-694635_0129df7b-bc38-4de1-88d1-b14901b396c2!
	I0912 21:30:40.672557       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ad54721b-5319-42a0-af50-593f2d28e853", APIVersion:"v1", ResourceVersion:"616", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-694635_0129df7b-bc38-4de1-88d1-b14901b396c2 became leader
	I0912 21:30:40.772629       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-694635_0129df7b-bc38-4de1-88d1-b14901b396c2!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-694635 -n addons-694635
helpers_test.go:261: (dbg) Run:  kubectl --context addons-694635 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-694635 describe pod busybox
helpers_test.go:282: (dbg) kubectl --context addons-694635 describe pod busybox:

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-694635/192.168.39.67
	Start Time:       Thu, 12 Sep 2024 21:32:07 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.23
	IPs:
	  IP:  10.244.0.23
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-c9mw2 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-c9mw2:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  11m                  default-scheduler  Successfully assigned default/busybox to addons-694635
	  Normal   Pulling    10m (x4 over 11m)    kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     10m (x4 over 11m)    kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     10m (x4 over 11m)    kubelet            Error: ErrImagePull
	  Warning  Failed     9m46s (x6 over 11m)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    91s (x43 over 11m)   kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (150.55s)

                                                
                                    
TestAddons/parallel/MetricsServer (346.52s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 4.973191ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-v4b7g" [4922d6c5-c4bb-4ec8-a21f-2ca9ba3c4691] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.00420078s
addons_test.go:417: (dbg) Run:  kubectl --context addons-694635 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-694635 top pods -n kube-system: exit status 1 (65.567855ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-rpsn9, age: 10m36.109388785s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-694635 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-694635 top pods -n kube-system: exit status 1 (63.227629ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-rpsn9, age: 10m38.431302494s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-694635 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-694635 top pods -n kube-system: exit status 1 (61.484823ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-rpsn9, age: 10m43.29545919s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-694635 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-694635 top pods -n kube-system: exit status 1 (92.839299ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-rpsn9, age: 10m49.103494455s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-694635 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-694635 top pods -n kube-system: exit status 1 (62.918877ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-rpsn9, age: 10m58.027201082s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-694635 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-694635 top pods -n kube-system: exit status 1 (61.06515ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-rpsn9, age: 11m7.755159551s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-694635 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-694635 top pods -n kube-system: exit status 1 (62.963699ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-rpsn9, age: 11m24.370953214s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-694635 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-694635 top pods -n kube-system: exit status 1 (64.864569ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-rpsn9, age: 11m49.367424206s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-694635 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-694635 top pods -n kube-system: exit status 1 (60.55268ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-rpsn9, age: 12m30.169642376s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-694635 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-694635 top pods -n kube-system: exit status 1 (62.607258ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-rpsn9, age: 13m47.883880724s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-694635 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-694635 top pods -n kube-system: exit status 1 (61.801898ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-rpsn9, age: 14m43.794439454s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-694635 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-694635 top pods -n kube-system: exit status 1 (69.546669ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-rpsn9, age: 15m36.408801788s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-694635 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-694635 top pods -n kube-system: exit status 1 (64.616248ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-rpsn9, age: 16m14.759192934s

                                                
                                                
** /stderr **
addons_test.go:431: failed checking metric server: exit status 1
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-694635 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-694635 -n addons-694635
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-694635 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-694635 logs -n 25: (1.369980582s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-618378                                                                     | download-only-618378 | jenkins | v1.34.0 | 12 Sep 24 21:29 UTC | 12 Sep 24 21:29 UTC |
	| delete  | -p download-only-976166                                                                     | download-only-976166 | jenkins | v1.34.0 | 12 Sep 24 21:29 UTC | 12 Sep 24 21:29 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-318498 | jenkins | v1.34.0 | 12 Sep 24 21:29 UTC |                     |
	|         | binary-mirror-318498                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:39999                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-318498                                                                     | binary-mirror-318498 | jenkins | v1.34.0 | 12 Sep 24 21:29 UTC | 12 Sep 24 21:29 UTC |
	| addons  | disable dashboard -p                                                                        | addons-694635        | jenkins | v1.34.0 | 12 Sep 24 21:29 UTC |                     |
	|         | addons-694635                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-694635        | jenkins | v1.34.0 | 12 Sep 24 21:29 UTC |                     |
	|         | addons-694635                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-694635 --wait=true                                                                | addons-694635        | jenkins | v1.34.0 | 12 Sep 24 21:29 UTC | 12 Sep 24 21:32 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-694635 addons disable                                                                | addons-694635        | jenkins | v1.34.0 | 12 Sep 24 21:40 UTC | 12 Sep 24 21:40 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-694635 ssh cat                                                                       | addons-694635        | jenkins | v1.34.0 | 12 Sep 24 21:40 UTC | 12 Sep 24 21:40 UTC |
	|         | /opt/local-path-provisioner/pvc-ce6ed7db-1ee2-4cee-8aae-8a13248846f5_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-694635 addons disable                                                                | addons-694635        | jenkins | v1.34.0 | 12 Sep 24 21:40 UTC | 12 Sep 24 21:41 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-694635        | jenkins | v1.34.0 | 12 Sep 24 21:40 UTC | 12 Sep 24 21:40 UTC |
	|         | addons-694635                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-694635        | jenkins | v1.34.0 | 12 Sep 24 21:40 UTC | 12 Sep 24 21:40 UTC |
	|         | -p addons-694635                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-694635 addons disable                                                                | addons-694635        | jenkins | v1.34.0 | 12 Sep 24 21:40 UTC | 12 Sep 24 21:40 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-694635 addons                                                                        | addons-694635        | jenkins | v1.34.0 | 12 Sep 24 21:40 UTC | 12 Sep 24 21:40 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-694635 addons                                                                        | addons-694635        | jenkins | v1.34.0 | 12 Sep 24 21:40 UTC | 12 Sep 24 21:40 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-694635 addons disable                                                                | addons-694635        | jenkins | v1.34.0 | 12 Sep 24 21:40 UTC | 12 Sep 24 21:41 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-694635        | jenkins | v1.34.0 | 12 Sep 24 21:41 UTC | 12 Sep 24 21:41 UTC |
	|         | -p addons-694635                                                                            |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-694635        | jenkins | v1.34.0 | 12 Sep 24 21:41 UTC | 12 Sep 24 21:41 UTC |
	|         | addons-694635                                                                               |                      |         |         |                     |                     |
	| ip      | addons-694635 ip                                                                            | addons-694635        | jenkins | v1.34.0 | 12 Sep 24 21:41 UTC | 12 Sep 24 21:41 UTC |
	| addons  | addons-694635 addons disable                                                                | addons-694635        | jenkins | v1.34.0 | 12 Sep 24 21:41 UTC | 12 Sep 24 21:41 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-694635 ssh curl -s                                                                   | addons-694635        | jenkins | v1.34.0 | 12 Sep 24 21:41 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ip      | addons-694635 ip                                                                            | addons-694635        | jenkins | v1.34.0 | 12 Sep 24 21:43 UTC | 12 Sep 24 21:43 UTC |
	| addons  | addons-694635 addons disable                                                                | addons-694635        | jenkins | v1.34.0 | 12 Sep 24 21:43 UTC | 12 Sep 24 21:43 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-694635 addons disable                                                                | addons-694635        | jenkins | v1.34.0 | 12 Sep 24 21:43 UTC | 12 Sep 24 21:43 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-694635 addons                                                                        | addons-694635        | jenkins | v1.34.0 | 12 Sep 24 21:46 UTC | 12 Sep 24 21:46 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/12 21:29:47
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0912 21:29:47.475866   13842 out.go:345] Setting OutFile to fd 1 ...
	I0912 21:29:47.475993   13842 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 21:29:47.476005   13842 out.go:358] Setting ErrFile to fd 2...
	I0912 21:29:47.476012   13842 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 21:29:47.476186   13842 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-5891/.minikube/bin
	I0912 21:29:47.476836   13842 out.go:352] Setting JSON to false
	I0912 21:29:47.477752   13842 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":729,"bootTime":1726175858,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0912 21:29:47.477818   13842 start.go:139] virtualization: kvm guest
	I0912 21:29:47.479869   13842 out.go:177] * [addons-694635] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0912 21:29:47.481136   13842 out.go:177]   - MINIKUBE_LOCATION=19616
	I0912 21:29:47.481139   13842 notify.go:220] Checking for updates...
	I0912 21:29:47.483542   13842 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 21:29:47.484839   13842 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19616-5891/kubeconfig
	I0912 21:29:47.486133   13842 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19616-5891/.minikube
	I0912 21:29:47.487896   13842 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0912 21:29:47.489241   13842 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 21:29:47.490764   13842 driver.go:394] Setting default libvirt URI to qemu:///system
	I0912 21:29:47.523002   13842 out.go:177] * Using the kvm2 driver based on user configuration
	I0912 21:29:47.524034   13842 start.go:297] selected driver: kvm2
	I0912 21:29:47.524046   13842 start.go:901] validating driver "kvm2" against <nil>
	I0912 21:29:47.524060   13842 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 21:29:47.524980   13842 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 21:29:47.525102   13842 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19616-5891/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0912 21:29:47.540324   13842 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0912 21:29:47.540407   13842 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0912 21:29:47.540684   13842 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 21:29:47.540767   13842 cni.go:84] Creating CNI manager for ""
	I0912 21:29:47.540781   13842 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0912 21:29:47.540792   13842 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0912 21:29:47.540869   13842 start.go:340] cluster config:
	{Name:addons-694635 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-694635 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 21:29:47.540994   13842 iso.go:125] acquiring lock: {Name:mk3ec3c4afd4210b7425f6425f55e7f581d9a5a9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 21:29:47.542738   13842 out.go:177] * Starting "addons-694635" primary control-plane node in "addons-694635" cluster
	I0912 21:29:47.543940   13842 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0912 21:29:47.543977   13842 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0912 21:29:47.543985   13842 cache.go:56] Caching tarball of preloaded images
	I0912 21:29:47.544089   13842 preload.go:172] Found /home/jenkins/minikube-integration/19616-5891/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0912 21:29:47.544102   13842 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0912 21:29:47.544526   13842 profile.go:143] Saving config to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/config.json ...
	I0912 21:29:47.544557   13842 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/config.json: {Name:mk33fa1e209cbe67cd91a1b792a3ca9ac0ed48ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:29:47.544694   13842 start.go:360] acquireMachinesLock for addons-694635: {Name:mkbb0a9e58b1349e86a63b6069c42d4248d92c3b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 21:29:47.544742   13842 start.go:364] duration metric: took 34.718µs to acquireMachinesLock for "addons-694635"
	I0912 21:29:47.544765   13842 start.go:93] Provisioning new machine with config: &{Name:addons-694635 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-694635 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0912 21:29:47.544840   13842 start.go:125] createHost starting for "" (driver="kvm2")
	I0912 21:29:47.546289   13842 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0912 21:29:47.546444   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:29:47.546482   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:29:47.560635   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38163
	I0912 21:29:47.561053   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:29:47.561645   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:29:47.561668   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:29:47.562020   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:29:47.562207   13842 main.go:141] libmachine: (addons-694635) Calling .GetMachineName
	I0912 21:29:47.562346   13842 main.go:141] libmachine: (addons-694635) Calling .DriverName
	I0912 21:29:47.562487   13842 start.go:159] libmachine.API.Create for "addons-694635" (driver="kvm2")
	I0912 21:29:47.562506   13842 client.go:168] LocalClient.Create starting
	I0912 21:29:47.562537   13842 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem
	I0912 21:29:47.644946   13842 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem
	I0912 21:29:47.782363   13842 main.go:141] libmachine: Running pre-create checks...
	I0912 21:29:47.782383   13842 main.go:141] libmachine: (addons-694635) Calling .PreCreateCheck
	I0912 21:29:47.782856   13842 main.go:141] libmachine: (addons-694635) Calling .GetConfigRaw
	I0912 21:29:47.783275   13842 main.go:141] libmachine: Creating machine...
	I0912 21:29:47.783290   13842 main.go:141] libmachine: (addons-694635) Calling .Create
	I0912 21:29:47.783442   13842 main.go:141] libmachine: (addons-694635) Creating KVM machine...
	I0912 21:29:47.784608   13842 main.go:141] libmachine: (addons-694635) DBG | found existing default KVM network
	I0912 21:29:47.785304   13842 main.go:141] libmachine: (addons-694635) DBG | I0912 21:29:47.785155   13864 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ad0}
	I0912 21:29:47.785337   13842 main.go:141] libmachine: (addons-694635) DBG | created network xml: 
	I0912 21:29:47.785348   13842 main.go:141] libmachine: (addons-694635) DBG | <network>
	I0912 21:29:47.785361   13842 main.go:141] libmachine: (addons-694635) DBG |   <name>mk-addons-694635</name>
	I0912 21:29:47.785392   13842 main.go:141] libmachine: (addons-694635) DBG |   <dns enable='no'/>
	I0912 21:29:47.785413   13842 main.go:141] libmachine: (addons-694635) DBG |   
	I0912 21:29:47.785428   13842 main.go:141] libmachine: (addons-694635) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0912 21:29:47.785441   13842 main.go:141] libmachine: (addons-694635) DBG |     <dhcp>
	I0912 21:29:47.785456   13842 main.go:141] libmachine: (addons-694635) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0912 21:29:47.785466   13842 main.go:141] libmachine: (addons-694635) DBG |     </dhcp>
	I0912 21:29:47.785476   13842 main.go:141] libmachine: (addons-694635) DBG |   </ip>
	I0912 21:29:47.785490   13842 main.go:141] libmachine: (addons-694635) DBG |   
	I0912 21:29:47.785501   13842 main.go:141] libmachine: (addons-694635) DBG | </network>
	I0912 21:29:47.785509   13842 main.go:141] libmachine: (addons-694635) DBG | 
	I0912 21:29:47.790883   13842 main.go:141] libmachine: (addons-694635) DBG | trying to create private KVM network mk-addons-694635 192.168.39.0/24...
	I0912 21:29:47.856566   13842 main.go:141] libmachine: (addons-694635) DBG | private KVM network mk-addons-694635 192.168.39.0/24 created
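	The XML dumped just above is the network definition the kvm2 driver hands to libvirt for mk-addons-694635 (DHCP range 192.168.39.2-253 on 192.168.39.0/24, DNS disabled). When a run like this needs debugging, the stored network can be inspected directly on the host; a minimal sketch, assuming virsh is available there and the network name matches the log:

		# list libvirt networks and confirm the minikube-created one is active
		virsh net-list --all
		# dump the XML libvirt actually stored for it
		virsh net-dumpxml mk-addons-694635
		# show DHCP leases handed out on that network once the VM boots
		virsh net-dhcp-leases mk-addons-694635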
	I0912 21:29:47.856589   13842 main.go:141] libmachine: (addons-694635) DBG | I0912 21:29:47.856546   13864 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19616-5891/.minikube
	I0912 21:29:47.856604   13842 main.go:141] libmachine: (addons-694635) Setting up store path in /home/jenkins/minikube-integration/19616-5891/.minikube/machines/addons-694635 ...
	I0912 21:29:47.856615   13842 main.go:141] libmachine: (addons-694635) Building disk image from file:///home/jenkins/minikube-integration/19616-5891/.minikube/cache/iso/amd64/minikube-v1.34.0-1726156389-19616-amd64.iso
	I0912 21:29:47.856703   13842 main.go:141] libmachine: (addons-694635) Downloading /home/jenkins/minikube-integration/19616-5891/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19616-5891/.minikube/cache/iso/amd64/minikube-v1.34.0-1726156389-19616-amd64.iso...
	I0912 21:29:48.103210   13842 main.go:141] libmachine: (addons-694635) DBG | I0912 21:29:48.103069   13864 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/addons-694635/id_rsa...
	I0912 21:29:48.158267   13842 main.go:141] libmachine: (addons-694635) DBG | I0912 21:29:48.158115   13864 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/addons-694635/addons-694635.rawdisk...
	I0912 21:29:48.158303   13842 main.go:141] libmachine: (addons-694635) DBG | Writing magic tar header
	I0912 21:29:48.158321   13842 main.go:141] libmachine: (addons-694635) DBG | Writing SSH key tar header
	I0912 21:29:48.158334   13842 main.go:141] libmachine: (addons-694635) DBG | I0912 21:29:48.158221   13864 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19616-5891/.minikube/machines/addons-694635 ...
	I0912 21:29:48.158344   13842 main.go:141] libmachine: (addons-694635) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/addons-694635
	I0912 21:29:48.158353   13842 main.go:141] libmachine: (addons-694635) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19616-5891/.minikube/machines
	I0912 21:29:48.158362   13842 main.go:141] libmachine: (addons-694635) Setting executable bit set on /home/jenkins/minikube-integration/19616-5891/.minikube/machines/addons-694635 (perms=drwx------)
	I0912 21:29:48.158376   13842 main.go:141] libmachine: (addons-694635) Setting executable bit set on /home/jenkins/minikube-integration/19616-5891/.minikube/machines (perms=drwxr-xr-x)
	I0912 21:29:48.158397   13842 main.go:141] libmachine: (addons-694635) Setting executable bit set on /home/jenkins/minikube-integration/19616-5891/.minikube (perms=drwxr-xr-x)
	I0912 21:29:48.158411   13842 main.go:141] libmachine: (addons-694635) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19616-5891/.minikube
	I0912 21:29:48.158423   13842 main.go:141] libmachine: (addons-694635) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19616-5891
	I0912 21:29:48.158433   13842 main.go:141] libmachine: (addons-694635) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0912 21:29:48.158450   13842 main.go:141] libmachine: (addons-694635) Setting executable bit set on /home/jenkins/minikube-integration/19616-5891 (perms=drwxrwxr-x)
	I0912 21:29:48.158464   13842 main.go:141] libmachine: (addons-694635) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0912 21:29:48.158476   13842 main.go:141] libmachine: (addons-694635) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0912 21:29:48.158486   13842 main.go:141] libmachine: (addons-694635) DBG | Checking permissions on dir: /home/jenkins
	I0912 21:29:48.158502   13842 main.go:141] libmachine: (addons-694635) Creating domain...
	I0912 21:29:48.158514   13842 main.go:141] libmachine: (addons-694635) DBG | Checking permissions on dir: /home
	I0912 21:29:48.158532   13842 main.go:141] libmachine: (addons-694635) DBG | Skipping /home - not owner
	I0912 21:29:48.159530   13842 main.go:141] libmachine: (addons-694635) define libvirt domain using xml: 
	I0912 21:29:48.159561   13842 main.go:141] libmachine: (addons-694635) <domain type='kvm'>
	I0912 21:29:48.159569   13842 main.go:141] libmachine: (addons-694635)   <name>addons-694635</name>
	I0912 21:29:48.159576   13842 main.go:141] libmachine: (addons-694635)   <memory unit='MiB'>4000</memory>
	I0912 21:29:48.159582   13842 main.go:141] libmachine: (addons-694635)   <vcpu>2</vcpu>
	I0912 21:29:48.159593   13842 main.go:141] libmachine: (addons-694635)   <features>
	I0912 21:29:48.159601   13842 main.go:141] libmachine: (addons-694635)     <acpi/>
	I0912 21:29:48.159611   13842 main.go:141] libmachine: (addons-694635)     <apic/>
	I0912 21:29:48.159621   13842 main.go:141] libmachine: (addons-694635)     <pae/>
	I0912 21:29:48.159629   13842 main.go:141] libmachine: (addons-694635)     
	I0912 21:29:48.159634   13842 main.go:141] libmachine: (addons-694635)   </features>
	I0912 21:29:48.159641   13842 main.go:141] libmachine: (addons-694635)   <cpu mode='host-passthrough'>
	I0912 21:29:48.159688   13842 main.go:141] libmachine: (addons-694635)   
	I0912 21:29:48.159713   13842 main.go:141] libmachine: (addons-694635)   </cpu>
	I0912 21:29:48.159737   13842 main.go:141] libmachine: (addons-694635)   <os>
	I0912 21:29:48.159750   13842 main.go:141] libmachine: (addons-694635)     <type>hvm</type>
	I0912 21:29:48.159770   13842 main.go:141] libmachine: (addons-694635)     <boot dev='cdrom'/>
	I0912 21:29:48.159783   13842 main.go:141] libmachine: (addons-694635)     <boot dev='hd'/>
	I0912 21:29:48.159802   13842 main.go:141] libmachine: (addons-694635)     <bootmenu enable='no'/>
	I0912 21:29:48.159818   13842 main.go:141] libmachine: (addons-694635)   </os>
	I0912 21:29:48.159831   13842 main.go:141] libmachine: (addons-694635)   <devices>
	I0912 21:29:48.159842   13842 main.go:141] libmachine: (addons-694635)     <disk type='file' device='cdrom'>
	I0912 21:29:48.159866   13842 main.go:141] libmachine: (addons-694635)       <source file='/home/jenkins/minikube-integration/19616-5891/.minikube/machines/addons-694635/boot2docker.iso'/>
	I0912 21:29:48.159877   13842 main.go:141] libmachine: (addons-694635)       <target dev='hdc' bus='scsi'/>
	I0912 21:29:48.159885   13842 main.go:141] libmachine: (addons-694635)       <readonly/>
	I0912 21:29:48.159896   13842 main.go:141] libmachine: (addons-694635)     </disk>
	I0912 21:29:48.159907   13842 main.go:141] libmachine: (addons-694635)     <disk type='file' device='disk'>
	I0912 21:29:48.159916   13842 main.go:141] libmachine: (addons-694635)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0912 21:29:48.159932   13842 main.go:141] libmachine: (addons-694635)       <source file='/home/jenkins/minikube-integration/19616-5891/.minikube/machines/addons-694635/addons-694635.rawdisk'/>
	I0912 21:29:48.159943   13842 main.go:141] libmachine: (addons-694635)       <target dev='hda' bus='virtio'/>
	I0912 21:29:48.159953   13842 main.go:141] libmachine: (addons-694635)     </disk>
	I0912 21:29:48.159969   13842 main.go:141] libmachine: (addons-694635)     <interface type='network'>
	I0912 21:29:48.159982   13842 main.go:141] libmachine: (addons-694635)       <source network='mk-addons-694635'/>
	I0912 21:29:48.159992   13842 main.go:141] libmachine: (addons-694635)       <model type='virtio'/>
	I0912 21:29:48.160001   13842 main.go:141] libmachine: (addons-694635)     </interface>
	I0912 21:29:48.160011   13842 main.go:141] libmachine: (addons-694635)     <interface type='network'>
	I0912 21:29:48.160022   13842 main.go:141] libmachine: (addons-694635)       <source network='default'/>
	I0912 21:29:48.160032   13842 main.go:141] libmachine: (addons-694635)       <model type='virtio'/>
	I0912 21:29:48.160043   13842 main.go:141] libmachine: (addons-694635)     </interface>
	I0912 21:29:48.160051   13842 main.go:141] libmachine: (addons-694635)     <serial type='pty'>
	I0912 21:29:48.160066   13842 main.go:141] libmachine: (addons-694635)       <target port='0'/>
	I0912 21:29:48.160077   13842 main.go:141] libmachine: (addons-694635)     </serial>
	I0912 21:29:48.160089   13842 main.go:141] libmachine: (addons-694635)     <console type='pty'>
	I0912 21:29:48.160108   13842 main.go:141] libmachine: (addons-694635)       <target type='serial' port='0'/>
	I0912 21:29:48.160121   13842 main.go:141] libmachine: (addons-694635)     </console>
	I0912 21:29:48.160132   13842 main.go:141] libmachine: (addons-694635)     <rng model='virtio'>
	I0912 21:29:48.160143   13842 main.go:141] libmachine: (addons-694635)       <backend model='random'>/dev/random</backend>
	I0912 21:29:48.160151   13842 main.go:141] libmachine: (addons-694635)     </rng>
	I0912 21:29:48.160157   13842 main.go:141] libmachine: (addons-694635)     
	I0912 21:29:48.160168   13842 main.go:141] libmachine: (addons-694635)     
	I0912 21:29:48.160176   13842 main.go:141] libmachine: (addons-694635)   </devices>
	I0912 21:29:48.160185   13842 main.go:141] libmachine: (addons-694635) </domain>
	I0912 21:29:48.160195   13842 main.go:141] libmachine: (addons-694635) 
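	The domain XML above describes the VM itself: 2 vCPUs, 4000 MiB of memory, the boot2docker ISO attached as a CD-ROM, the rawdisk as the primary virtio disk, and one virtio NIC on mk-addons-694635 plus one on the default network. A minimal sketch for checking the defined domain by hand, assuming virsh on the host and the domain name from the log:

		# confirm the domain was defined and is running
		virsh list --all
		# dump the XML libvirt stored for the minikube VM
		virsh dumpxml addons-694635
		# show the addresses assigned to its interfaces
		virsh domifaddr addons-694635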
	I0912 21:29:48.165998   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:32:e5:de in network default
	I0912 21:29:48.166596   13842 main.go:141] libmachine: (addons-694635) Ensuring networks are active...
	I0912 21:29:48.166616   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:29:48.167233   13842 main.go:141] libmachine: (addons-694635) Ensuring network default is active
	I0912 21:29:48.167509   13842 main.go:141] libmachine: (addons-694635) Ensuring network mk-addons-694635 is active
	I0912 21:29:48.167964   13842 main.go:141] libmachine: (addons-694635) Getting domain xml...
	I0912 21:29:48.168724   13842 main.go:141] libmachine: (addons-694635) Creating domain...
	I0912 21:29:49.564332   13842 main.go:141] libmachine: (addons-694635) Waiting to get IP...
	I0912 21:29:49.565210   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:29:49.565680   13842 main.go:141] libmachine: (addons-694635) DBG | unable to find current IP address of domain addons-694635 in network mk-addons-694635
	I0912 21:29:49.565753   13842 main.go:141] libmachine: (addons-694635) DBG | I0912 21:29:49.565686   13864 retry.go:31] will retry after 259.088458ms: waiting for machine to come up
	I0912 21:29:49.826131   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:29:49.826631   13842 main.go:141] libmachine: (addons-694635) DBG | unable to find current IP address of domain addons-694635 in network mk-addons-694635
	I0912 21:29:49.826660   13842 main.go:141] libmachine: (addons-694635) DBG | I0912 21:29:49.826579   13864 retry.go:31] will retry after 330.128851ms: waiting for machine to come up
	I0912 21:29:50.158148   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:29:50.158574   13842 main.go:141] libmachine: (addons-694635) DBG | unable to find current IP address of domain addons-694635 in network mk-addons-694635
	I0912 21:29:50.158644   13842 main.go:141] libmachine: (addons-694635) DBG | I0912 21:29:50.158552   13864 retry.go:31] will retry after 438.081447ms: waiting for machine to come up
	I0912 21:29:50.598323   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:29:50.598829   13842 main.go:141] libmachine: (addons-694635) DBG | unable to find current IP address of domain addons-694635 in network mk-addons-694635
	I0912 21:29:50.598897   13842 main.go:141] libmachine: (addons-694635) DBG | I0912 21:29:50.598822   13864 retry.go:31] will retry after 407.106138ms: waiting for machine to come up
	I0912 21:29:51.007259   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:29:51.007718   13842 main.go:141] libmachine: (addons-694635) DBG | unable to find current IP address of domain addons-694635 in network mk-addons-694635
	I0912 21:29:51.007758   13842 main.go:141] libmachine: (addons-694635) DBG | I0912 21:29:51.007668   13864 retry.go:31] will retry after 621.06803ms: waiting for machine to come up
	I0912 21:29:51.630684   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:29:51.631143   13842 main.go:141] libmachine: (addons-694635) DBG | unable to find current IP address of domain addons-694635 in network mk-addons-694635
	I0912 21:29:51.631165   13842 main.go:141] libmachine: (addons-694635) DBG | I0912 21:29:51.631112   13864 retry.go:31] will retry after 606.154083ms: waiting for machine to come up
	I0912 21:29:52.238827   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:29:52.239319   13842 main.go:141] libmachine: (addons-694635) DBG | unable to find current IP address of domain addons-694635 in network mk-addons-694635
	I0912 21:29:52.239351   13842 main.go:141] libmachine: (addons-694635) DBG | I0912 21:29:52.239251   13864 retry.go:31] will retry after 1.053486982s: waiting for machine to come up
	I0912 21:29:53.294067   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:29:53.294469   13842 main.go:141] libmachine: (addons-694635) DBG | unable to find current IP address of domain addons-694635 in network mk-addons-694635
	I0912 21:29:53.294496   13842 main.go:141] libmachine: (addons-694635) DBG | I0912 21:29:53.294420   13864 retry.go:31] will retry after 1.050950177s: waiting for machine to come up
	I0912 21:29:54.347197   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:29:54.347603   13842 main.go:141] libmachine: (addons-694635) DBG | unable to find current IP address of domain addons-694635 in network mk-addons-694635
	I0912 21:29:54.347631   13842 main.go:141] libmachine: (addons-694635) DBG | I0912 21:29:54.347539   13864 retry.go:31] will retry after 1.24941056s: waiting for machine to come up
	I0912 21:29:55.598907   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:29:55.599382   13842 main.go:141] libmachine: (addons-694635) DBG | unable to find current IP address of domain addons-694635 in network mk-addons-694635
	I0912 21:29:55.599413   13842 main.go:141] libmachine: (addons-694635) DBG | I0912 21:29:55.599328   13864 retry.go:31] will retry after 2.237205326s: waiting for machine to come up
	I0912 21:29:57.838937   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:29:57.839483   13842 main.go:141] libmachine: (addons-694635) DBG | unable to find current IP address of domain addons-694635 in network mk-addons-694635
	I0912 21:29:57.839506   13842 main.go:141] libmachine: (addons-694635) DBG | I0912 21:29:57.839455   13864 retry.go:31] will retry after 2.152344085s: waiting for machine to come up
	I0912 21:29:59.994815   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:29:59.995133   13842 main.go:141] libmachine: (addons-694635) DBG | unable to find current IP address of domain addons-694635 in network mk-addons-694635
	I0912 21:29:59.995155   13842 main.go:141] libmachine: (addons-694635) DBG | I0912 21:29:59.995091   13864 retry.go:31] will retry after 2.540765126s: waiting for machine to come up
	I0912 21:30:02.536979   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:02.537427   13842 main.go:141] libmachine: (addons-694635) DBG | unable to find current IP address of domain addons-694635 in network mk-addons-694635
	I0912 21:30:02.537453   13842 main.go:141] libmachine: (addons-694635) DBG | I0912 21:30:02.537360   13864 retry.go:31] will retry after 3.772056123s: waiting for machine to come up
	I0912 21:30:06.313642   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:06.314016   13842 main.go:141] libmachine: (addons-694635) DBG | unable to find current IP address of domain addons-694635 in network mk-addons-694635
	I0912 21:30:06.314033   13842 main.go:141] libmachine: (addons-694635) DBG | I0912 21:30:06.313980   13864 retry.go:31] will retry after 4.542886768s: waiting for machine to come up
	I0912 21:30:10.861222   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:10.861712   13842 main.go:141] libmachine: (addons-694635) Found IP for machine: 192.168.39.67
	I0912 21:30:10.861742   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has current primary IP address 192.168.39.67 and MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:10.861751   13842 main.go:141] libmachine: (addons-694635) Reserving static IP address...
	I0912 21:30:10.862048   13842 main.go:141] libmachine: (addons-694635) DBG | unable to find host DHCP lease matching {name: "addons-694635", mac: "52:54:00:6b:43:77", ip: "192.168.39.67"} in network mk-addons-694635
	I0912 21:30:10.932572   13842 main.go:141] libmachine: (addons-694635) Reserved static IP address: 192.168.39.67
	I0912 21:30:10.932602   13842 main.go:141] libmachine: (addons-694635) Waiting for SSH to be available...
	I0912 21:30:10.932612   13842 main.go:141] libmachine: (addons-694635) DBG | Getting to WaitForSSH function...
	I0912 21:30:10.935290   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:10.935838   13842 main.go:141] libmachine: (addons-694635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:43:77", ip: ""} in network mk-addons-694635: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:01 +0000 UTC Type:0 Mac:52:54:00:6b:43:77 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:minikube Clientid:01:52:54:00:6b:43:77}
	I0912 21:30:10.935873   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined IP address 192.168.39.67 and MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:10.935964   13842 main.go:141] libmachine: (addons-694635) DBG | Using SSH client type: external
	I0912 21:30:10.935991   13842 main.go:141] libmachine: (addons-694635) DBG | Using SSH private key: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/addons-694635/id_rsa (-rw-------)
	I0912 21:30:10.936035   13842 main.go:141] libmachine: (addons-694635) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.67 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19616-5891/.minikube/machines/addons-694635/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0912 21:30:10.936049   13842 main.go:141] libmachine: (addons-694635) DBG | About to run SSH command:
	I0912 21:30:10.936084   13842 main.go:141] libmachine: (addons-694635) DBG | exit 0
	I0912 21:30:11.069676   13842 main.go:141] libmachine: (addons-694635) DBG | SSH cmd err, output: <nil>: 
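	The external SSH invocation logged above (host-key checking disabled, per-machine id_rsa key) can be reproduced by hand when a machine refuses to answer; this is purely an illustrative sketch using the key path and IP from this run:

		ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
		  -i /home/jenkins/minikube-integration/19616-5891/.minikube/machines/addons-694635/id_rsa \
		  docker@192.168.39.67 'exit 0'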
	I0912 21:30:11.070005   13842 main.go:141] libmachine: (addons-694635) KVM machine creation complete!
	I0912 21:30:11.070347   13842 main.go:141] libmachine: (addons-694635) Calling .GetConfigRaw
	I0912 21:30:11.070852   13842 main.go:141] libmachine: (addons-694635) Calling .DriverName
	I0912 21:30:11.071054   13842 main.go:141] libmachine: (addons-694635) Calling .DriverName
	I0912 21:30:11.071193   13842 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0912 21:30:11.071208   13842 main.go:141] libmachine: (addons-694635) Calling .GetState
	I0912 21:30:11.072333   13842 main.go:141] libmachine: Detecting operating system of created instance...
	I0912 21:30:11.072351   13842 main.go:141] libmachine: Waiting for SSH to be available...
	I0912 21:30:11.072359   13842 main.go:141] libmachine: Getting to WaitForSSH function...
	I0912 21:30:11.072367   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHHostname
	I0912 21:30:11.074613   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:11.074932   13842 main.go:141] libmachine: (addons-694635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:43:77", ip: ""} in network mk-addons-694635: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:01 +0000 UTC Type:0 Mac:52:54:00:6b:43:77 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-694635 Clientid:01:52:54:00:6b:43:77}
	I0912 21:30:11.074958   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined IP address 192.168.39.67 and MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:11.075073   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHPort
	I0912 21:30:11.075372   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHKeyPath
	I0912 21:30:11.075564   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHKeyPath
	I0912 21:30:11.075731   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHUsername
	I0912 21:30:11.075904   13842 main.go:141] libmachine: Using SSH client type: native
	I0912 21:30:11.076074   13842 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I0912 21:30:11.076085   13842 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0912 21:30:11.184974   13842 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0912 21:30:11.184996   13842 main.go:141] libmachine: Detecting the provisioner...
	I0912 21:30:11.185003   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHHostname
	I0912 21:30:11.187718   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:11.188031   13842 main.go:141] libmachine: (addons-694635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:43:77", ip: ""} in network mk-addons-694635: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:01 +0000 UTC Type:0 Mac:52:54:00:6b:43:77 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-694635 Clientid:01:52:54:00:6b:43:77}
	I0912 21:30:11.188060   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined IP address 192.168.39.67 and MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:11.188249   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHPort
	I0912 21:30:11.188446   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHKeyPath
	I0912 21:30:11.188574   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHKeyPath
	I0912 21:30:11.188694   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHUsername
	I0912 21:30:11.188821   13842 main.go:141] libmachine: Using SSH client type: native
	I0912 21:30:11.188967   13842 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I0912 21:30:11.188978   13842 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0912 21:30:11.297959   13842 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0912 21:30:11.298022   13842 main.go:141] libmachine: found compatible host: buildroot
	I0912 21:30:11.298032   13842 main.go:141] libmachine: Provisioning with buildroot...
	I0912 21:30:11.298042   13842 main.go:141] libmachine: (addons-694635) Calling .GetMachineName
	I0912 21:30:11.298318   13842 buildroot.go:166] provisioning hostname "addons-694635"
	I0912 21:30:11.298346   13842 main.go:141] libmachine: (addons-694635) Calling .GetMachineName
	I0912 21:30:11.298514   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHHostname
	I0912 21:30:11.301198   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:11.301546   13842 main.go:141] libmachine: (addons-694635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:43:77", ip: ""} in network mk-addons-694635: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:01 +0000 UTC Type:0 Mac:52:54:00:6b:43:77 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-694635 Clientid:01:52:54:00:6b:43:77}
	I0912 21:30:11.301584   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined IP address 192.168.39.67 and MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:11.301725   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHPort
	I0912 21:30:11.301923   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHKeyPath
	I0912 21:30:11.302081   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHKeyPath
	I0912 21:30:11.302369   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHUsername
	I0912 21:30:11.302563   13842 main.go:141] libmachine: Using SSH client type: native
	I0912 21:30:11.302737   13842 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I0912 21:30:11.302753   13842 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-694635 && echo "addons-694635" | sudo tee /etc/hostname
	I0912 21:30:11.426945   13842 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-694635
	
	I0912 21:30:11.426972   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHHostname
	I0912 21:30:11.429942   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:11.430301   13842 main.go:141] libmachine: (addons-694635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:43:77", ip: ""} in network mk-addons-694635: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:01 +0000 UTC Type:0 Mac:52:54:00:6b:43:77 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-694635 Clientid:01:52:54:00:6b:43:77}
	I0912 21:30:11.430333   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined IP address 192.168.39.67 and MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:11.430492   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHPort
	I0912 21:30:11.430677   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHKeyPath
	I0912 21:30:11.430844   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHKeyPath
	I0912 21:30:11.430998   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHUsername
	I0912 21:30:11.431169   13842 main.go:141] libmachine: Using SSH client type: native
	I0912 21:30:11.431330   13842 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I0912 21:30:11.431345   13842 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-694635' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-694635/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-694635' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0912 21:30:11.549812   13842 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0912 21:30:11.549842   13842 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19616-5891/.minikube CaCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19616-5891/.minikube}
	I0912 21:30:11.549859   13842 buildroot.go:174] setting up certificates
	I0912 21:30:11.549868   13842 provision.go:84] configureAuth start
	I0912 21:30:11.549876   13842 main.go:141] libmachine: (addons-694635) Calling .GetMachineName
	I0912 21:30:11.550203   13842 main.go:141] libmachine: (addons-694635) Calling .GetIP
	I0912 21:30:11.552873   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:11.553191   13842 main.go:141] libmachine: (addons-694635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:43:77", ip: ""} in network mk-addons-694635: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:01 +0000 UTC Type:0 Mac:52:54:00:6b:43:77 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-694635 Clientid:01:52:54:00:6b:43:77}
	I0912 21:30:11.553219   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined IP address 192.168.39.67 and MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:11.553451   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHHostname
	I0912 21:30:11.555633   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:11.555953   13842 main.go:141] libmachine: (addons-694635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:43:77", ip: ""} in network mk-addons-694635: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:01 +0000 UTC Type:0 Mac:52:54:00:6b:43:77 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-694635 Clientid:01:52:54:00:6b:43:77}
	I0912 21:30:11.555985   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined IP address 192.168.39.67 and MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:11.556111   13842 provision.go:143] copyHostCerts
	I0912 21:30:11.556205   13842 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem (1082 bytes)
	I0912 21:30:11.556362   13842 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem (1123 bytes)
	I0912 21:30:11.556467   13842 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem (1679 bytes)
	I0912 21:30:11.556548   13842 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem org=jenkins.addons-694635 san=[127.0.0.1 192.168.39.67 addons-694635 localhost minikube]
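	The server certificate generated in this step should carry the SANs listed in the log (127.0.0.1, 192.168.39.67, addons-694635, localhost, minikube). A quick way to confirm that after the fact, assuming openssl on the Jenkins host:

		openssl x509 -in /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem \
		  -noout -text | grep -A1 'Subject Alternative Name'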
	I0912 21:30:11.859350   13842 provision.go:177] copyRemoteCerts
	I0912 21:30:11.859407   13842 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0912 21:30:11.859439   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHHostname
	I0912 21:30:11.862041   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:11.862347   13842 main.go:141] libmachine: (addons-694635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:43:77", ip: ""} in network mk-addons-694635: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:01 +0000 UTC Type:0 Mac:52:54:00:6b:43:77 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-694635 Clientid:01:52:54:00:6b:43:77}
	I0912 21:30:11.862395   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined IP address 192.168.39.67 and MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:11.862533   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHPort
	I0912 21:30:11.862736   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHKeyPath
	I0912 21:30:11.862883   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHUsername
	I0912 21:30:11.863033   13842 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/addons-694635/id_rsa Username:docker}
	I0912 21:30:11.947343   13842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0912 21:30:11.971801   13842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0912 21:30:11.994695   13842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0912 21:30:12.016706   13842 provision.go:87] duration metric: took 466.828028ms to configureAuth
	I0912 21:30:12.016730   13842 buildroot.go:189] setting minikube options for container-runtime
	I0912 21:30:12.016881   13842 config.go:182] Loaded profile config "addons-694635": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 21:30:12.016945   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHHostname
	I0912 21:30:12.019830   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:12.020115   13842 main.go:141] libmachine: (addons-694635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:43:77", ip: ""} in network mk-addons-694635: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:01 +0000 UTC Type:0 Mac:52:54:00:6b:43:77 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-694635 Clientid:01:52:54:00:6b:43:77}
	I0912 21:30:12.020139   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined IP address 192.168.39.67 and MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:12.020268   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHPort
	I0912 21:30:12.020572   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHKeyPath
	I0912 21:30:12.020764   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHKeyPath
	I0912 21:30:12.020928   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHUsername
	I0912 21:30:12.021133   13842 main.go:141] libmachine: Using SSH client type: native
	I0912 21:30:12.021291   13842 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I0912 21:30:12.021305   13842 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0912 21:30:12.242709   13842 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0912 21:30:12.242730   13842 main.go:141] libmachine: Checking connection to Docker...
	I0912 21:30:12.242738   13842 main.go:141] libmachine: (addons-694635) Calling .GetURL
	I0912 21:30:12.243884   13842 main.go:141] libmachine: (addons-694635) DBG | Using libvirt version 6000000
	I0912 21:30:12.245945   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:12.246318   13842 main.go:141] libmachine: (addons-694635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:43:77", ip: ""} in network mk-addons-694635: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:01 +0000 UTC Type:0 Mac:52:54:00:6b:43:77 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-694635 Clientid:01:52:54:00:6b:43:77}
	I0912 21:30:12.246350   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined IP address 192.168.39.67 and MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:12.246533   13842 main.go:141] libmachine: Docker is up and running!
	I0912 21:30:12.246556   13842 main.go:141] libmachine: Reticulating splines...
	I0912 21:30:12.246564   13842 client.go:171] duration metric: took 24.684052058s to LocalClient.Create
	I0912 21:30:12.246588   13842 start.go:167] duration metric: took 24.684100435s to libmachine.API.Create "addons-694635"
	I0912 21:30:12.246601   13842 start.go:293] postStartSetup for "addons-694635" (driver="kvm2")
	I0912 21:30:12.246615   13842 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0912 21:30:12.246639   13842 main.go:141] libmachine: (addons-694635) Calling .DriverName
	I0912 21:30:12.246870   13842 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0912 21:30:12.246905   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHHostname
	I0912 21:30:12.249197   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:12.249498   13842 main.go:141] libmachine: (addons-694635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:43:77", ip: ""} in network mk-addons-694635: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:01 +0000 UTC Type:0 Mac:52:54:00:6b:43:77 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-694635 Clientid:01:52:54:00:6b:43:77}
	I0912 21:30:12.249534   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined IP address 192.168.39.67 and MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:12.249694   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHPort
	I0912 21:30:12.249879   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHKeyPath
	I0912 21:30:12.250020   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHUsername
	I0912 21:30:12.250162   13842 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/addons-694635/id_rsa Username:docker}
	I0912 21:30:12.335312   13842 ssh_runner.go:195] Run: cat /etc/os-release
	I0912 21:30:12.339024   13842 info.go:137] Remote host: Buildroot 2023.02.9
	I0912 21:30:12.339044   13842 filesync.go:126] Scanning /home/jenkins/minikube-integration/19616-5891/.minikube/addons for local assets ...
	I0912 21:30:12.339112   13842 filesync.go:126] Scanning /home/jenkins/minikube-integration/19616-5891/.minikube/files for local assets ...
	I0912 21:30:12.339135   13842 start.go:296] duration metric: took 92.526012ms for postStartSetup
	I0912 21:30:12.339176   13842 main.go:141] libmachine: (addons-694635) Calling .GetConfigRaw
	I0912 21:30:12.339703   13842 main.go:141] libmachine: (addons-694635) Calling .GetIP
	I0912 21:30:12.342217   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:12.342565   13842 main.go:141] libmachine: (addons-694635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:43:77", ip: ""} in network mk-addons-694635: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:01 +0000 UTC Type:0 Mac:52:54:00:6b:43:77 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-694635 Clientid:01:52:54:00:6b:43:77}
	I0912 21:30:12.342593   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined IP address 192.168.39.67 and MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:12.342850   13842 profile.go:143] Saving config to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/config.json ...
	I0912 21:30:12.343012   13842 start.go:128] duration metric: took 24.798163033s to createHost
	I0912 21:30:12.343032   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHHostname
	I0912 21:30:12.345464   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:12.345807   13842 main.go:141] libmachine: (addons-694635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:43:77", ip: ""} in network mk-addons-694635: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:01 +0000 UTC Type:0 Mac:52:54:00:6b:43:77 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-694635 Clientid:01:52:54:00:6b:43:77}
	I0912 21:30:12.345844   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined IP address 192.168.39.67 and MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:12.345954   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHPort
	I0912 21:30:12.346123   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHKeyPath
	I0912 21:30:12.346247   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHKeyPath
	I0912 21:30:12.346385   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHUsername
	I0912 21:30:12.346509   13842 main.go:141] libmachine: Using SSH client type: native
	I0912 21:30:12.346686   13842 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I0912 21:30:12.346697   13842 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0912 21:30:12.457929   13842 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726176612.428880125
	
	I0912 21:30:12.457953   13842 fix.go:216] guest clock: 1726176612.428880125
	I0912 21:30:12.457962   13842 fix.go:229] Guest: 2024-09-12 21:30:12.428880125 +0000 UTC Remote: 2024-09-12 21:30:12.34302243 +0000 UTC m=+24.902400367 (delta=85.857695ms)
	I0912 21:30:12.458006   13842 fix.go:200] guest clock delta is within tolerance: 85.857695ms
	I0912 21:30:12.458017   13842 start.go:83] releasing machines lock for "addons-694635", held for 24.913263111s
	I0912 21:30:12.458045   13842 main.go:141] libmachine: (addons-694635) Calling .DriverName
	I0912 21:30:12.458281   13842 main.go:141] libmachine: (addons-694635) Calling .GetIP
	I0912 21:30:12.460843   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:12.461195   13842 main.go:141] libmachine: (addons-694635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:43:77", ip: ""} in network mk-addons-694635: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:01 +0000 UTC Type:0 Mac:52:54:00:6b:43:77 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-694635 Clientid:01:52:54:00:6b:43:77}
	I0912 21:30:12.461214   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined IP address 192.168.39.67 and MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:12.461345   13842 main.go:141] libmachine: (addons-694635) Calling .DriverName
	I0912 21:30:12.461780   13842 main.go:141] libmachine: (addons-694635) Calling .DriverName
	I0912 21:30:12.461924   13842 main.go:141] libmachine: (addons-694635) Calling .DriverName
	I0912 21:30:12.462008   13842 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0912 21:30:12.462054   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHHostname
	I0912 21:30:12.462099   13842 ssh_runner.go:195] Run: cat /version.json
	I0912 21:30:12.462122   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHHostname
	I0912 21:30:12.465318   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:12.466089   13842 main.go:141] libmachine: (addons-694635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:43:77", ip: ""} in network mk-addons-694635: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:01 +0000 UTC Type:0 Mac:52:54:00:6b:43:77 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-694635 Clientid:01:52:54:00:6b:43:77}
	I0912 21:30:12.466118   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined IP address 192.168.39.67 and MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:12.466258   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:12.466291   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHPort
	I0912 21:30:12.466484   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHKeyPath
	I0912 21:30:12.466652   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHUsername
	I0912 21:30:12.466686   13842 main.go:141] libmachine: (addons-694635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:43:77", ip: ""} in network mk-addons-694635: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:01 +0000 UTC Type:0 Mac:52:54:00:6b:43:77 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-694635 Clientid:01:52:54:00:6b:43:77}
	I0912 21:30:12.466711   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined IP address 192.168.39.67 and MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:12.466774   13842 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/addons-694635/id_rsa Username:docker}
	I0912 21:30:12.466851   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHPort
	I0912 21:30:12.466973   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHKeyPath
	I0912 21:30:12.467142   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHUsername
	I0912 21:30:12.467278   13842 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/addons-694635/id_rsa Username:docker}
	I0912 21:30:12.577120   13842 ssh_runner.go:195] Run: systemctl --version
	I0912 21:30:12.582974   13842 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0912 21:30:12.745818   13842 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0912 21:30:12.751421   13842 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0912 21:30:12.751490   13842 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0912 21:30:12.767475   13842 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0912 21:30:12.767505   13842 start.go:495] detecting cgroup driver to use...
	I0912 21:30:12.767618   13842 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0912 21:30:12.783679   13842 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0912 21:30:12.797513   13842 docker.go:217] disabling cri-docker service (if available) ...
	I0912 21:30:12.797586   13842 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0912 21:30:12.810747   13842 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0912 21:30:12.824037   13842 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0912 21:30:12.933703   13842 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0912 21:30:13.069024   13842 docker.go:233] disabling docker service ...
	I0912 21:30:13.069119   13842 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0912 21:30:13.082671   13842 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0912 21:30:13.095050   13842 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0912 21:30:13.233647   13842 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0912 21:30:13.370107   13842 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0912 21:30:13.383851   13842 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0912 21:30:13.402794   13842 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0912 21:30:13.402859   13842 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 21:30:13.413117   13842 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0912 21:30:13.413207   13842 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 21:30:13.424050   13842 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 21:30:13.434819   13842 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 21:30:13.446105   13842 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0912 21:30:13.457702   13842 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 21:30:13.468902   13842 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 21:30:13.486556   13842 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 21:30:13.496994   13842 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0912 21:30:13.506290   13842 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0912 21:30:13.506366   13842 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0912 21:30:13.518440   13842 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0912 21:30:13.528117   13842 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 21:30:13.648177   13842 ssh_runner.go:195] Run: sudo systemctl restart crio
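The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup_manager, conmon_cgroup, default_sysctls) before CRI-O is restarted. As a rough illustration only, here is a minimal Go sketch of the two key substitutions applied to a local copy of that file; editing the file directly (rather than over SSH as minikube's ssh_runner does) is an assumption for the example.

package main

import (
	"fmt"
	"os"
	"regexp"
)

// Sketch: apply the same substitutions as the sed commands in the log above,
// but on a local copy of 02-crio.conf (local path access is assumed).
func main() {
	const confPath = "/etc/crio/crio.conf.d/02-crio.conf"

	data, err := os.ReadFile(confPath)
	if err != nil {
		fmt.Fprintln(os.Stderr, "read:", err)
		os.Exit(1)
	}

	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	out := pause.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))

	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	out = cgroup.ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))

	if err := os.WriteFile(confPath, out, 0o644); err != nil {
		fmt.Fprintln(os.Stderr, "write:", err)
		os.Exit(1)
	}
	fmt.Println("updated", confPath)
}
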
	I0912 21:30:13.743367   13842 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0912 21:30:13.743454   13842 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0912 21:30:13.747977   13842 start.go:563] Will wait 60s for crictl version
	I0912 21:30:13.748061   13842 ssh_runner.go:195] Run: which crictl
	I0912 21:30:13.751466   13842 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0912 21:30:13.795727   13842 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0912 21:30:13.795864   13842 ssh_runner.go:195] Run: crio --version
	I0912 21:30:13.823080   13842 ssh_runner.go:195] Run: crio --version
	I0912 21:30:13.851860   13842 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0912 21:30:13.853473   13842 main.go:141] libmachine: (addons-694635) Calling .GetIP
	I0912 21:30:13.855932   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:13.856224   13842 main.go:141] libmachine: (addons-694635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:43:77", ip: ""} in network mk-addons-694635: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:01 +0000 UTC Type:0 Mac:52:54:00:6b:43:77 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-694635 Clientid:01:52:54:00:6b:43:77}
	I0912 21:30:13.856252   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined IP address 192.168.39.67 and MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:13.856515   13842 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0912 21:30:13.860421   13842 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
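The two commands above first check whether host.minikube.internal already resolves and, if not, rewrite /etc/hosts idempotently (strip any stale entry, append the fresh mapping). A minimal Go sketch of the same idempotent update, assuming local write access to the file; the IP and hostname are taken from the log.

package main

import (
	"fmt"
	"os"
	"strings"
)

// Drop any existing host.minikube.internal line, then append the fresh
// mapping, mirroring the bash one-liner in the log above.
func main() {
	const hostsPath = "/etc/hosts"
	const entry = "192.168.39.1\thost.minikube.internal"

	data, err := os.ReadFile(hostsPath)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue // equivalent of grep -v $'\thost.minikube.internal$'
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)

	if err := os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
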
	I0912 21:30:13.872141   13842 kubeadm.go:883] updating cluster {Name:addons-694635 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-694635 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0912 21:30:13.872251   13842 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0912 21:30:13.872300   13842 ssh_runner.go:195] Run: sudo crictl images --output json
	I0912 21:30:13.904455   13842 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0912 21:30:13.904513   13842 ssh_runner.go:195] Run: which lz4
	I0912 21:30:13.908020   13842 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0912 21:30:13.912184   13842 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0912 21:30:13.912211   13842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0912 21:30:15.114051   13842 crio.go:462] duration metric: took 1.206056393s to copy over tarball
	I0912 21:30:15.114132   13842 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0912 21:30:17.173858   13842 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.059695045s)
	I0912 21:30:17.173886   13842 crio.go:469] duration metric: took 2.059804143s to extract the tarball
	I0912 21:30:17.173896   13842 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0912 21:30:17.209405   13842 ssh_runner.go:195] Run: sudo crictl images --output json
	I0912 21:30:17.248658   13842 crio.go:514] all images are preloaded for cri-o runtime.
	I0912 21:30:17.248678   13842 cache_images.go:84] Images are preloaded, skipping loading
	I0912 21:30:17.248685   13842 kubeadm.go:934] updating node { 192.168.39.67 8443 v1.31.1 crio true true} ...
	I0912 21:30:17.248808   13842 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-694635 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.67
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-694635 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0912 21:30:17.248877   13842 ssh_runner.go:195] Run: crio config
	I0912 21:30:17.290568   13842 cni.go:84] Creating CNI manager for ""
	I0912 21:30:17.290590   13842 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0912 21:30:17.290601   13842 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0912 21:30:17.290621   13842 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.67 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-694635 NodeName:addons-694635 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.67"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.67 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0912 21:30:17.290786   13842 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.67
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-694635"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.67
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.67"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0912 21:30:17.290849   13842 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0912 21:30:17.300055   13842 binaries.go:44] Found k8s binaries, skipping transfer
	I0912 21:30:17.300152   13842 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0912 21:30:17.308986   13842 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0912 21:30:17.325445   13842 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0912 21:30:17.340762   13842 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0912 21:30:17.356821   13842 ssh_runner.go:195] Run: grep 192.168.39.67	control-plane.minikube.internal$ /etc/hosts
	I0912 21:30:17.360484   13842 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.67	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0912 21:30:17.371412   13842 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 21:30:17.492721   13842 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0912 21:30:17.509813   13842 certs.go:68] Setting up /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635 for IP: 192.168.39.67
	I0912 21:30:17.509838   13842 certs.go:194] generating shared ca certs ...
	I0912 21:30:17.509857   13842 certs.go:226] acquiring lock for ca certs: {Name:mk5e19f2c2757edc3fcb6b43a39efee0885b349c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:30:17.510001   13842 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.key
	I0912 21:30:17.588276   13842 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt ...
	I0912 21:30:17.588302   13842 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt: {Name:mk816935852d33e60449d1c6a4d94ec7ab82ac30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:30:17.588455   13842 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19616-5891/.minikube/ca.key ...
	I0912 21:30:17.588466   13842 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/.minikube/ca.key: {Name:mk9dc9de662fbb5903c290d7926fa7232953ae33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:30:17.588536   13842 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.key
	I0912 21:30:17.693721   13842 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.crt ...
	I0912 21:30:17.693751   13842 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.crt: {Name:mk3263e222fdf8339a04083239eee50b749554b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:30:17.693895   13842 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.key ...
	I0912 21:30:17.693905   13842 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.key: {Name:mk05f7726618d659b90a4327bb74fa26385a63bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:30:17.693978   13842 certs.go:256] generating profile certs ...
	I0912 21:30:17.694024   13842 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/client.key
	I0912 21:30:17.694037   13842 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/client.crt with IP's: []
	I0912 21:30:18.018134   13842 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/client.crt ...
	I0912 21:30:18.018169   13842 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/client.crt: {Name:mk10ce384e125f2b7ec307089833f9de35a73420 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:30:18.018339   13842 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/client.key ...
	I0912 21:30:18.018350   13842 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/client.key: {Name:mk451874420166276937e43f0b93cd8fbad875f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:30:18.018420   13842 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/apiserver.key.0d5d0e54
	I0912 21:30:18.018438   13842 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/apiserver.crt.0d5d0e54 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.67]
	I0912 21:30:18.261062   13842 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/apiserver.crt.0d5d0e54 ...
	I0912 21:30:18.261090   13842 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/apiserver.crt.0d5d0e54: {Name:mkd62b1b67056d42a6c142ee6c71845182d8908d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:30:18.261238   13842 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/apiserver.key.0d5d0e54 ...
	I0912 21:30:18.261252   13842 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/apiserver.key.0d5d0e54: {Name:mk7c82ddc89e4a1cf8c648222b96704d6a1d1dba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:30:18.261330   13842 certs.go:381] copying /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/apiserver.crt.0d5d0e54 -> /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/apiserver.crt
	I0912 21:30:18.261402   13842 certs.go:385] copying /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/apiserver.key.0d5d0e54 -> /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/apiserver.key
	I0912 21:30:18.261446   13842 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/proxy-client.key
	I0912 21:30:18.261463   13842 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/proxy-client.crt with IP's: []
	I0912 21:30:18.451474   13842 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/proxy-client.crt ...
	I0912 21:30:18.451506   13842 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/proxy-client.crt: {Name:mk0f640d1553a36669ab6e6b7b695492f179b963 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:30:18.451692   13842 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/proxy-client.key ...
	I0912 21:30:18.451707   13842 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/proxy-client.key: {Name:mk18108f1bab56e6e4bd321dfe7a25d4858d7cc0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:30:18.451898   13842 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem (1679 bytes)
	I0912 21:30:18.451934   13842 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem (1082 bytes)
	I0912 21:30:18.451961   13842 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem (1123 bytes)
	I0912 21:30:18.451983   13842 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem (1679 bytes)
	I0912 21:30:18.452546   13842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0912 21:30:18.477574   13842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0912 21:30:18.499725   13842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0912 21:30:18.521000   13842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0912 21:30:18.542359   13842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0912 21:30:18.563704   13842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0912 21:30:18.585274   13842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0912 21:30:18.606928   13842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0912 21:30:18.629281   13842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0912 21:30:18.650974   13842 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
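The certs.go / crypto.go lines above generate the shared CAs and the signed profile certificates before copying them to /var/lib/minikube/certs. A compressed Go sketch of what such CA generation looks like with the standard library; the subject, lifetime, key size, and output paths are illustrative assumptions, not minikube's exact parameters.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"os"
	"time"
)

// Generate a self-signed CA key pair and write it out as PEM.
func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}

	tmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1), // illustrative; real code manages serials
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}

	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}

	certPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	keyPEM := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})

	_ = os.WriteFile("ca.crt", certPEM, 0o644)
	_ = os.WriteFile("ca.key", keyPEM, 0o600)
}
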
	I0912 21:30:18.666875   13842 ssh_runner.go:195] Run: openssl version
	I0912 21:30:18.672260   13842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0912 21:30:18.682723   13842 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0912 21:30:18.686978   13842 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 12 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0912 21:30:18.687042   13842 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0912 21:30:18.692565   13842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
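The openssl/ln pair above computes the OpenSSL subject hash of minikubeCA.pem and symlinks /etc/ssl/certs/<hash>.0 to it so the system trust store picks up the CA. A small Go sketch of the same sequence, shelling out to openssl for the hash; running it locally with root privileges is an assumption.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// Compute the subject hash of the CA certificate and create the
// /etc/ssl/certs/<hash>.0 symlink, mirroring the commands in the log.
func main() {
	const caPath = "/etc/ssl/certs/minikubeCA.pem"

	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", caPath).Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"

	link := "/etc/ssl/certs/" + hash + ".0"
	_ = os.Remove(link) // equivalent of the -f in ln -fs
	if err := os.Symlink(caPath, link); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("linked", link, "->", caPath)
}
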
	I0912 21:30:18.702818   13842 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0912 21:30:18.706358   13842 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0912 21:30:18.706403   13842 kubeadm.go:392] StartCluster: {Name:addons-694635 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-694635 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 21:30:18.706469   13842 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0912 21:30:18.706505   13842 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0912 21:30:18.740797   13842 cri.go:89] found id: ""
	I0912 21:30:18.740875   13842 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0912 21:30:18.750323   13842 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0912 21:30:18.760198   13842 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0912 21:30:18.771699   13842 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0912 21:30:18.771722   13842 kubeadm.go:157] found existing configuration files:
	
	I0912 21:30:18.771768   13842 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0912 21:30:18.780639   13842 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0912 21:30:18.780710   13842 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0912 21:30:18.790136   13842 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0912 21:30:18.798881   13842 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0912 21:30:18.798933   13842 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0912 21:30:18.807668   13842 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0912 21:30:18.815937   13842 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0912 21:30:18.815991   13842 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0912 21:30:18.824796   13842 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0912 21:30:18.833290   13842 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0912 21:30:18.833349   13842 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0912 21:30:18.842109   13842 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0912 21:30:18.894082   13842 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0912 21:30:18.894163   13842 kubeadm.go:310] [preflight] Running pre-flight checks
	I0912 21:30:18.987148   13842 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0912 21:30:18.987303   13842 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0912 21:30:18.987452   13842 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0912 21:30:18.997399   13842 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0912 21:30:19.070004   13842 out.go:235]   - Generating certificates and keys ...
	I0912 21:30:19.070107   13842 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0912 21:30:19.070229   13842 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0912 21:30:19.148000   13842 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0912 21:30:19.614691   13842 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0912 21:30:19.901914   13842 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0912 21:30:19.979789   13842 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0912 21:30:20.166978   13842 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0912 21:30:20.167130   13842 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-694635 localhost] and IPs [192.168.39.67 127.0.0.1 ::1]
	I0912 21:30:20.264957   13842 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0912 21:30:20.265097   13842 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-694635 localhost] and IPs [192.168.39.67 127.0.0.1 ::1]
	I0912 21:30:20.466176   13842 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0912 21:30:20.696253   13842 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0912 21:30:20.807177   13842 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0912 21:30:20.807284   13842 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0912 21:30:20.974731   13842 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0912 21:30:21.105184   13842 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0912 21:30:21.174341   13842 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0912 21:30:21.244405   13842 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0912 21:30:21.769255   13842 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0912 21:30:21.769831   13842 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0912 21:30:21.772293   13842 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0912 21:30:21.774278   13842 out.go:235]   - Booting up control plane ...
	I0912 21:30:21.774387   13842 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0912 21:30:21.774523   13842 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0912 21:30:21.774628   13842 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0912 21:30:21.791849   13842 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0912 21:30:21.798525   13842 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0912 21:30:21.798599   13842 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0912 21:30:21.939016   13842 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0912 21:30:21.939132   13842 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0912 21:30:22.439761   13842 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 500.995176ms
	I0912 21:30:22.439860   13842 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0912 21:30:27.939433   13842 kubeadm.go:310] [api-check] The API server is healthy after 5.502232123s
	I0912 21:30:27.957923   13842 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0912 21:30:27.974582   13842 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0912 21:30:28.004043   13842 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0912 21:30:28.004250   13842 kubeadm.go:310] [mark-control-plane] Marking the node addons-694635 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0912 21:30:28.022686   13842 kubeadm.go:310] [bootstrap-token] Using token: v7rbq6.ajeibt3p6xzx9rx5
	I0912 21:30:28.024134   13842 out.go:235]   - Configuring RBAC rules ...
	I0912 21:30:28.024266   13842 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0912 21:30:28.029565   13842 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0912 21:30:28.040289   13842 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0912 21:30:28.043786   13842 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0912 21:30:28.047040   13842 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0912 21:30:28.051390   13842 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0912 21:30:28.352753   13842 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0912 21:30:28.795025   13842 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0912 21:30:29.351438   13842 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0912 21:30:29.352611   13842 kubeadm.go:310] 
	I0912 21:30:29.352681   13842 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0912 21:30:29.352688   13842 kubeadm.go:310] 
	I0912 21:30:29.352768   13842 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0912 21:30:29.352777   13842 kubeadm.go:310] 
	I0912 21:30:29.352807   13842 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0912 21:30:29.352905   13842 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0912 21:30:29.352995   13842 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0912 21:30:29.353009   13842 kubeadm.go:310] 
	I0912 21:30:29.353111   13842 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0912 21:30:29.353127   13842 kubeadm.go:310] 
	I0912 21:30:29.353199   13842 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0912 21:30:29.353208   13842 kubeadm.go:310] 
	I0912 21:30:29.353287   13842 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0912 21:30:29.353390   13842 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0912 21:30:29.353500   13842 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0912 21:30:29.353511   13842 kubeadm.go:310] 
	I0912 21:30:29.353631   13842 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0912 21:30:29.353759   13842 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0912 21:30:29.353776   13842 kubeadm.go:310] 
	I0912 21:30:29.353851   13842 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token v7rbq6.ajeibt3p6xzx9rx5 \
	I0912 21:30:29.353941   13842 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e9285e6e7599a58febe9d174fa57ffa69a9b4bf818d01b703e61fc8c784ff29f \
	I0912 21:30:29.353960   13842 kubeadm.go:310] 	--control-plane 
	I0912 21:30:29.353966   13842 kubeadm.go:310] 
	I0912 21:30:29.354039   13842 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0912 21:30:29.354045   13842 kubeadm.go:310] 
	I0912 21:30:29.354116   13842 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token v7rbq6.ajeibt3p6xzx9rx5 \
	I0912 21:30:29.354200   13842 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e9285e6e7599a58febe9d174fa57ffa69a9b4bf818d01b703e61fc8c784ff29f 
	I0912 21:30:29.355833   13842 kubeadm.go:310] W0912 21:30:18.865667     814 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0912 21:30:29.356162   13842 kubeadm.go:310] W0912 21:30:18.867599     814 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0912 21:30:29.356254   13842 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
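The --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 of the cluster CA certificate's Subject Public Key Info. A minimal Go sketch that recomputes it from ca.crt; the certificate path is an assumption for the example.

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

// Recompute the kubeadm discovery token CA cert hash: SHA-256 over the DER
// encoding of the CA certificate's SubjectPublicKeyInfo.
func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}

	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}
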
	I0912 21:30:29.356325   13842 cni.go:84] Creating CNI manager for ""
	I0912 21:30:29.356345   13842 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0912 21:30:29.358563   13842 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0912 21:30:29.360118   13842 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0912 21:30:29.371250   13842 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0912 21:30:29.390372   13842 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0912 21:30:29.390461   13842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-694635 minikube.k8s.io/updated_at=2024_09_12T21_30_29_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f6bc674a17941874d4e5b792b09c1791d30622b8 minikube.k8s.io/name=addons-694635 minikube.k8s.io/primary=true
	I0912 21:30:29.390464   13842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:30:29.538333   13842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:30:29.538368   13842 ops.go:34] apiserver oom_adj: -16
	I0912 21:30:30.038483   13842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:30:30.539293   13842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:30:31.039133   13842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:30:31.538947   13842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:30:32.038423   13842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:30:32.539286   13842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:30:33.039390   13842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:30:33.127054   13842 kubeadm.go:1113] duration metric: took 3.736657835s to wait for elevateKubeSystemPrivileges
	I0912 21:30:33.127093   13842 kubeadm.go:394] duration metric: took 14.420693245s to StartCluster
	I0912 21:30:33.127114   13842 settings.go:142] acquiring lock: {Name:mk9c957feafb8d7ccd833ad0c106ef81ecfe5ba8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:30:33.127242   13842 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19616-5891/kubeconfig
	I0912 21:30:33.127605   13842 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/kubeconfig: {Name:mkffb46c3e9d2b8baebc7237b48bf41bccf1a52c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:30:33.127771   13842 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0912 21:30:33.127785   13842 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0912 21:30:33.127850   13842 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0912 21:30:33.127956   13842 config.go:182] Loaded profile config "addons-694635": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 21:30:33.127969   13842 addons.go:69] Setting ingress-dns=true in profile "addons-694635"
	I0912 21:30:33.127972   13842 addons.go:69] Setting cloud-spanner=true in profile "addons-694635"
	I0912 21:30:33.127991   13842 addons.go:69] Setting registry=true in profile "addons-694635"
	I0912 21:30:33.127957   13842 addons.go:69] Setting yakd=true in profile "addons-694635"
	I0912 21:30:33.128001   13842 addons.go:234] Setting addon cloud-spanner=true in "addons-694635"
	I0912 21:30:33.128012   13842 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-694635"
	I0912 21:30:33.128021   13842 addons.go:234] Setting addon registry=true in "addons-694635"
	I0912 21:30:33.128027   13842 addons.go:69] Setting metrics-server=true in profile "addons-694635"
	I0912 21:30:33.128032   13842 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-694635"
	I0912 21:30:33.128043   13842 addons.go:234] Setting addon metrics-server=true in "addons-694635"
	I0912 21:30:33.128047   13842 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-694635"
	I0912 21:30:33.128049   13842 host.go:66] Checking if "addons-694635" exists ...
	I0912 21:30:33.128060   13842 host.go:66] Checking if "addons-694635" exists ...
	I0912 21:30:33.128080   13842 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-694635"
	I0912 21:30:33.128102   13842 host.go:66] Checking if "addons-694635" exists ...
	I0912 21:30:33.128386   13842 addons.go:69] Setting volcano=true in profile "addons-694635"
	I0912 21:30:33.128420   13842 addons.go:234] Setting addon volcano=true in "addons-694635"
	I0912 21:30:33.128441   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:33.128450   13842 host.go:66] Checking if "addons-694635" exists ...
	I0912 21:30:33.128451   13842 addons.go:69] Setting inspektor-gadget=true in profile "addons-694635"
	I0912 21:30:33.128460   13842 addons.go:69] Setting volumesnapshots=true in profile "addons-694635"
	I0912 21:30:33.128476   13842 addons.go:234] Setting addon inspektor-gadget=true in "addons-694635"
	I0912 21:30:33.128484   13842 addons.go:69] Setting default-storageclass=true in profile "addons-694635"
	I0912 21:30:33.128494   13842 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-694635"
	I0912 21:30:33.128503   13842 host.go:66] Checking if "addons-694635" exists ...
	I0912 21:30:33.128515   13842 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-694635"
	I0912 21:30:33.128542   13842 addons.go:234] Setting addon volumesnapshots=true in "addons-694635"
	I0912 21:30:33.128571   13842 host.go:66] Checking if "addons-694635" exists ...
	I0912 21:30:33.128475   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:33.128659   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:33.128809   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:33.128816   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:33.128833   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:33.128021   13842 addons.go:234] Setting addon yakd=true in "addons-694635"
	I0912 21:30:33.128846   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:33.128867   13842 host.go:66] Checking if "addons-694635" exists ...
	I0912 21:30:33.128882   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:33.128911   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:33.128927   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:33.128945   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:33.128043   13842 host.go:66] Checking if "addons-694635" exists ...
	I0912 21:30:33.128516   13842 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-694635"
	I0912 21:30:33.128441   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:33.129006   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:33.128004   13842 addons.go:69] Setting storage-provisioner=true in profile "addons-694635"
	I0912 21:30:33.129193   13842 host.go:66] Checking if "addons-694635" exists ...
	I0912 21:30:33.129197   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:33.129236   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:33.127996   13842 addons.go:234] Setting addon ingress-dns=true in "addons-694635"
	I0912 21:30:33.129298   13842 host.go:66] Checking if "addons-694635" exists ...
	I0912 21:30:33.129535   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:33.129586   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:33.128481   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:33.129193   13842 addons.go:234] Setting addon storage-provisioner=true in "addons-694635"
	I0912 21:30:33.128535   13842 addons.go:69] Setting gcp-auth=true in profile "addons-694635"
	I0912 21:30:33.129722   13842 mustload.go:65] Loading cluster: addons-694635
	I0912 21:30:33.129728   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:33.127957   13842 addons.go:69] Setting ingress=true in profile "addons-694635"
	I0912 21:30:33.129751   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:33.129763   13842 addons.go:234] Setting addon ingress=true in "addons-694635"
	I0912 21:30:33.128448   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:33.129798   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:33.129304   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:33.129900   13842 config.go:182] Loaded profile config "addons-694635": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 21:30:33.129910   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:33.128543   13842 addons.go:69] Setting helm-tiller=true in profile "addons-694635"
	I0912 21:30:33.129963   13842 addons.go:234] Setting addon helm-tiller=true in "addons-694635"
	I0912 21:30:33.130031   13842 host.go:66] Checking if "addons-694635" exists ...
	I0912 21:30:33.130100   13842 host.go:66] Checking if "addons-694635" exists ...
	I0912 21:30:33.130255   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:33.130287   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:33.130407   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:33.130440   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:33.130535   13842 out.go:177] * Verifying Kubernetes components...
	I0912 21:30:33.130801   13842 host.go:66] Checking if "addons-694635" exists ...
	I0912 21:30:33.141968   13842 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 21:30:33.150069   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46051
	I0912 21:30:33.150316   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43843
	I0912 21:30:33.150409   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36203
	I0912 21:30:33.150573   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39521
	I0912 21:30:33.150789   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:33.150884   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:33.150941   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43391
	I0912 21:30:33.151478   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:33.151657   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:33.151668   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:33.151789   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:33.151800   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:33.151919   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:33.151928   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:33.151977   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:33.152027   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:33.152074   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:33.152112   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:33.152642   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:33.152664   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:33.152720   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:33.152818   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:33.152827   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:33.152948   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:33.152958   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:33.153389   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35395
	I0912 21:30:33.153693   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:33.153966   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39707
	I0912 21:30:33.157880   13842 main.go:141] libmachine: (addons-694635) Calling .GetState
	I0912 21:30:33.157948   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:33.158145   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:33.158164   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:33.158243   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:33.158260   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:33.158318   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:33.158329   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:33.158341   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:33.158598   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:33.158814   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:33.158844   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:33.158917   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:33.158980   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:33.159098   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:33.159117   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:33.159471   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:33.159522   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:33.159600   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:33.160143   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:33.160171   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:33.160628   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:33.163174   13842 addons.go:234] Setting addon default-storageclass=true in "addons-694635"
	I0912 21:30:33.163237   13842 host.go:66] Checking if "addons-694635" exists ...
	I0912 21:30:33.163679   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:33.163717   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:33.164514   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:33.164547   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:33.186987   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36685
	I0912 21:30:33.187677   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:33.188318   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:33.188338   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:33.188699   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:33.188886   13842 main.go:141] libmachine: (addons-694635) Calling .GetState
	I0912 21:30:33.189751   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43095
	I0912 21:30:33.190453   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:33.191030   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:33.191046   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:33.192477   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35159
	I0912 21:30:33.192988   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:33.193332   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42519
	I0912 21:30:33.193964   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:33.194014   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:33.194400   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:33.194427   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:33.194717   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:33.194732   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:33.194867   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:33.194878   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:33.195204   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:33.195262   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:33.195317   13842 main.go:141] libmachine: (addons-694635) Calling .DriverName
	I0912 21:30:33.195365   13842 main.go:141] libmachine: (addons-694635) Calling .GetState
	I0912 21:30:33.196144   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:33.196183   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:33.196926   13842 main.go:141] libmachine: (addons-694635) Calling .DriverName
	I0912 21:30:33.197418   13842 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0912 21:30:33.198461   13842 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0912 21:30:33.198474   13842 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0912 21:30:33.198481   13842 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0912 21:30:33.198514   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHHostname
	I0912 21:30:33.199826   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44619
	I0912 21:30:33.200469   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:33.200723   13842 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0912 21:30:33.201099   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:33.201116   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:33.201423   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:33.201605   13842 main.go:141] libmachine: (addons-694635) Calling .GetState
	I0912 21:30:33.202354   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:33.203063   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHPort
	I0912 21:30:33.203235   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHKeyPath
	I0912 21:30:33.203301   13842 main.go:141] libmachine: (addons-694635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:43:77", ip: ""} in network mk-addons-694635: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:01 +0000 UTC Type:0 Mac:52:54:00:6b:43:77 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-694635 Clientid:01:52:54:00:6b:43:77}
	I0912 21:30:33.203325   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined IP address 192.168.39.67 and MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:33.203365   13842 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0912 21:30:33.203436   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHUsername
	I0912 21:30:33.203701   13842 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/addons-694635/id_rsa Username:docker}
	I0912 21:30:33.204148   13842 host.go:66] Checking if "addons-694635" exists ...
	I0912 21:30:33.204529   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:33.204565   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:33.205663   13842 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0912 21:30:33.206838   13842 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0912 21:30:33.208115   13842 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0912 21:30:33.209260   13842 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0912 21:30:33.210410   13842 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0912 21:30:33.211388   13842 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0912 21:30:33.211406   13842 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0912 21:30:33.211431   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHHostname
	I0912 21:30:33.213932   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41325
	I0912 21:30:33.214509   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:33.215055   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:33.215079   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:33.215339   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:33.215471   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:33.215750   13842 main.go:141] libmachine: (addons-694635) Calling .GetState
	I0912 21:30:33.215812   13842 main.go:141] libmachine: (addons-694635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:43:77", ip: ""} in network mk-addons-694635: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:01 +0000 UTC Type:0 Mac:52:54:00:6b:43:77 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-694635 Clientid:01:52:54:00:6b:43:77}
	I0912 21:30:33.215831   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined IP address 192.168.39.67 and MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:33.216070   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHPort
	I0912 21:30:33.216227   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHKeyPath
	I0912 21:30:33.216391   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHUsername
	I0912 21:30:33.216522   13842 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/addons-694635/id_rsa Username:docker}
	I0912 21:30:33.218588   13842 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-694635"
	I0912 21:30:33.218632   13842 host.go:66] Checking if "addons-694635" exists ...
	I0912 21:30:33.218984   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:33.219020   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:33.219207   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34339
	I0912 21:30:33.219636   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:33.220056   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:33.220076   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:33.220402   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:33.220894   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:33.220934   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:33.221132   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45121
	I0912 21:30:33.222065   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:33.222569   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:33.222585   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:33.222956   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:33.223007   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40619
	I0912 21:30:33.223665   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:33.223702   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:33.226781   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35935
	I0912 21:30:33.227303   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:33.227791   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:33.227810   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:33.228143   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:33.228324   13842 main.go:141] libmachine: (addons-694635) Calling .GetState
	I0912 21:30:33.230191   13842 main.go:141] libmachine: (addons-694635) Calling .DriverName
	I0912 21:30:33.232445   13842 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0912 21:30:33.233487   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:33.233677   13842 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0912 21:30:33.233695   13842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0912 21:30:33.233715   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHHostname
	I0912 21:30:33.236503   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:33.236518   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:33.236794   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37261
	I0912 21:30:33.237127   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:33.237492   13842 main.go:141] libmachine: (addons-694635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:43:77", ip: ""} in network mk-addons-694635: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:01 +0000 UTC Type:0 Mac:52:54:00:6b:43:77 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-694635 Clientid:01:52:54:00:6b:43:77}
	I0912 21:30:33.237525   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined IP address 192.168.39.67 and MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:33.237561   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:33.237731   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHPort
	I0912 21:30:33.238172   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:33.238205   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:33.238515   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHKeyPath
	I0912 21:30:33.238691   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHUsername
	I0912 21:30:33.238755   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38401
	I0912 21:30:33.239058   13842 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/addons-694635/id_rsa Username:docker}
	I0912 21:30:33.239118   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:33.239258   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33815
	I0912 21:30:33.239484   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43643
	I0912 21:30:33.239603   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:33.239735   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:33.239754   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:33.239756   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:33.240141   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:33.240160   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:33.240167   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:33.240222   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:33.240292   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:33.240315   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:33.240706   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:33.240791   13842 main.go:141] libmachine: (addons-694635) Calling .GetState
	I0912 21:30:33.240952   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:33.240954   13842 main.go:141] libmachine: (addons-694635) Calling .GetState
	I0912 21:30:33.240967   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:33.241651   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:33.241936   13842 main.go:141] libmachine: (addons-694635) Calling .GetState
	I0912 21:30:33.242439   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38441
	I0912 21:30:33.242626   13842 main.go:141] libmachine: (addons-694635) Calling .DriverName
	I0912 21:30:33.243111   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:33.243235   13842 main.go:141] libmachine: (addons-694635) Calling .GetState
	I0912 21:30:33.244232   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:33.244741   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39717
	I0912 21:30:33.244824   13842 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0912 21:30:33.245133   13842 main.go:141] libmachine: (addons-694635) Calling .DriverName
	I0912 21:30:33.245135   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:33.245276   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:33.245293   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:33.245549   13842 main.go:141] libmachine: (addons-694635) Calling .DriverName
	I0912 21:30:33.245632   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:33.246062   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:33.246078   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:33.246574   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:33.246602   13842 main.go:141] libmachine: (addons-694635) Calling .DriverName
	I0912 21:30:33.247038   13842 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0912 21:30:33.247107   13842 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0912 21:30:33.247118   13842 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0912 21:30:33.247136   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHHostname
	I0912 21:30:33.247367   13842 main.go:141] libmachine: (addons-694635) Calling .GetState
	I0912 21:30:33.247571   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33669
	I0912 21:30:33.248105   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:33.248613   13842 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0912 21:30:33.248629   13842 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0912 21:30:33.248646   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHHostname
	I0912 21:30:33.248652   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:33.248667   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:33.249005   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:33.249581   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:33.249722   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41895
	I0912 21:30:33.249729   13842 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0912 21:30:33.249843   13842 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0912 21:30:33.249905   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:33.249947   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:33.249984   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:33.250358   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:33.250824   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:33.250839   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:33.250973   13842 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0912 21:30:33.250992   13842 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0912 21:30:33.251013   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHHostname
	I0912 21:30:33.251167   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:33.251211   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:33.251334   13842 main.go:141] libmachine: (addons-694635) Calling .DriverName
	I0912 21:30:33.251681   13842 main.go:141] libmachine: (addons-694635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:43:77", ip: ""} in network mk-addons-694635: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:01 +0000 UTC Type:0 Mac:52:54:00:6b:43:77 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-694635 Clientid:01:52:54:00:6b:43:77}
	I0912 21:30:33.251704   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined IP address 192.168.39.67 and MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:33.251870   13842 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0912 21:30:33.251886   13842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0912 21:30:33.251904   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHHostname
	I0912 21:30:33.252556   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHPort
	I0912 21:30:33.252912   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHKeyPath
	I0912 21:30:33.253090   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHUsername
	I0912 21:30:33.253335   13842 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/addons-694635/id_rsa Username:docker}
	I0912 21:30:33.253982   13842 main.go:141] libmachine: (addons-694635) Calling .DriverName
	I0912 21:30:33.254189   13842 main.go:141] libmachine: Making call to close driver server
	I0912 21:30:33.254334   13842 main.go:141] libmachine: (addons-694635) Calling .Close
	I0912 21:30:33.254706   13842 main.go:141] libmachine: (addons-694635) DBG | Closing plugin on server side
	I0912 21:30:33.254745   13842 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:30:33.254755   13842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:30:33.254764   13842 main.go:141] libmachine: Making call to close driver server
	I0912 21:30:33.254772   13842 main.go:141] libmachine: (addons-694635) Calling .Close
	I0912 21:30:33.255212   13842 main.go:141] libmachine: (addons-694635) DBG | Closing plugin on server side
	I0912 21:30:33.255240   13842 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:30:33.255249   13842 main.go:141] libmachine: Making call to close connection to plugin binary
	W0912 21:30:33.255329   13842 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0912 21:30:33.256835   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:33.257248   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:33.257354   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:33.257768   13842 main.go:141] libmachine: (addons-694635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:43:77", ip: ""} in network mk-addons-694635: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:01 +0000 UTC Type:0 Mac:52:54:00:6b:43:77 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-694635 Clientid:01:52:54:00:6b:43:77}
	I0912 21:30:33.257790   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined IP address 192.168.39.67 and MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:33.257818   13842 main.go:141] libmachine: (addons-694635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:43:77", ip: ""} in network mk-addons-694635: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:01 +0000 UTC Type:0 Mac:52:54:00:6b:43:77 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-694635 Clientid:01:52:54:00:6b:43:77}
	I0912 21:30:33.257834   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined IP address 192.168.39.67 and MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:33.257862   13842 main.go:141] libmachine: (addons-694635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:43:77", ip: ""} in network mk-addons-694635: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:01 +0000 UTC Type:0 Mac:52:54:00:6b:43:77 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-694635 Clientid:01:52:54:00:6b:43:77}
	I0912 21:30:33.257877   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined IP address 192.168.39.67 and MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:33.258042   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHPort
	I0912 21:30:33.258081   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHPort
	I0912 21:30:33.258312   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHKeyPath
	I0912 21:30:33.258360   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHPort
	I0912 21:30:33.258364   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHKeyPath
	I0912 21:30:33.258463   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHUsername
	I0912 21:30:33.258613   13842 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/addons-694635/id_rsa Username:docker}
	I0912 21:30:33.258645   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHUsername
	I0912 21:30:33.258693   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHKeyPath
	I0912 21:30:33.258799   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHUsername
	I0912 21:30:33.258878   13842 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/addons-694635/id_rsa Username:docker}
	I0912 21:30:33.259401   13842 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/addons-694635/id_rsa Username:docker}
	I0912 21:30:33.261562   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46749
	I0912 21:30:33.261628   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41119
	I0912 21:30:33.261740   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43953
	I0912 21:30:33.262014   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:33.262042   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:33.262120   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:33.262468   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:33.262486   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:33.262561   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:33.262586   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:33.262968   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:33.262988   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:33.262990   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:33.263127   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:33.263521   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:33.263555   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:33.263697   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:33.263722   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:33.263750   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:33.263947   13842 main.go:141] libmachine: (addons-694635) Calling .GetState
	I0912 21:30:33.268234   13842 main.go:141] libmachine: (addons-694635) Calling .DriverName
	I0912 21:30:33.268300   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35633
	I0912 21:30:33.268599   13842 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0912 21:30:33.268615   13842 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0912 21:30:33.268635   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHHostname
	I0912 21:30:33.268729   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39313
	I0912 21:30:33.268912   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:33.269386   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:33.269408   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:33.270003   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:33.270070   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:33.270285   13842 main.go:141] libmachine: (addons-694635) Calling .GetState
	I0912 21:30:33.270670   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:33.270690   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:33.271058   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:33.271281   13842 main.go:141] libmachine: (addons-694635) Calling .GetState
	I0912 21:30:33.272388   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:33.272895   13842 main.go:141] libmachine: (addons-694635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:43:77", ip: ""} in network mk-addons-694635: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:01 +0000 UTC Type:0 Mac:52:54:00:6b:43:77 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-694635 Clientid:01:52:54:00:6b:43:77}
	I0912 21:30:33.272921   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined IP address 192.168.39.67 and MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:33.273067   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHPort
	I0912 21:30:33.273237   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHKeyPath
	I0912 21:30:33.273355   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHUsername
	I0912 21:30:33.273458   13842 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/addons-694635/id_rsa Username:docker}
	I0912 21:30:33.273740   13842 main.go:141] libmachine: (addons-694635) Calling .DriverName
	I0912 21:30:33.274080   13842 main.go:141] libmachine: (addons-694635) Calling .DriverName
	I0912 21:30:33.275548   13842 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0912 21:30:33.275560   13842 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0912 21:30:33.276670   13842 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0912 21:30:33.276700   13842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0912 21:30:33.276722   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHHostname
	I0912 21:30:33.276675   13842 out.go:177]   - Using image docker.io/registry:2.8.3
	I0912 21:30:33.278040   13842 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0912 21:30:33.278062   13842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0912 21:30:33.278081   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHHostname
	I0912 21:30:33.281119   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:33.281589   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHPort
	I0912 21:30:33.281860   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHKeyPath
	I0912 21:30:33.282081   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHUsername
	I0912 21:30:33.282129   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHPort
	I0912 21:30:33.282266   13842 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/addons-694635/id_rsa Username:docker}
	I0912 21:30:33.282598   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHKeyPath
	I0912 21:30:33.281510   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:33.282680   13842 main.go:141] libmachine: (addons-694635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:43:77", ip: ""} in network mk-addons-694635: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:01 +0000 UTC Type:0 Mac:52:54:00:6b:43:77 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-694635 Clientid:01:52:54:00:6b:43:77}
	I0912 21:30:33.282710   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined IP address 192.168.39.67 and MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:33.282742   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHUsername
	I0912 21:30:33.282767   13842 main.go:141] libmachine: (addons-694635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:43:77", ip: ""} in network mk-addons-694635: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:01 +0000 UTC Type:0 Mac:52:54:00:6b:43:77 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-694635 Clientid:01:52:54:00:6b:43:77}
	I0912 21:30:33.282784   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined IP address 192.168.39.67 and MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:33.282963   13842 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/addons-694635/id_rsa Username:docker}
	I0912 21:30:33.284659   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40369
	I0912 21:30:33.285034   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:33.285737   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:33.285767   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:33.286142   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:33.286339   13842 main.go:141] libmachine: (addons-694635) Calling .GetState
	I0912 21:30:33.287706   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37111
	I0912 21:30:33.287900   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43213
	I0912 21:30:33.288046   13842 main.go:141] libmachine: (addons-694635) Calling .DriverName
	I0912 21:30:33.288069   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:33.288168   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:33.288576   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:33.288598   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:33.288743   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:33.288759   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:33.288856   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:33.289114   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:33.289153   13842 main.go:141] libmachine: (addons-694635) Calling .GetState
	I0912 21:30:33.289708   13842 main.go:141] libmachine: (addons-694635) Calling .GetState
	I0912 21:30:33.290010   13842 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0912 21:30:33.290749   13842 main.go:141] libmachine: (addons-694635) Calling .DriverName
	I0912 21:30:33.291355   13842 main.go:141] libmachine: (addons-694635) Calling .DriverName
	I0912 21:30:33.292706   13842 out.go:177]   - Using image docker.io/busybox:stable
	I0912 21:30:33.292711   13842 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0912 21:30:33.292715   13842 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0912 21:30:33.293836   13842 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0912 21:30:33.293847   13842 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0912 21:30:33.293894   13842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0912 21:30:33.293913   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHHostname
	I0912 21:30:33.293847   13842 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0912 21:30:33.293963   13842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0912 21:30:33.293979   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHHostname
	I0912 21:30:33.296001   13842 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0912 21:30:33.297175   13842 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0912 21:30:33.297189   13842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0912 21:30:33.297204   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHHostname
	I0912 21:30:33.297379   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:33.297549   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:33.298027   13842 main.go:141] libmachine: (addons-694635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:43:77", ip: ""} in network mk-addons-694635: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:01 +0000 UTC Type:0 Mac:52:54:00:6b:43:77 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-694635 Clientid:01:52:54:00:6b:43:77}
	I0912 21:30:33.298042   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined IP address 192.168.39.67 and MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:33.298070   13842 main.go:141] libmachine: (addons-694635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:43:77", ip: ""} in network mk-addons-694635: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:01 +0000 UTC Type:0 Mac:52:54:00:6b:43:77 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-694635 Clientid:01:52:54:00:6b:43:77}
	I0912 21:30:33.298082   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined IP address 192.168.39.67 and MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:33.298305   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHPort
	I0912 21:30:33.298341   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHPort
	I0912 21:30:33.298504   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHKeyPath
	I0912 21:30:33.298574   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHKeyPath
	I0912 21:30:33.298639   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHUsername
	I0912 21:30:33.298712   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHUsername
	I0912 21:30:33.298778   13842 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/addons-694635/id_rsa Username:docker}
	I0912 21:30:33.299074   13842 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/addons-694635/id_rsa Username:docker}
	I0912 21:30:33.299967   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:33.300311   13842 main.go:141] libmachine: (addons-694635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:43:77", ip: ""} in network mk-addons-694635: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:01 +0000 UTC Type:0 Mac:52:54:00:6b:43:77 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-694635 Clientid:01:52:54:00:6b:43:77}
	I0912 21:30:33.300338   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined IP address 192.168.39.67 and MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:33.301763   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHPort
	I0912 21:30:33.301987   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHKeyPath
	I0912 21:30:33.302125   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHUsername
	I0912 21:30:33.302244   13842 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/addons-694635/id_rsa Username:docker}
	I0912 21:30:33.306121   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35811
	I0912 21:30:33.306524   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:33.306887   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:33.306904   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:33.307338   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:33.307506   13842 main.go:141] libmachine: (addons-694635) Calling .GetState
	W0912 21:30:33.308193   13842 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:56174->192.168.39.67:22: read: connection reset by peer
	I0912 21:30:33.308214   13842 retry.go:31] will retry after 340.22316ms: ssh: handshake failed: read tcp 192.168.39.1:56174->192.168.39.67:22: read: connection reset by peer
	I0912 21:30:33.309320   13842 main.go:141] libmachine: (addons-694635) Calling .DriverName
	I0912 21:30:33.311143   13842 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 21:30:33.312425   13842 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0912 21:30:33.312441   13842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0912 21:30:33.312456   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHHostname
	I0912 21:30:33.315180   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:33.315769   13842 main.go:141] libmachine: (addons-694635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:43:77", ip: ""} in network mk-addons-694635: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:01 +0000 UTC Type:0 Mac:52:54:00:6b:43:77 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-694635 Clientid:01:52:54:00:6b:43:77}
	I0912 21:30:33.315798   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined IP address 192.168.39.67 and MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:33.315962   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHPort
	I0912 21:30:33.316179   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHKeyPath
	I0912 21:30:33.316377   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHUsername
	I0912 21:30:33.316513   13842 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/addons-694635/id_rsa Username:docker}
	I0912 21:30:33.639453   13842 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0912 21:30:33.639482   13842 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0912 21:30:33.657578   13842 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0912 21:30:33.657597   13842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0912 21:30:33.680952   13842 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0912 21:30:33.680978   13842 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0912 21:30:33.733177   13842 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0912 21:30:33.733181   13842 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0912 21:30:33.743215   13842 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0912 21:30:33.743241   13842 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0912 21:30:33.762069   13842 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0912 21:30:33.762098   13842 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0912 21:30:33.782751   13842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0912 21:30:33.785088   13842 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0912 21:30:33.785111   13842 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0912 21:30:33.792263   13842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0912 21:30:33.836509   13842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0912 21:30:33.868944   13842 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0912 21:30:33.868973   13842 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0912 21:30:33.904688   13842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0912 21:30:33.911394   13842 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0912 21:30:33.911420   13842 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0912 21:30:33.913031   13842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0912 21:30:33.922465   13842 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0912 21:30:33.922491   13842 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0912 21:30:33.927414   13842 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0912 21:30:33.927438   13842 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0912 21:30:33.941076   13842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0912 21:30:33.942361   13842 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0912 21:30:33.942383   13842 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0912 21:30:33.962765   13842 node_ready.go:35] waiting up to 6m0s for node "addons-694635" to be "Ready" ...
	I0912 21:30:33.965689   13842 node_ready.go:49] node "addons-694635" has status "Ready":"True"
	I0912 21:30:33.965712   13842 node_ready.go:38] duration metric: took 2.919714ms for node "addons-694635" to be "Ready" ...
	I0912 21:30:33.965723   13842 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 21:30:33.971996   13842 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-pcjz8" in "kube-system" namespace to be "Ready" ...
	I0912 21:30:33.978042   13842 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0912 21:30:33.978064   13842 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0912 21:30:34.048949   13842 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0912 21:30:34.048968   13842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0912 21:30:34.093153   13842 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0912 21:30:34.093183   13842 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0912 21:30:34.128832   13842 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0912 21:30:34.128859   13842 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0912 21:30:34.163298   13842 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0912 21:30:34.163328   13842 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0912 21:30:34.173254   13842 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0912 21:30:34.173281   13842 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0912 21:30:34.177529   13842 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0912 21:30:34.177559   13842 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0912 21:30:34.215996   13842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0912 21:30:34.285198   13842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0912 21:30:34.287981   13842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0912 21:30:34.309345   13842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0912 21:30:34.315086   13842 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0912 21:30:34.315113   13842 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0912 21:30:34.354466   13842 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0912 21:30:34.354493   13842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0912 21:30:34.374522   13842 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0912 21:30:34.374556   13842 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0912 21:30:34.393891   13842 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0912 21:30:34.393921   13842 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0912 21:30:34.502563   13842 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0912 21:30:34.502588   13842 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0912 21:30:34.584726   13842 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0912 21:30:34.584760   13842 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0912 21:30:34.607498   13842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0912 21:30:34.645255   13842 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0912 21:30:34.645280   13842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0912 21:30:34.718335   13842 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0912 21:30:34.718361   13842 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0912 21:30:34.783759   13842 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0912 21:30:34.783787   13842 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0912 21:30:34.940148   13842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0912 21:30:35.030796   13842 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0912 21:30:35.030824   13842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0912 21:30:35.144522   13842 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0912 21:30:35.144548   13842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0912 21:30:35.191648   13842 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0912 21:30:35.191688   13842 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0912 21:30:35.435800   13842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0912 21:30:35.467895   13842 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0912 21:30:35.467918   13842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0912 21:30:35.684867   13842 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0912 21:30:35.684898   13842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0912 21:30:35.859788   13842 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0912 21:30:35.859822   13842 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0912 21:30:35.932925   13842 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.199703683s)
	I0912 21:30:35.932952   13842 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.150160783s)
	I0912 21:30:35.932956   13842 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0912 21:30:35.933005   13842 main.go:141] libmachine: Making call to close driver server
	I0912 21:30:35.933018   13842 main.go:141] libmachine: (addons-694635) Calling .Close
	I0912 21:30:35.933032   13842 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.140722926s)
	I0912 21:30:35.933074   13842 main.go:141] libmachine: Making call to close driver server
	I0912 21:30:35.933089   13842 main.go:141] libmachine: (addons-694635) Calling .Close
	I0912 21:30:35.933413   13842 main.go:141] libmachine: (addons-694635) DBG | Closing plugin on server side
	I0912 21:30:35.933461   13842 main.go:141] libmachine: (addons-694635) DBG | Closing plugin on server side
	I0912 21:30:35.933469   13842 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:30:35.933483   13842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:30:35.933492   13842 main.go:141] libmachine: Making call to close driver server
	I0912 21:30:35.933500   13842 main.go:141] libmachine: (addons-694635) Calling .Close
	I0912 21:30:35.933505   13842 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:30:35.933515   13842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:30:35.933523   13842 main.go:141] libmachine: Making call to close driver server
	I0912 21:30:35.933530   13842 main.go:141] libmachine: (addons-694635) Calling .Close
	I0912 21:30:35.933745   13842 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:30:35.933759   13842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:30:35.934193   13842 main.go:141] libmachine: (addons-694635) DBG | Closing plugin on server side
	I0912 21:30:35.934238   13842 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:30:35.934260   13842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:30:35.956608   13842 main.go:141] libmachine: Making call to close driver server
	I0912 21:30:35.956638   13842 main.go:141] libmachine: (addons-694635) Calling .Close
	I0912 21:30:35.956922   13842 main.go:141] libmachine: (addons-694635) DBG | Closing plugin on server side
	I0912 21:30:35.956968   13842 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:30:35.956988   13842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:30:35.992917   13842 pod_ready.go:103] pod "coredns-7c65d6cfc9-pcjz8" in "kube-system" namespace has status "Ready":"False"
	I0912 21:30:36.227480   13842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0912 21:30:36.438013   13842 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-694635" context rescaled to 1 replicas
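The "rescaled to 1 replicas" step above is minikube trimming the stock two-replica CoreDNS deployment down to a single replica. A minimal sketch of the equivalent operation done by hand (illustrative only; minikube performs this through client-go rather than by shelling out to kubectl):

    $ kubectl --context addons-694635 -n kube-system scale deployment coredns --replicas=1

This rescale is also why one of the two coredns pods later reports phase "Succeeded" and the readiness wait moves on to the surviving replica.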
	I0912 21:30:37.249809   13842 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.413260898s)
	I0912 21:30:37.249867   13842 main.go:141] libmachine: Making call to close driver server
	I0912 21:30:37.249888   13842 main.go:141] libmachine: (addons-694635) Calling .Close
	I0912 21:30:37.250165   13842 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:30:37.250185   13842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:30:37.250200   13842 main.go:141] libmachine: Making call to close driver server
	I0912 21:30:37.250209   13842 main.go:141] libmachine: (addons-694635) Calling .Close
	I0912 21:30:37.250454   13842 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:30:37.250474   13842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:30:38.021956   13842 pod_ready.go:103] pod "coredns-7c65d6cfc9-pcjz8" in "kube-system" namespace has status "Ready":"False"
	I0912 21:30:38.703385   13842 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.798660977s)
	I0912 21:30:38.703445   13842 main.go:141] libmachine: Making call to close driver server
	I0912 21:30:38.703459   13842 main.go:141] libmachine: (addons-694635) Calling .Close
	I0912 21:30:38.703792   13842 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:30:38.703811   13842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:30:38.703811   13842 main.go:141] libmachine: (addons-694635) DBG | Closing plugin on server side
	I0912 21:30:38.703820   13842 main.go:141] libmachine: Making call to close driver server
	I0912 21:30:38.703827   13842 main.go:141] libmachine: (addons-694635) Calling .Close
	I0912 21:30:38.704152   13842 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:30:38.704197   13842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:30:38.704207   13842 main.go:141] libmachine: (addons-694635) DBG | Closing plugin on server side
	I0912 21:30:39.023100   13842 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.110032197s)
	I0912 21:30:39.023152   13842 main.go:141] libmachine: Making call to close driver server
	I0912 21:30:39.023164   13842 main.go:141] libmachine: (addons-694635) Calling .Close
	I0912 21:30:39.023211   13842 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.082101447s)
	I0912 21:30:39.023263   13842 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (4.807232005s)
	I0912 21:30:39.023297   13842 main.go:141] libmachine: Making call to close driver server
	I0912 21:30:39.023313   13842 main.go:141] libmachine: (addons-694635) Calling .Close
	I0912 21:30:39.023273   13842 main.go:141] libmachine: Making call to close driver server
	I0912 21:30:39.023386   13842 main.go:141] libmachine: (addons-694635) Calling .Close
	I0912 21:30:39.023407   13842 main.go:141] libmachine: (addons-694635) DBG | Closing plugin on server side
	I0912 21:30:39.023426   13842 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:30:39.023454   13842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:30:39.023474   13842 main.go:141] libmachine: Making call to close driver server
	I0912 21:30:39.023498   13842 main.go:141] libmachine: (addons-694635) Calling .Close
	I0912 21:30:39.023509   13842 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:30:39.023525   13842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:30:39.023536   13842 main.go:141] libmachine: Making call to close driver server
	I0912 21:30:39.023545   13842 main.go:141] libmachine: (addons-694635) Calling .Close
	I0912 21:30:39.023642   13842 main.go:141] libmachine: (addons-694635) DBG | Closing plugin on server side
	I0912 21:30:39.023673   13842 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:30:39.023685   13842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:30:39.023689   13842 main.go:141] libmachine: (addons-694635) DBG | Closing plugin on server side
	I0912 21:30:39.023693   13842 main.go:141] libmachine: Making call to close driver server
	I0912 21:30:39.023701   13842 main.go:141] libmachine: (addons-694635) Calling .Close
	I0912 21:30:39.023736   13842 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:30:39.023747   13842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:30:39.025326   13842 main.go:141] libmachine: (addons-694635) DBG | Closing plugin on server side
	I0912 21:30:39.025330   13842 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:30:39.025342   13842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:30:39.025481   13842 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:30:39.025492   13842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:30:39.139026   13842 main.go:141] libmachine: Making call to close driver server
	I0912 21:30:39.139049   13842 main.go:141] libmachine: (addons-694635) Calling .Close
	I0912 21:30:39.139382   13842 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:30:39.139403   13842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:30:39.139432   13842 main.go:141] libmachine: (addons-694635) DBG | Closing plugin on server side
	I0912 21:30:40.261224   13842 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0912 21:30:40.261266   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHHostname
	I0912 21:30:40.264217   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:40.264583   13842 main.go:141] libmachine: (addons-694635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:43:77", ip: ""} in network mk-addons-694635: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:01 +0000 UTC Type:0 Mac:52:54:00:6b:43:77 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-694635 Clientid:01:52:54:00:6b:43:77}
	I0912 21:30:40.264613   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined IP address 192.168.39.67 and MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:40.264808   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHPort
	I0912 21:30:40.265022   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHKeyPath
	I0912 21:30:40.265208   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHUsername
	I0912 21:30:40.265354   13842 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/addons-694635/id_rsa Username:docker}
	I0912 21:30:40.483338   13842 pod_ready.go:103] pod "coredns-7c65d6cfc9-pcjz8" in "kube-system" namespace has status "Ready":"False"
	I0912 21:30:40.539106   13842 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0912 21:30:40.689076   13842 addons.go:234] Setting addon gcp-auth=true in "addons-694635"
	I0912 21:30:40.689138   13842 host.go:66] Checking if "addons-694635" exists ...
	I0912 21:30:40.689446   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:40.689471   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:40.705390   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43577
	I0912 21:30:40.705838   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:40.706274   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:40.706296   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:40.706632   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:40.707109   13842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:30:40.707133   13842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:30:40.722882   13842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45603
	I0912 21:30:40.723304   13842 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:30:40.723787   13842 main.go:141] libmachine: Using API Version  1
	I0912 21:30:40.723806   13842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:30:40.724121   13842 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:30:40.724311   13842 main.go:141] libmachine: (addons-694635) Calling .GetState
	I0912 21:30:40.725649   13842 main.go:141] libmachine: (addons-694635) Calling .DriverName
	I0912 21:30:40.725862   13842 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0912 21:30:40.725882   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHHostname
	I0912 21:30:40.728400   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:40.728878   13842 main.go:141] libmachine: (addons-694635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:43:77", ip: ""} in network mk-addons-694635: {Iface:virbr1 ExpiryTime:2024-09-12 22:30:01 +0000 UTC Type:0 Mac:52:54:00:6b:43:77 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-694635 Clientid:01:52:54:00:6b:43:77}
	I0912 21:30:40.728898   13842 main.go:141] libmachine: (addons-694635) DBG | domain addons-694635 has defined IP address 192.168.39.67 and MAC address 52:54:00:6b:43:77 in network mk-addons-694635
	I0912 21:30:40.729103   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHPort
	I0912 21:30:40.729271   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHKeyPath
	I0912 21:30:40.729386   13842 main.go:141] libmachine: (addons-694635) Calling .GetSSHUsername
	I0912 21:30:40.729528   13842 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/addons-694635/id_rsa Username:docker}
	I0912 21:30:41.942865   13842 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.657623757s)
	I0912 21:30:41.942920   13842 main.go:141] libmachine: Making call to close driver server
	I0912 21:30:41.942926   13842 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.654910047s)
	I0912 21:30:41.942947   13842 main.go:141] libmachine: Making call to close driver server
	I0912 21:30:41.942963   13842 main.go:141] libmachine: (addons-694635) Calling .Close
	I0912 21:30:41.942980   13842 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.633591683s)
	I0912 21:30:41.942931   13842 main.go:141] libmachine: (addons-694635) Calling .Close
	I0912 21:30:41.943026   13842 main.go:141] libmachine: Making call to close driver server
	I0912 21:30:41.943030   13842 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.335497924s)
	I0912 21:30:41.943040   13842 main.go:141] libmachine: (addons-694635) Calling .Close
	I0912 21:30:41.943062   13842 main.go:141] libmachine: Making call to close driver server
	I0912 21:30:41.943074   13842 main.go:141] libmachine: (addons-694635) Calling .Close
	I0912 21:30:41.943136   13842 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.002958423s)
	W0912 21:30:41.943188   13842 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0912 21:30:41.943217   13842 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.50737724s)
	I0912 21:30:41.943330   13842 main.go:141] libmachine: Making call to close driver server
	I0912 21:30:41.943349   13842 main.go:141] libmachine: (addons-694635) Calling .Close
	I0912 21:30:41.943386   13842 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:30:41.943399   13842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:30:41.943401   13842 main.go:141] libmachine: (addons-694635) DBG | Closing plugin on server side
	I0912 21:30:41.943408   13842 main.go:141] libmachine: (addons-694635) DBG | Closing plugin on server side
	I0912 21:30:41.943418   13842 main.go:141] libmachine: Making call to close driver server
	I0912 21:30:41.943425   13842 main.go:141] libmachine: (addons-694635) DBG | Closing plugin on server side
	I0912 21:30:41.943429   13842 main.go:141] libmachine: (addons-694635) Calling .Close
	I0912 21:30:41.943445   13842 main.go:141] libmachine: (addons-694635) DBG | Closing plugin on server side
	I0912 21:30:41.943457   13842 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:30:41.943467   13842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:30:41.943470   13842 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:30:41.943477   13842 main.go:141] libmachine: Making call to close driver server
	I0912 21:30:41.943479   13842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:30:41.943485   13842 main.go:141] libmachine: (addons-694635) Calling .Close
	I0912 21:30:41.943487   13842 main.go:141] libmachine: Making call to close driver server
	I0912 21:30:41.943487   13842 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:30:41.943494   13842 main.go:141] libmachine: (addons-694635) Calling .Close
	I0912 21:30:41.943496   13842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:30:41.943505   13842 main.go:141] libmachine: Making call to close driver server
	I0912 21:30:41.943512   13842 main.go:141] libmachine: (addons-694635) Calling .Close
	I0912 21:30:41.943221   13842 retry.go:31] will retry after 361.478049ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
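The failure above is the usual CRD ordering race: the VolumeSnapshotClass in csi-hostpath-snapshotclass.yaml is applied in the same batch as the snapshot.storage.k8s.io CRDs, and the API server has not yet established the new kind, so the apply exits 1 and minikube schedules a retry (which succeeds shortly afterwards via `kubectl apply --force`). A hedged sketch of how the same race can be avoided when applying these manifests manually is to wait for the CRD to become established before applying the custom resource (file paths are the ones staged by minikube; the 60s timeout is an arbitrary choice):

    $ kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
    $ kubectl wait --for=condition=established --timeout=60s crd/volumesnapshotclasses.snapshot.storage.k8s.io
    $ kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml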
	I0912 21:30:41.943575   13842 main.go:141] libmachine: (addons-694635) DBG | Closing plugin on server side
	I0912 21:30:41.943601   13842 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:30:41.943608   13842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:30:41.943616   13842 main.go:141] libmachine: Making call to close driver server
	I0912 21:30:41.943622   13842 main.go:141] libmachine: (addons-694635) Calling .Close
	I0912 21:30:41.945219   13842 main.go:141] libmachine: (addons-694635) DBG | Closing plugin on server side
	I0912 21:30:41.945224   13842 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:30:41.945234   13842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:30:41.945235   13842 main.go:141] libmachine: (addons-694635) DBG | Closing plugin on server side
	I0912 21:30:41.945249   13842 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:30:41.945260   13842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:30:41.945251   13842 addons.go:475] Verifying addon registry=true in "addons-694635"
	I0912 21:30:41.945434   13842 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:30:41.945436   13842 main.go:141] libmachine: (addons-694635) DBG | Closing plugin on server side
	I0912 21:30:41.945446   13842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:30:41.945457   13842 addons.go:475] Verifying addon ingress=true in "addons-694635"
	I0912 21:30:41.945655   13842 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:30:41.945674   13842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:30:41.945683   13842 addons.go:475] Verifying addon metrics-server=true in "addons-694635"
	I0912 21:30:41.945756   13842 main.go:141] libmachine: (addons-694635) DBG | Closing plugin on server side
	I0912 21:30:41.945793   13842 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:30:41.945806   13842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:30:41.946676   13842 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-694635 service yakd-dashboard -n yakd-dashboard
	
	I0912 21:30:41.946688   13842 out.go:177] * Verifying registry addon...
	I0912 21:30:41.948418   13842 out.go:177] * Verifying ingress addon...
	I0912 21:30:41.949076   13842 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0912 21:30:41.950349   13842 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0912 21:30:41.954743   13842 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0912 21:30:41.954774   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:41.960928   13842 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0912 21:30:41.960949   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:42.305973   13842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0912 21:30:42.467232   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:42.477555   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:42.764449   13842 pod_ready.go:103] pod "coredns-7c65d6cfc9-pcjz8" in "kube-system" namespace has status "Ready":"False"
	I0912 21:30:42.797806   13842 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.570260767s)
	I0912 21:30:42.797869   13842 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.071984177s)
	I0912 21:30:42.797869   13842 main.go:141] libmachine: Making call to close driver server
	I0912 21:30:42.797989   13842 main.go:141] libmachine: (addons-694635) Calling .Close
	I0912 21:30:42.798300   13842 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:30:42.798313   13842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:30:42.798323   13842 main.go:141] libmachine: Making call to close driver server
	I0912 21:30:42.798331   13842 main.go:141] libmachine: (addons-694635) Calling .Close
	I0912 21:30:42.798617   13842 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:30:42.798639   13842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:30:42.798649   13842 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-694635"
	I0912 21:30:42.799295   13842 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0912 21:30:42.800145   13842 out.go:177] * Verifying csi-hostpath-driver addon...
	I0912 21:30:42.801601   13842 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0912 21:30:42.802781   13842 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0912 21:30:42.803047   13842 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0912 21:30:42.803064   13842 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0912 21:30:42.817988   13842 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0912 21:30:42.818009   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:42.900221   13842 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0912 21:30:42.900257   13842 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0912 21:30:42.960615   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:42.960989   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:43.009576   13842 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0912 21:30:43.009605   13842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0912 21:30:43.147089   13842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0912 21:30:43.320966   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:43.453136   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:43.454373   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:43.808102   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:43.953362   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:43.958697   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:44.162942   13842 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.856921696s)
	I0912 21:30:44.163000   13842 main.go:141] libmachine: Making call to close driver server
	I0912 21:30:44.163016   13842 main.go:141] libmachine: (addons-694635) Calling .Close
	I0912 21:30:44.163309   13842 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:30:44.163366   13842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:30:44.163381   13842 main.go:141] libmachine: Making call to close driver server
	I0912 21:30:44.163328   13842 main.go:141] libmachine: (addons-694635) DBG | Closing plugin on server side
	I0912 21:30:44.163393   13842 main.go:141] libmachine: (addons-694635) Calling .Close
	I0912 21:30:44.163848   13842 main.go:141] libmachine: (addons-694635) DBG | Closing plugin on server side
	I0912 21:30:44.164957   13842 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:30:44.164983   13842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:30:44.378590   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:44.427113   13842 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.279974028s)
	I0912 21:30:44.427173   13842 main.go:141] libmachine: Making call to close driver server
	I0912 21:30:44.427193   13842 main.go:141] libmachine: (addons-694635) Calling .Close
	I0912 21:30:44.427495   13842 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:30:44.427544   13842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:30:44.427559   13842 main.go:141] libmachine: Making call to close driver server
	I0912 21:30:44.427568   13842 main.go:141] libmachine: (addons-694635) Calling .Close
	I0912 21:30:44.427499   13842 main.go:141] libmachine: (addons-694635) DBG | Closing plugin on server side
	I0912 21:30:44.427772   13842 main.go:141] libmachine: (addons-694635) DBG | Closing plugin on server side
	I0912 21:30:44.427798   13842 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:30:44.427814   13842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:30:44.429338   13842 addons.go:475] Verifying addon gcp-auth=true in "addons-694635"
	I0912 21:30:44.431064   13842 out.go:177] * Verifying gcp-auth addon...
	I0912 21:30:44.432961   13842 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0912 21:30:44.468784   13842 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0912 21:30:44.468806   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:30:44.469261   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:44.469425   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:44.809517   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:44.936881   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:30:44.953105   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:44.954618   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:44.978466   13842 pod_ready.go:103] pod "coredns-7c65d6cfc9-pcjz8" in "kube-system" namespace has status "Ready":"False"
	I0912 21:30:45.312534   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:45.436603   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:30:45.454472   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:45.458065   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:45.478156   13842 pod_ready.go:98] pod "coredns-7c65d6cfc9-pcjz8" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-12 21:30:45 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-12 21:30:33 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-12 21:30:33 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-12 21:30:33 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-12 21:30:33 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.67 HostIPs:[{IP:192.168.39.67}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-09-12 21:30:33 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-12 21:30:38 +0000 UTC,FinishedAt:2024-09-12 21:30:43 +0000 UTC,ContainerID:cri-o://50b8193e0418edb8169cdabdeb19b0c793d761211e7e0547b53bda047e46367d,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://50b8193e0418edb8169cdabdeb19b0c793d761211e7e0547b53bda047e46367d Started:0xc0028a6700 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc0009cbb20} {Name:kube-api-access-r9jtw MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc0009cbb30}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0912 21:30:45.478190   13842 pod_ready.go:82] duration metric: took 11.506167543s for pod "coredns-7c65d6cfc9-pcjz8" in "kube-system" namespace to be "Ready" ...
	E0912 21:30:45.478205   13842 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-7c65d6cfc9-pcjz8" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-12 21:30:45 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-12 21:30:33 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-12 21:30:33 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-12 21:30:33 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-12 21:30:33 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.67 HostIPs:[{IP:192.168.39.67}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-09-12 21:30:33 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-12 21:30:38 +0000 UTC,FinishedAt:2024-09-12 21:30:43 +0000 UTC,ContainerID:cri-o://50b8193e0418edb8169cdabdeb19b0c793d761211e7e0547b53bda047e46367d,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://50b8193e0418edb8169cdabdeb19b0c793d761211e7e0547b53bda047e46367d Started:0xc0028a6700 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc0009cbb20} {Name:kube-api-access-r9jtw MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc0009cbb30}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0912 21:30:45.478217   13842 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-rpsn9" in "kube-system" namespace to be "Ready" ...
	I0912 21:30:45.486926   13842 pod_ready.go:93] pod "coredns-7c65d6cfc9-rpsn9" in "kube-system" namespace has status "Ready":"True"
	I0912 21:30:45.486961   13842 pod_ready.go:82] duration metric: took 8.733099ms for pod "coredns-7c65d6cfc9-rpsn9" in "kube-system" namespace to be "Ready" ...
	I0912 21:30:45.486974   13842 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-694635" in "kube-system" namespace to be "Ready" ...
	I0912 21:30:45.493880   13842 pod_ready.go:93] pod "etcd-addons-694635" in "kube-system" namespace has status "Ready":"True"
	I0912 21:30:45.493917   13842 pod_ready.go:82] duration metric: took 6.934283ms for pod "etcd-addons-694635" in "kube-system" namespace to be "Ready" ...
	I0912 21:30:45.493933   13842 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-694635" in "kube-system" namespace to be "Ready" ...
	I0912 21:30:45.500231   13842 pod_ready.go:93] pod "kube-apiserver-addons-694635" in "kube-system" namespace has status "Ready":"True"
	I0912 21:30:45.500262   13842 pod_ready.go:82] duration metric: took 6.319725ms for pod "kube-apiserver-addons-694635" in "kube-system" namespace to be "Ready" ...
	I0912 21:30:45.500276   13842 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-694635" in "kube-system" namespace to be "Ready" ...
	I0912 21:30:45.508921   13842 pod_ready.go:93] pod "kube-controller-manager-addons-694635" in "kube-system" namespace has status "Ready":"True"
	I0912 21:30:45.508952   13842 pod_ready.go:82] duration metric: took 8.661364ms for pod "kube-controller-manager-addons-694635" in "kube-system" namespace to be "Ready" ...
	I0912 21:30:45.508966   13842 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4hcfx" in "kube-system" namespace to be "Ready" ...
	I0912 21:30:45.807845   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:45.875520   13842 pod_ready.go:93] pod "kube-proxy-4hcfx" in "kube-system" namespace has status "Ready":"True"
	I0912 21:30:45.875543   13842 pod_ready.go:82] duration metric: took 366.569724ms for pod "kube-proxy-4hcfx" in "kube-system" namespace to be "Ready" ...
	I0912 21:30:45.875552   13842 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-694635" in "kube-system" namespace to be "Ready" ...
	I0912 21:30:45.936184   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:30:45.953664   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:45.955104   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:46.275644   13842 pod_ready.go:93] pod "kube-scheduler-addons-694635" in "kube-system" namespace has status "Ready":"True"
	I0912 21:30:46.275666   13842 pod_ready.go:82] duration metric: took 400.107483ms for pod "kube-scheduler-addons-694635" in "kube-system" namespace to be "Ready" ...
	I0912 21:30:46.275674   13842 pod_ready.go:39] duration metric: took 12.309938834s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 21:30:46.275689   13842 api_server.go:52] waiting for apiserver process to appear ...
	I0912 21:30:46.275751   13842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 21:30:46.301756   13842 api_server.go:72] duration metric: took 13.173948128s to wait for apiserver process to appear ...
	I0912 21:30:46.301775   13842 api_server.go:88] waiting for apiserver healthz status ...
	I0912 21:30:46.301792   13842 api_server.go:253] Checking apiserver healthz at https://192.168.39.67:8443/healthz ...
	I0912 21:30:46.305735   13842 api_server.go:279] https://192.168.39.67:8443/healthz returned 200:
	ok
	I0912 21:30:46.306725   13842 api_server.go:141] control plane version: v1.31.1
	I0912 21:30:46.306743   13842 api_server.go:131] duration metric: took 4.962021ms to wait for apiserver health ...
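The healthz probe above is a plain GET against the API server that the harness issues over HTTPS directly. A minimal way to reproduce the same check against this cluster by hand, assuming the addons-694635 context is loaded, is kubectl's raw passthrough:

    $ kubectl --context addons-694635 get --raw=/healthz
    ok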
	I0912 21:30:46.306750   13842 system_pods.go:43] waiting for kube-system pods to appear ...
	I0912 21:30:46.309045   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:46.436328   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:30:46.454711   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:46.455101   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:46.480691   13842 system_pods.go:59] 18 kube-system pods found
	I0912 21:30:46.480719   13842 system_pods.go:61] "coredns-7c65d6cfc9-rpsn9" [cb2ce549-2d5c-45ec-a46d-562d4acd82ea] Running
	I0912 21:30:46.480728   13842 system_pods.go:61] "csi-hostpath-attacher-0" [a560e36c-e029-47d5-95b8-be2420d7df22] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0912 21:30:46.480735   13842 system_pods.go:61] "csi-hostpath-resizer-0" [0d9f13f4-8ae3-49fb-91d2-588c2a5103b8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0912 21:30:46.480742   13842 system_pods.go:61] "csi-hostpathplugin-kdtz6" [88fdf5ba-c8ac-455b-ae75-dbdecf76e19b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0912 21:30:46.480746   13842 system_pods.go:61] "etcd-addons-694635" [9a285fb7-743e-4e27-a017-524fb6ed02a4] Running
	I0912 21:30:46.480750   13842 system_pods.go:61] "kube-apiserver-addons-694635" [613a8945-2f24-42d9-b005-2ee3a61d6b63] Running
	I0912 21:30:46.480754   13842 system_pods.go:61] "kube-controller-manager-addons-694635" [a73aee0b-e5db-4bfc-a0d7-526c7a9515b3] Running
	I0912 21:30:46.480761   13842 system_pods.go:61] "kube-ingress-dns-minikube" [22649b3c-8428-4122-bf69-ab76864aaa7e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0912 21:30:46.480765   13842 system_pods.go:61] "kube-proxy-4hcfx" [17176328-abc9-4540-ac4c-c63083724812] Running
	I0912 21:30:46.480770   13842 system_pods.go:61] "kube-scheduler-addons-694635" [69be5c79-853a-4fe4-b43c-c332b6276913] Running
	I0912 21:30:46.480775   13842 system_pods.go:61] "metrics-server-84c5f94fbc-v4b7g" [4922d6c5-c4bb-4ec8-a21f-2ca9ba3c4691] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0912 21:30:46.480784   13842 system_pods.go:61] "nvidia-device-plugin-daemonset-n59wh" [2647ba3c-226b-4e7f-bbb9-442fbceab2f4] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0912 21:30:46.480794   13842 system_pods.go:61] "registry-66c9cd494c-7cpwk" [4b56665b-2953-4567-aa4d-49eb198ea1a0] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0912 21:30:46.480800   13842 system_pods.go:61] "registry-proxy-ckz5n" [317b8f58-7fa3-4666-be84-9fcc8574a1f8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0912 21:30:46.480808   13842 system_pods.go:61] "snapshot-controller-56fcc65765-bnf26" [35975eec-fc25-416d-b56e-107978e82e7d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0912 21:30:46.480814   13842 system_pods.go:61] "snapshot-controller-56fcc65765-hmbfj" [171ee08c-156a-49ae-8f7d-7009bc0ac41c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0912 21:30:46.480818   13842 system_pods.go:61] "storage-provisioner" [8f49f988-6d5b-4cb6-a9a4-f15fec6617ee] Running
	I0912 21:30:46.480823   13842 system_pods.go:61] "tiller-deploy-b48cc5f79-p44jv" [493da69b-8cdb-4ada-9f27-2c322311853b] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0912 21:30:46.480830   13842 system_pods.go:74] duration metric: took 174.075986ms to wait for pod list to return data ...
	I0912 21:30:46.480840   13842 default_sa.go:34] waiting for default service account to be created ...
	I0912 21:30:46.676516   13842 default_sa.go:45] found service account: "default"
	I0912 21:30:46.676544   13842 default_sa.go:55] duration metric: took 195.698229ms for default service account to be created ...
	I0912 21:30:46.676555   13842 system_pods.go:116] waiting for k8s-apps to be running ...
	I0912 21:30:46.808312   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:46.882566   13842 system_pods.go:86] 18 kube-system pods found
	I0912 21:30:46.882593   13842 system_pods.go:89] "coredns-7c65d6cfc9-rpsn9" [cb2ce549-2d5c-45ec-a46d-562d4acd82ea] Running
	I0912 21:30:46.882601   13842 system_pods.go:89] "csi-hostpath-attacher-0" [a560e36c-e029-47d5-95b8-be2420d7df22] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0912 21:30:46.882607   13842 system_pods.go:89] "csi-hostpath-resizer-0" [0d9f13f4-8ae3-49fb-91d2-588c2a5103b8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0912 21:30:46.882615   13842 system_pods.go:89] "csi-hostpathplugin-kdtz6" [88fdf5ba-c8ac-455b-ae75-dbdecf76e19b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0912 21:30:46.882619   13842 system_pods.go:89] "etcd-addons-694635" [9a285fb7-743e-4e27-a017-524fb6ed02a4] Running
	I0912 21:30:46.882624   13842 system_pods.go:89] "kube-apiserver-addons-694635" [613a8945-2f24-42d9-b005-2ee3a61d6b63] Running
	I0912 21:30:46.882627   13842 system_pods.go:89] "kube-controller-manager-addons-694635" [a73aee0b-e5db-4bfc-a0d7-526c7a9515b3] Running
	I0912 21:30:46.882632   13842 system_pods.go:89] "kube-ingress-dns-minikube" [22649b3c-8428-4122-bf69-ab76864aaa7e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0912 21:30:46.882638   13842 system_pods.go:89] "kube-proxy-4hcfx" [17176328-abc9-4540-ac4c-c63083724812] Running
	I0912 21:30:46.882642   13842 system_pods.go:89] "kube-scheduler-addons-694635" [69be5c79-853a-4fe4-b43c-c332b6276913] Running
	I0912 21:30:46.882647   13842 system_pods.go:89] "metrics-server-84c5f94fbc-v4b7g" [4922d6c5-c4bb-4ec8-a21f-2ca9ba3c4691] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0912 21:30:46.882653   13842 system_pods.go:89] "nvidia-device-plugin-daemonset-n59wh" [2647ba3c-226b-4e7f-bbb9-442fbceab2f4] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0912 21:30:46.882659   13842 system_pods.go:89] "registry-66c9cd494c-7cpwk" [4b56665b-2953-4567-aa4d-49eb198ea1a0] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0912 21:30:46.882665   13842 system_pods.go:89] "registry-proxy-ckz5n" [317b8f58-7fa3-4666-be84-9fcc8574a1f8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0912 21:30:46.882670   13842 system_pods.go:89] "snapshot-controller-56fcc65765-bnf26" [35975eec-fc25-416d-b56e-107978e82e7d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0912 21:30:46.882678   13842 system_pods.go:89] "snapshot-controller-56fcc65765-hmbfj" [171ee08c-156a-49ae-8f7d-7009bc0ac41c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0912 21:30:46.882683   13842 system_pods.go:89] "storage-provisioner" [8f49f988-6d5b-4cb6-a9a4-f15fec6617ee] Running
	I0912 21:30:46.882691   13842 system_pods.go:89] "tiller-deploy-b48cc5f79-p44jv" [493da69b-8cdb-4ada-9f27-2c322311853b] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0912 21:30:46.882697   13842 system_pods.go:126] duration metric: took 206.137533ms to wait for k8s-apps to be running ...
	I0912 21:30:46.882703   13842 system_svc.go:44] waiting for kubelet service to be running ....
	I0912 21:30:46.882743   13842 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 21:30:46.925829   13842 system_svc.go:56] duration metric: took 43.114101ms WaitForService to wait for kubelet
	I0912 21:30:46.925861   13842 kubeadm.go:582] duration metric: took 13.798055946s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 21:30:46.925881   13842 node_conditions.go:102] verifying NodePressure condition ...
	I0912 21:30:46.936949   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:30:46.954044   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:46.954652   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:47.077031   13842 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0912 21:30:47.077069   13842 node_conditions.go:123] node cpu capacity is 2
	I0912 21:30:47.077086   13842 node_conditions.go:105] duration metric: took 151.197367ms to run NodePressure ...
	I0912 21:30:47.077102   13842 start.go:241] waiting for startup goroutines ...
	I0912 21:30:47.306659   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:47.436922   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:30:47.454133   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:47.455284   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:47.807878   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:47.936979   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:30:47.954401   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:47.955301   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:48.308026   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:48.436963   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:30:48.456522   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:48.457189   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:48.807641   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:49.086497   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:49.086504   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:30:49.087121   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:49.307899   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:49.436969   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:30:49.452710   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:49.455147   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:49.808000   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:49.940753   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:30:49.971990   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:49.972275   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:50.306737   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:50.436059   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:30:50.452909   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:50.455902   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:50.807091   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:50.935993   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:30:50.953464   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:50.954524   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:51.308257   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:51.436479   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:30:51.452352   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:51.453795   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:51.807739   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:51.936798   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:30:51.953151   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:51.955301   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:52.307184   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:52.436742   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:30:52.452578   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:52.454290   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:52.808168   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:52.936339   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:30:52.953730   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:52.954765   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:53.307714   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:53.438307   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:30:53.454049   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:53.454999   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:53.809141   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:53.937475   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:30:53.953075   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:53.956110   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:54.309453   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:54.437498   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:30:54.452997   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:54.454232   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:54.808290   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:54.937121   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:30:54.953554   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:54.954933   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:55.308403   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:55.436189   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:30:55.453910   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:55.455288   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:55.808688   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:55.936880   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:30:55.953026   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:55.954088   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:56.307678   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:56.438816   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:30:56.453756   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:56.454145   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:56.806670   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:56.938510   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:30:56.953471   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:56.956690   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:57.307668   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:57.436695   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:30:57.456044   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:57.456392   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:57.808216   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:57.936313   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:30:57.953978   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:57.954372   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:58.307798   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:58.437125   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:30:58.454751   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:58.457211   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:58.807968   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:58.937010   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:30:58.953141   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:58.959276   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:59.308291   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:59.436266   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:30:59.453642   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:59.455378   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:59.808750   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:59.937681   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:30:59.955468   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:30:59.955848   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:00.308635   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:00.436913   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:00.453130   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:00.454282   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:00.807146   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:00.936739   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:00.953015   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:00.954765   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:01.306985   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:01.436195   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:01.453123   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:01.454341   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:01.807013   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:01.936537   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:01.952370   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:01.954597   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:02.307157   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:02.436510   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:02.452446   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:02.454782   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:02.807320   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:02.983700   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:02.983759   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:02.984366   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:03.307411   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:03.436395   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:03.453271   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:03.454447   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:03.807454   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:03.936777   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:03.952668   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:03.955100   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:04.307745   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:04.436831   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:04.452778   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:04.455238   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:04.807569   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:04.936849   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:04.953099   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:04.955331   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:05.307263   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:05.436369   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:05.455274   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:05.455523   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:05.807911   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:05.936890   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:05.953011   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:05.954859   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:06.308088   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:06.436094   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:06.453015   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:06.454185   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:06.807536   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:06.937265   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:07.294221   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:07.294459   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:07.394402   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:07.436598   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:07.452707   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:07.454367   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:07.807204   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:07.936209   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:07.953204   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:07.954372   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:08.307069   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:08.436533   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:08.452844   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:08.456371   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:08.807416   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:08.936870   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:08.952721   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:08.954434   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:09.307128   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:09.436768   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:09.452696   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:09.454244   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:09.806900   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:09.936202   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:09.952947   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:09.954077   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:10.310715   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:10.436442   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:10.453775   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:10.454308   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:10.807926   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:10.936446   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:10.952829   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:10.954777   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:11.307638   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:11.437017   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:11.455266   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:11.455579   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:11.808062   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:11.936788   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:11.953110   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:11.955323   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:12.309018   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:12.437559   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:12.452853   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:12.455591   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:12.807821   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:12.936153   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:12.952946   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:12.955049   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:13.308125   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:13.436685   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:13.453405   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:13.454409   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:13.808343   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:13.936831   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:13.953008   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:13.955615   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:14.307410   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:14.439286   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:14.460392   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:14.461660   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:14.808029   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:14.937360   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:14.953551   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:14.955229   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:15.308853   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:15.802413   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:15.802546   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:15.802929   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:15.806810   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:15.935781   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:15.953409   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:15.954622   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:16.307574   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:16.436906   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:16.454204   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:16.454314   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:16.807151   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:16.936285   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:16.954876   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:16.954961   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:17.308273   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:17.436690   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:17.452851   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:17.454581   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:17.808378   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:17.937233   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:17.953506   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:17.954633   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:18.307978   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:18.438381   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:18.452394   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:18.454983   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:18.808450   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:18.937057   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:18.954873   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:18.954917   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:19.307860   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:19.443523   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:19.451685   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:19.454121   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:19.808677   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:19.942749   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:19.954209   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:19.955400   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:20.308312   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:20.436764   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:20.453650   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:20.455934   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:20.809185   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:20.937034   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:20.953356   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:20.954469   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:21.306918   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:21.436565   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:21.452318   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:21.454075   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:21.807969   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:21.936459   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:21.952911   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:21.954462   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:22.308342   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:22.436293   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:22.454954   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:22.455186   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:22.807592   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:23.028341   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:23.028457   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:23.028520   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:23.307479   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:23.436556   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:23.453994   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:23.454062   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:23.807759   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:23.936678   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:23.953231   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:23.954392   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:24.307358   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:24.436892   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:24.453479   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:24.455733   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:24.807681   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:24.936504   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:24.952491   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:24.955015   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:25.307494   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:25.437838   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:25.454660   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:25.455196   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:25.806376   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:26.169088   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:26.169141   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:26.169576   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:26.308047   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:26.438798   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:26.454085   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:26.454874   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:26.808511   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:26.936179   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:26.953217   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:26.955020   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:27.307867   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:27.436967   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:27.453064   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:31:27.454221   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:27.808241   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:27.936433   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:27.954010   13842 kapi.go:107] duration metric: took 46.004930815s to wait for kubernetes.io/minikube-addons=registry ...
	I0912 21:31:27.954819   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:28.308179   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:28.436505   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:28.455109   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:28.807480   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:28.936668   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:28.954245   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:29.306669   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:29.436989   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:29.455085   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:29.817843   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:29.937454   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:29.956102   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:30.308652   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:30.437396   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:30.454614   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:30.807604   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:30.936840   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:30.954423   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:31.308447   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:31.437404   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:31.454276   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:31.807324   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:31.936952   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:31.954363   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:32.306415   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:32.437242   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:32.454652   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:32.807329   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:32.936869   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:32.954340   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:33.307184   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:33.436873   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:33.454653   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:33.810231   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:33.937220   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:33.954601   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:34.307392   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:34.958058   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:34.958295   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:34.958411   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:34.961259   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:34.961741   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:35.307464   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:35.437024   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:35.455092   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:35.808111   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:35.937085   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:35.955030   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:36.307832   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:36.438403   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:36.457831   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:36.808182   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:36.939647   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:36.955818   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:37.307778   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:37.436832   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:37.454110   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:37.807859   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:37.936514   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:37.955016   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:38.307838   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:38.436456   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:38.454686   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:38.808567   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:38.941164   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:38.956269   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:39.307122   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:39.437203   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:39.454703   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:40.078488   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:40.079334   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:40.079654   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:40.307212   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:40.436878   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:40.538252   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:40.807485   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:40.938491   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:40.955935   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:41.308214   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:41.436295   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:41.454533   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:41.807705   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:41.943420   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:41.954960   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:42.308025   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:42.439095   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:42.454338   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:42.807582   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:42.937122   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:42.955099   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:43.406903   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:43.436443   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:43.455666   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:43.807519   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:43.937682   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:43.954323   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:44.306738   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:44.436834   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:44.454320   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:44.815595   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:44.938314   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:44.954595   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:45.308036   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:45.437110   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:45.455327   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:45.807991   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:45.962606   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:45.967707   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:46.307128   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:46.436949   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:46.455549   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:46.807608   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:46.937589   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:46.958969   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:47.307738   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:47.436911   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:47.454432   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:47.811530   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:47.936953   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:47.955680   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:48.308202   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:48.437342   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:48.456109   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:48.815410   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:48.936379   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:48.955189   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:49.307918   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:49.436235   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:49.454487   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:49.812324   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:49.936703   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:49.954166   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:50.308053   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:50.437110   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:50.455802   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:50.808329   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:50.936571   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:50.955407   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:51.307733   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:51.438936   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:51.474999   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:51.807267   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:51.937095   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:51.955402   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:52.307348   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:52.436276   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:52.455029   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:52.807657   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:52.937207   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:52.954953   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:53.307507   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:53.437088   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:53.454370   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:53.807469   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:53.937040   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:53.954745   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:54.307579   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:54.437891   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:54.757207   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:54.809668   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:54.937739   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:54.958776   13842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:31:55.307785   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:55.436060   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:55.454674   13842 kapi.go:107] duration metric: took 1m13.504323658s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0912 21:31:55.807214   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:55.936450   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:56.308210   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:56.528172   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:56.807634   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:56.936775   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:57.307995   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:57.436434   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:57.817862   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:57.936850   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:58.307245   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:58.436887   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:58.808853   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:58.936774   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:59.307234   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:59.436533   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:59.808299   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:31:59.935885   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:00.307456   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:00.437156   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:00.964683   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:00.965821   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:01.312456   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:01.436422   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:01.808885   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:01.937181   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:32:02.318607   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:02.437876   13842 kapi.go:107] duration metric: took 1m18.004909184s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0912 21:32:02.439347   13842 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-694635 cluster.
	I0912 21:32:02.440699   13842 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0912 21:32:02.441821   13842 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
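	For illustration only (not part of the captured log): one way to apply the `gcp-auth-skip-secret` label mentioned above is at pod creation time. A minimal sketch, assuming the label value "true" is what the webhook checks for (the log only names the key) and using a placeholder pod name and image:
	
	    # Hypothetical example: create a pod that opts out of GCP credential mounting
	    kubectl --context addons-694635 run no-gcp-auth-demo \
	      --image=busybox \
	      --labels="gcp-auth-skip-secret=true" \
	      --restart=Never -- sleep 3600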
	I0912 21:32:02.807994   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:03.308094   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:03.808683   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:04.307312   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:04.808877   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:05.308455   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:05.808430   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:06.316091   13842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:32:06.808681   13842 kapi.go:107] duration metric: took 1m24.005897654s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0912 21:32:06.810775   13842 out.go:177] * Enabled addons: nvidia-device-plugin, default-storageclass, ingress-dns, storage-provisioner, cloud-spanner, helm-tiller, storage-provisioner-rancher, metrics-server, inspektor-gadget, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0912 21:32:06.812317   13842 addons.go:510] duration metric: took 1m33.684465733s for enable addons: enabled=[nvidia-device-plugin default-storageclass ingress-dns storage-provisioner cloud-spanner helm-tiller storage-provisioner-rancher metrics-server inspektor-gadget yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0912 21:32:06.812359   13842 start.go:246] waiting for cluster config update ...
	I0912 21:32:06.812380   13842 start.go:255] writing updated cluster config ...
	I0912 21:32:06.812657   13842 ssh_runner.go:195] Run: rm -f paused
	I0912 21:32:06.863917   13842 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0912 21:32:06.865782   13842 out.go:177] * Done! kubectl is now configured to use "addons-694635" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 12 21:46:49 addons-694635 crio[662]: time="2024-09-12 21:46:49.131376520Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726177609131339660,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:580233,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6a3ae537-e148-4bd6-9694-602febfd53fe name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 21:46:49 addons-694635 crio[662]: time="2024-09-12 21:46:49.132823918Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9e779aa7-536f-4912-9c08-d5219a09c411 name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 21:46:49 addons-694635 crio[662]: time="2024-09-12 21:46:49.132905704Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9e779aa7-536f-4912-9c08-d5219a09c411 name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 21:46:49 addons-694635 crio[662]: time="2024-09-12 21:46:49.133317246Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:68df5018ee9b9c8b040980f7b13e5f8cd660087c416d49062434ac1567d9ff1b,PodSandboxId:1ae8f2e321f0f9eadaba61d67d63cc3cb8c715a45a4ebedc12f1b6516e36b891,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1726177414971816754,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-8wzs4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c11e9909-be91-42a2-973f-3ec56c134bed,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15f49cc7f3e63d860a0b154ce1d0a027f105c70027b67a50ab5d73a13191309a,PodSandboxId:9d3e688e943f8b1412681f72bcbb2d49d4d9a3e4a04b3cac9a3ab31dca0efc68,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726177277424664218,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6d172e45-acae-4863-b4f1-7cf6c870a3d8,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:224662c30f37670f4f61f36221a15bb4d6847d38fcb6a9be3d38b6b08f1d6765,PodSandboxId:e71b5d7408e655bb8c96a5d654726777d547179b47272efaaa970adf10a2ee35,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726176721533597537,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-px7q4,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: ec2ec8bf-cb0a-47eb-b117-c3e51f68cafc,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01c0d8468e1a5daad3c86161040af5d9affffdd5c20705a3f71d2903c6243d96,PodSandboxId:f1b6fca0a1b4a528f24874cf3deb296ed28cf61228310af6f8b71a38b1bc2f1c,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726176691385084595,Labels:map[string]string{io.kubernetes.container.name: metr
ics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-v4b7g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4922d6c5-c4bb-4ec8-a21f-2ca9ba3c4691,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c63491974a86dd1007fc9980bfe0086d0dc3bf4ff8c0c3f310a5cb87fbb4ac38,PodSandboxId:bb6d26e8124017f968cdbd7d1e9d6dc8f51c932a1d588df39950c0a71e8dea66,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNN
ING,CreatedAt:1726176640283421177,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f49f988-6d5b-4cb6-a9a4-f15fec6617ee,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a9fbfbdc25792944bc7f0738f91a9c4ca524f80d4c4ef8065875105ad68d91b,PodSandboxId:52798c65c361b446fc2229d3223995b78422a1931e70180eea1ef814625c958e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726176637
213238542,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rpsn9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb2ce549-2d5c-45ec-a46d-562d4acd82ea,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa4b1b8007598386d5052a12803d3a47809e7be17f0613791526a0fb975078f1,PodSandboxId:00dce38c65e40888f99c4531feab924cf6ecb4c5171d13070c643118572341c8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf0
6a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726176634905138174,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4hcfx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17176328-abc9-4540-ac4c-c63083724812,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:daff578fb9bc43cd709b1e387f2aa19b6c69701a055733a1e7c09f5d3c4ae546,PodSandboxId:af67c2341731309439d1fb9ac03831771a23928c83b1b1bc5a445be50d7b8c93,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[str
ing]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726176623547228673,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-694635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b876c14c875d4b53e5c61f3bdb6b61f2,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04006273204a6b5b2c2c50eb039597ab1cad77b9f65e3cdcf9ad2cd2bff6a600,PodSandboxId:8f5fcc20744c5a49bd5023165e3ffeed38dc69330f0025dc1df0829da8a54879,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},User
SpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726176623493601030,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-694635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d6a101dce97ee820fc22e8980fa1bd2,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ad45dbfb61b732019b2446eb37b838159475578e53421516d318b1d17d0d863,PodSandboxId:e1566071cac6e7c7300f541dd70faf52b58c8b1f654f49885e6ff61047017313,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHand
ler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726176623462786884,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-694635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca0f4581a8ddd13059907f5e64c9ddcf,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c2e331dbfeadd5401ab6aa1159f9097e7db3bf727f83963a786e4a149b7c5ba,PodSandboxId:8ab56f691eeeaa15cc50d49aeca3a855097da9e407580c18dde97d5293281963,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3
d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726176623451400362,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-694635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9eeb62b2ef7f8ac332344239844358b7,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9e779aa7-536f-4912-9c08-d5219a09c411 name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 21:46:49 addons-694635 crio[662]: time="2024-09-12 21:46:49.171783992Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=35eb1b13-f727-4309-aad4-3edaacb22e44 name=/runtime.v1.RuntimeService/Version
	Sep 12 21:46:49 addons-694635 crio[662]: time="2024-09-12 21:46:49.171871009Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=35eb1b13-f727-4309-aad4-3edaacb22e44 name=/runtime.v1.RuntimeService/Version
	Sep 12 21:46:49 addons-694635 crio[662]: time="2024-09-12 21:46:49.173189507Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=13428e88-5548-45f3-b63b-65c05b833bb8 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 21:46:49 addons-694635 crio[662]: time="2024-09-12 21:46:49.174430497Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726177609174400417,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:580233,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=13428e88-5548-45f3-b63b-65c05b833bb8 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 21:46:49 addons-694635 crio[662]: time="2024-09-12 21:46:49.175044773Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9fc6ea67-3328-49b4-9bc3-604d8304843b name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 21:46:49 addons-694635 crio[662]: time="2024-09-12 21:46:49.175123749Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9fc6ea67-3328-49b4-9bc3-604d8304843b name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 21:46:49 addons-694635 crio[662]: time="2024-09-12 21:46:49.175385976Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:68df5018ee9b9c8b040980f7b13e5f8cd660087c416d49062434ac1567d9ff1b,PodSandboxId:1ae8f2e321f0f9eadaba61d67d63cc3cb8c715a45a4ebedc12f1b6516e36b891,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1726177414971816754,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-8wzs4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c11e9909-be91-42a2-973f-3ec56c134bed,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15f49cc7f3e63d860a0b154ce1d0a027f105c70027b67a50ab5d73a13191309a,PodSandboxId:9d3e688e943f8b1412681f72bcbb2d49d4d9a3e4a04b3cac9a3ab31dca0efc68,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726177277424664218,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6d172e45-acae-4863-b4f1-7cf6c870a3d8,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:224662c30f37670f4f61f36221a15bb4d6847d38fcb6a9be3d38b6b08f1d6765,PodSandboxId:e71b5d7408e655bb8c96a5d654726777d547179b47272efaaa970adf10a2ee35,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726176721533597537,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-px7q4,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: ec2ec8bf-cb0a-47eb-b117-c3e51f68cafc,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01c0d8468e1a5daad3c86161040af5d9affffdd5c20705a3f71d2903c6243d96,PodSandboxId:f1b6fca0a1b4a528f24874cf3deb296ed28cf61228310af6f8b71a38b1bc2f1c,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726176691385084595,Labels:map[string]string{io.kubernetes.container.name: metr
ics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-v4b7g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4922d6c5-c4bb-4ec8-a21f-2ca9ba3c4691,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c63491974a86dd1007fc9980bfe0086d0dc3bf4ff8c0c3f310a5cb87fbb4ac38,PodSandboxId:bb6d26e8124017f968cdbd7d1e9d6dc8f51c932a1d588df39950c0a71e8dea66,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNN
ING,CreatedAt:1726176640283421177,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f49f988-6d5b-4cb6-a9a4-f15fec6617ee,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a9fbfbdc25792944bc7f0738f91a9c4ca524f80d4c4ef8065875105ad68d91b,PodSandboxId:52798c65c361b446fc2229d3223995b78422a1931e70180eea1ef814625c958e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726176637
213238542,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rpsn9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb2ce549-2d5c-45ec-a46d-562d4acd82ea,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa4b1b8007598386d5052a12803d3a47809e7be17f0613791526a0fb975078f1,PodSandboxId:00dce38c65e40888f99c4531feab924cf6ecb4c5171d13070c643118572341c8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf0
6a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726176634905138174,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4hcfx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17176328-abc9-4540-ac4c-c63083724812,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:daff578fb9bc43cd709b1e387f2aa19b6c69701a055733a1e7c09f5d3c4ae546,PodSandboxId:af67c2341731309439d1fb9ac03831771a23928c83b1b1bc5a445be50d7b8c93,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[str
ing]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726176623547228673,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-694635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b876c14c875d4b53e5c61f3bdb6b61f2,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04006273204a6b5b2c2c50eb039597ab1cad77b9f65e3cdcf9ad2cd2bff6a600,PodSandboxId:8f5fcc20744c5a49bd5023165e3ffeed38dc69330f0025dc1df0829da8a54879,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},User
SpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726176623493601030,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-694635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d6a101dce97ee820fc22e8980fa1bd2,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ad45dbfb61b732019b2446eb37b838159475578e53421516d318b1d17d0d863,PodSandboxId:e1566071cac6e7c7300f541dd70faf52b58c8b1f654f49885e6ff61047017313,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHand
ler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726176623462786884,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-694635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca0f4581a8ddd13059907f5e64c9ddcf,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c2e331dbfeadd5401ab6aa1159f9097e7db3bf727f83963a786e4a149b7c5ba,PodSandboxId:8ab56f691eeeaa15cc50d49aeca3a855097da9e407580c18dde97d5293281963,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3
d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726176623451400362,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-694635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9eeb62b2ef7f8ac332344239844358b7,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9fc6ea67-3328-49b4-9bc3-604d8304843b name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 21:46:49 addons-694635 crio[662]: time="2024-09-12 21:46:49.210179494Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f395f723-82de-443d-8a86-f4b546a452fa name=/runtime.v1.RuntimeService/Version
	Sep 12 21:46:49 addons-694635 crio[662]: time="2024-09-12 21:46:49.210275703Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f395f723-82de-443d-8a86-f4b546a452fa name=/runtime.v1.RuntimeService/Version
	Sep 12 21:46:49 addons-694635 crio[662]: time="2024-09-12 21:46:49.211425754Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1eb40764-34e7-4435-9191-5137799a4ecf name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 21:46:49 addons-694635 crio[662]: time="2024-09-12 21:46:49.212627165Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726177609212602535,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:580233,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1eb40764-34e7-4435-9191-5137799a4ecf name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 21:46:49 addons-694635 crio[662]: time="2024-09-12 21:46:49.213114670Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=09cacb7c-4bc1-4998-b169-1df27951daf0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 21:46:49 addons-694635 crio[662]: time="2024-09-12 21:46:49.213176628Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=09cacb7c-4bc1-4998-b169-1df27951daf0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 21:46:49 addons-694635 crio[662]: time="2024-09-12 21:46:49.213444945Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:68df5018ee9b9c8b040980f7b13e5f8cd660087c416d49062434ac1567d9ff1b,PodSandboxId:1ae8f2e321f0f9eadaba61d67d63cc3cb8c715a45a4ebedc12f1b6516e36b891,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1726177414971816754,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-8wzs4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c11e9909-be91-42a2-973f-3ec56c134bed,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15f49cc7f3e63d860a0b154ce1d0a027f105c70027b67a50ab5d73a13191309a,PodSandboxId:9d3e688e943f8b1412681f72bcbb2d49d4d9a3e4a04b3cac9a3ab31dca0efc68,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726177277424664218,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6d172e45-acae-4863-b4f1-7cf6c870a3d8,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:224662c30f37670f4f61f36221a15bb4d6847d38fcb6a9be3d38b6b08f1d6765,PodSandboxId:e71b5d7408e655bb8c96a5d654726777d547179b47272efaaa970adf10a2ee35,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726176721533597537,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-px7q4,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: ec2ec8bf-cb0a-47eb-b117-c3e51f68cafc,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01c0d8468e1a5daad3c86161040af5d9affffdd5c20705a3f71d2903c6243d96,PodSandboxId:f1b6fca0a1b4a528f24874cf3deb296ed28cf61228310af6f8b71a38b1bc2f1c,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726176691385084595,Labels:map[string]string{io.kubernetes.container.name: metr
ics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-v4b7g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4922d6c5-c4bb-4ec8-a21f-2ca9ba3c4691,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c63491974a86dd1007fc9980bfe0086d0dc3bf4ff8c0c3f310a5cb87fbb4ac38,PodSandboxId:bb6d26e8124017f968cdbd7d1e9d6dc8f51c932a1d588df39950c0a71e8dea66,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNN
ING,CreatedAt:1726176640283421177,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f49f988-6d5b-4cb6-a9a4-f15fec6617ee,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a9fbfbdc25792944bc7f0738f91a9c4ca524f80d4c4ef8065875105ad68d91b,PodSandboxId:52798c65c361b446fc2229d3223995b78422a1931e70180eea1ef814625c958e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726176637
213238542,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rpsn9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb2ce549-2d5c-45ec-a46d-562d4acd82ea,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa4b1b8007598386d5052a12803d3a47809e7be17f0613791526a0fb975078f1,PodSandboxId:00dce38c65e40888f99c4531feab924cf6ecb4c5171d13070c643118572341c8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf0
6a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726176634905138174,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4hcfx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17176328-abc9-4540-ac4c-c63083724812,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:daff578fb9bc43cd709b1e387f2aa19b6c69701a055733a1e7c09f5d3c4ae546,PodSandboxId:af67c2341731309439d1fb9ac03831771a23928c83b1b1bc5a445be50d7b8c93,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[str
ing]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726176623547228673,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-694635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b876c14c875d4b53e5c61f3bdb6b61f2,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04006273204a6b5b2c2c50eb039597ab1cad77b9f65e3cdcf9ad2cd2bff6a600,PodSandboxId:8f5fcc20744c5a49bd5023165e3ffeed38dc69330f0025dc1df0829da8a54879,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},User
SpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726176623493601030,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-694635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d6a101dce97ee820fc22e8980fa1bd2,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ad45dbfb61b732019b2446eb37b838159475578e53421516d318b1d17d0d863,PodSandboxId:e1566071cac6e7c7300f541dd70faf52b58c8b1f654f49885e6ff61047017313,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHand
ler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726176623462786884,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-694635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca0f4581a8ddd13059907f5e64c9ddcf,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c2e331dbfeadd5401ab6aa1159f9097e7db3bf727f83963a786e4a149b7c5ba,PodSandboxId:8ab56f691eeeaa15cc50d49aeca3a855097da9e407580c18dde97d5293281963,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3
d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726176623451400362,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-694635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9eeb62b2ef7f8ac332344239844358b7,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=09cacb7c-4bc1-4998-b169-1df27951daf0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 21:46:49 addons-694635 crio[662]: time="2024-09-12 21:46:49.249087092Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ad7903d7-01dd-4753-9b3f-96a4ce1425b2 name=/runtime.v1.RuntimeService/Version
	Sep 12 21:46:49 addons-694635 crio[662]: time="2024-09-12 21:46:49.249172200Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ad7903d7-01dd-4753-9b3f-96a4ce1425b2 name=/runtime.v1.RuntimeService/Version
	Sep 12 21:46:49 addons-694635 crio[662]: time="2024-09-12 21:46:49.250281574Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a1f20eff-39e6-4104-9d5b-33b79d235660 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 21:46:49 addons-694635 crio[662]: time="2024-09-12 21:46:49.252174958Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726177609252149817,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:580233,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a1f20eff-39e6-4104-9d5b-33b79d235660 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 21:46:49 addons-694635 crio[662]: time="2024-09-12 21:46:49.252753637Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=912b45ac-3bef-418e-8a2c-fc98e1284f5d name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 21:46:49 addons-694635 crio[662]: time="2024-09-12 21:46:49.252806795Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=912b45ac-3bef-418e-8a2c-fc98e1284f5d name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 21:46:49 addons-694635 crio[662]: time="2024-09-12 21:46:49.253057150Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:68df5018ee9b9c8b040980f7b13e5f8cd660087c416d49062434ac1567d9ff1b,PodSandboxId:1ae8f2e321f0f9eadaba61d67d63cc3cb8c715a45a4ebedc12f1b6516e36b891,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1726177414971816754,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-8wzs4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c11e9909-be91-42a2-973f-3ec56c134bed,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15f49cc7f3e63d860a0b154ce1d0a027f105c70027b67a50ab5d73a13191309a,PodSandboxId:9d3e688e943f8b1412681f72bcbb2d49d4d9a3e4a04b3cac9a3ab31dca0efc68,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726177277424664218,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6d172e45-acae-4863-b4f1-7cf6c870a3d8,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:224662c30f37670f4f61f36221a15bb4d6847d38fcb6a9be3d38b6b08f1d6765,PodSandboxId:e71b5d7408e655bb8c96a5d654726777d547179b47272efaaa970adf10a2ee35,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726176721533597537,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-px7q4,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: ec2ec8bf-cb0a-47eb-b117-c3e51f68cafc,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01c0d8468e1a5daad3c86161040af5d9affffdd5c20705a3f71d2903c6243d96,PodSandboxId:f1b6fca0a1b4a528f24874cf3deb296ed28cf61228310af6f8b71a38b1bc2f1c,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726176691385084595,Labels:map[string]string{io.kubernetes.container.name: metr
ics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-v4b7g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4922d6c5-c4bb-4ec8-a21f-2ca9ba3c4691,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c63491974a86dd1007fc9980bfe0086d0dc3bf4ff8c0c3f310a5cb87fbb4ac38,PodSandboxId:bb6d26e8124017f968cdbd7d1e9d6dc8f51c932a1d588df39950c0a71e8dea66,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNN
ING,CreatedAt:1726176640283421177,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f49f988-6d5b-4cb6-a9a4-f15fec6617ee,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a9fbfbdc25792944bc7f0738f91a9c4ca524f80d4c4ef8065875105ad68d91b,PodSandboxId:52798c65c361b446fc2229d3223995b78422a1931e70180eea1ef814625c958e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726176637
213238542,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rpsn9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb2ce549-2d5c-45ec-a46d-562d4acd82ea,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa4b1b8007598386d5052a12803d3a47809e7be17f0613791526a0fb975078f1,PodSandboxId:00dce38c65e40888f99c4531feab924cf6ecb4c5171d13070c643118572341c8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf0
6a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726176634905138174,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4hcfx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17176328-abc9-4540-ac4c-c63083724812,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:daff578fb9bc43cd709b1e387f2aa19b6c69701a055733a1e7c09f5d3c4ae546,PodSandboxId:af67c2341731309439d1fb9ac03831771a23928c83b1b1bc5a445be50d7b8c93,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[str
ing]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726176623547228673,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-694635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b876c14c875d4b53e5c61f3bdb6b61f2,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04006273204a6b5b2c2c50eb039597ab1cad77b9f65e3cdcf9ad2cd2bff6a600,PodSandboxId:8f5fcc20744c5a49bd5023165e3ffeed38dc69330f0025dc1df0829da8a54879,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},User
SpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726176623493601030,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-694635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d6a101dce97ee820fc22e8980fa1bd2,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ad45dbfb61b732019b2446eb37b838159475578e53421516d318b1d17d0d863,PodSandboxId:e1566071cac6e7c7300f541dd70faf52b58c8b1f654f49885e6ff61047017313,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHand
ler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726176623462786884,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-694635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca0f4581a8ddd13059907f5e64c9ddcf,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c2e331dbfeadd5401ab6aa1159f9097e7db3bf727f83963a786e4a149b7c5ba,PodSandboxId:8ab56f691eeeaa15cc50d49aeca3a855097da9e407580c18dde97d5293281963,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3
d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726176623451400362,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-694635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9eeb62b2ef7f8ac332344239844358b7,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=912b45ac-3bef-418e-8a2c-fc98e1284f5d name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	68df5018ee9b9       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   3 minutes ago       Running             hello-world-app           0                   1ae8f2e321f0f       hello-world-app-55bf9c44b4-8wzs4
	15f49cc7f3e63       docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56                         5 minutes ago       Running             nginx                     0                   9d3e688e943f8       nginx
	224662c30f376       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b            14 minutes ago      Running             gcp-auth                  0                   e71b5d7408e65       gcp-auth-89d5ffd79-px7q4
	01c0d8468e1a5       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a   15 minutes ago      Running             metrics-server            0                   f1b6fca0a1b4a       metrics-server-84c5f94fbc-v4b7g
	c63491974a86d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        16 minutes ago      Running             storage-provisioner       0                   bb6d26e812401       storage-provisioner
	1a9fbfbdc2579       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                        16 minutes ago      Running             coredns                   0                   52798c65c361b       coredns-7c65d6cfc9-rpsn9
	aa4b1b8007598       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                        16 minutes ago      Running             kube-proxy                0                   00dce38c65e40       kube-proxy-4hcfx
	daff578fb9bc4       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                        16 minutes ago      Running             kube-scheduler            0                   af67c23417313       kube-scheduler-addons-694635
	04006273204a6       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                        16 minutes ago      Running             kube-apiserver            0                   8f5fcc20744c5       kube-apiserver-addons-694635
	3ad45dbfb61b7       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                        16 minutes ago      Running             etcd                      0                   e1566071cac6e       etcd-addons-694635
	5c2e331dbfead       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                        16 minutes ago      Running             kube-controller-manager   0                   8ab56f691eeea       kube-controller-manager-addons-694635
	
	
	==> coredns [1a9fbfbdc25792944bc7f0738f91a9c4ca524f80d4c4ef8065875105ad68d91b] <==
	[INFO] 127.0.0.1:55335 - 14088 "HINFO IN 1593280896951240425.6479746786649468559. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009751103s
	[INFO] 10.244.0.8:55681 - 3740 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000376198s
	[INFO] 10.244.0.8:55681 - 64158 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000228403s
	[INFO] 10.244.0.8:37781 - 47777 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000252945s
	[INFO] 10.244.0.8:37781 - 7076 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000147556s
	[INFO] 10.244.0.8:41819 - 26826 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000226016s
	[INFO] 10.244.0.8:41819 - 4808 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00010299s
	[INFO] 10.244.0.8:36322 - 25419 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000092438s
	[INFO] 10.244.0.8:36322 - 47689 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000194058s
	[INFO] 10.244.0.8:52027 - 25674 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000158473s
	[INFO] 10.244.0.8:52027 - 28495 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000211396s
	[INFO] 10.244.0.8:60142 - 5226 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000072599s
	[INFO] 10.244.0.8:60142 - 8039 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000122511s
	[INFO] 10.244.0.8:50355 - 29794 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000050766s
	[INFO] 10.244.0.8:50355 - 16480 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000152532s
	[INFO] 10.244.0.8:38422 - 32454 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000054761s
	[INFO] 10.244.0.8:38422 - 36548 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000127267s
	[INFO] 10.244.0.22:60865 - 4263 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000466894s
	[INFO] 10.244.0.22:39371 - 54519 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000098861s
	[INFO] 10.244.0.22:41806 - 53233 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000138737s
	[INFO] 10.244.0.22:36774 - 22315 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000063899s
	[INFO] 10.244.0.22:57836 - 41268 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000128874s
	[INFO] 10.244.0.22:60541 - 59176 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000161626s
	[INFO] 10.244.0.22:53240 - 37260 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.004441249s
	[INFO] 10.244.0.22:51419 - 44769 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.004780269s
	
	
	==> describe nodes <==
	Name:               addons-694635
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-694635
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f6bc674a17941874d4e5b792b09c1791d30622b8
	                    minikube.k8s.io/name=addons-694635
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_12T21_30_29_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-694635
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 12 Sep 2024 21:30:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-694635
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 12 Sep 2024 21:46:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 12 Sep 2024 21:44:04 +0000   Thu, 12 Sep 2024 21:30:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 12 Sep 2024 21:44:04 +0000   Thu, 12 Sep 2024 21:30:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 12 Sep 2024 21:44:04 +0000   Thu, 12 Sep 2024 21:30:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 12 Sep 2024 21:44:04 +0000   Thu, 12 Sep 2024 21:30:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.67
	  Hostname:    addons-694635
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 13b099cf91f8442286dd9014ad34a5eb
	  System UUID:                13b099cf-91f8-4422-86dd-9014ad34a5eb
	  Boot ID:                    e094f473-e531-4253-a8aa-4f2a067e9156
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  default                     hello-world-app-55bf9c44b4-8wzs4         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m17s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m36s
	  gcp-auth                    gcp-auth-89d5ffd79-px7q4                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 coredns-7c65d6cfc9-rpsn9                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     16m
	  kube-system                 etcd-addons-694635                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         16m
	  kube-system                 kube-apiserver-addons-694635             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-addons-694635    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-4hcfx                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-addons-694635             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 16m   kube-proxy       
	  Normal  Starting                 16m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  16m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  16m   kubelet          Node addons-694635 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m   kubelet          Node addons-694635 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m   kubelet          Node addons-694635 status is now: NodeHasSufficientPID
	  Normal  NodeReady                16m   kubelet          Node addons-694635 status is now: NodeReady
	  Normal  RegisteredNode           16m   node-controller  Node addons-694635 event: Registered Node addons-694635 in Controller
	
	
	==> dmesg <==
	[Sep12 21:31] kauditd_printk_skb: 6 callbacks suppressed
	[  +7.489065] kauditd_printk_skb: 2 callbacks suppressed
	[  +8.929989] kauditd_printk_skb: 27 callbacks suppressed
	[ +10.095844] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.073125] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.683105] kauditd_printk_skb: 81 callbacks suppressed
	[  +7.372236] kauditd_printk_skb: 32 callbacks suppressed
	[Sep12 21:32] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.856647] kauditd_printk_skb: 16 callbacks suppressed
	[ +29.701828] kauditd_printk_skb: 40 callbacks suppressed
	[Sep12 21:33] kauditd_printk_skb: 30 callbacks suppressed
	[Sep12 21:35] kauditd_printk_skb: 28 callbacks suppressed
	[Sep12 21:37] kauditd_printk_skb: 28 callbacks suppressed
	[Sep12 21:40] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.238101] kauditd_printk_skb: 37 callbacks suppressed
	[  +5.551734] kauditd_printk_skb: 9 callbacks suppressed
	[  +5.393117] kauditd_printk_skb: 30 callbacks suppressed
	[  +5.485586] kauditd_printk_skb: 31 callbacks suppressed
	[  +5.071553] kauditd_printk_skb: 25 callbacks suppressed
	[ +10.586398] kauditd_printk_skb: 11 callbacks suppressed
	[  +8.540652] kauditd_printk_skb: 43 callbacks suppressed
	[Sep12 21:41] kauditd_printk_skb: 26 callbacks suppressed
	[ +14.241626] kauditd_printk_skb: 8 callbacks suppressed
	[  +5.193519] kauditd_printk_skb: 21 callbacks suppressed
	[Sep12 21:43] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [3ad45dbfb61b732019b2446eb37b838159475578e53421516d318b1d17d0d863] <==
	{"level":"info","ts":"2024-09-12T21:32:27.442779Z","caller":"traceutil/trace.go:171","msg":"trace[1254642931] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1240; }","duration":"263.753575ms","start":"2024-09-12T21:32:27.179019Z","end":"2024-09-12T21:32:27.442773Z","steps":["trace[1254642931] 'agreement among raft nodes before linearized reading'  (duration: 263.705568ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-12T21:32:27.442948Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"262.756935ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" ","response":"range_response_count:1 size:552"}
	{"level":"info","ts":"2024-09-12T21:32:27.442984Z","caller":"traceutil/trace.go:171","msg":"trace[1578547455] range","detail":"{range_begin:/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io; range_end:; response_count:1; response_revision:1240; }","duration":"262.791577ms","start":"2024-09-12T21:32:27.180186Z","end":"2024-09-12T21:32:27.442977Z","steps":["trace[1578547455] 'agreement among raft nodes before linearized reading'  (duration: 262.70651ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-12T21:40:19.746598Z","caller":"traceutil/trace.go:171","msg":"trace[1981957924] linearizableReadLoop","detail":"{readStateIndex:2127; appliedIndex:2126; }","duration":"133.477931ms","start":"2024-09-12T21:40:19.613083Z","end":"2024-09-12T21:40:19.746561Z","steps":["trace[1981957924] 'read index received'  (duration: 133.318567ms)","trace[1981957924] 'applied index is now lower than readState.Index'  (duration: 158.878µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-12T21:40:19.746825Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"133.6822ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-12T21:40:19.746858Z","caller":"traceutil/trace.go:171","msg":"trace[1975095780] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1989; }","duration":"133.772244ms","start":"2024-09-12T21:40:19.613077Z","end":"2024-09-12T21:40:19.746850Z","steps":["trace[1975095780] 'agreement among raft nodes before linearized reading'  (duration: 133.667003ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-12T21:40:19.746680Z","caller":"traceutil/trace.go:171","msg":"trace[784585044] transaction","detail":"{read_only:false; response_revision:1989; number_of_response:1; }","duration":"282.702863ms","start":"2024-09-12T21:40:19.463956Z","end":"2024-09-12T21:40:19.746659Z","steps":["trace[784585044] 'process raft request'  (duration: 282.48487ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-12T21:40:24.366865Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1527}
	{"level":"info","ts":"2024-09-12T21:40:24.408830Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1527,"took":"41.110259ms","hash":3946649684,"current-db-size-bytes":6709248,"current-db-size":"6.7 MB","current-db-size-in-use-bytes":3416064,"current-db-size-in-use":"3.4 MB"}
	{"level":"info","ts":"2024-09-12T21:40:24.408900Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3946649684,"revision":1527,"compact-revision":-1}
	{"level":"info","ts":"2024-09-12T21:40:40.024996Z","caller":"traceutil/trace.go:171","msg":"trace[2045705986] transaction","detail":"{read_only:false; response_revision:2179; number_of_response:1; }","duration":"188.170203ms","start":"2024-09-12T21:40:39.836812Z","end":"2024-09-12T21:40:40.024982Z","steps":["trace[2045705986] 'process raft request'  (duration: 187.576243ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-12T21:40:40.025569Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"184.896897ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ingress\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-12T21:40:40.025770Z","caller":"traceutil/trace.go:171","msg":"trace[1651034224] range","detail":"{range_begin:/registry/ingress; range_end:; response_count:0; response_revision:2179; }","duration":"185.132257ms","start":"2024-09-12T21:40:39.840570Z","end":"2024-09-12T21:40:40.025702Z","steps":["trace[1651034224] 'agreement among raft nodes before linearized reading'  (duration: 184.872808ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-12T21:40:40.027031Z","caller":"traceutil/trace.go:171","msg":"trace[737774189] linearizableReadLoop","detail":"{readStateIndex:2324; appliedIndex:2323; }","duration":"184.07988ms","start":"2024-09-12T21:40:39.840574Z","end":"2024-09-12T21:40:40.024654Z","steps":["trace[737774189] 'read index received'  (duration: 183.713847ms)","trace[737774189] 'applied index is now lower than readState.Index'  (duration: 365.525µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-12T21:40:40.027339Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"161.934654ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1114"}
	{"level":"info","ts":"2024-09-12T21:40:40.027410Z","caller":"traceutil/trace.go:171","msg":"trace[100333331] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:2179; }","duration":"162.010795ms","start":"2024-09-12T21:40:39.865389Z","end":"2024-09-12T21:40:40.027400Z","steps":["trace[100333331] 'agreement among raft nodes before linearized reading'  (duration: 161.762163ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-12T21:40:40.220357Z","caller":"traceutil/trace.go:171","msg":"trace[1115025117] linearizableReadLoop","detail":"{readStateIndex:2325; appliedIndex:2324; }","duration":"186.564755ms","start":"2024-09-12T21:40:40.033761Z","end":"2024-09-12T21:40:40.220326Z","steps":["trace[1115025117] 'read index received'  (duration: 186.518061ms)","trace[1115025117] 'applied index is now lower than readState.Index'  (duration: 45.997µs)"],"step_count":2}
	{"level":"info","ts":"2024-09-12T21:40:40.220626Z","caller":"traceutil/trace.go:171","msg":"trace[1874224401] transaction","detail":"{read_only:false; response_revision:2180; number_of_response:1; }","duration":"187.429481ms","start":"2024-09-12T21:40:40.033184Z","end":"2024-09-12T21:40:40.220614Z","steps":["trace[1874224401] 'process raft request'  (duration: 186.678055ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-12T21:40:40.220786Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"187.086416ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-12T21:40:40.220822Z","caller":"traceutil/trace.go:171","msg":"trace[838300705] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2180; }","duration":"187.131549ms","start":"2024-09-12T21:40:40.033683Z","end":"2024-09-12T21:40:40.220815Z","steps":["trace[838300705] 'agreement among raft nodes before linearized reading'  (duration: 187.072562ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-12T21:40:40.220927Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"186.744825ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/snapshot.storage.k8s.io/volumesnapshots\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-12T21:40:40.220957Z","caller":"traceutil/trace.go:171","msg":"trace[524765721] range","detail":"{range_begin:/registry/snapshot.storage.k8s.io/volumesnapshots; range_end:; response_count:0; response_revision:2180; }","duration":"186.778424ms","start":"2024-09-12T21:40:40.034173Z","end":"2024-09-12T21:40:40.220952Z","steps":["trace[524765721] 'agreement among raft nodes before linearized reading'  (duration: 186.735141ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-12T21:45:24.375365Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2009}
	{"level":"info","ts":"2024-09-12T21:45:24.399116Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":2009,"took":"22.916568ms","hash":2986062905,"current-db-size-bytes":6709248,"current-db-size":"6.7 MB","current-db-size-in-use-bytes":5165056,"current-db-size-in-use":"5.2 MB"}
	{"level":"info","ts":"2024-09-12T21:45:24.399179Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2986062905,"revision":2009,"compact-revision":1527}
	
	
	==> gcp-auth [224662c30f37670f4f61f36221a15bb4d6847d38fcb6a9be3d38b6b08f1d6765] <==
	2024/09/12 21:32:07 Ready to write response ...
	2024/09/12 21:40:10 Ready to marshal response ...
	2024/09/12 21:40:10 Ready to write response ...
	2024/09/12 21:40:10 Ready to marshal response ...
	2024/09/12 21:40:10 Ready to write response ...
	2024/09/12 21:40:13 Ready to marshal response ...
	2024/09/12 21:40:13 Ready to write response ...
	2024/09/12 21:40:14 Ready to marshal response ...
	2024/09/12 21:40:14 Ready to write response ...
	2024/09/12 21:40:20 Ready to marshal response ...
	2024/09/12 21:40:20 Ready to write response ...
	2024/09/12 21:40:28 Ready to marshal response ...
	2024/09/12 21:40:28 Ready to write response ...
	2024/09/12 21:40:33 Ready to marshal response ...
	2024/09/12 21:40:33 Ready to write response ...
	2024/09/12 21:40:33 Ready to marshal response ...
	2024/09/12 21:40:33 Ready to write response ...
	2024/09/12 21:40:33 Ready to marshal response ...
	2024/09/12 21:40:33 Ready to write response ...
	2024/09/12 21:40:36 Ready to marshal response ...
	2024/09/12 21:40:36 Ready to write response ...
	2024/09/12 21:41:13 Ready to marshal response ...
	2024/09/12 21:41:13 Ready to write response ...
	2024/09/12 21:43:32 Ready to marshal response ...
	2024/09/12 21:43:32 Ready to write response ...
	
	
	==> kernel <==
	 21:46:49 up 16 min,  0 users,  load average: 0.13, 0.28, 0.31
	Linux addons-694635 5.10.207 #1 SMP Thu Sep 12 19:03:33 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [04006273204a6b5b2c2c50eb039597ab1cad77b9f65e3cdcf9ad2cd2bff6a600] <==
	E0912 21:32:39.672883       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.168.73:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.110.168.73:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.110.168.73:443: connect: connection refused" logger="UnhandledError"
	E0912 21:32:39.685803       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.168.73:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.110.168.73:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.110.168.73:443: connect: connection refused" logger="UnhandledError"
	E0912 21:32:39.712873       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.168.73:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.110.168.73:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.110.168.73:443: connect: connection refused" logger="UnhandledError"
	I0912 21:32:39.805428       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0912 21:40:26.205439       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0912 21:40:33.330977       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.111.67.92"}
	E0912 21:40:44.623623       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0912 21:40:56.039633       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0912 21:40:56.039692       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0912 21:40:56.072862       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0912 21:40:56.072917       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0912 21:40:56.085872       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0912 21:40:56.085946       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0912 21:40:56.110100       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0912 21:40:56.110148       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0912 21:40:56.134562       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0912 21:40:56.134998       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0912 21:40:57.111095       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0912 21:40:57.135378       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0912 21:40:57.232817       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I0912 21:41:09.586605       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0912 21:41:10.631785       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0912 21:41:13.128356       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0912 21:41:13.316851       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.99.65.172"}
	I0912 21:43:32.312526       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.101.215.31"}
	
	
	==> kube-controller-manager [5c2e331dbfeadd5401ab6aa1159f9097e7db3bf727f83963a786e4a149b7c5ba] <==
	W0912 21:44:37.798328       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0912 21:44:37.798646       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0912 21:44:45.530689       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0912 21:44:45.530769       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0912 21:45:02.336152       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0912 21:45:02.336300       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0912 21:45:04.788209       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0912 21:45:04.788321       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0912 21:45:19.514648       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0912 21:45:19.514825       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0912 21:45:38.981664       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0912 21:45:38.981842       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0912 21:45:52.847331       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0912 21:45:52.847397       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0912 21:45:55.689555       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0912 21:45:55.689609       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0912 21:46:01.939041       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0912 21:46:01.939157       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0912 21:46:22.284565       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0912 21:46:22.284772       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0912 21:46:38.748045       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0912 21:46:38.748299       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0912 21:46:48.196749       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-84c5f94fbc" duration="21.511µs"
	W0912 21:46:49.360858       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0912 21:46:49.360955       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [aa4b1b8007598386d5052a12803d3a47809e7be17f0613791526a0fb975078f1] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0912 21:30:36.071774       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0912 21:30:36.082467       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.67"]
	E0912 21:30:36.082639       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0912 21:30:36.149367       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0912 21:30:36.149399       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0912 21:30:36.149432       1 server_linux.go:169] "Using iptables Proxier"
	I0912 21:30:36.161798       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0912 21:30:36.164947       1 server.go:483] "Version info" version="v1.31.1"
	I0912 21:30:36.164965       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0912 21:30:36.177240       1 config.go:199] "Starting service config controller"
	I0912 21:30:36.177256       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0912 21:30:36.177281       1 config.go:105] "Starting endpoint slice config controller"
	I0912 21:30:36.177291       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0912 21:30:36.180184       1 config.go:328] "Starting node config controller"
	I0912 21:30:36.180198       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0912 21:30:36.277929       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0912 21:30:36.278089       1 shared_informer.go:320] Caches are synced for service config
	I0912 21:30:36.286430       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [daff578fb9bc43cd709b1e387f2aa19b6c69701a055733a1e7c09f5d3c4ae546] <==
	W0912 21:30:25.943462       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0912 21:30:25.943544       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0912 21:30:25.943641       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0912 21:30:25.943723       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0912 21:30:26.867246       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0912 21:30:26.867357       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0912 21:30:26.882410       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0912 21:30:26.882590       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0912 21:30:26.937816       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0912 21:30:26.937964       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0912 21:30:26.988234       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0912 21:30:26.988387       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0912 21:30:27.028755       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0912 21:30:27.028982       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0912 21:30:27.065104       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0912 21:30:27.065402       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0912 21:30:27.081373       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0912 21:30:27.081599       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0912 21:30:27.089933       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0912 21:30:27.090023       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0912 21:30:27.106816       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0912 21:30:27.106970       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0912 21:30:27.187917       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0912 21:30:27.188172       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0912 21:30:29.715653       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 12 21:46:09 addons-694635 kubelet[1201]: E0912 21:46:09.347329    1201 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726177569346652533,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:580233,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 21:46:09 addons-694635 kubelet[1201]: E0912 21:46:09.347781    1201 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726177569346652533,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:580233,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 21:46:15 addons-694635 kubelet[1201]: E0912 21:46:15.644636    1201 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="c9b902b9-bf7a-4ee9-8a7f-6a52a67a2b2f"
	Sep 12 21:46:19 addons-694635 kubelet[1201]: E0912 21:46:19.350705    1201 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726177579350228447,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:580233,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 21:46:19 addons-694635 kubelet[1201]: E0912 21:46:19.351010    1201 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726177579350228447,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:580233,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 21:46:28 addons-694635 kubelet[1201]: E0912 21:46:28.657046    1201 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 12 21:46:28 addons-694635 kubelet[1201]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 12 21:46:28 addons-694635 kubelet[1201]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 12 21:46:28 addons-694635 kubelet[1201]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 12 21:46:28 addons-694635 kubelet[1201]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 12 21:46:29 addons-694635 kubelet[1201]: E0912 21:46:29.353991    1201 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726177589353424777,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:580233,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 21:46:29 addons-694635 kubelet[1201]: E0912 21:46:29.354043    1201 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726177589353424777,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:580233,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 21:46:30 addons-694635 kubelet[1201]: E0912 21:46:30.646955    1201 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="c9b902b9-bf7a-4ee9-8a7f-6a52a67a2b2f"
	Sep 12 21:46:39 addons-694635 kubelet[1201]: E0912 21:46:39.356559    1201 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726177599355957765,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:580233,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 21:46:39 addons-694635 kubelet[1201]: E0912 21:46:39.356944    1201 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726177599355957765,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:580233,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 21:46:42 addons-694635 kubelet[1201]: E0912 21:46:42.644735    1201 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="c9b902b9-bf7a-4ee9-8a7f-6a52a67a2b2f"
	Sep 12 21:46:48 addons-694635 kubelet[1201]: I0912 21:46:48.227600    1201 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-55bf9c44b4-8wzs4" podStartSLOduration=193.9699436 podStartE2EDuration="3m16.227567635s" podCreationTimestamp="2024-09-12 21:43:32 +0000 UTC" firstStartedPulling="2024-09-12 21:43:32.700045316 +0000 UTC m=+784.186252149" lastFinishedPulling="2024-09-12 21:43:34.957669351 +0000 UTC m=+786.443876184" observedRunningTime="2024-09-12 21:43:35.327698459 +0000 UTC m=+786.813905311" watchObservedRunningTime="2024-09-12 21:46:48.227567635 +0000 UTC m=+979.713774486"
	Sep 12 21:46:49 addons-694635 kubelet[1201]: E0912 21:46:49.361145    1201 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726177609359783798,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:580233,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 21:46:49 addons-694635 kubelet[1201]: E0912 21:46:49.361173    1201 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726177609359783798,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:580233,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 21:46:49 addons-694635 kubelet[1201]: I0912 21:46:49.585022    1201 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/4922d6c5-c4bb-4ec8-a21f-2ca9ba3c4691-tmp-dir\") pod \"4922d6c5-c4bb-4ec8-a21f-2ca9ba3c4691\" (UID: \"4922d6c5-c4bb-4ec8-a21f-2ca9ba3c4691\") "
	Sep 12 21:46:49 addons-694635 kubelet[1201]: I0912 21:46:49.585081    1201 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rhdm4\" (UniqueName: \"kubernetes.io/projected/4922d6c5-c4bb-4ec8-a21f-2ca9ba3c4691-kube-api-access-rhdm4\") pod \"4922d6c5-c4bb-4ec8-a21f-2ca9ba3c4691\" (UID: \"4922d6c5-c4bb-4ec8-a21f-2ca9ba3c4691\") "
	Sep 12 21:46:49 addons-694635 kubelet[1201]: I0912 21:46:49.585966    1201 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4922d6c5-c4bb-4ec8-a21f-2ca9ba3c4691-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "4922d6c5-c4bb-4ec8-a21f-2ca9ba3c4691" (UID: "4922d6c5-c4bb-4ec8-a21f-2ca9ba3c4691"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Sep 12 21:46:49 addons-694635 kubelet[1201]: I0912 21:46:49.589134    1201 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4922d6c5-c4bb-4ec8-a21f-2ca9ba3c4691-kube-api-access-rhdm4" (OuterVolumeSpecName: "kube-api-access-rhdm4") pod "4922d6c5-c4bb-4ec8-a21f-2ca9ba3c4691" (UID: "4922d6c5-c4bb-4ec8-a21f-2ca9ba3c4691"). InnerVolumeSpecName "kube-api-access-rhdm4". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 12 21:46:49 addons-694635 kubelet[1201]: I0912 21:46:49.685929    1201 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-rhdm4\" (UniqueName: \"kubernetes.io/projected/4922d6c5-c4bb-4ec8-a21f-2ca9ba3c4691-kube-api-access-rhdm4\") on node \"addons-694635\" DevicePath \"\""
	Sep 12 21:46:49 addons-694635 kubelet[1201]: I0912 21:46:49.685974    1201 reconciler_common.go:288] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/4922d6c5-c4bb-4ec8-a21f-2ca9ba3c4691-tmp-dir\") on node \"addons-694635\" DevicePath \"\""
	
	
	==> storage-provisioner [c63491974a86dd1007fc9980bfe0086d0dc3bf4ff8c0c3f310a5cb87fbb4ac38] <==
	I0912 21:30:40.634278       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0912 21:30:40.654230       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0912 21:30:40.654289       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0912 21:30:40.672312       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0912 21:30:40.672455       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-694635_0129df7b-bc38-4de1-88d1-b14901b396c2!
	I0912 21:30:40.672557       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ad54721b-5319-42a0-af50-593f2d28e853", APIVersion:"v1", ResourceVersion:"616", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-694635_0129df7b-bc38-4de1-88d1-b14901b396c2 became leader
	I0912 21:30:40.772629       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-694635_0129df7b-bc38-4de1-88d1-b14901b396c2!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-694635 -n addons-694635
helpers_test.go:261: (dbg) Run:  kubectl --context addons-694635 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/MetricsServer]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-694635 describe pod busybox
helpers_test.go:282: (dbg) kubectl --context addons-694635 describe pod busybox:

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-694635/192.168.39.67
	Start Time:       Thu, 12 Sep 2024 21:32:07 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.23
	IPs:
	  IP:  10.244.0.23
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-c9mw2 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-c9mw2:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  14m                   default-scheduler  Successfully assigned default/busybox to addons-694635
	  Normal   Pulling    13m (x4 over 14m)     kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     13m (x4 over 14m)     kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     13m (x4 over 14m)     kubelet            Error: ErrImagePull
	  Warning  Failed     12m (x6 over 14m)     kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m38s (x43 over 14m)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (346.52s)

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (141.76s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-475401 node stop m02 -v=7 --alsologtostderr
E0912 22:00:26.201539   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/functional-657409/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:00:46.682868   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/functional-657409/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:01:27.644620   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/functional-657409/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:02:07.199132   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-475401 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.474456327s)

                                                
                                                
-- stdout --
	* Stopping node "ha-475401-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0912 22:00:25.645397   29700 out.go:345] Setting OutFile to fd 1 ...
	I0912 22:00:25.645720   29700 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 22:00:25.645731   29700 out.go:358] Setting ErrFile to fd 2...
	I0912 22:00:25.645735   29700 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 22:00:25.645911   29700 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-5891/.minikube/bin
	I0912 22:00:25.646160   29700 mustload.go:65] Loading cluster: ha-475401
	I0912 22:00:25.646537   29700 config.go:182] Loaded profile config "ha-475401": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 22:00:25.646551   29700 stop.go:39] StopHost: ha-475401-m02
	I0912 22:00:25.646894   29700 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:00:25.646932   29700 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:00:25.662834   29700 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44411
	I0912 22:00:25.663287   29700 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:00:25.663873   29700 main.go:141] libmachine: Using API Version  1
	I0912 22:00:25.663902   29700 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:00:25.664246   29700 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:00:25.666612   29700 out.go:177] * Stopping node "ha-475401-m02"  ...
	I0912 22:00:25.667671   29700 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0912 22:00:25.667711   29700 main.go:141] libmachine: (ha-475401-m02) Calling .DriverName
	I0912 22:00:25.667990   29700 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0912 22:00:25.668036   29700 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHHostname
	I0912 22:00:25.671706   29700 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 22:00:25.672186   29700 main.go:141] libmachine: (ha-475401-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:31:3a", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:55 +0000 UTC Type:0 Mac:52:54:00:ad:31:3a Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-475401-m02 Clientid:01:52:54:00:ad:31:3a}
	I0912 22:00:25.672228   29700 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 22:00:25.672421   29700 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHPort
	I0912 22:00:25.672621   29700 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHKeyPath
	I0912 22:00:25.672793   29700 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHUsername
	I0912 22:00:25.672950   29700 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401-m02/id_rsa Username:docker}
	I0912 22:00:25.762016   29700 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0912 22:00:25.815018   29700 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0912 22:00:25.869235   29700 main.go:141] libmachine: Stopping "ha-475401-m02"...
	I0912 22:00:25.869284   29700 main.go:141] libmachine: (ha-475401-m02) Calling .GetState
	I0912 22:00:25.870810   29700 main.go:141] libmachine: (ha-475401-m02) Calling .Stop
	I0912 22:00:25.874971   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 0/120
	I0912 22:00:26.876535   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 1/120
	I0912 22:00:27.878439   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 2/120
	I0912 22:00:28.880262   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 3/120
	I0912 22:00:29.881821   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 4/120
	I0912 22:00:30.883895   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 5/120
	I0912 22:00:31.885356   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 6/120
	I0912 22:00:32.886887   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 7/120
	I0912 22:00:33.888425   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 8/120
	I0912 22:00:34.890020   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 9/120
	I0912 22:00:35.892047   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 10/120
	I0912 22:00:36.893756   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 11/120
	I0912 22:00:37.895321   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 12/120
	I0912 22:00:38.896809   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 13/120
	I0912 22:00:39.898201   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 14/120
	I0912 22:00:40.900205   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 15/120
	I0912 22:00:41.901587   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 16/120
	I0912 22:00:42.902981   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 17/120
	I0912 22:00:43.904214   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 18/120
	I0912 22:00:44.905718   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 19/120
	I0912 22:00:45.907871   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 20/120
	I0912 22:00:46.909339   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 21/120
	I0912 22:00:47.910743   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 22/120
	I0912 22:00:48.912787   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 23/120
	I0912 22:00:49.914134   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 24/120
	I0912 22:00:50.915714   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 25/120
	I0912 22:00:51.917303   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 26/120
	I0912 22:00:52.918718   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 27/120
	I0912 22:00:53.920190   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 28/120
	I0912 22:00:54.921505   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 29/120
	I0912 22:00:55.923617   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 30/120
	I0912 22:00:56.925268   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 31/120
	I0912 22:00:57.926904   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 32/120
	I0912 22:00:58.928147   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 33/120
	I0912 22:00:59.929451   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 34/120
	I0912 22:01:00.931666   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 35/120
	I0912 22:01:01.934157   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 36/120
	I0912 22:01:02.936424   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 37/120
	I0912 22:01:03.937831   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 38/120
	I0912 22:01:04.940403   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 39/120
	I0912 22:01:05.942264   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 40/120
	I0912 22:01:06.943561   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 41/120
	I0912 22:01:07.944900   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 42/120
	I0912 22:01:08.946298   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 43/120
	I0912 22:01:09.948165   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 44/120
	I0912 22:01:10.950566   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 45/120
	I0912 22:01:11.952807   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 46/120
	I0912 22:01:12.954433   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 47/120
	I0912 22:01:13.956563   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 48/120
	I0912 22:01:14.958676   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 49/120
	I0912 22:01:15.960478   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 50/120
	I0912 22:01:16.962270   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 51/120
	I0912 22:01:17.964285   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 52/120
	I0912 22:01:18.966271   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 53/120
	I0912 22:01:19.968104   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 54/120
	I0912 22:01:20.970322   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 55/120
	I0912 22:01:21.971717   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 56/120
	I0912 22:01:22.973160   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 57/120
	I0912 22:01:23.974820   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 58/120
	I0912 22:01:24.977097   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 59/120
	I0912 22:01:25.979100   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 60/120
	I0912 22:01:26.980611   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 61/120
	I0912 22:01:27.982089   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 62/120
	I0912 22:01:28.983555   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 63/120
	I0912 22:01:29.985803   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 64/120
	I0912 22:01:30.987770   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 65/120
	I0912 22:01:31.989555   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 66/120
	I0912 22:01:32.991396   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 67/120
	I0912 22:01:33.992714   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 68/120
	I0912 22:01:34.994380   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 69/120
	I0912 22:01:35.996112   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 70/120
	I0912 22:01:36.997427   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 71/120
	I0912 22:01:37.998755   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 72/120
	I0912 22:01:39.000371   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 73/120
	I0912 22:01:40.001898   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 74/120
	I0912 22:01:41.003842   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 75/120
	I0912 22:01:42.006313   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 76/120
	I0912 22:01:43.008086   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 77/120
	I0912 22:01:44.009518   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 78/120
	I0912 22:01:45.011831   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 79/120
	I0912 22:01:46.013778   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 80/120
	I0912 22:01:47.015468   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 81/120
	I0912 22:01:48.016980   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 82/120
	I0912 22:01:49.018430   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 83/120
	I0912 22:01:50.019771   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 84/120
	I0912 22:01:51.021239   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 85/120
	I0912 22:01:52.023017   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 86/120
	I0912 22:01:53.024561   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 87/120
	I0912 22:01:54.025897   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 88/120
	I0912 22:01:55.028206   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 89/120
	I0912 22:01:56.030194   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 90/120
	I0912 22:01:57.032011   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 91/120
	I0912 22:01:58.033672   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 92/120
	I0912 22:01:59.035522   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 93/120
	I0912 22:02:00.036968   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 94/120
	I0912 22:02:01.038647   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 95/120
	I0912 22:02:02.039822   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 96/120
	I0912 22:02:03.041121   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 97/120
	I0912 22:02:04.042439   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 98/120
	I0912 22:02:05.043786   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 99/120
	I0912 22:02:06.045198   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 100/120
	I0912 22:02:07.046651   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 101/120
	I0912 22:02:08.048099   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 102/120
	I0912 22:02:09.049638   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 103/120
	I0912 22:02:10.051178   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 104/120
	I0912 22:02:11.052844   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 105/120
	I0912 22:02:12.054445   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 106/120
	I0912 22:02:13.055860   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 107/120
	I0912 22:02:14.057540   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 108/120
	I0912 22:02:15.058880   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 109/120
	I0912 22:02:16.061045   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 110/120
	I0912 22:02:17.063040   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 111/120
	I0912 22:02:18.064266   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 112/120
	I0912 22:02:19.065795   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 113/120
	I0912 22:02:20.068295   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 114/120
	I0912 22:02:21.069951   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 115/120
	I0912 22:02:22.072017   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 116/120
	I0912 22:02:23.073344   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 117/120
	I0912 22:02:24.074608   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 118/120
	I0912 22:02:25.076219   29700 main.go:141] libmachine: (ha-475401-m02) Waiting for machine to stop 119/120
	I0912 22:02:26.077786   29700 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0912 22:02:26.077985   29700 out.go:270] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-475401 node stop m02 -v=7 --alsologtostderr": exit status 30
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-475401 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-475401 status -v=7 --alsologtostderr: exit status 3 (19.021282348s)

                                                
                                                
-- stdout --
	ha-475401
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-475401-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-475401-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-475401-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0912 22:02:26.123008   30130 out.go:345] Setting OutFile to fd 1 ...
	I0912 22:02:26.123219   30130 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 22:02:26.123227   30130 out.go:358] Setting ErrFile to fd 2...
	I0912 22:02:26.123231   30130 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 22:02:26.123425   30130 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-5891/.minikube/bin
	I0912 22:02:26.123580   30130 out.go:352] Setting JSON to false
	I0912 22:02:26.123608   30130 mustload.go:65] Loading cluster: ha-475401
	I0912 22:02:26.123713   30130 notify.go:220] Checking for updates...
	I0912 22:02:26.123965   30130 config.go:182] Loaded profile config "ha-475401": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 22:02:26.123978   30130 status.go:255] checking status of ha-475401 ...
	I0912 22:02:26.124318   30130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:02:26.124363   30130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:02:26.139627   30130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35675
	I0912 22:02:26.140037   30130 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:02:26.140535   30130 main.go:141] libmachine: Using API Version  1
	I0912 22:02:26.140570   30130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:02:26.140924   30130 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:02:26.141130   30130 main.go:141] libmachine: (ha-475401) Calling .GetState
	I0912 22:02:26.142612   30130 status.go:330] ha-475401 host status = "Running" (err=<nil>)
	I0912 22:02:26.142630   30130 host.go:66] Checking if "ha-475401" exists ...
	I0912 22:02:26.142940   30130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:02:26.142977   30130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:02:26.157410   30130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37077
	I0912 22:02:26.157845   30130 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:02:26.158295   30130 main.go:141] libmachine: Using API Version  1
	I0912 22:02:26.158316   30130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:02:26.158574   30130 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:02:26.158730   30130 main.go:141] libmachine: (ha-475401) Calling .GetIP
	I0912 22:02:26.161424   30130 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 22:02:26.161931   30130 main.go:141] libmachine: (ha-475401) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:0e:dd", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:09 +0000 UTC Type:0 Mac:52:54:00:b0:0e:dd Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-475401 Clientid:01:52:54:00:b0:0e:dd}
	I0912 22:02:26.161959   30130 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined IP address 192.168.39.203 and MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 22:02:26.162116   30130 host.go:66] Checking if "ha-475401" exists ...
	I0912 22:02:26.162502   30130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:02:26.162544   30130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:02:26.177076   30130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33253
	I0912 22:02:26.177525   30130 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:02:26.178061   30130 main.go:141] libmachine: Using API Version  1
	I0912 22:02:26.178082   30130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:02:26.178452   30130 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:02:26.178647   30130 main.go:141] libmachine: (ha-475401) Calling .DriverName
	I0912 22:02:26.178881   30130 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0912 22:02:26.178909   30130 main.go:141] libmachine: (ha-475401) Calling .GetSSHHostname
	I0912 22:02:26.181910   30130 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 22:02:26.182460   30130 main.go:141] libmachine: (ha-475401) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:0e:dd", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:09 +0000 UTC Type:0 Mac:52:54:00:b0:0e:dd Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-475401 Clientid:01:52:54:00:b0:0e:dd}
	I0912 22:02:26.182493   30130 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined IP address 192.168.39.203 and MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 22:02:26.182644   30130 main.go:141] libmachine: (ha-475401) Calling .GetSSHPort
	I0912 22:02:26.182826   30130 main.go:141] libmachine: (ha-475401) Calling .GetSSHKeyPath
	I0912 22:02:26.182987   30130 main.go:141] libmachine: (ha-475401) Calling .GetSSHUsername
	I0912 22:02:26.183135   30130 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401/id_rsa Username:docker}
	I0912 22:02:26.274904   30130 ssh_runner.go:195] Run: systemctl --version
	I0912 22:02:26.281287   30130 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 22:02:26.299306   30130 kubeconfig.go:125] found "ha-475401" server: "https://192.168.39.254:8443"
	I0912 22:02:26.299344   30130 api_server.go:166] Checking apiserver status ...
	I0912 22:02:26.299375   30130 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 22:02:26.316691   30130 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1139/cgroup
	W0912 22:02:26.328718   30130 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1139/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0912 22:02:26.328782   30130 ssh_runner.go:195] Run: ls
	I0912 22:02:26.333001   30130 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0912 22:02:26.339258   30130 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0912 22:02:26.339300   30130 status.go:422] ha-475401 apiserver status = Running (err=<nil>)
	I0912 22:02:26.339312   30130 status.go:257] ha-475401 status: &{Name:ha-475401 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0912 22:02:26.339328   30130 status.go:255] checking status of ha-475401-m02 ...
	I0912 22:02:26.339619   30130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:02:26.339650   30130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:02:26.354339   30130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41513
	I0912 22:02:26.354728   30130 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:02:26.355181   30130 main.go:141] libmachine: Using API Version  1
	I0912 22:02:26.355200   30130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:02:26.355491   30130 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:02:26.355671   30130 main.go:141] libmachine: (ha-475401-m02) Calling .GetState
	I0912 22:02:26.357216   30130 status.go:330] ha-475401-m02 host status = "Running" (err=<nil>)
	I0912 22:02:26.357230   30130 host.go:66] Checking if "ha-475401-m02" exists ...
	I0912 22:02:26.357520   30130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:02:26.357567   30130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:02:26.372085   30130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41855
	I0912 22:02:26.372468   30130 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:02:26.372976   30130 main.go:141] libmachine: Using API Version  1
	I0912 22:02:26.373000   30130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:02:26.373251   30130 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:02:26.373469   30130 main.go:141] libmachine: (ha-475401-m02) Calling .GetIP
	I0912 22:02:26.376001   30130 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 22:02:26.376490   30130 main.go:141] libmachine: (ha-475401-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:31:3a", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:55 +0000 UTC Type:0 Mac:52:54:00:ad:31:3a Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-475401-m02 Clientid:01:52:54:00:ad:31:3a}
	I0912 22:02:26.376516   30130 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 22:02:26.376641   30130 host.go:66] Checking if "ha-475401-m02" exists ...
	I0912 22:02:26.376962   30130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:02:26.376997   30130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:02:26.392120   30130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36659
	I0912 22:02:26.392483   30130 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:02:26.392940   30130 main.go:141] libmachine: Using API Version  1
	I0912 22:02:26.392960   30130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:02:26.393261   30130 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:02:26.393437   30130 main.go:141] libmachine: (ha-475401-m02) Calling .DriverName
	I0912 22:02:26.393601   30130 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0912 22:02:26.393628   30130 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHHostname
	I0912 22:02:26.396325   30130 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 22:02:26.396762   30130 main.go:141] libmachine: (ha-475401-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:31:3a", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:55 +0000 UTC Type:0 Mac:52:54:00:ad:31:3a Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-475401-m02 Clientid:01:52:54:00:ad:31:3a}
	I0912 22:02:26.396800   30130 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 22:02:26.396909   30130 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHPort
	I0912 22:02:26.397055   30130 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHKeyPath
	I0912 22:02:26.397243   30130 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHUsername
	I0912 22:02:26.397400   30130 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401-m02/id_rsa Username:docker}
	W0912 22:02:44.729806   30130 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.222:22: connect: no route to host
	W0912 22:02:44.729897   30130 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.222:22: connect: no route to host
	E0912 22:02:44.729923   30130 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.222:22: connect: no route to host
	I0912 22:02:44.729931   30130 status.go:257] ha-475401-m02 status: &{Name:ha-475401-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0912 22:02:44.729947   30130 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.222:22: connect: no route to host
	I0912 22:02:44.729954   30130 status.go:255] checking status of ha-475401-m03 ...
	I0912 22:02:44.730238   30130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:02:44.730273   30130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:02:44.744901   30130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35049
	I0912 22:02:44.745349   30130 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:02:44.745850   30130 main.go:141] libmachine: Using API Version  1
	I0912 22:02:44.745866   30130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:02:44.746147   30130 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:02:44.746323   30130 main.go:141] libmachine: (ha-475401-m03) Calling .GetState
	I0912 22:02:44.747841   30130 status.go:330] ha-475401-m03 host status = "Running" (err=<nil>)
	I0912 22:02:44.747855   30130 host.go:66] Checking if "ha-475401-m03" exists ...
	I0912 22:02:44.748155   30130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:02:44.748189   30130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:02:44.762724   30130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38199
	I0912 22:02:44.763142   30130 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:02:44.763715   30130 main.go:141] libmachine: Using API Version  1
	I0912 22:02:44.763732   30130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:02:44.764086   30130 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:02:44.764294   30130 main.go:141] libmachine: (ha-475401-m03) Calling .GetIP
	I0912 22:02:44.767482   30130 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 22:02:44.767936   30130 main.go:141] libmachine: (ha-475401-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:aa:da", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:58:06 +0000 UTC Type:0 Mac:52:54:00:21:aa:da Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:ha-475401-m03 Clientid:01:52:54:00:21:aa:da}
	I0912 22:02:44.767982   30130 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined IP address 192.168.39.113 and MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 22:02:44.768153   30130 host.go:66] Checking if "ha-475401-m03" exists ...
	I0912 22:02:44.768503   30130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:02:44.768541   30130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:02:44.783767   30130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38775
	I0912 22:02:44.784222   30130 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:02:44.784693   30130 main.go:141] libmachine: Using API Version  1
	I0912 22:02:44.784720   30130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:02:44.785066   30130 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:02:44.785238   30130 main.go:141] libmachine: (ha-475401-m03) Calling .DriverName
	I0912 22:02:44.785414   30130 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0912 22:02:44.785433   30130 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHHostname
	I0912 22:02:44.788046   30130 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 22:02:44.788468   30130 main.go:141] libmachine: (ha-475401-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:aa:da", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:58:06 +0000 UTC Type:0 Mac:52:54:00:21:aa:da Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:ha-475401-m03 Clientid:01:52:54:00:21:aa:da}
	I0912 22:02:44.788493   30130 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined IP address 192.168.39.113 and MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 22:02:44.788658   30130 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHPort
	I0912 22:02:44.788832   30130 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHKeyPath
	I0912 22:02:44.788976   30130 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHUsername
	I0912 22:02:44.789102   30130 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401-m03/id_rsa Username:docker}
	I0912 22:02:44.874647   30130 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 22:02:44.891542   30130 kubeconfig.go:125] found "ha-475401" server: "https://192.168.39.254:8443"
	I0912 22:02:44.891576   30130 api_server.go:166] Checking apiserver status ...
	I0912 22:02:44.891613   30130 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 22:02:44.906813   30130 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1370/cgroup
	W0912 22:02:44.918239   30130 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1370/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0912 22:02:44.918315   30130 ssh_runner.go:195] Run: ls
	I0912 22:02:44.922834   30130 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0912 22:02:44.927003   30130 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0912 22:02:44.927031   30130 status.go:422] ha-475401-m03 apiserver status = Running (err=<nil>)
	I0912 22:02:44.927043   30130 status.go:257] ha-475401-m03 status: &{Name:ha-475401-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0912 22:02:44.927058   30130 status.go:255] checking status of ha-475401-m04 ...
	I0912 22:02:44.927439   30130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:02:44.927481   30130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:02:44.943316   30130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40909
	I0912 22:02:44.943764   30130 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:02:44.944327   30130 main.go:141] libmachine: Using API Version  1
	I0912 22:02:44.944368   30130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:02:44.944674   30130 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:02:44.944899   30130 main.go:141] libmachine: (ha-475401-m04) Calling .GetState
	I0912 22:02:44.947012   30130 status.go:330] ha-475401-m04 host status = "Running" (err=<nil>)
	I0912 22:02:44.947051   30130 host.go:66] Checking if "ha-475401-m04" exists ...
	I0912 22:02:44.947423   30130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:02:44.947462   30130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:02:44.964104   30130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37215
	I0912 22:02:44.964795   30130 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:02:44.965397   30130 main.go:141] libmachine: Using API Version  1
	I0912 22:02:44.965419   30130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:02:44.965849   30130 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:02:44.966073   30130 main.go:141] libmachine: (ha-475401-m04) Calling .GetIP
	I0912 22:02:44.968849   30130 main.go:141] libmachine: (ha-475401-m04) DBG | domain ha-475401-m04 has defined MAC address 52:54:00:cd:b0:d3 in network mk-ha-475401
	I0912 22:02:44.969324   30130 main.go:141] libmachine: (ha-475401-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:b0:d3", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:59:32 +0000 UTC Type:0 Mac:52:54:00:cd:b0:d3 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-475401-m04 Clientid:01:52:54:00:cd:b0:d3}
	I0912 22:02:44.969355   30130 main.go:141] libmachine: (ha-475401-m04) DBG | domain ha-475401-m04 has defined IP address 192.168.39.76 and MAC address 52:54:00:cd:b0:d3 in network mk-ha-475401
	I0912 22:02:44.969515   30130 host.go:66] Checking if "ha-475401-m04" exists ...
	I0912 22:02:44.969944   30130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:02:44.969994   30130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:02:44.986237   30130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42189
	I0912 22:02:44.986618   30130 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:02:44.987076   30130 main.go:141] libmachine: Using API Version  1
	I0912 22:02:44.987102   30130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:02:44.987407   30130 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:02:44.987582   30130 main.go:141] libmachine: (ha-475401-m04) Calling .DriverName
	I0912 22:02:44.987806   30130 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0912 22:02:44.987825   30130 main.go:141] libmachine: (ha-475401-m04) Calling .GetSSHHostname
	I0912 22:02:44.990816   30130 main.go:141] libmachine: (ha-475401-m04) DBG | domain ha-475401-m04 has defined MAC address 52:54:00:cd:b0:d3 in network mk-ha-475401
	I0912 22:02:44.991275   30130 main.go:141] libmachine: (ha-475401-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:b0:d3", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:59:32 +0000 UTC Type:0 Mac:52:54:00:cd:b0:d3 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-475401-m04 Clientid:01:52:54:00:cd:b0:d3}
	I0912 22:02:44.991335   30130 main.go:141] libmachine: (ha-475401-m04) DBG | domain ha-475401-m04 has defined IP address 192.168.39.76 and MAC address 52:54:00:cd:b0:d3 in network mk-ha-475401
	I0912 22:02:44.991475   30130 main.go:141] libmachine: (ha-475401-m04) Calling .GetSSHPort
	I0912 22:02:44.991668   30130 main.go:141] libmachine: (ha-475401-m04) Calling .GetSSHKeyPath
	I0912 22:02:44.991861   30130 main.go:141] libmachine: (ha-475401-m04) Calling .GetSSHUsername
	I0912 22:02:44.992025   30130 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401-m04/id_rsa Username:docker}
	I0912 22:02:45.082344   30130 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 22:02:45.098121   30130 status.go:257] ha-475401-m04 status: &{Name:ha-475401-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-475401 status -v=7 --alsologtostderr" : exit status 3
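The stderr above shows why the command exits with status 3: after the "node stop m02" step recorded in the Audit table below, the status probe can no longer open an SSH session to the secondary control-plane node (dial tcp 192.168.39.222:22: connect: no route to host), so ha-475401-m02 is reported as Host:Error / Kubelet:Nonexistent and minikube status returns a non-zero exit code, which ha_test.go:372 records as a failure. As a rough illustration only (this is not minikube's own code; the address is simply the stopped node from this log), the reachability failure behind that error amounts to a TCP dial with a timeout:

	// sketch: probe a node's SSH port the way the failing dial above does
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		addr := "192.168.39.222:22" // ha-475401-m02, stopped earlier in the test
		conn, err := net.DialTimeout("tcp", addr, 10*time.Second)
		if err != nil {
			// e.g. "dial tcp 192.168.39.222:22: connect: no route to host"
			fmt.Println("unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("reachable")
	}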
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-475401 -n ha-475401
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-475401 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-475401 logs -n 25: (1.399239816s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-475401 cp ha-475401-m03:/home/docker/cp-test.txt                              | ha-475401 | jenkins | v1.34.0 | 12 Sep 24 22:00 UTC | 12 Sep 24 22:00 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1750943762/001/cp-test_ha-475401-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-475401 ssh -n                                                                 | ha-475401 | jenkins | v1.34.0 | 12 Sep 24 22:00 UTC | 12 Sep 24 22:00 UTC |
	|         | ha-475401-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-475401 cp ha-475401-m03:/home/docker/cp-test.txt                              | ha-475401 | jenkins | v1.34.0 | 12 Sep 24 22:00 UTC | 12 Sep 24 22:00 UTC |
	|         | ha-475401:/home/docker/cp-test_ha-475401-m03_ha-475401.txt                       |           |         |         |                     |                     |
	| ssh     | ha-475401 ssh -n                                                                 | ha-475401 | jenkins | v1.34.0 | 12 Sep 24 22:00 UTC | 12 Sep 24 22:00 UTC |
	|         | ha-475401-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-475401 ssh -n ha-475401 sudo cat                                              | ha-475401 | jenkins | v1.34.0 | 12 Sep 24 22:00 UTC | 12 Sep 24 22:00 UTC |
	|         | /home/docker/cp-test_ha-475401-m03_ha-475401.txt                                 |           |         |         |                     |                     |
	| cp      | ha-475401 cp ha-475401-m03:/home/docker/cp-test.txt                              | ha-475401 | jenkins | v1.34.0 | 12 Sep 24 22:00 UTC | 12 Sep 24 22:00 UTC |
	|         | ha-475401-m02:/home/docker/cp-test_ha-475401-m03_ha-475401-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-475401 ssh -n                                                                 | ha-475401 | jenkins | v1.34.0 | 12 Sep 24 22:00 UTC | 12 Sep 24 22:00 UTC |
	|         | ha-475401-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-475401 ssh -n ha-475401-m02 sudo cat                                          | ha-475401 | jenkins | v1.34.0 | 12 Sep 24 22:00 UTC | 12 Sep 24 22:00 UTC |
	|         | /home/docker/cp-test_ha-475401-m03_ha-475401-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-475401 cp ha-475401-m03:/home/docker/cp-test.txt                              | ha-475401 | jenkins | v1.34.0 | 12 Sep 24 22:00 UTC | 12 Sep 24 22:00 UTC |
	|         | ha-475401-m04:/home/docker/cp-test_ha-475401-m03_ha-475401-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-475401 ssh -n                                                                 | ha-475401 | jenkins | v1.34.0 | 12 Sep 24 22:00 UTC | 12 Sep 24 22:00 UTC |
	|         | ha-475401-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-475401 ssh -n ha-475401-m04 sudo cat                                          | ha-475401 | jenkins | v1.34.0 | 12 Sep 24 22:00 UTC | 12 Sep 24 22:00 UTC |
	|         | /home/docker/cp-test_ha-475401-m03_ha-475401-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-475401 cp testdata/cp-test.txt                                                | ha-475401 | jenkins | v1.34.0 | 12 Sep 24 22:00 UTC | 12 Sep 24 22:00 UTC |
	|         | ha-475401-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-475401 ssh -n                                                                 | ha-475401 | jenkins | v1.34.0 | 12 Sep 24 22:00 UTC | 12 Sep 24 22:00 UTC |
	|         | ha-475401-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-475401 cp ha-475401-m04:/home/docker/cp-test.txt                              | ha-475401 | jenkins | v1.34.0 | 12 Sep 24 22:00 UTC | 12 Sep 24 22:00 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1750943762/001/cp-test_ha-475401-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-475401 ssh -n                                                                 | ha-475401 | jenkins | v1.34.0 | 12 Sep 24 22:00 UTC | 12 Sep 24 22:00 UTC |
	|         | ha-475401-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-475401 cp ha-475401-m04:/home/docker/cp-test.txt                              | ha-475401 | jenkins | v1.34.0 | 12 Sep 24 22:00 UTC | 12 Sep 24 22:00 UTC |
	|         | ha-475401:/home/docker/cp-test_ha-475401-m04_ha-475401.txt                       |           |         |         |                     |                     |
	| ssh     | ha-475401 ssh -n                                                                 | ha-475401 | jenkins | v1.34.0 | 12 Sep 24 22:00 UTC | 12 Sep 24 22:00 UTC |
	|         | ha-475401-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-475401 ssh -n ha-475401 sudo cat                                              | ha-475401 | jenkins | v1.34.0 | 12 Sep 24 22:00 UTC | 12 Sep 24 22:00 UTC |
	|         | /home/docker/cp-test_ha-475401-m04_ha-475401.txt                                 |           |         |         |                     |                     |
	| cp      | ha-475401 cp ha-475401-m04:/home/docker/cp-test.txt                              | ha-475401 | jenkins | v1.34.0 | 12 Sep 24 22:00 UTC | 12 Sep 24 22:00 UTC |
	|         | ha-475401-m02:/home/docker/cp-test_ha-475401-m04_ha-475401-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-475401 ssh -n                                                                 | ha-475401 | jenkins | v1.34.0 | 12 Sep 24 22:00 UTC | 12 Sep 24 22:00 UTC |
	|         | ha-475401-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-475401 ssh -n ha-475401-m02 sudo cat                                          | ha-475401 | jenkins | v1.34.0 | 12 Sep 24 22:00 UTC | 12 Sep 24 22:00 UTC |
	|         | /home/docker/cp-test_ha-475401-m04_ha-475401-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-475401 cp ha-475401-m04:/home/docker/cp-test.txt                              | ha-475401 | jenkins | v1.34.0 | 12 Sep 24 22:00 UTC | 12 Sep 24 22:00 UTC |
	|         | ha-475401-m03:/home/docker/cp-test_ha-475401-m04_ha-475401-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-475401 ssh -n                                                                 | ha-475401 | jenkins | v1.34.0 | 12 Sep 24 22:00 UTC | 12 Sep 24 22:00 UTC |
	|         | ha-475401-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-475401 ssh -n ha-475401-m03 sudo cat                                          | ha-475401 | jenkins | v1.34.0 | 12 Sep 24 22:00 UTC | 12 Sep 24 22:00 UTC |
	|         | /home/docker/cp-test_ha-475401-m04_ha-475401-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-475401 node stop m02 -v=7                                                     | ha-475401 | jenkins | v1.34.0 | 12 Sep 24 22:00 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/12 21:55:55
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0912 21:55:55.426662   25697 out.go:345] Setting OutFile to fd 1 ...
	I0912 21:55:55.426769   25697 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 21:55:55.426777   25697 out.go:358] Setting ErrFile to fd 2...
	I0912 21:55:55.426782   25697 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 21:55:55.426970   25697 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-5891/.minikube/bin
	I0912 21:55:55.427570   25697 out.go:352] Setting JSON to false
	I0912 21:55:55.428381   25697 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2297,"bootTime":1726175858,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0912 21:55:55.428435   25697 start.go:139] virtualization: kvm guest
	I0912 21:55:55.430362   25697 out.go:177] * [ha-475401] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0912 21:55:55.431727   25697 notify.go:220] Checking for updates...
	I0912 21:55:55.431746   25697 out.go:177]   - MINIKUBE_LOCATION=19616
	I0912 21:55:55.433411   25697 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 21:55:55.434746   25697 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19616-5891/kubeconfig
	I0912 21:55:55.435913   25697 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19616-5891/.minikube
	I0912 21:55:55.437185   25697 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0912 21:55:55.438546   25697 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 21:55:55.439941   25697 driver.go:394] Setting default libvirt URI to qemu:///system
	I0912 21:55:55.474955   25697 out.go:177] * Using the kvm2 driver based on user configuration
	I0912 21:55:55.475932   25697 start.go:297] selected driver: kvm2
	I0912 21:55:55.475950   25697 start.go:901] validating driver "kvm2" against <nil>
	I0912 21:55:55.475961   25697 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 21:55:55.476675   25697 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 21:55:55.476754   25697 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19616-5891/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0912 21:55:55.491945   25697 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0912 21:55:55.491990   25697 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0912 21:55:55.492245   25697 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 21:55:55.492299   25697 cni.go:84] Creating CNI manager for ""
	I0912 21:55:55.492310   25697 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0912 21:55:55.492317   25697 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0912 21:55:55.492370   25697 start.go:340] cluster config:
	{Name:ha-475401 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-475401 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 21:55:55.492458   25697 iso.go:125] acquiring lock: {Name:mk3ec3c4afd4210b7425f6425f55e7f581d9a5a9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 21:55:55.494210   25697 out.go:177] * Starting "ha-475401" primary control-plane node in "ha-475401" cluster
	I0912 21:55:55.495388   25697 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0912 21:55:55.495421   25697 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0912 21:55:55.495430   25697 cache.go:56] Caching tarball of preloaded images
	I0912 21:55:55.495538   25697 preload.go:172] Found /home/jenkins/minikube-integration/19616-5891/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0912 21:55:55.495551   25697 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0912 21:55:55.495841   25697 profile.go:143] Saving config to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/config.json ...
	I0912 21:55:55.495861   25697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/config.json: {Name:mk01f80c972669e9d15ecf56763c72c858d056e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:55:55.496014   25697 start.go:360] acquireMachinesLock for ha-475401: {Name:mkbb0a9e58b1349e86a63b6069c42d4248d92c3b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 21:55:55.496047   25697 start.go:364] duration metric: took 18.665µs to acquireMachinesLock for "ha-475401"
	I0912 21:55:55.496069   25697 start.go:93] Provisioning new machine with config: &{Name:ha-475401 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-475401 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0912 21:55:55.496154   25697 start.go:125] createHost starting for "" (driver="kvm2")
	I0912 21:55:55.497510   25697 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0912 21:55:55.497690   25697 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:55:55.497732   25697 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:55:55.512119   25697 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44679
	I0912 21:55:55.512575   25697 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:55:55.513086   25697 main.go:141] libmachine: Using API Version  1
	I0912 21:55:55.513105   25697 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:55:55.513393   25697 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:55:55.513561   25697 main.go:141] libmachine: (ha-475401) Calling .GetMachineName
	I0912 21:55:55.513730   25697 main.go:141] libmachine: (ha-475401) Calling .DriverName
	I0912 21:55:55.513887   25697 start.go:159] libmachine.API.Create for "ha-475401" (driver="kvm2")
	I0912 21:55:55.513916   25697 client.go:168] LocalClient.Create starting
	I0912 21:55:55.513951   25697 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem
	I0912 21:55:55.513981   25697 main.go:141] libmachine: Decoding PEM data...
	I0912 21:55:55.513996   25697 main.go:141] libmachine: Parsing certificate...
	I0912 21:55:55.514051   25697 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem
	I0912 21:55:55.514068   25697 main.go:141] libmachine: Decoding PEM data...
	I0912 21:55:55.514083   25697 main.go:141] libmachine: Parsing certificate...
	I0912 21:55:55.514102   25697 main.go:141] libmachine: Running pre-create checks...
	I0912 21:55:55.514110   25697 main.go:141] libmachine: (ha-475401) Calling .PreCreateCheck
	I0912 21:55:55.514450   25697 main.go:141] libmachine: (ha-475401) Calling .GetConfigRaw
	I0912 21:55:55.514824   25697 main.go:141] libmachine: Creating machine...
	I0912 21:55:55.514837   25697 main.go:141] libmachine: (ha-475401) Calling .Create
	I0912 21:55:55.514977   25697 main.go:141] libmachine: (ha-475401) Creating KVM machine...
	I0912 21:55:55.516343   25697 main.go:141] libmachine: (ha-475401) DBG | found existing default KVM network
	I0912 21:55:55.517067   25697 main.go:141] libmachine: (ha-475401) DBG | I0912 21:55:55.516928   25720 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f1e0}
	I0912 21:55:55.517112   25697 main.go:141] libmachine: (ha-475401) DBG | created network xml: 
	I0912 21:55:55.517136   25697 main.go:141] libmachine: (ha-475401) DBG | <network>
	I0912 21:55:55.517146   25697 main.go:141] libmachine: (ha-475401) DBG |   <name>mk-ha-475401</name>
	I0912 21:55:55.517152   25697 main.go:141] libmachine: (ha-475401) DBG |   <dns enable='no'/>
	I0912 21:55:55.517160   25697 main.go:141] libmachine: (ha-475401) DBG |   
	I0912 21:55:55.517176   25697 main.go:141] libmachine: (ha-475401) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0912 21:55:55.517187   25697 main.go:141] libmachine: (ha-475401) DBG |     <dhcp>
	I0912 21:55:55.517195   25697 main.go:141] libmachine: (ha-475401) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0912 21:55:55.517206   25697 main.go:141] libmachine: (ha-475401) DBG |     </dhcp>
	I0912 21:55:55.517213   25697 main.go:141] libmachine: (ha-475401) DBG |   </ip>
	I0912 21:55:55.517223   25697 main.go:141] libmachine: (ha-475401) DBG |   
	I0912 21:55:55.517231   25697 main.go:141] libmachine: (ha-475401) DBG | </network>
	I0912 21:55:55.517244   25697 main.go:141] libmachine: (ha-475401) DBG | 
	I0912 21:55:55.522134   25697 main.go:141] libmachine: (ha-475401) DBG | trying to create private KVM network mk-ha-475401 192.168.39.0/24...
	I0912 21:55:55.589414   25697 main.go:141] libmachine: (ha-475401) Setting up store path in /home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401 ...
	I0912 21:55:55.589450   25697 main.go:141] libmachine: (ha-475401) Building disk image from file:///home/jenkins/minikube-integration/19616-5891/.minikube/cache/iso/amd64/minikube-v1.34.0-1726156389-19616-amd64.iso
	I0912 21:55:55.589460   25697 main.go:141] libmachine: (ha-475401) DBG | private KVM network mk-ha-475401 192.168.39.0/24 created
	I0912 21:55:55.589474   25697 main.go:141] libmachine: (ha-475401) DBG | I0912 21:55:55.589377   25720 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19616-5891/.minikube
	I0912 21:55:55.589532   25697 main.go:141] libmachine: (ha-475401) Downloading /home/jenkins/minikube-integration/19616-5891/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19616-5891/.minikube/cache/iso/amd64/minikube-v1.34.0-1726156389-19616-amd64.iso...
	I0912 21:55:55.831888   25697 main.go:141] libmachine: (ha-475401) DBG | I0912 21:55:55.831762   25720 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401/id_rsa...
	I0912 21:55:55.895303   25697 main.go:141] libmachine: (ha-475401) DBG | I0912 21:55:55.895144   25720 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401/ha-475401.rawdisk...
	I0912 21:55:55.895341   25697 main.go:141] libmachine: (ha-475401) DBG | Writing magic tar header
	I0912 21:55:55.895355   25697 main.go:141] libmachine: (ha-475401) DBG | Writing SSH key tar header
	I0912 21:55:55.895380   25697 main.go:141] libmachine: (ha-475401) DBG | I0912 21:55:55.895305   25720 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401 ...
	I0912 21:55:55.895481   25697 main.go:141] libmachine: (ha-475401) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401
	I0912 21:55:55.895501   25697 main.go:141] libmachine: (ha-475401) Setting executable bit set on /home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401 (perms=drwx------)
	I0912 21:55:55.895511   25697 main.go:141] libmachine: (ha-475401) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19616-5891/.minikube/machines
	I0912 21:55:55.895525   25697 main.go:141] libmachine: (ha-475401) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19616-5891/.minikube
	I0912 21:55:55.895535   25697 main.go:141] libmachine: (ha-475401) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19616-5891
	I0912 21:55:55.895546   25697 main.go:141] libmachine: (ha-475401) Setting executable bit set on /home/jenkins/minikube-integration/19616-5891/.minikube/machines (perms=drwxr-xr-x)
	I0912 21:55:55.895565   25697 main.go:141] libmachine: (ha-475401) Setting executable bit set on /home/jenkins/minikube-integration/19616-5891/.minikube (perms=drwxr-xr-x)
	I0912 21:55:55.895572   25697 main.go:141] libmachine: (ha-475401) Setting executable bit set on /home/jenkins/minikube-integration/19616-5891 (perms=drwxrwxr-x)
	I0912 21:55:55.895580   25697 main.go:141] libmachine: (ha-475401) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0912 21:55:55.895603   25697 main.go:141] libmachine: (ha-475401) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0912 21:55:55.895612   25697 main.go:141] libmachine: (ha-475401) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0912 21:55:55.895623   25697 main.go:141] libmachine: (ha-475401) DBG | Checking permissions on dir: /home/jenkins
	I0912 21:55:55.895632   25697 main.go:141] libmachine: (ha-475401) DBG | Checking permissions on dir: /home
	I0912 21:55:55.895643   25697 main.go:141] libmachine: (ha-475401) DBG | Skipping /home - not owner
	I0912 21:55:55.895658   25697 main.go:141] libmachine: (ha-475401) Creating domain...
	I0912 21:55:55.896804   25697 main.go:141] libmachine: (ha-475401) define libvirt domain using xml: 
	I0912 21:55:55.896825   25697 main.go:141] libmachine: (ha-475401) <domain type='kvm'>
	I0912 21:55:55.896831   25697 main.go:141] libmachine: (ha-475401)   <name>ha-475401</name>
	I0912 21:55:55.896836   25697 main.go:141] libmachine: (ha-475401)   <memory unit='MiB'>2200</memory>
	I0912 21:55:55.896841   25697 main.go:141] libmachine: (ha-475401)   <vcpu>2</vcpu>
	I0912 21:55:55.896845   25697 main.go:141] libmachine: (ha-475401)   <features>
	I0912 21:55:55.896850   25697 main.go:141] libmachine: (ha-475401)     <acpi/>
	I0912 21:55:55.896858   25697 main.go:141] libmachine: (ha-475401)     <apic/>
	I0912 21:55:55.896866   25697 main.go:141] libmachine: (ha-475401)     <pae/>
	I0912 21:55:55.896880   25697 main.go:141] libmachine: (ha-475401)     
	I0912 21:55:55.896892   25697 main.go:141] libmachine: (ha-475401)   </features>
	I0912 21:55:55.896898   25697 main.go:141] libmachine: (ha-475401)   <cpu mode='host-passthrough'>
	I0912 21:55:55.896904   25697 main.go:141] libmachine: (ha-475401)   
	I0912 21:55:55.896908   25697 main.go:141] libmachine: (ha-475401)   </cpu>
	I0912 21:55:55.896916   25697 main.go:141] libmachine: (ha-475401)   <os>
	I0912 21:55:55.896920   25697 main.go:141] libmachine: (ha-475401)     <type>hvm</type>
	I0912 21:55:55.896925   25697 main.go:141] libmachine: (ha-475401)     <boot dev='cdrom'/>
	I0912 21:55:55.896932   25697 main.go:141] libmachine: (ha-475401)     <boot dev='hd'/>
	I0912 21:55:55.896937   25697 main.go:141] libmachine: (ha-475401)     <bootmenu enable='no'/>
	I0912 21:55:55.896941   25697 main.go:141] libmachine: (ha-475401)   </os>
	I0912 21:55:55.896947   25697 main.go:141] libmachine: (ha-475401)   <devices>
	I0912 21:55:55.896955   25697 main.go:141] libmachine: (ha-475401)     <disk type='file' device='cdrom'>
	I0912 21:55:55.896972   25697 main.go:141] libmachine: (ha-475401)       <source file='/home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401/boot2docker.iso'/>
	I0912 21:55:55.896983   25697 main.go:141] libmachine: (ha-475401)       <target dev='hdc' bus='scsi'/>
	I0912 21:55:55.896992   25697 main.go:141] libmachine: (ha-475401)       <readonly/>
	I0912 21:55:55.897002   25697 main.go:141] libmachine: (ha-475401)     </disk>
	I0912 21:55:55.897011   25697 main.go:141] libmachine: (ha-475401)     <disk type='file' device='disk'>
	I0912 21:55:55.897027   25697 main.go:141] libmachine: (ha-475401)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0912 21:55:55.897037   25697 main.go:141] libmachine: (ha-475401)       <source file='/home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401/ha-475401.rawdisk'/>
	I0912 21:55:55.897045   25697 main.go:141] libmachine: (ha-475401)       <target dev='hda' bus='virtio'/>
	I0912 21:55:55.897050   25697 main.go:141] libmachine: (ha-475401)     </disk>
	I0912 21:55:55.897067   25697 main.go:141] libmachine: (ha-475401)     <interface type='network'>
	I0912 21:55:55.897081   25697 main.go:141] libmachine: (ha-475401)       <source network='mk-ha-475401'/>
	I0912 21:55:55.897092   25697 main.go:141] libmachine: (ha-475401)       <model type='virtio'/>
	I0912 21:55:55.897115   25697 main.go:141] libmachine: (ha-475401)     </interface>
	I0912 21:55:55.897133   25697 main.go:141] libmachine: (ha-475401)     <interface type='network'>
	I0912 21:55:55.897140   25697 main.go:141] libmachine: (ha-475401)       <source network='default'/>
	I0912 21:55:55.897151   25697 main.go:141] libmachine: (ha-475401)       <model type='virtio'/>
	I0912 21:55:55.897157   25697 main.go:141] libmachine: (ha-475401)     </interface>
	I0912 21:55:55.897165   25697 main.go:141] libmachine: (ha-475401)     <serial type='pty'>
	I0912 21:55:55.897171   25697 main.go:141] libmachine: (ha-475401)       <target port='0'/>
	I0912 21:55:55.897179   25697 main.go:141] libmachine: (ha-475401)     </serial>
	I0912 21:55:55.897184   25697 main.go:141] libmachine: (ha-475401)     <console type='pty'>
	I0912 21:55:55.897195   25697 main.go:141] libmachine: (ha-475401)       <target type='serial' port='0'/>
	I0912 21:55:55.897206   25697 main.go:141] libmachine: (ha-475401)     </console>
	I0912 21:55:55.897213   25697 main.go:141] libmachine: (ha-475401)     <rng model='virtio'>
	I0912 21:55:55.897219   25697 main.go:141] libmachine: (ha-475401)       <backend model='random'>/dev/random</backend>
	I0912 21:55:55.897226   25697 main.go:141] libmachine: (ha-475401)     </rng>
	I0912 21:55:55.897231   25697 main.go:141] libmachine: (ha-475401)     
	I0912 21:55:55.897237   25697 main.go:141] libmachine: (ha-475401)     
	I0912 21:55:55.897243   25697 main.go:141] libmachine: (ha-475401)   </devices>
	I0912 21:55:55.897249   25697 main.go:141] libmachine: (ha-475401) </domain>
	I0912 21:55:55.897256   25697 main.go:141] libmachine: (ha-475401) 
	I0912 21:55:55.901827   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:f0:76:08 in network default
	I0912 21:55:55.902319   25697 main.go:141] libmachine: (ha-475401) Ensuring networks are active...
	I0912 21:55:55.902338   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:55:55.902959   25697 main.go:141] libmachine: (ha-475401) Ensuring network default is active
	I0912 21:55:55.903259   25697 main.go:141] libmachine: (ha-475401) Ensuring network mk-ha-475401 is active
	I0912 21:55:55.903720   25697 main.go:141] libmachine: (ha-475401) Getting domain xml...
	I0912 21:55:55.904332   25697 main.go:141] libmachine: (ha-475401) Creating domain...
	I0912 21:55:57.113524   25697 main.go:141] libmachine: (ha-475401) Waiting to get IP...
	I0912 21:55:57.114495   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:55:57.114873   25697 main.go:141] libmachine: (ha-475401) DBG | unable to find current IP address of domain ha-475401 in network mk-ha-475401
	I0912 21:55:57.114899   25697 main.go:141] libmachine: (ha-475401) DBG | I0912 21:55:57.114856   25720 retry.go:31] will retry after 262.380002ms: waiting for machine to come up
	I0912 21:55:57.379331   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:55:57.379828   25697 main.go:141] libmachine: (ha-475401) DBG | unable to find current IP address of domain ha-475401 in network mk-ha-475401
	I0912 21:55:57.379851   25697 main.go:141] libmachine: (ha-475401) DBG | I0912 21:55:57.379794   25720 retry.go:31] will retry after 279.039082ms: waiting for machine to come up
	I0912 21:55:57.660446   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:55:57.660904   25697 main.go:141] libmachine: (ha-475401) DBG | unable to find current IP address of domain ha-475401 in network mk-ha-475401
	I0912 21:55:57.660932   25697 main.go:141] libmachine: (ha-475401) DBG | I0912 21:55:57.660865   25720 retry.go:31] will retry after 433.166056ms: waiting for machine to come up
	I0912 21:55:58.095500   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:55:58.096032   25697 main.go:141] libmachine: (ha-475401) DBG | unable to find current IP address of domain ha-475401 in network mk-ha-475401
	I0912 21:55:58.096053   25697 main.go:141] libmachine: (ha-475401) DBG | I0912 21:55:58.095974   25720 retry.go:31] will retry after 436.676456ms: waiting for machine to come up
	I0912 21:55:58.534685   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:55:58.535180   25697 main.go:141] libmachine: (ha-475401) DBG | unable to find current IP address of domain ha-475401 in network mk-ha-475401
	I0912 21:55:58.535217   25697 main.go:141] libmachine: (ha-475401) DBG | I0912 21:55:58.535154   25720 retry.go:31] will retry after 488.410112ms: waiting for machine to come up
	I0912 21:55:59.024853   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:55:59.025250   25697 main.go:141] libmachine: (ha-475401) DBG | unable to find current IP address of domain ha-475401 in network mk-ha-475401
	I0912 21:55:59.025278   25697 main.go:141] libmachine: (ha-475401) DBG | I0912 21:55:59.025201   25720 retry.go:31] will retry after 730.821904ms: waiting for machine to come up
	I0912 21:55:59.757171   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:55:59.757596   25697 main.go:141] libmachine: (ha-475401) DBG | unable to find current IP address of domain ha-475401 in network mk-ha-475401
	I0912 21:55:59.757650   25697 main.go:141] libmachine: (ha-475401) DBG | I0912 21:55:59.757550   25720 retry.go:31] will retry after 816.928099ms: waiting for machine to come up
	I0912 21:56:00.576021   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:56:00.576382   25697 main.go:141] libmachine: (ha-475401) DBG | unable to find current IP address of domain ha-475401 in network mk-ha-475401
	I0912 21:56:00.576407   25697 main.go:141] libmachine: (ha-475401) DBG | I0912 21:56:00.576341   25720 retry.go:31] will retry after 1.205724317s: waiting for machine to come up
	I0912 21:56:01.783914   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:56:01.784370   25697 main.go:141] libmachine: (ha-475401) DBG | unable to find current IP address of domain ha-475401 in network mk-ha-475401
	I0912 21:56:01.784396   25697 main.go:141] libmachine: (ha-475401) DBG | I0912 21:56:01.784312   25720 retry.go:31] will retry after 1.666135319s: waiting for machine to come up
	I0912 21:56:03.451854   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:56:03.452343   25697 main.go:141] libmachine: (ha-475401) DBG | unable to find current IP address of domain ha-475401 in network mk-ha-475401
	I0912 21:56:03.452370   25697 main.go:141] libmachine: (ha-475401) DBG | I0912 21:56:03.452304   25720 retry.go:31] will retry after 1.710937917s: waiting for machine to come up
	I0912 21:56:05.165203   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:56:05.165667   25697 main.go:141] libmachine: (ha-475401) DBG | unable to find current IP address of domain ha-475401 in network mk-ha-475401
	I0912 21:56:05.165694   25697 main.go:141] libmachine: (ha-475401) DBG | I0912 21:56:05.165603   25720 retry.go:31] will retry after 2.153375797s: waiting for machine to come up
	I0912 21:56:07.321799   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:56:07.322124   25697 main.go:141] libmachine: (ha-475401) DBG | unable to find current IP address of domain ha-475401 in network mk-ha-475401
	I0912 21:56:07.322164   25697 main.go:141] libmachine: (ha-475401) DBG | I0912 21:56:07.322099   25720 retry.go:31] will retry after 2.592804257s: waiting for machine to come up
	I0912 21:56:09.916015   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:56:09.916387   25697 main.go:141] libmachine: (ha-475401) DBG | unable to find current IP address of domain ha-475401 in network mk-ha-475401
	I0912 21:56:09.916418   25697 main.go:141] libmachine: (ha-475401) DBG | I0912 21:56:09.916343   25720 retry.go:31] will retry after 3.777795698s: waiting for machine to come up
	I0912 21:56:13.695241   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:56:13.695702   25697 main.go:141] libmachine: (ha-475401) DBG | unable to find current IP address of domain ha-475401 in network mk-ha-475401
	I0912 21:56:13.695725   25697 main.go:141] libmachine: (ha-475401) DBG | I0912 21:56:13.695621   25720 retry.go:31] will retry after 3.991415039s: waiting for machine to come up
	I0912 21:56:17.689719   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:56:17.690320   25697 main.go:141] libmachine: (ha-475401) Found IP for machine: 192.168.39.203
	I0912 21:56:17.690341   25697 main.go:141] libmachine: (ha-475401) Reserving static IP address...
	I0912 21:56:17.690355   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has current primary IP address 192.168.39.203 and MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:56:17.690646   25697 main.go:141] libmachine: (ha-475401) DBG | unable to find host DHCP lease matching {name: "ha-475401", mac: "52:54:00:b0:0e:dd", ip: "192.168.39.203"} in network mk-ha-475401
	I0912 21:56:17.761650   25697 main.go:141] libmachine: (ha-475401) DBG | Getting to WaitForSSH function...
	I0912 21:56:17.761681   25697 main.go:141] libmachine: (ha-475401) Reserved static IP address: 192.168.39.203
	I0912 21:56:17.761695   25697 main.go:141] libmachine: (ha-475401) Waiting for SSH to be available...
	I0912 21:56:17.764659   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:56:17.765119   25697 main.go:141] libmachine: (ha-475401) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:0e:dd", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:09 +0000 UTC Type:0 Mac:52:54:00:b0:0e:dd Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:minikube Clientid:01:52:54:00:b0:0e:dd}
	I0912 21:56:17.765151   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined IP address 192.168.39.203 and MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:56:17.765242   25697 main.go:141] libmachine: (ha-475401) DBG | Using SSH client type: external
	I0912 21:56:17.765270   25697 main.go:141] libmachine: (ha-475401) DBG | Using SSH private key: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401/id_rsa (-rw-------)
	I0912 21:56:17.765295   25697 main.go:141] libmachine: (ha-475401) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.203 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0912 21:56:17.765307   25697 main.go:141] libmachine: (ha-475401) DBG | About to run SSH command:
	I0912 21:56:17.765319   25697 main.go:141] libmachine: (ha-475401) DBG | exit 0
	I0912 21:56:17.889898   25697 main.go:141] libmachine: (ha-475401) DBG | SSH cmd err, output: <nil>: 
	I0912 21:56:17.890164   25697 main.go:141] libmachine: (ha-475401) KVM machine creation complete!
	I0912 21:56:17.890622   25697 main.go:141] libmachine: (ha-475401) Calling .GetConfigRaw
	I0912 21:56:17.891193   25697 main.go:141] libmachine: (ha-475401) Calling .DriverName
	I0912 21:56:17.891397   25697 main.go:141] libmachine: (ha-475401) Calling .DriverName
	I0912 21:56:17.891566   25697 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0912 21:56:17.891581   25697 main.go:141] libmachine: (ha-475401) Calling .GetState
	I0912 21:56:17.893036   25697 main.go:141] libmachine: Detecting operating system of created instance...
	I0912 21:56:17.893063   25697 main.go:141] libmachine: Waiting for SSH to be available...
	I0912 21:56:17.893070   25697 main.go:141] libmachine: Getting to WaitForSSH function...
	I0912 21:56:17.893080   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHHostname
	I0912 21:56:17.895504   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:56:17.895860   25697 main.go:141] libmachine: (ha-475401) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:0e:dd", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:09 +0000 UTC Type:0 Mac:52:54:00:b0:0e:dd Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-475401 Clientid:01:52:54:00:b0:0e:dd}
	I0912 21:56:17.895890   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined IP address 192.168.39.203 and MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:56:17.896007   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHPort
	I0912 21:56:17.896183   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHKeyPath
	I0912 21:56:17.896339   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHKeyPath
	I0912 21:56:17.896572   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHUsername
	I0912 21:56:17.896748   25697 main.go:141] libmachine: Using SSH client type: native
	I0912 21:56:17.896959   25697 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I0912 21:56:17.896973   25697 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0912 21:56:18.004899   25697 main.go:141] libmachine: SSH cmd err, output: <nil>: 
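The `exit 0` probes above are how libmachine decides the guest is reachable: it keeps retrying a no-op command over SSH until one succeeds. A minimal sketch of that pattern, not minikube's actual implementation, shelling out to the system `ssh` with the same options logged earlier (key path, user and IP taken from the log):

```go
// Sketch: poll until a no-op command succeeds over SSH.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForSSH(user, ip, keyPath string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("ssh",
			"-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-i", keyPath,
			fmt.Sprintf("%s@%s", user, ip),
			"exit 0")
		if cmd.Run() == nil {
			return nil // guest answered; SSH is available
		}
		time.Sleep(3 * time.Second) // back off and retry, as in the retry.go lines above
	}
	return fmt.Errorf("timed out waiting for SSH on %s", ip)
}

func main() {
	err := waitForSSH("docker", "192.168.39.203",
		"/home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401/id_rsa",
		2*time.Minute)
	if err != nil {
		fmt.Println(err)
	}
}
```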
	I0912 21:56:18.004923   25697 main.go:141] libmachine: Detecting the provisioner...
	I0912 21:56:18.004931   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHHostname
	I0912 21:56:18.008130   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:56:18.008539   25697 main.go:141] libmachine: (ha-475401) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:0e:dd", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:09 +0000 UTC Type:0 Mac:52:54:00:b0:0e:dd Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-475401 Clientid:01:52:54:00:b0:0e:dd}
	I0912 21:56:18.008568   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined IP address 192.168.39.203 and MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:56:18.008798   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHPort
	I0912 21:56:18.009029   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHKeyPath
	I0912 21:56:18.009242   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHKeyPath
	I0912 21:56:18.009355   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHUsername
	I0912 21:56:18.009569   25697 main.go:141] libmachine: Using SSH client type: native
	I0912 21:56:18.009861   25697 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I0912 21:56:18.009880   25697 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0912 21:56:18.118097   25697 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0912 21:56:18.118211   25697 main.go:141] libmachine: found compatible host: buildroot
	I0912 21:56:18.118226   25697 main.go:141] libmachine: Provisioning with buildroot...
	I0912 21:56:18.118236   25697 main.go:141] libmachine: (ha-475401) Calling .GetMachineName
	I0912 21:56:18.118521   25697 buildroot.go:166] provisioning hostname "ha-475401"
	I0912 21:56:18.118548   25697 main.go:141] libmachine: (ha-475401) Calling .GetMachineName
	I0912 21:56:18.118769   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHHostname
	I0912 21:56:18.121122   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:56:18.121476   25697 main.go:141] libmachine: (ha-475401) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:0e:dd", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:09 +0000 UTC Type:0 Mac:52:54:00:b0:0e:dd Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-475401 Clientid:01:52:54:00:b0:0e:dd}
	I0912 21:56:18.121505   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined IP address 192.168.39.203 and MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:56:18.121660   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHPort
	I0912 21:56:18.121818   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHKeyPath
	I0912 21:56:18.121975   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHKeyPath
	I0912 21:56:18.122088   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHUsername
	I0912 21:56:18.122256   25697 main.go:141] libmachine: Using SSH client type: native
	I0912 21:56:18.122463   25697 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I0912 21:56:18.122476   25697 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-475401 && echo "ha-475401" | sudo tee /etc/hostname
	I0912 21:56:18.248698   25697 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-475401
	
	I0912 21:56:18.248725   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHHostname
	I0912 21:56:18.251454   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:56:18.251765   25697 main.go:141] libmachine: (ha-475401) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:0e:dd", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:09 +0000 UTC Type:0 Mac:52:54:00:b0:0e:dd Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-475401 Clientid:01:52:54:00:b0:0e:dd}
	I0912 21:56:18.251786   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined IP address 192.168.39.203 and MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:56:18.251973   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHPort
	I0912 21:56:18.252154   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHKeyPath
	I0912 21:56:18.252329   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHKeyPath
	I0912 21:56:18.252497   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHUsername
	I0912 21:56:18.252644   25697 main.go:141] libmachine: Using SSH client type: native
	I0912 21:56:18.252816   25697 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I0912 21:56:18.252832   25697 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-475401' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-475401/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-475401' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0912 21:56:18.369721   25697 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0912 21:56:18.369756   25697 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19616-5891/.minikube CaCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19616-5891/.minikube}
	I0912 21:56:18.369794   25697 buildroot.go:174] setting up certificates
	I0912 21:56:18.369805   25697 provision.go:84] configureAuth start
	I0912 21:56:18.369816   25697 main.go:141] libmachine: (ha-475401) Calling .GetMachineName
	I0912 21:56:18.370109   25697 main.go:141] libmachine: (ha-475401) Calling .GetIP
	I0912 21:56:18.372804   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:56:18.373272   25697 main.go:141] libmachine: (ha-475401) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:0e:dd", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:09 +0000 UTC Type:0 Mac:52:54:00:b0:0e:dd Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-475401 Clientid:01:52:54:00:b0:0e:dd}
	I0912 21:56:18.373303   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined IP address 192.168.39.203 and MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:56:18.373416   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHHostname
	I0912 21:56:18.377282   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:56:18.377764   25697 main.go:141] libmachine: (ha-475401) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:0e:dd", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:09 +0000 UTC Type:0 Mac:52:54:00:b0:0e:dd Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-475401 Clientid:01:52:54:00:b0:0e:dd}
	I0912 21:56:18.377795   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined IP address 192.168.39.203 and MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:56:18.378056   25697 provision.go:143] copyHostCerts
	I0912 21:56:18.378090   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem
	I0912 21:56:18.378121   25697 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem, removing ...
	I0912 21:56:18.378134   25697 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem
	I0912 21:56:18.378195   25697 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem (1082 bytes)
	I0912 21:56:18.378287   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem
	I0912 21:56:18.378307   25697 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem, removing ...
	I0912 21:56:18.378311   25697 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem
	I0912 21:56:18.378335   25697 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem (1123 bytes)
	I0912 21:56:18.378390   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem
	I0912 21:56:18.378408   25697 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem, removing ...
	I0912 21:56:18.378412   25697 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem
	I0912 21:56:18.378433   25697 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem (1679 bytes)
	I0912 21:56:18.378491   25697 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem org=jenkins.ha-475401 san=[127.0.0.1 192.168.39.203 ha-475401 localhost minikube]
	I0912 21:56:18.503588   25697 provision.go:177] copyRemoteCerts
	I0912 21:56:18.503653   25697 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0912 21:56:18.503674   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHHostname
	I0912 21:56:18.506606   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:56:18.506887   25697 main.go:141] libmachine: (ha-475401) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:0e:dd", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:09 +0000 UTC Type:0 Mac:52:54:00:b0:0e:dd Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-475401 Clientid:01:52:54:00:b0:0e:dd}
	I0912 21:56:18.506908   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined IP address 192.168.39.203 and MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:56:18.507126   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHPort
	I0912 21:56:18.507375   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHKeyPath
	I0912 21:56:18.507562   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHUsername
	I0912 21:56:18.507700   25697 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401/id_rsa Username:docker}
	I0912 21:56:18.591675   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0912 21:56:18.591741   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0912 21:56:18.614225   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0912 21:56:18.614329   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0912 21:56:18.636150   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0912 21:56:18.636239   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0912 21:56:18.658330   25697 provision.go:87] duration metric: took 288.489963ms to configureAuth
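configureAuth above regenerates the docker-machine style server certificate with the SAN list shown at 21:56:18.378491 (127.0.0.1, 192.168.39.203, ha-475401, localhost, minikube). A standalone sketch of issuing such a SAN-bearing certificate with Go's crypto/x509; illustrative only, and the CA here is created in memory rather than loaded from ~/.minikube/certs as minikube does:

```go
// Sketch: issue a CA-signed server certificate carrying the SANs from the log.
// Errors are elided for brevity.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// In-memory CA (assumption: minikube instead reuses ca.pem/ca-key.pem).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SANs reported above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-475401"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-475401", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.203")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
```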
	I0912 21:56:18.658358   25697 buildroot.go:189] setting minikube options for container-runtime
	I0912 21:56:18.658525   25697 config.go:182] Loaded profile config "ha-475401": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 21:56:18.658622   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHHostname
	I0912 21:56:18.661238   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:56:18.661570   25697 main.go:141] libmachine: (ha-475401) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:0e:dd", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:09 +0000 UTC Type:0 Mac:52:54:00:b0:0e:dd Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-475401 Clientid:01:52:54:00:b0:0e:dd}
	I0912 21:56:18.661600   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined IP address 192.168.39.203 and MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:56:18.661814   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHPort
	I0912 21:56:18.661997   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHKeyPath
	I0912 21:56:18.662157   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHKeyPath
	I0912 21:56:18.662318   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHUsername
	I0912 21:56:18.662477   25697 main.go:141] libmachine: Using SSH client type: native
	I0912 21:56:18.662692   25697 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I0912 21:56:18.662714   25697 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0912 21:56:18.884522   25697 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0912 21:56:18.884550   25697 main.go:141] libmachine: Checking connection to Docker...
	I0912 21:56:18.884561   25697 main.go:141] libmachine: (ha-475401) Calling .GetURL
	I0912 21:56:18.886145   25697 main.go:141] libmachine: (ha-475401) DBG | Using libvirt version 6000000
	I0912 21:56:18.888482   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:56:18.888916   25697 main.go:141] libmachine: (ha-475401) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:0e:dd", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:09 +0000 UTC Type:0 Mac:52:54:00:b0:0e:dd Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-475401 Clientid:01:52:54:00:b0:0e:dd}
	I0912 21:56:18.888943   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined IP address 192.168.39.203 and MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:56:18.889114   25697 main.go:141] libmachine: Docker is up and running!
	I0912 21:56:18.889135   25697 main.go:141] libmachine: Reticulating splines...
	I0912 21:56:18.889152   25697 client.go:171] duration metric: took 23.375217506s to LocalClient.Create
	I0912 21:56:18.889184   25697 start.go:167] duration metric: took 23.375305381s to libmachine.API.Create "ha-475401"
	I0912 21:56:18.889198   25697 start.go:293] postStartSetup for "ha-475401" (driver="kvm2")
	I0912 21:56:18.889212   25697 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0912 21:56:18.889234   25697 main.go:141] libmachine: (ha-475401) Calling .DriverName
	I0912 21:56:18.889501   25697 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0912 21:56:18.889524   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHHostname
	I0912 21:56:18.891848   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:56:18.892303   25697 main.go:141] libmachine: (ha-475401) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:0e:dd", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:09 +0000 UTC Type:0 Mac:52:54:00:b0:0e:dd Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-475401 Clientid:01:52:54:00:b0:0e:dd}
	I0912 21:56:18.892334   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined IP address 192.168.39.203 and MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:56:18.892459   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHPort
	I0912 21:56:18.892654   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHKeyPath
	I0912 21:56:18.892828   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHUsername
	I0912 21:56:18.893112   25697 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401/id_rsa Username:docker}
	I0912 21:56:18.979832   25697 ssh_runner.go:195] Run: cat /etc/os-release
	I0912 21:56:18.983960   25697 info.go:137] Remote host: Buildroot 2023.02.9
	I0912 21:56:18.983990   25697 filesync.go:126] Scanning /home/jenkins/minikube-integration/19616-5891/.minikube/addons for local assets ...
	I0912 21:56:18.984053   25697 filesync.go:126] Scanning /home/jenkins/minikube-integration/19616-5891/.minikube/files for local assets ...
	I0912 21:56:18.984147   25697 filesync.go:149] local asset: /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem -> 130832.pem in /etc/ssl/certs
	I0912 21:56:18.984162   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem -> /etc/ssl/certs/130832.pem
	I0912 21:56:18.984280   25697 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0912 21:56:18.993245   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem --> /etc/ssl/certs/130832.pem (1708 bytes)
	I0912 21:56:19.016592   25697 start.go:296] duration metric: took 127.381572ms for postStartSetup
	I0912 21:56:19.016651   25697 main.go:141] libmachine: (ha-475401) Calling .GetConfigRaw
	I0912 21:56:19.017231   25697 main.go:141] libmachine: (ha-475401) Calling .GetIP
	I0912 21:56:19.020298   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:56:19.020704   25697 main.go:141] libmachine: (ha-475401) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:0e:dd", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:09 +0000 UTC Type:0 Mac:52:54:00:b0:0e:dd Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-475401 Clientid:01:52:54:00:b0:0e:dd}
	I0912 21:56:19.020728   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined IP address 192.168.39.203 and MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:56:19.020995   25697 profile.go:143] Saving config to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/config.json ...
	I0912 21:56:19.021262   25697 start.go:128] duration metric: took 23.525094952s to createHost
	I0912 21:56:19.021294   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHHostname
	I0912 21:56:19.023952   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:56:19.024332   25697 main.go:141] libmachine: (ha-475401) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:0e:dd", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:09 +0000 UTC Type:0 Mac:52:54:00:b0:0e:dd Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-475401 Clientid:01:52:54:00:b0:0e:dd}
	I0912 21:56:19.024368   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined IP address 192.168.39.203 and MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:56:19.024520   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHPort
	I0912 21:56:19.024766   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHKeyPath
	I0912 21:56:19.024953   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHKeyPath
	I0912 21:56:19.025124   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHUsername
	I0912 21:56:19.025289   25697 main.go:141] libmachine: Using SSH client type: native
	I0912 21:56:19.025497   25697 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I0912 21:56:19.025523   25697 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0912 21:56:19.138475   25697 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726178179.117074189
	
	I0912 21:56:19.138506   25697 fix.go:216] guest clock: 1726178179.117074189
	I0912 21:56:19.138518   25697 fix.go:229] Guest: 2024-09-12 21:56:19.117074189 +0000 UTC Remote: 2024-09-12 21:56:19.021282044 +0000 UTC m=+23.628297545 (delta=95.792145ms)
	I0912 21:56:19.138584   25697 fix.go:200] guest clock delta is within tolerance: 95.792145ms
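The fix.go lines above compare the guest's `date +%s.%N` output against the host clock and only resynchronise when the delta exceeds a tolerance (here the skew was ~96ms, so no action was taken). A small sketch of that comparison; the tolerance value below is an assumption for illustration, not minikube's actual threshold:

```go
// Sketch: parse the guest's `date +%s.%N` output and check clock skew
// against the local (host) clock.
package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

func guestClockDelta(guestOut string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
	if err != nil {
		return 0, fmt.Errorf("parsing guest clock %q: %w", guestOut, err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(host), nil
}

func main() {
	delta, err := guestClockDelta("1726178179.117074189", time.Now())
	if err != nil {
		panic(err)
	}
	const tolerance = 2 * time.Second // assumed tolerance for illustration
	if math.Abs(float64(delta)) <= float64(tolerance) {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	}
}
```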
	I0912 21:56:19.138591   25697 start.go:83] releasing machines lock for "ha-475401", held for 23.642533008s
	I0912 21:56:19.138626   25697 main.go:141] libmachine: (ha-475401) Calling .DriverName
	I0912 21:56:19.138872   25697 main.go:141] libmachine: (ha-475401) Calling .GetIP
	I0912 21:56:19.141330   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:56:19.141745   25697 main.go:141] libmachine: (ha-475401) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:0e:dd", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:09 +0000 UTC Type:0 Mac:52:54:00:b0:0e:dd Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-475401 Clientid:01:52:54:00:b0:0e:dd}
	I0912 21:56:19.141768   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined IP address 192.168.39.203 and MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:56:19.141965   25697 main.go:141] libmachine: (ha-475401) Calling .DriverName
	I0912 21:56:19.142451   25697 main.go:141] libmachine: (ha-475401) Calling .DriverName
	I0912 21:56:19.142627   25697 main.go:141] libmachine: (ha-475401) Calling .DriverName
	I0912 21:56:19.142760   25697 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0912 21:56:19.142801   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHHostname
	I0912 21:56:19.142865   25697 ssh_runner.go:195] Run: cat /version.json
	I0912 21:56:19.142887   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHHostname
	I0912 21:56:19.145672   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:56:19.145757   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:56:19.146060   25697 main.go:141] libmachine: (ha-475401) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:0e:dd", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:09 +0000 UTC Type:0 Mac:52:54:00:b0:0e:dd Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-475401 Clientid:01:52:54:00:b0:0e:dd}
	I0912 21:56:19.146095   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined IP address 192.168.39.203 and MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:56:19.146125   25697 main.go:141] libmachine: (ha-475401) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:0e:dd", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:09 +0000 UTC Type:0 Mac:52:54:00:b0:0e:dd Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-475401 Clientid:01:52:54:00:b0:0e:dd}
	I0912 21:56:19.146140   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined IP address 192.168.39.203 and MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:56:19.146239   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHPort
	I0912 21:56:19.146334   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHPort
	I0912 21:56:19.146421   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHKeyPath
	I0912 21:56:19.146482   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHKeyPath
	I0912 21:56:19.146546   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHUsername
	I0912 21:56:19.146618   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHUsername
	I0912 21:56:19.146702   25697 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401/id_rsa Username:docker}
	I0912 21:56:19.146771   25697 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401/id_rsa Username:docker}
	I0912 21:56:19.255160   25697 ssh_runner.go:195] Run: systemctl --version
	I0912 21:56:19.261110   25697 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0912 21:56:19.417919   25697 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0912 21:56:19.423883   25697 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0912 21:56:19.423963   25697 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0912 21:56:19.439312   25697 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0912 21:56:19.439340   25697 start.go:495] detecting cgroup driver to use...
	I0912 21:56:19.439413   25697 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0912 21:56:19.455027   25697 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0912 21:56:19.468362   25697 docker.go:217] disabling cri-docker service (if available) ...
	I0912 21:56:19.468439   25697 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0912 21:56:19.482395   25697 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0912 21:56:19.496342   25697 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0912 21:56:19.608169   25697 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0912 21:56:19.771980   25697 docker.go:233] disabling docker service ...
	I0912 21:56:19.772052   25697 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0912 21:56:19.786300   25697 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0912 21:56:19.799329   25697 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0912 21:56:19.915146   25697 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0912 21:56:20.029709   25697 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0912 21:56:20.051008   25697 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0912 21:56:20.069222   25697 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0912 21:56:20.069292   25697 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 21:56:20.079515   25697 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0912 21:56:20.079599   25697 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 21:56:20.089733   25697 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 21:56:20.099928   25697 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 21:56:20.110186   25697 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0912 21:56:20.120471   25697 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 21:56:20.130361   25697 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 21:56:20.146228   25697 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 21:56:20.156091   25697 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0912 21:56:20.165021   25697 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0912 21:56:20.165091   25697 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0912 21:56:20.177851   25697 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0912 21:56:20.187561   25697 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 21:56:20.316412   25697 ssh_runner.go:195] Run: sudo systemctl restart crio
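The series of sed invocations above rewrites /etc/crio/crio.conf.d/02-crio.conf so CRI-O uses registry.k8s.io/pause:3.10 as the pause image and cgroupfs as the cgroup manager, then reloads systemd and restarts crio. The same kind of whole-line rewrite sketched in Go; illustrative only, operating on a copy of the drop-in rather than the live config:

```go
// Sketch: rewrite pause_image and cgroup_manager in a CRI-O drop-in,
// equivalent to the sed commands in the log above.
package main

import (
	"fmt"
	"os"
	"regexp"
)

func rewriteCrioConf(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
	return os.WriteFile(path, out, 0644)
}

func main() {
	// Example path only; the real file lives under /etc/crio/crio.conf.d/.
	if err := rewriteCrioConf("/tmp/02-crio.conf"); err != nil {
		fmt.Println(err)
	}
}
```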
	I0912 21:56:20.400784   25697 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0912 21:56:20.400876   25697 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0912 21:56:20.405197   25697 start.go:563] Will wait 60s for crictl version
	I0912 21:56:20.405263   25697 ssh_runner.go:195] Run: which crictl
	I0912 21:56:20.408673   25697 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0912 21:56:20.447077   25697 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0912 21:56:20.447164   25697 ssh_runner.go:195] Run: crio --version
	I0912 21:56:20.472518   25697 ssh_runner.go:195] Run: crio --version
	I0912 21:56:20.500904   25697 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0912 21:56:20.501965   25697 main.go:141] libmachine: (ha-475401) Calling .GetIP
	I0912 21:56:20.504348   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:56:20.504613   25697 main.go:141] libmachine: (ha-475401) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:0e:dd", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:09 +0000 UTC Type:0 Mac:52:54:00:b0:0e:dd Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-475401 Clientid:01:52:54:00:b0:0e:dd}
	I0912 21:56:20.504628   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined IP address 192.168.39.203 and MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:56:20.504808   25697 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0912 21:56:20.508675   25697 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
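The bash one-liner above is the idempotent way the host.minikube.internal entry is pinned: strip any existing line for that name, append a fresh one, and copy the result back over /etc/hosts. The same update expressed as a small Go sketch (hypothetical helper, run against an example file rather than the live /etc/hosts):

```go
// Sketch: idempotently ensure "192.168.39.1\thost.minikube.internal" is present
// in a hosts file, mirroring the grep/echo one-liner in the log above.
package main

import (
	"fmt"
	"os"
	"strings"
)

func pinHostEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any stale entry for this hostname (grep -v "\t<name>$").
		if strings.HasSuffix(line, "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := pinHostEntry("/tmp/hosts.example", "192.168.39.1", "host.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}
```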
	I0912 21:56:20.520883   25697 kubeadm.go:883] updating cluster {Name:ha-475401 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-475401 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.203 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0912 21:56:20.521034   25697 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0912 21:56:20.521110   25697 ssh_runner.go:195] Run: sudo crictl images --output json
	I0912 21:56:20.555262   25697 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0912 21:56:20.555337   25697 ssh_runner.go:195] Run: which lz4
	I0912 21:56:20.559092   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0912 21:56:20.559236   25697 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0912 21:56:20.563193   25697 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0912 21:56:20.563233   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0912 21:56:21.749388   25697 crio.go:462] duration metric: took 1.190206408s to copy over tarball
	I0912 21:56:21.749464   25697 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0912 21:56:23.727146   25697 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.977650394s)
	I0912 21:56:23.727182   25697 crio.go:469] duration metric: took 1.97776335s to extract the tarball
	I0912 21:56:23.727190   25697 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0912 21:56:23.763611   25697 ssh_runner.go:195] Run: sudo crictl images --output json
	I0912 21:56:23.808502   25697 crio.go:514] all images are preloaded for cri-o runtime.
	I0912 21:56:23.808525   25697 cache_images.go:84] Images are preloaded, skipping loading
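The preload step above works by inspecting `sudo crictl images --output json`: if the expected kube-apiserver:v1.31.1 tag is missing, the preloaded-images tarball is copied into the guest and unpacked with lz4+tar, after which the same check passes. A sketch of that image check, assuming crictl's JSON output has the shape {"images":[{"repoTags":[...]}, ...]}:

```go
// Sketch: decide whether the k8s images are already present by scanning
// `crictl images --output json` for the expected kube-apiserver tag.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func hasImage(tag string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		return false, err
	}
	for _, img := range imgs.Images {
		for _, t := range img.RepoTags {
			if strings.EqualFold(t, tag) {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.31.1")
	if err != nil {
		fmt.Println(err)
		return
	}
	if ok {
		fmt.Println("images are preloaded, skipping tarball")
	} else {
		fmt.Println("would copy and extract the preloaded-images tarball")
	}
}
```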
	I0912 21:56:23.808533   25697 kubeadm.go:934] updating node { 192.168.39.203 8443 v1.31.1 crio true true} ...
	I0912 21:56:23.808655   25697 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-475401 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.203
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-475401 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0912 21:56:23.808719   25697 ssh_runner.go:195] Run: crio config
	I0912 21:56:23.850903   25697 cni.go:84] Creating CNI manager for ""
	I0912 21:56:23.850925   25697 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0912 21:56:23.850942   25697 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0912 21:56:23.850961   25697 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.203 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-475401 NodeName:ha-475401 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.203"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.203 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0912 21:56:23.851097   25697 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.203
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-475401"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.203
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.203"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0912 21:56:23.851120   25697 kube-vip.go:115] generating kube-vip config ...
	I0912 21:56:23.851178   25697 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0912 21:56:23.866202   25697 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0912 21:56:23.866308   25697 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0912 21:56:23.866360   25697 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0912 21:56:23.876752   25697 binaries.go:44] Found k8s binaries, skipping transfer
	I0912 21:56:23.876825   25697 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0912 21:56:23.886530   25697 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0912 21:56:23.902835   25697 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0912 21:56:23.918301   25697 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0912 21:56:23.933717   25697 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0912 21:56:23.949114   25697 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0912 21:56:23.953193   25697 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0912 21:56:23.964866   25697 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 21:56:24.092552   25697 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0912 21:56:24.109922   25697 certs.go:68] Setting up /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401 for IP: 192.168.39.203
	I0912 21:56:24.109947   25697 certs.go:194] generating shared ca certs ...
	I0912 21:56:24.109971   25697 certs.go:226] acquiring lock for ca certs: {Name:mk5e19f2c2757edc3fcb6b43a39efee0885b349c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:56:24.110119   25697 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.key
	I0912 21:56:24.110164   25697 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.key
	I0912 21:56:24.110177   25697 certs.go:256] generating profile certs ...
	I0912 21:56:24.110250   25697 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/client.key
	I0912 21:56:24.110269   25697 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/client.crt with IP's: []
	I0912 21:56:24.345938   25697 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/client.crt ...
	I0912 21:56:24.345968   25697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/client.crt: {Name:mka6c1e7d6609a21305a0e1773b35c84f55113cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:56:24.346132   25697 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/client.key ...
	I0912 21:56:24.346145   25697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/client.key: {Name:mkf7e34e888e50ca221094327099d20bcce5f94d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:56:24.346222   25697 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.key.e13b779b
	I0912 21:56:24.346237   25697 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.crt.e13b779b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.203 192.168.39.254]
	I0912 21:56:24.417567   25697 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.crt.e13b779b ...
	I0912 21:56:24.417598   25697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.crt.e13b779b: {Name:mke1d5796526bf531600b3509ec05f11a758e66f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:56:24.417758   25697 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.key.e13b779b ...
	I0912 21:56:24.417772   25697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.key.e13b779b: {Name:mkfd06efc24218b09c0cad8fe026bed479b3b005 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:56:24.417848   25697 certs.go:381] copying /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.crt.e13b779b -> /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.crt
	I0912 21:56:24.417947   25697 certs.go:385] copying /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.key.e13b779b -> /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.key
	I0912 21:56:24.418001   25697 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/proxy-client.key
	I0912 21:56:24.418014   25697 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/proxy-client.crt with IP's: []
	I0912 21:56:24.507416   25697 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/proxy-client.crt ...
	I0912 21:56:24.507447   25697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/proxy-client.crt: {Name:mk5f451a7b7611f8daf526fb4007a4e6d7d89cdc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:56:24.507614   25697 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/proxy-client.key ...
	I0912 21:56:24.507625   25697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/proxy-client.key: {Name:mkd2818606a639c6c5ea27f592bfaf6531f962fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:56:24.507694   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0912 21:56:24.507712   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0912 21:56:24.507723   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0912 21:56:24.507750   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0912 21:56:24.507769   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0912 21:56:24.507783   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0912 21:56:24.507795   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0912 21:56:24.507807   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0912 21:56:24.507867   25697 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083.pem (1338 bytes)
	W0912 21:56:24.507902   25697 certs.go:480] ignoring /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083_empty.pem, impossibly tiny 0 bytes
	I0912 21:56:24.507912   25697 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem (1679 bytes)
	I0912 21:56:24.507939   25697 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem (1082 bytes)
	I0912 21:56:24.507962   25697 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem (1123 bytes)
	I0912 21:56:24.507985   25697 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem (1679 bytes)
	I0912 21:56:24.508022   25697 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem (1708 bytes)
	I0912 21:56:24.508047   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0912 21:56:24.508061   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083.pem -> /usr/share/ca-certificates/13083.pem
	I0912 21:56:24.508075   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem -> /usr/share/ca-certificates/130832.pem
	I0912 21:56:24.508688   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0912 21:56:24.533883   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0912 21:56:24.556190   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0912 21:56:24.578546   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0912 21:56:24.602390   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0912 21:56:24.624976   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0912 21:56:24.646839   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0912 21:56:24.669447   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0912 21:56:24.692696   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0912 21:56:24.715860   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083.pem --> /usr/share/ca-certificates/13083.pem (1338 bytes)
	I0912 21:56:24.737405   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem --> /usr/share/ca-certificates/130832.pem (1708 bytes)
	I0912 21:56:24.759589   25697 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0912 21:56:24.775992   25697 ssh_runner.go:195] Run: openssl version
	I0912 21:56:24.781509   25697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130832.pem && ln -fs /usr/share/ca-certificates/130832.pem /etc/ssl/certs/130832.pem"
	I0912 21:56:24.792384   25697 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130832.pem
	I0912 21:56:24.796871   25697 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 12 21:47 /usr/share/ca-certificates/130832.pem
	I0912 21:56:24.796939   25697 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130832.pem
	I0912 21:56:24.802571   25697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130832.pem /etc/ssl/certs/3ec20f2e.0"
	I0912 21:56:24.812962   25697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0912 21:56:24.823381   25697 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0912 21:56:24.827617   25697 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 12 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0912 21:56:24.827679   25697 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0912 21:56:24.833219   25697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0912 21:56:24.844095   25697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13083.pem && ln -fs /usr/share/ca-certificates/13083.pem /etc/ssl/certs/13083.pem"
	I0912 21:56:24.854896   25697 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13083.pem
	I0912 21:56:24.859782   25697 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 12 21:47 /usr/share/ca-certificates/13083.pem
	I0912 21:56:24.859834   25697 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13083.pem
	I0912 21:56:24.869713   25697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13083.pem /etc/ssl/certs/51391683.0"
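
The hashed symlinks created above (51391683.0, 3ec20f2e.0, b5213941.0) follow OpenSSL's convention of locating trusted CAs by subject-name hash: each PEM installed under /usr/share/ca-certificates gets an /etc/ssl/certs/<hash>.0 link pointing back at it. A minimal Go sketch of that step, shelling out to openssl the same way the log does (paths are illustrative; the real run executes these commands over SSH with sudo):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA assumes pemPath is already copied under /usr/share/ca-certificates
// and only creates the /etc/ssl/certs/<hash>.0 symlink that OpenSSL uses to
// look the certificate up by subject-name hash.
func installCA(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// equivalent of "ln -fs": drop any stale link, then point it at the PEM
	_ = os.Remove(link)
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
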
	I0912 21:56:24.888503   25697 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0912 21:56:24.897709   25697 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0912 21:56:24.897769   25697 kubeadm.go:392] StartCluster: {Name:ha-475401 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-475401 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.203 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 21:56:24.897834   25697 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0912 21:56:24.897904   25697 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0912 21:56:24.938395   25697 cri.go:89] found id: ""
	I0912 21:56:24.938458   25697 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0912 21:56:24.948312   25697 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0912 21:56:24.957952   25697 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0912 21:56:24.967400   25697 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0912 21:56:24.967424   25697 kubeadm.go:157] found existing configuration files:
	
	I0912 21:56:24.967528   25697 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0912 21:56:24.976891   25697 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0912 21:56:24.976944   25697 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0912 21:56:24.986394   25697 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0912 21:56:24.995316   25697 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0912 21:56:24.995386   25697 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0912 21:56:25.004432   25697 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0912 21:56:25.013241   25697 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0912 21:56:25.013297   25697 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0912 21:56:25.023452   25697 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0912 21:56:25.032567   25697 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0912 21:56:25.032619   25697 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
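
The four grep/rm pairs above are a stale-kubeconfig sweep: any pre-existing /etc/kubernetes/*.conf that does not reference https://control-plane.minikube.internal:8443 is removed so kubeadm can regenerate it (here all four files are simply missing, hence the exit-status-2 messages). A rough local sketch of the same pass, assuming the commands run directly rather than over SSH with sudo as in the log:

package main

import (
	"fmt"
	"os/exec"
)

// cleanupStaleConfigs removes any kubeconfig that does not point at the
// expected control-plane endpoint, mirroring the grep/rm sequence above.
func cleanupStaleConfigs(endpoint string) {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		// grep exits non-zero when the endpoint is missing or the file does
		// not exist; either way the file is treated as stale and removed.
		if err := exec.Command("grep", endpoint, f).Run(); err != nil {
			_ = exec.Command("rm", "-f", f).Run()
			fmt.Printf("removed stale %s\n", f)
		}
	}
}

func main() {
	cleanupStaleConfigs("https://control-plane.minikube.internal:8443")
}
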
	I0912 21:56:25.041550   25697 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0912 21:56:25.138604   25697 kubeadm.go:310] W0912 21:56:25.122647     829 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0912 21:56:25.139554   25697 kubeadm.go:310] W0912 21:56:25.124009     829 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0912 21:56:25.242540   25697 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0912 21:56:37.159796   25697 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0912 21:56:37.159846   25697 kubeadm.go:310] [preflight] Running pre-flight checks
	I0912 21:56:37.159933   25697 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0912 21:56:37.160073   25697 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0912 21:56:37.160170   25697 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0912 21:56:37.160237   25697 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0912 21:56:37.161678   25697 out.go:235]   - Generating certificates and keys ...
	I0912 21:56:37.161750   25697 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0912 21:56:37.161820   25697 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0912 21:56:37.161907   25697 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0912 21:56:37.161973   25697 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0912 21:56:37.162059   25697 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0912 21:56:37.162140   25697 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0912 21:56:37.162212   25697 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0912 21:56:37.162358   25697 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-475401 localhost] and IPs [192.168.39.203 127.0.0.1 ::1]
	I0912 21:56:37.162428   25697 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0912 21:56:37.162548   25697 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-475401 localhost] and IPs [192.168.39.203 127.0.0.1 ::1]
	I0912 21:56:37.162604   25697 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0912 21:56:37.162658   25697 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0912 21:56:37.162697   25697 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0912 21:56:37.162768   25697 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0912 21:56:37.162818   25697 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0912 21:56:37.162876   25697 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0912 21:56:37.162942   25697 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0912 21:56:37.163050   25697 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0912 21:56:37.163118   25697 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0912 21:56:37.163197   25697 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0912 21:56:37.163307   25697 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0912 21:56:37.165438   25697 out.go:235]   - Booting up control plane ...
	I0912 21:56:37.165516   25697 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0912 21:56:37.165588   25697 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0912 21:56:37.165666   25697 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0912 21:56:37.165775   25697 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0912 21:56:37.165871   25697 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0912 21:56:37.165910   25697 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0912 21:56:37.166062   25697 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0912 21:56:37.166158   25697 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0912 21:56:37.166208   25697 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.472964ms
	I0912 21:56:37.166289   25697 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0912 21:56:37.166388   25697 kubeadm.go:310] [api-check] The API server is healthy after 6.056268017s
	I0912 21:56:37.166548   25697 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0912 21:56:37.166679   25697 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0912 21:56:37.166744   25697 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0912 21:56:37.166925   25697 kubeadm.go:310] [mark-control-plane] Marking the node ha-475401 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0912 21:56:37.167014   25697 kubeadm.go:310] [bootstrap-token] Using token: wgjm90.cxyrn1xrd6ja5z7v
	I0912 21:56:37.168265   25697 out.go:235]   - Configuring RBAC rules ...
	I0912 21:56:37.168388   25697 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0912 21:56:37.168503   25697 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0912 21:56:37.168701   25697 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0912 21:56:37.168817   25697 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0912 21:56:37.168920   25697 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0912 21:56:37.169013   25697 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0912 21:56:37.169140   25697 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0912 21:56:37.169226   25697 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0912 21:56:37.169269   25697 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0912 21:56:37.169281   25697 kubeadm.go:310] 
	I0912 21:56:37.169345   25697 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0912 21:56:37.169351   25697 kubeadm.go:310] 
	I0912 21:56:37.169454   25697 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0912 21:56:37.169463   25697 kubeadm.go:310] 
	I0912 21:56:37.169503   25697 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0912 21:56:37.169604   25697 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0912 21:56:37.169693   25697 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0912 21:56:37.169703   25697 kubeadm.go:310] 
	I0912 21:56:37.169780   25697 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0912 21:56:37.169793   25697 kubeadm.go:310] 
	I0912 21:56:37.169863   25697 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0912 21:56:37.169872   25697 kubeadm.go:310] 
	I0912 21:56:37.169977   25697 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0912 21:56:37.170060   25697 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0912 21:56:37.170118   25697 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0912 21:56:37.170136   25697 kubeadm.go:310] 
	I0912 21:56:37.170233   25697 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0912 21:56:37.170438   25697 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0912 21:56:37.170452   25697 kubeadm.go:310] 
	I0912 21:56:37.170549   25697 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token wgjm90.cxyrn1xrd6ja5z7v \
	I0912 21:56:37.170669   25697 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e9285e6e7599a58febe9d174fa57ffa69a9b4bf818d01b703e61fc8c784ff29f \
	I0912 21:56:37.170712   25697 kubeadm.go:310] 	--control-plane 
	I0912 21:56:37.170720   25697 kubeadm.go:310] 
	I0912 21:56:37.170789   25697 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0912 21:56:37.170795   25697 kubeadm.go:310] 
	I0912 21:56:37.170860   25697 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token wgjm90.cxyrn1xrd6ja5z7v \
	I0912 21:56:37.170974   25697 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e9285e6e7599a58febe9d174fa57ffa69a9b4bf818d01b703e61fc8c784ff29f 
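
For reference, the --discovery-token-ca-cert-hash value in the join commands above is kubeadm's standard CA pin: a SHA-256 over the DER-encoded SubjectPublicKeyInfo of the cluster CA certificate. A small Go sketch that recomputes it from ca.crt (path taken from the certs section earlier in this log; the snippet is an illustration, not part of the test):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

// Recompute the discovery-token-ca-cert-hash: sha256 over the DER-encoded
// SubjectPublicKeyInfo of the cluster CA certificate.
func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}
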
	I0912 21:56:37.170991   25697 cni.go:84] Creating CNI manager for ""
	I0912 21:56:37.170996   25697 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0912 21:56:37.172523   25697 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0912 21:56:37.173662   25697 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0912 21:56:37.180682   25697 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0912 21:56:37.180701   25697 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0912 21:56:37.198637   25697 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0912 21:56:37.563600   25697 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0912 21:56:37.563674   25697 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:56:37.563687   25697 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-475401 minikube.k8s.io/updated_at=2024_09_12T21_56_37_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f6bc674a17941874d4e5b792b09c1791d30622b8 minikube.k8s.io/name=ha-475401 minikube.k8s.io/primary=true
	I0912 21:56:37.709078   25697 ops.go:34] apiserver oom_adj: -16
	I0912 21:56:37.709172   25697 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:56:38.209596   25697 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:56:38.709514   25697 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:56:39.210061   25697 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:56:39.709424   25697 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:56:40.209572   25697 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:56:40.307756   25697 kubeadm.go:1113] duration metric: took 2.744148458s to wait for elevateKubeSystemPrivileges
	I0912 21:56:40.307800   25697 kubeadm.go:394] duration metric: took 15.410033831s to StartCluster
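
The burst of `kubectl get sa default` calls between 21:56:37 and 21:56:40 is a readiness poll: the command keeps failing until the controller-manager has created the "default" ServiceAccount, and that wait is what the elevateKubeSystemPrivileges duration above measures. A hedged sketch of such a poll (binary and kubeconfig paths copied from the log; the helper itself is illustrative, not minikube's code):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA retries "kubectl get sa default" until it succeeds or the
// timeout expires, roughly matching the ~500ms spacing visible in the log.
func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command(kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
		if cmd.Run() == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not ready after %s", timeout)
}

func main() {
	err := waitForDefaultSA("/var/lib/minikube/binaries/v1.31.1/kubectl",
		"/var/lib/minikube/kubeconfig", 2*time.Minute)
	fmt.Println(err)
}
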
	I0912 21:56:40.307824   25697 settings.go:142] acquiring lock: {Name:mk9c957feafb8d7ccd833ad0c106ef81ecfe5ba8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:56:40.307902   25697 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19616-5891/kubeconfig
	I0912 21:56:40.308574   25697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/kubeconfig: {Name:mkffb46c3e9d2b8baebc7237b48bf41bccf1a52c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:56:40.308812   25697 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0912 21:56:40.308815   25697 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.203 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0912 21:56:40.308838   25697 start.go:241] waiting for startup goroutines ...
	I0912 21:56:40.308847   25697 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0912 21:56:40.308908   25697 addons.go:69] Setting storage-provisioner=true in profile "ha-475401"
	I0912 21:56:40.308919   25697 addons.go:69] Setting default-storageclass=true in profile "ha-475401"
	I0912 21:56:40.308942   25697 addons.go:234] Setting addon storage-provisioner=true in "ha-475401"
	I0912 21:56:40.308950   25697 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-475401"
	I0912 21:56:40.308980   25697 host.go:66] Checking if "ha-475401" exists ...
	I0912 21:56:40.309024   25697 config.go:182] Loaded profile config "ha-475401": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 21:56:40.309347   25697 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:56:40.309348   25697 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:56:40.309388   25697 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:56:40.309398   25697 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:56:40.325369   25697 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44769
	I0912 21:56:40.325412   25697 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36797
	I0912 21:56:40.325882   25697 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:56:40.325936   25697 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:56:40.326394   25697 main.go:141] libmachine: Using API Version  1
	I0912 21:56:40.326414   25697 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:56:40.326539   25697 main.go:141] libmachine: Using API Version  1
	I0912 21:56:40.326563   25697 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:56:40.326693   25697 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:56:40.326913   25697 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:56:40.327107   25697 main.go:141] libmachine: (ha-475401) Calling .GetState
	I0912 21:56:40.327279   25697 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:56:40.327308   25697 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:56:40.329288   25697 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19616-5891/kubeconfig
	I0912 21:56:40.329695   25697 kapi.go:59] client config for ha-475401: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/client.crt", KeyFile:"/home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/client.key", CAFile:"/home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f30300), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0912 21:56:40.330205   25697 cert_rotation.go:140] Starting client certificate rotation controller
	I0912 21:56:40.330495   25697 addons.go:234] Setting addon default-storageclass=true in "ha-475401"
	I0912 21:56:40.330548   25697 host.go:66] Checking if "ha-475401" exists ...
	I0912 21:56:40.330917   25697 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:56:40.330963   25697 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:56:40.345620   25697 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45305
	I0912 21:56:40.346031   25697 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:56:40.346355   25697 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44751
	I0912 21:56:40.346496   25697 main.go:141] libmachine: Using API Version  1
	I0912 21:56:40.346521   25697 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:56:40.346859   25697 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:56:40.346907   25697 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:56:40.347412   25697 main.go:141] libmachine: Using API Version  1
	I0912 21:56:40.347427   25697 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:56:40.347444   25697 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:56:40.347448   25697 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:56:40.347819   25697 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:56:40.348028   25697 main.go:141] libmachine: (ha-475401) Calling .GetState
	I0912 21:56:40.349807   25697 main.go:141] libmachine: (ha-475401) Calling .DriverName
	I0912 21:56:40.352414   25697 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 21:56:40.354066   25697 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0912 21:56:40.354089   25697 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0912 21:56:40.354111   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHHostname
	I0912 21:56:40.357110   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:56:40.357588   25697 main.go:141] libmachine: (ha-475401) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:0e:dd", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:09 +0000 UTC Type:0 Mac:52:54:00:b0:0e:dd Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-475401 Clientid:01:52:54:00:b0:0e:dd}
	I0912 21:56:40.357632   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined IP address 192.168.39.203 and MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:56:40.357802   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHPort
	I0912 21:56:40.357974   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHKeyPath
	I0912 21:56:40.358148   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHUsername
	I0912 21:56:40.358321   25697 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401/id_rsa Username:docker}
	I0912 21:56:40.363190   25697 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33425
	I0912 21:56:40.363621   25697 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:56:40.364066   25697 main.go:141] libmachine: Using API Version  1
	I0912 21:56:40.364081   25697 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:56:40.364367   25697 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:56:40.364557   25697 main.go:141] libmachine: (ha-475401) Calling .GetState
	I0912 21:56:40.366030   25697 main.go:141] libmachine: (ha-475401) Calling .DriverName
	I0912 21:56:40.366220   25697 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0912 21:56:40.366236   25697 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0912 21:56:40.366259   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHHostname
	I0912 21:56:40.368757   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:56:40.369258   25697 main.go:141] libmachine: (ha-475401) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:0e:dd", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:09 +0000 UTC Type:0 Mac:52:54:00:b0:0e:dd Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-475401 Clientid:01:52:54:00:b0:0e:dd}
	I0912 21:56:40.369288   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined IP address 192.168.39.203 and MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:56:40.369464   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHPort
	I0912 21:56:40.369672   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHKeyPath
	I0912 21:56:40.369824   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHUsername
	I0912 21:56:40.369975   25697 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401/id_rsa Username:docker}
	I0912 21:56:40.418646   25697 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0912 21:56:40.504443   25697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0912 21:56:40.553155   25697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0912 21:56:40.798821   25697 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
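
The long sed pipeline at 21:56:40.418646 is what produced the "host record injected" message above: it fetches the coredns ConfigMap, inserts a hosts{} block mapping host.minikube.internal to the host gateway (192.168.39.1) ahead of the `forward . /etc/resolv.conf` directive, enables `log`, and pushes the result back with `kubectl replace -f -`. A stdlib-only Go sketch of the same Corefile transformation (the sample input is illustrative):

package main

import (
	"fmt"
	"strings"
)

// injectHostRecord adds a hosts{} block for host.minikube.internal directly
// in front of the "forward . /etc/resolv.conf" directive of a Corefile.
func injectHostRecord(corefile, hostIP string) string {
	hosts := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
	var out strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		if strings.Contains(line, "forward . /etc/resolv.conf") {
			out.WriteString(hosts)
		}
		out.WriteString(line)
	}
	return out.String()
}

func main() {
	sample := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n}\n"
	fmt.Print(injectHostRecord(sample, "192.168.39.1"))
}
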
	I0912 21:56:41.114951   25697 main.go:141] libmachine: Making call to close driver server
	I0912 21:56:41.114974   25697 main.go:141] libmachine: (ha-475401) Calling .Close
	I0912 21:56:41.115125   25697 main.go:141] libmachine: Making call to close driver server
	I0912 21:56:41.115147   25697 main.go:141] libmachine: (ha-475401) Calling .Close
	I0912 21:56:41.115272   25697 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:56:41.115304   25697 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:56:41.115313   25697 main.go:141] libmachine: Making call to close driver server
	I0912 21:56:41.115338   25697 main.go:141] libmachine: (ha-475401) Calling .Close
	I0912 21:56:41.115447   25697 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:56:41.115471   25697 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:56:41.115472   25697 main.go:141] libmachine: (ha-475401) DBG | Closing plugin on server side
	I0912 21:56:41.115486   25697 main.go:141] libmachine: Making call to close driver server
	I0912 21:56:41.115495   25697 main.go:141] libmachine: (ha-475401) Calling .Close
	I0912 21:56:41.115589   25697 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:56:41.115633   25697 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:56:41.115695   25697 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:56:41.115705   25697 main.go:141] libmachine: (ha-475401) DBG | Closing plugin on server side
	I0912 21:56:41.115709   25697 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:56:41.115729   25697 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0912 21:56:41.115762   25697 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0912 21:56:41.115884   25697 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0912 21:56:41.115896   25697 round_trippers.go:469] Request Headers:
	I0912 21:56:41.115909   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:56:41.115916   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:56:41.130855   25697 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0912 21:56:41.131735   25697 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0912 21:56:41.131766   25697 round_trippers.go:469] Request Headers:
	I0912 21:56:41.131777   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:56:41.131790   25697 round_trippers.go:473]     Content-Type: application/json
	I0912 21:56:41.131795   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:56:41.136103   25697 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0912 21:56:41.136234   25697 main.go:141] libmachine: Making call to close driver server
	I0912 21:56:41.136250   25697 main.go:141] libmachine: (ha-475401) Calling .Close
	I0912 21:56:41.136517   25697 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:56:41.136538   25697 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:56:41.138796   25697 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0912 21:56:41.140368   25697 addons.go:510] duration metric: took 831.516899ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0912 21:56:41.140403   25697 start.go:246] waiting for cluster config update ...
	I0912 21:56:41.140415   25697 start.go:255] writing updated cluster config ...
	I0912 21:56:41.142210   25697 out.go:201] 
	I0912 21:56:41.144842   25697 config.go:182] Loaded profile config "ha-475401": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 21:56:41.144955   25697 profile.go:143] Saving config to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/config.json ...
	I0912 21:56:41.147500   25697 out.go:177] * Starting "ha-475401-m02" control-plane node in "ha-475401" cluster
	I0912 21:56:41.149381   25697 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0912 21:56:41.149412   25697 cache.go:56] Caching tarball of preloaded images
	I0912 21:56:41.149504   25697 preload.go:172] Found /home/jenkins/minikube-integration/19616-5891/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0912 21:56:41.149518   25697 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0912 21:56:41.149596   25697 profile.go:143] Saving config to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/config.json ...
	I0912 21:56:41.150076   25697 start.go:360] acquireMachinesLock for ha-475401-m02: {Name:mkbb0a9e58b1349e86a63b6069c42d4248d92c3b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 21:56:41.150139   25697 start.go:364] duration metric: took 29.116µs to acquireMachinesLock for "ha-475401-m02"
	I0912 21:56:41.150158   25697 start.go:93] Provisioning new machine with config: &{Name:ha-475401 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-475401 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.203 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0912 21:56:41.150240   25697 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0912 21:56:41.152484   25697 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0912 21:56:41.152578   25697 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:56:41.152601   25697 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:56:41.168550   25697 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36553
	I0912 21:56:41.169110   25697 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:56:41.169745   25697 main.go:141] libmachine: Using API Version  1
	I0912 21:56:41.169770   25697 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:56:41.170098   25697 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:56:41.170301   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetMachineName
	I0912 21:56:41.170467   25697 main.go:141] libmachine: (ha-475401-m02) Calling .DriverName
	I0912 21:56:41.170676   25697 start.go:159] libmachine.API.Create for "ha-475401" (driver="kvm2")
	I0912 21:56:41.170699   25697 client.go:168] LocalClient.Create starting
	I0912 21:56:41.170726   25697 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem
	I0912 21:56:41.170756   25697 main.go:141] libmachine: Decoding PEM data...
	I0912 21:56:41.170780   25697 main.go:141] libmachine: Parsing certificate...
	I0912 21:56:41.170829   25697 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem
	I0912 21:56:41.170852   25697 main.go:141] libmachine: Decoding PEM data...
	I0912 21:56:41.170864   25697 main.go:141] libmachine: Parsing certificate...
	I0912 21:56:41.170884   25697 main.go:141] libmachine: Running pre-create checks...
	I0912 21:56:41.170892   25697 main.go:141] libmachine: (ha-475401-m02) Calling .PreCreateCheck
	I0912 21:56:41.171082   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetConfigRaw
	I0912 21:56:41.171444   25697 main.go:141] libmachine: Creating machine...
	I0912 21:56:41.171457   25697 main.go:141] libmachine: (ha-475401-m02) Calling .Create
	I0912 21:56:41.171601   25697 main.go:141] libmachine: (ha-475401-m02) Creating KVM machine...
	I0912 21:56:41.172840   25697 main.go:141] libmachine: (ha-475401-m02) DBG | found existing default KVM network
	I0912 21:56:41.172966   25697 main.go:141] libmachine: (ha-475401-m02) DBG | found existing private KVM network mk-ha-475401
	I0912 21:56:41.173214   25697 main.go:141] libmachine: (ha-475401-m02) Setting up store path in /home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401-m02 ...
	I0912 21:56:41.173239   25697 main.go:141] libmachine: (ha-475401-m02) Building disk image from file:///home/jenkins/minikube-integration/19616-5891/.minikube/cache/iso/amd64/minikube-v1.34.0-1726156389-19616-amd64.iso
	I0912 21:56:41.173286   25697 main.go:141] libmachine: (ha-475401-m02) DBG | I0912 21:56:41.173199   26067 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19616-5891/.minikube
	I0912 21:56:41.173427   25697 main.go:141] libmachine: (ha-475401-m02) Downloading /home/jenkins/minikube-integration/19616-5891/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19616-5891/.minikube/cache/iso/amd64/minikube-v1.34.0-1726156389-19616-amd64.iso...
	I0912 21:56:41.414393   25697 main.go:141] libmachine: (ha-475401-m02) DBG | I0912 21:56:41.414223   26067 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401-m02/id_rsa...
	I0912 21:56:41.650672   25697 main.go:141] libmachine: (ha-475401-m02) DBG | I0912 21:56:41.650552   26067 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401-m02/ha-475401-m02.rawdisk...
	I0912 21:56:41.650714   25697 main.go:141] libmachine: (ha-475401-m02) DBG | Writing magic tar header
	I0912 21:56:41.650735   25697 main.go:141] libmachine: (ha-475401-m02) DBG | Writing SSH key tar header
	I0912 21:56:41.650746   25697 main.go:141] libmachine: (ha-475401-m02) DBG | I0912 21:56:41.650666   26067 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401-m02 ...
	I0912 21:56:41.650762   25697 main.go:141] libmachine: (ha-475401-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401-m02
	I0912 21:56:41.650822   25697 main.go:141] libmachine: (ha-475401-m02) Setting executable bit set on /home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401-m02 (perms=drwx------)
	I0912 21:56:41.650850   25697 main.go:141] libmachine: (ha-475401-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19616-5891/.minikube/machines
	I0912 21:56:41.650860   25697 main.go:141] libmachine: (ha-475401-m02) Setting executable bit set on /home/jenkins/minikube-integration/19616-5891/.minikube/machines (perms=drwxr-xr-x)
	I0912 21:56:41.650875   25697 main.go:141] libmachine: (ha-475401-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19616-5891/.minikube
	I0912 21:56:41.650889   25697 main.go:141] libmachine: (ha-475401-m02) Setting executable bit set on /home/jenkins/minikube-integration/19616-5891/.minikube (perms=drwxr-xr-x)
	I0912 21:56:41.650895   25697 main.go:141] libmachine: (ha-475401-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19616-5891
	I0912 21:56:41.650902   25697 main.go:141] libmachine: (ha-475401-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0912 21:56:41.650914   25697 main.go:141] libmachine: (ha-475401-m02) DBG | Checking permissions on dir: /home/jenkins
	I0912 21:56:41.650926   25697 main.go:141] libmachine: (ha-475401-m02) DBG | Checking permissions on dir: /home
	I0912 21:56:41.650940   25697 main.go:141] libmachine: (ha-475401-m02) Setting executable bit set on /home/jenkins/minikube-integration/19616-5891 (perms=drwxrwxr-x)
	I0912 21:56:41.650959   25697 main.go:141] libmachine: (ha-475401-m02) DBG | Skipping /home - not owner
	I0912 21:56:41.650988   25697 main.go:141] libmachine: (ha-475401-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0912 21:56:41.651006   25697 main.go:141] libmachine: (ha-475401-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0912 21:56:41.651020   25697 main.go:141] libmachine: (ha-475401-m02) Creating domain...
	I0912 21:56:41.652092   25697 main.go:141] libmachine: (ha-475401-m02) define libvirt domain using xml: 
	I0912 21:56:41.652119   25697 main.go:141] libmachine: (ha-475401-m02) <domain type='kvm'>
	I0912 21:56:41.652128   25697 main.go:141] libmachine: (ha-475401-m02)   <name>ha-475401-m02</name>
	I0912 21:56:41.652137   25697 main.go:141] libmachine: (ha-475401-m02)   <memory unit='MiB'>2200</memory>
	I0912 21:56:41.652147   25697 main.go:141] libmachine: (ha-475401-m02)   <vcpu>2</vcpu>
	I0912 21:56:41.652157   25697 main.go:141] libmachine: (ha-475401-m02)   <features>
	I0912 21:56:41.652169   25697 main.go:141] libmachine: (ha-475401-m02)     <acpi/>
	I0912 21:56:41.652179   25697 main.go:141] libmachine: (ha-475401-m02)     <apic/>
	I0912 21:56:41.652186   25697 main.go:141] libmachine: (ha-475401-m02)     <pae/>
	I0912 21:56:41.652200   25697 main.go:141] libmachine: (ha-475401-m02)     
	I0912 21:56:41.652233   25697 main.go:141] libmachine: (ha-475401-m02)   </features>
	I0912 21:56:41.652257   25697 main.go:141] libmachine: (ha-475401-m02)   <cpu mode='host-passthrough'>
	I0912 21:56:41.652268   25697 main.go:141] libmachine: (ha-475401-m02)   
	I0912 21:56:41.652283   25697 main.go:141] libmachine: (ha-475401-m02)   </cpu>
	I0912 21:56:41.652295   25697 main.go:141] libmachine: (ha-475401-m02)   <os>
	I0912 21:56:41.652309   25697 main.go:141] libmachine: (ha-475401-m02)     <type>hvm</type>
	I0912 21:56:41.652337   25697 main.go:141] libmachine: (ha-475401-m02)     <boot dev='cdrom'/>
	I0912 21:56:41.652348   25697 main.go:141] libmachine: (ha-475401-m02)     <boot dev='hd'/>
	I0912 21:56:41.652359   25697 main.go:141] libmachine: (ha-475401-m02)     <bootmenu enable='no'/>
	I0912 21:56:41.652370   25697 main.go:141] libmachine: (ha-475401-m02)   </os>
	I0912 21:56:41.652383   25697 main.go:141] libmachine: (ha-475401-m02)   <devices>
	I0912 21:56:41.652400   25697 main.go:141] libmachine: (ha-475401-m02)     <disk type='file' device='cdrom'>
	I0912 21:56:41.652417   25697 main.go:141] libmachine: (ha-475401-m02)       <source file='/home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401-m02/boot2docker.iso'/>
	I0912 21:56:41.652429   25697 main.go:141] libmachine: (ha-475401-m02)       <target dev='hdc' bus='scsi'/>
	I0912 21:56:41.652442   25697 main.go:141] libmachine: (ha-475401-m02)       <readonly/>
	I0912 21:56:41.652452   25697 main.go:141] libmachine: (ha-475401-m02)     </disk>
	I0912 21:56:41.652476   25697 main.go:141] libmachine: (ha-475401-m02)     <disk type='file' device='disk'>
	I0912 21:56:41.652490   25697 main.go:141] libmachine: (ha-475401-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0912 21:56:41.652506   25697 main.go:141] libmachine: (ha-475401-m02)       <source file='/home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401-m02/ha-475401-m02.rawdisk'/>
	I0912 21:56:41.652520   25697 main.go:141] libmachine: (ha-475401-m02)       <target dev='hda' bus='virtio'/>
	I0912 21:56:41.652532   25697 main.go:141] libmachine: (ha-475401-m02)     </disk>
	I0912 21:56:41.652547   25697 main.go:141] libmachine: (ha-475401-m02)     <interface type='network'>
	I0912 21:56:41.652560   25697 main.go:141] libmachine: (ha-475401-m02)       <source network='mk-ha-475401'/>
	I0912 21:56:41.652568   25697 main.go:141] libmachine: (ha-475401-m02)       <model type='virtio'/>
	I0912 21:56:41.652577   25697 main.go:141] libmachine: (ha-475401-m02)     </interface>
	I0912 21:56:41.652588   25697 main.go:141] libmachine: (ha-475401-m02)     <interface type='network'>
	I0912 21:56:41.652600   25697 main.go:141] libmachine: (ha-475401-m02)       <source network='default'/>
	I0912 21:56:41.652612   25697 main.go:141] libmachine: (ha-475401-m02)       <model type='virtio'/>
	I0912 21:56:41.652624   25697 main.go:141] libmachine: (ha-475401-m02)     </interface>
	I0912 21:56:41.652637   25697 main.go:141] libmachine: (ha-475401-m02)     <serial type='pty'>
	I0912 21:56:41.652648   25697 main.go:141] libmachine: (ha-475401-m02)       <target port='0'/>
	I0912 21:56:41.652655   25697 main.go:141] libmachine: (ha-475401-m02)     </serial>
	I0912 21:56:41.652666   25697 main.go:141] libmachine: (ha-475401-m02)     <console type='pty'>
	I0912 21:56:41.652673   25697 main.go:141] libmachine: (ha-475401-m02)       <target type='serial' port='0'/>
	I0912 21:56:41.652682   25697 main.go:141] libmachine: (ha-475401-m02)     </console>
	I0912 21:56:41.652690   25697 main.go:141] libmachine: (ha-475401-m02)     <rng model='virtio'>
	I0912 21:56:41.652701   25697 main.go:141] libmachine: (ha-475401-m02)       <backend model='random'>/dev/random</backend>
	I0912 21:56:41.652710   25697 main.go:141] libmachine: (ha-475401-m02)     </rng>
	I0912 21:56:41.652718   25697 main.go:141] libmachine: (ha-475401-m02)     
	I0912 21:56:41.652727   25697 main.go:141] libmachine: (ha-475401-m02)     
	I0912 21:56:41.652744   25697 main.go:141] libmachine: (ha-475401-m02)   </devices>
	I0912 21:56:41.652780   25697 main.go:141] libmachine: (ha-475401-m02) </domain>
	I0912 21:56:41.652794   25697 main.go:141] libmachine: (ha-475401-m02) 
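
The <domain> XML printed above is what the kvm2 driver hands to libvirt to define the new ha-475401-m02 guest. A stripped-down sketch of rendering such a definition from a Go text/template, limited to the fields visible in the log (the real driver template carries more devices, and the paths below are placeholders):

package main

import (
	"os"
	"text/template"
)

// A minimal domain template covering only the fields seen in the log:
// name, memory, vcpus, boot ISO, raw disk, and the two virtio NICs.
var domainTmpl = template.Must(template.New("domain").Parse(`<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMiB}}</memory>
  <vcpu>{{.CPUs}}</vcpu>
  <os><type>hvm</type><boot dev='cdrom'/><boot dev='hd'/></os>
  <devices>
    <disk type='file' device='cdrom'><source file='{{.ISO}}'/><target dev='hdc' bus='scsi'/><readonly/></disk>
    <disk type='file' device='disk'><driver name='qemu' type='raw'/><source file='{{.Disk}}'/><target dev='hda' bus='virtio'/></disk>
    <interface type='network'><source network='{{.Network}}'/><model type='virtio'/></interface>
    <interface type='network'><source network='default'/><model type='virtio'/></interface>
  </devices>
</domain>
`))

type domainParams struct {
	Name, ISO, Disk, Network string
	MemoryMiB, CPUs          int
}

func main() {
	p := domainParams{
		Name:      "ha-475401-m02",
		MemoryMiB: 2200,
		CPUs:      2,
		ISO:       "/path/to/boot2docker.iso",        // placeholder
		Disk:      "/path/to/ha-475401-m02.rawdisk", // placeholder
		Network:   "mk-ha-475401",
	}
	_ = domainTmpl.Execute(os.Stdout, p)
}
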
	I0912 21:56:41.659649   25697 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined MAC address 52:54:00:68:a7:8c in network default
	I0912 21:56:41.660258   25697 main.go:141] libmachine: (ha-475401-m02) Ensuring networks are active...
	I0912 21:56:41.660286   25697 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 21:56:41.661071   25697 main.go:141] libmachine: (ha-475401-m02) Ensuring network default is active
	I0912 21:56:41.661395   25697 main.go:141] libmachine: (ha-475401-m02) Ensuring network mk-ha-475401 is active
	I0912 21:56:41.661807   25697 main.go:141] libmachine: (ha-475401-m02) Getting domain xml...
	I0912 21:56:41.662483   25697 main.go:141] libmachine: (ha-475401-m02) Creating domain...
	I0912 21:56:42.897026   25697 main.go:141] libmachine: (ha-475401-m02) Waiting to get IP...
	I0912 21:56:42.897711   25697 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 21:56:42.898094   25697 main.go:141] libmachine: (ha-475401-m02) DBG | unable to find current IP address of domain ha-475401-m02 in network mk-ha-475401
	I0912 21:56:42.898125   25697 main.go:141] libmachine: (ha-475401-m02) DBG | I0912 21:56:42.898073   26067 retry.go:31] will retry after 217.420058ms: waiting for machine to come up
	I0912 21:56:43.117730   25697 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 21:56:43.118102   25697 main.go:141] libmachine: (ha-475401-m02) DBG | unable to find current IP address of domain ha-475401-m02 in network mk-ha-475401
	I0912 21:56:43.118124   25697 main.go:141] libmachine: (ha-475401-m02) DBG | I0912 21:56:43.118079   26067 retry.go:31] will retry after 330.585414ms: waiting for machine to come up
	I0912 21:56:43.450571   25697 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 21:56:43.451040   25697 main.go:141] libmachine: (ha-475401-m02) DBG | unable to find current IP address of domain ha-475401-m02 in network mk-ha-475401
	I0912 21:56:43.451079   25697 main.go:141] libmachine: (ha-475401-m02) DBG | I0912 21:56:43.451003   26067 retry.go:31] will retry after 473.887606ms: waiting for machine to come up
	I0912 21:56:43.926694   25697 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 21:56:43.927123   25697 main.go:141] libmachine: (ha-475401-m02) DBG | unable to find current IP address of domain ha-475401-m02 in network mk-ha-475401
	I0912 21:56:43.927142   25697 main.go:141] libmachine: (ha-475401-m02) DBG | I0912 21:56:43.927090   26067 retry.go:31] will retry after 484.6682ms: waiting for machine to come up
	I0912 21:56:44.413947   25697 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 21:56:44.414506   25697 main.go:141] libmachine: (ha-475401-m02) DBG | unable to find current IP address of domain ha-475401-m02 in network mk-ha-475401
	I0912 21:56:44.414530   25697 main.go:141] libmachine: (ha-475401-m02) DBG | I0912 21:56:44.414427   26067 retry.go:31] will retry after 570.000136ms: waiting for machine to come up
	I0912 21:56:44.986462   25697 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 21:56:44.986909   25697 main.go:141] libmachine: (ha-475401-m02) DBG | unable to find current IP address of domain ha-475401-m02 in network mk-ha-475401
	I0912 21:56:44.986936   25697 main.go:141] libmachine: (ha-475401-m02) DBG | I0912 21:56:44.986849   26067 retry.go:31] will retry after 947.956296ms: waiting for machine to come up
	I0912 21:56:45.936372   25697 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 21:56:45.936840   25697 main.go:141] libmachine: (ha-475401-m02) DBG | unable to find current IP address of domain ha-475401-m02 in network mk-ha-475401
	I0912 21:56:45.936867   25697 main.go:141] libmachine: (ha-475401-m02) DBG | I0912 21:56:45.936791   26067 retry.go:31] will retry after 1.161491429s: waiting for machine to come up
	I0912 21:56:47.099618   25697 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 21:56:47.100130   25697 main.go:141] libmachine: (ha-475401-m02) DBG | unable to find current IP address of domain ha-475401-m02 in network mk-ha-475401
	I0912 21:56:47.100155   25697 main.go:141] libmachine: (ha-475401-m02) DBG | I0912 21:56:47.100079   26067 retry.go:31] will retry after 1.237357696s: waiting for machine to come up
	I0912 21:56:48.338682   25697 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 21:56:48.339181   25697 main.go:141] libmachine: (ha-475401-m02) DBG | unable to find current IP address of domain ha-475401-m02 in network mk-ha-475401
	I0912 21:56:48.339211   25697 main.go:141] libmachine: (ha-475401-m02) DBG | I0912 21:56:48.339138   26067 retry.go:31] will retry after 1.321851998s: waiting for machine to come up
	I0912 21:56:49.662997   25697 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 21:56:49.663569   25697 main.go:141] libmachine: (ha-475401-m02) DBG | unable to find current IP address of domain ha-475401-m02 in network mk-ha-475401
	I0912 21:56:49.663593   25697 main.go:141] libmachine: (ha-475401-m02) DBG | I0912 21:56:49.663528   26067 retry.go:31] will retry after 1.931867868s: waiting for machine to come up
	I0912 21:56:51.596580   25697 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 21:56:51.597156   25697 main.go:141] libmachine: (ha-475401-m02) DBG | unable to find current IP address of domain ha-475401-m02 in network mk-ha-475401
	I0912 21:56:51.597293   25697 main.go:141] libmachine: (ha-475401-m02) DBG | I0912 21:56:51.596987   26067 retry.go:31] will retry after 2.691762052s: waiting for machine to come up
	I0912 21:56:54.291916   25697 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 21:56:54.292477   25697 main.go:141] libmachine: (ha-475401-m02) DBG | unable to find current IP address of domain ha-475401-m02 in network mk-ha-475401
	I0912 21:56:54.292506   25697 main.go:141] libmachine: (ha-475401-m02) DBG | I0912 21:56:54.292427   26067 retry.go:31] will retry after 3.403416956s: waiting for machine to come up
	I0912 21:56:57.698211   25697 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 21:56:57.698615   25697 main.go:141] libmachine: (ha-475401-m02) DBG | unable to find current IP address of domain ha-475401-m02 in network mk-ha-475401
	I0912 21:56:57.698643   25697 main.go:141] libmachine: (ha-475401-m02) DBG | I0912 21:56:57.698559   26067 retry.go:31] will retry after 3.117356745s: waiting for machine to come up
	I0912 21:57:00.819759   25697 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 21:57:00.820426   25697 main.go:141] libmachine: (ha-475401-m02) Found IP for machine: 192.168.39.222
	I0912 21:57:00.820456   25697 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has current primary IP address 192.168.39.222 and MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 21:57:00.820465   25697 main.go:141] libmachine: (ha-475401-m02) Reserving static IP address...
	I0912 21:57:00.820919   25697 main.go:141] libmachine: (ha-475401-m02) DBG | unable to find host DHCP lease matching {name: "ha-475401-m02", mac: "52:54:00:ad:31:3a", ip: "192.168.39.222"} in network mk-ha-475401
	I0912 21:57:00.896337   25697 main.go:141] libmachine: (ha-475401-m02) DBG | Getting to WaitForSSH function...
	I0912 21:57:00.896391   25697 main.go:141] libmachine: (ha-475401-m02) Reserved static IP address: 192.168.39.222
	I0912 21:57:00.896405   25697 main.go:141] libmachine: (ha-475401-m02) Waiting for SSH to be available...
	I0912 21:57:00.899059   25697 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 21:57:00.899473   25697 main.go:141] libmachine: (ha-475401-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:31:3a", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:55 +0000 UTC Type:0 Mac:52:54:00:ad:31:3a Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ad:31:3a}
	I0912 21:57:00.899499   25697 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 21:57:00.899660   25697 main.go:141] libmachine: (ha-475401-m02) DBG | Using SSH client type: external
	I0912 21:57:00.899687   25697 main.go:141] libmachine: (ha-475401-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401-m02/id_rsa (-rw-------)
	I0912 21:57:00.899720   25697 main.go:141] libmachine: (ha-475401-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.222 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0912 21:57:00.899728   25697 main.go:141] libmachine: (ha-475401-m02) DBG | About to run SSH command:
	I0912 21:57:00.899740   25697 main.go:141] libmachine: (ha-475401-m02) DBG | exit 0
	I0912 21:57:01.021499   25697 main.go:141] libmachine: (ha-475401-m02) DBG | SSH cmd err, output: <nil>: 
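For readability, the "external" SSH probe logged just above amounts to invoking /usr/bin/ssh with the flags shown and running `exit 0` on the guest. A minimal sketch of the equivalent command, reassembled from those flags (options reordered to precede the destination for clarity; paths and address taken verbatim from the log):

    # Probe that the new node accepts SSH before provisioning continues
    /usr/bin/ssh -F /dev/null \
      -o ConnectionAttempts=3 -o ConnectTimeout=10 \
      -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet \
      -o PasswordAuthentication=no -o ServerAliveInterval=60 \
      -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
      -o IdentitiesOnly=yes \
      -i /home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401-m02/id_rsa \
      -p 22 docker@192.168.39.222 "exit 0"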
	I0912 21:57:01.021784   25697 main.go:141] libmachine: (ha-475401-m02) KVM machine creation complete!
	I0912 21:57:01.022111   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetConfigRaw
	I0912 21:57:01.022647   25697 main.go:141] libmachine: (ha-475401-m02) Calling .DriverName
	I0912 21:57:01.022828   25697 main.go:141] libmachine: (ha-475401-m02) Calling .DriverName
	I0912 21:57:01.022982   25697 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0912 21:57:01.022995   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetState
	I0912 21:57:01.024319   25697 main.go:141] libmachine: Detecting operating system of created instance...
	I0912 21:57:01.024333   25697 main.go:141] libmachine: Waiting for SSH to be available...
	I0912 21:57:01.024343   25697 main.go:141] libmachine: Getting to WaitForSSH function...
	I0912 21:57:01.024351   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHHostname
	I0912 21:57:01.027044   25697 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 21:57:01.027459   25697 main.go:141] libmachine: (ha-475401-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:31:3a", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:55 +0000 UTC Type:0 Mac:52:54:00:ad:31:3a Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-475401-m02 Clientid:01:52:54:00:ad:31:3a}
	I0912 21:57:01.027491   25697 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 21:57:01.027621   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHPort
	I0912 21:57:01.027808   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHKeyPath
	I0912 21:57:01.027951   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHKeyPath
	I0912 21:57:01.028202   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHUsername
	I0912 21:57:01.028403   25697 main.go:141] libmachine: Using SSH client type: native
	I0912 21:57:01.028591   25697 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0912 21:57:01.028601   25697 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0912 21:57:01.128848   25697 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0912 21:57:01.128868   25697 main.go:141] libmachine: Detecting the provisioner...
	I0912 21:57:01.128876   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHHostname
	I0912 21:57:01.131443   25697 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 21:57:01.131751   25697 main.go:141] libmachine: (ha-475401-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:31:3a", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:55 +0000 UTC Type:0 Mac:52:54:00:ad:31:3a Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-475401-m02 Clientid:01:52:54:00:ad:31:3a}
	I0912 21:57:01.131781   25697 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 21:57:01.131911   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHPort
	I0912 21:57:01.132097   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHKeyPath
	I0912 21:57:01.132261   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHKeyPath
	I0912 21:57:01.132399   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHUsername
	I0912 21:57:01.132547   25697 main.go:141] libmachine: Using SSH client type: native
	I0912 21:57:01.132786   25697 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0912 21:57:01.132802   25697 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0912 21:57:01.234033   25697 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0912 21:57:01.234093   25697 main.go:141] libmachine: found compatible host: buildroot
	I0912 21:57:01.234102   25697 main.go:141] libmachine: Provisioning with buildroot...
	I0912 21:57:01.234111   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetMachineName
	I0912 21:57:01.234532   25697 buildroot.go:166] provisioning hostname "ha-475401-m02"
	I0912 21:57:01.234563   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetMachineName
	I0912 21:57:01.234770   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHHostname
	I0912 21:57:01.237526   25697 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 21:57:01.237885   25697 main.go:141] libmachine: (ha-475401-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:31:3a", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:55 +0000 UTC Type:0 Mac:52:54:00:ad:31:3a Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-475401-m02 Clientid:01:52:54:00:ad:31:3a}
	I0912 21:57:01.237913   25697 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 21:57:01.238069   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHPort
	I0912 21:57:01.238252   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHKeyPath
	I0912 21:57:01.238432   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHKeyPath
	I0912 21:57:01.238559   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHUsername
	I0912 21:57:01.238719   25697 main.go:141] libmachine: Using SSH client type: native
	I0912 21:57:01.238945   25697 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0912 21:57:01.238962   25697 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-475401-m02 && echo "ha-475401-m02" | sudo tee /etc/hostname
	I0912 21:57:01.356940   25697 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-475401-m02
	
	I0912 21:57:01.356962   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHHostname
	I0912 21:57:01.360119   25697 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 21:57:01.360549   25697 main.go:141] libmachine: (ha-475401-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:31:3a", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:55 +0000 UTC Type:0 Mac:52:54:00:ad:31:3a Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-475401-m02 Clientid:01:52:54:00:ad:31:3a}
	I0912 21:57:01.360576   25697 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 21:57:01.360776   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHPort
	I0912 21:57:01.360977   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHKeyPath
	I0912 21:57:01.361130   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHKeyPath
	I0912 21:57:01.361260   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHUsername
	I0912 21:57:01.361437   25697 main.go:141] libmachine: Using SSH client type: native
	I0912 21:57:01.361755   25697 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0912 21:57:01.361788   25697 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-475401-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-475401-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-475401-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0912 21:57:01.474502   25697 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0912 21:57:01.474531   25697 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19616-5891/.minikube CaCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19616-5891/.minikube}
	I0912 21:57:01.474548   25697 buildroot.go:174] setting up certificates
	I0912 21:57:01.474558   25697 provision.go:84] configureAuth start
	I0912 21:57:01.474568   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetMachineName
	I0912 21:57:01.474846   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetIP
	I0912 21:57:01.477830   25697 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 21:57:01.478312   25697 main.go:141] libmachine: (ha-475401-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:31:3a", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:55 +0000 UTC Type:0 Mac:52:54:00:ad:31:3a Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-475401-m02 Clientid:01:52:54:00:ad:31:3a}
	I0912 21:57:01.478345   25697 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 21:57:01.478493   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHHostname
	I0912 21:57:01.481300   25697 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 21:57:01.481744   25697 main.go:141] libmachine: (ha-475401-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:31:3a", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:55 +0000 UTC Type:0 Mac:52:54:00:ad:31:3a Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-475401-m02 Clientid:01:52:54:00:ad:31:3a}
	I0912 21:57:01.481775   25697 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 21:57:01.481967   25697 provision.go:143] copyHostCerts
	I0912 21:57:01.481995   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem
	I0912 21:57:01.482023   25697 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem, removing ...
	I0912 21:57:01.482033   25697 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem
	I0912 21:57:01.482116   25697 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem (1082 bytes)
	I0912 21:57:01.482210   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem
	I0912 21:57:01.482233   25697 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem, removing ...
	I0912 21:57:01.482242   25697 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem
	I0912 21:57:01.482282   25697 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem (1123 bytes)
	I0912 21:57:01.482385   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem
	I0912 21:57:01.482422   25697 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem, removing ...
	I0912 21:57:01.482433   25697 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem
	I0912 21:57:01.482473   25697 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem (1679 bytes)
	I0912 21:57:01.482538   25697 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem org=jenkins.ha-475401-m02 san=[127.0.0.1 192.168.39.222 ha-475401-m02 localhost minikube]
	I0912 21:57:01.677785   25697 provision.go:177] copyRemoteCerts
	I0912 21:57:01.677843   25697 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0912 21:57:01.677865   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHHostname
	I0912 21:57:01.680375   25697 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 21:57:01.680698   25697 main.go:141] libmachine: (ha-475401-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:31:3a", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:55 +0000 UTC Type:0 Mac:52:54:00:ad:31:3a Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-475401-m02 Clientid:01:52:54:00:ad:31:3a}
	I0912 21:57:01.680726   25697 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 21:57:01.680918   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHPort
	I0912 21:57:01.681118   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHKeyPath
	I0912 21:57:01.681278   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHUsername
	I0912 21:57:01.681435   25697 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401-m02/id_rsa Username:docker}
	I0912 21:57:01.763387   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0912 21:57:01.763463   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0912 21:57:01.786565   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0912 21:57:01.786649   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0912 21:57:01.810853   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0912 21:57:01.810938   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0912 21:57:01.833609   25697 provision.go:87] duration metric: took 359.040045ms to configureAuth
	I0912 21:57:01.833652   25697 buildroot.go:189] setting minikube options for container-runtime
	I0912 21:57:01.833847   25697 config.go:182] Loaded profile config "ha-475401": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 21:57:01.833966   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHHostname
	I0912 21:57:01.836717   25697 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 21:57:01.837102   25697 main.go:141] libmachine: (ha-475401-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:31:3a", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:55 +0000 UTC Type:0 Mac:52:54:00:ad:31:3a Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-475401-m02 Clientid:01:52:54:00:ad:31:3a}
	I0912 21:57:01.837133   25697 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 21:57:01.837309   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHPort
	I0912 21:57:01.837554   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHKeyPath
	I0912 21:57:01.837721   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHKeyPath
	I0912 21:57:01.837885   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHUsername
	I0912 21:57:01.838049   25697 main.go:141] libmachine: Using SSH client type: native
	I0912 21:57:01.838242   25697 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0912 21:57:01.838263   25697 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0912 21:57:02.057850   25697 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
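The container-runtime option above is written to a sysconfig drop-in and CRI-O is restarted. A cleaned-up sketch of the same steps as a script (the file path and contents are taken from the logged command; that the crio systemd unit reads this file via an EnvironmentFile entry is an assumption, since the unit itself is not shown in this log):

    # Pass --insecure-registry for the service CIDR to CRI-O, then restart it
    sudo mkdir -p /etc/sysconfig
    printf "%s\n" "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" \
      | sudo tee /etc/sysconfig/crio.minikube
    sudo systemctl restart crio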
	
	I0912 21:57:02.057886   25697 main.go:141] libmachine: Checking connection to Docker...
	I0912 21:57:02.057897   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetURL
	I0912 21:57:02.059171   25697 main.go:141] libmachine: (ha-475401-m02) DBG | Using libvirt version 6000000
	I0912 21:57:02.061315   25697 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 21:57:02.061692   25697 main.go:141] libmachine: (ha-475401-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:31:3a", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:55 +0000 UTC Type:0 Mac:52:54:00:ad:31:3a Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-475401-m02 Clientid:01:52:54:00:ad:31:3a}
	I0912 21:57:02.061722   25697 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 21:57:02.061848   25697 main.go:141] libmachine: Docker is up and running!
	I0912 21:57:02.061867   25697 main.go:141] libmachine: Reticulating splines...
	I0912 21:57:02.061875   25697 client.go:171] duration metric: took 20.89116902s to LocalClient.Create
	I0912 21:57:02.061904   25697 start.go:167] duration metric: took 20.891228134s to libmachine.API.Create "ha-475401"
	I0912 21:57:02.061918   25697 start.go:293] postStartSetup for "ha-475401-m02" (driver="kvm2")
	I0912 21:57:02.061931   25697 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0912 21:57:02.061972   25697 main.go:141] libmachine: (ha-475401-m02) Calling .DriverName
	I0912 21:57:02.062221   25697 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0912 21:57:02.062252   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHHostname
	I0912 21:57:02.064772   25697 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 21:57:02.065172   25697 main.go:141] libmachine: (ha-475401-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:31:3a", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:55 +0000 UTC Type:0 Mac:52:54:00:ad:31:3a Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-475401-m02 Clientid:01:52:54:00:ad:31:3a}
	I0912 21:57:02.065200   25697 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 21:57:02.065317   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHPort
	I0912 21:57:02.065526   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHKeyPath
	I0912 21:57:02.065724   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHUsername
	I0912 21:57:02.065954   25697 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401-m02/id_rsa Username:docker}
	I0912 21:57:02.148007   25697 ssh_runner.go:195] Run: cat /etc/os-release
	I0912 21:57:02.152089   25697 info.go:137] Remote host: Buildroot 2023.02.9
	I0912 21:57:02.152114   25697 filesync.go:126] Scanning /home/jenkins/minikube-integration/19616-5891/.minikube/addons for local assets ...
	I0912 21:57:02.152194   25697 filesync.go:126] Scanning /home/jenkins/minikube-integration/19616-5891/.minikube/files for local assets ...
	I0912 21:57:02.152264   25697 filesync.go:149] local asset: /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem -> 130832.pem in /etc/ssl/certs
	I0912 21:57:02.152273   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem -> /etc/ssl/certs/130832.pem
	I0912 21:57:02.152362   25697 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0912 21:57:02.161651   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem --> /etc/ssl/certs/130832.pem (1708 bytes)
	I0912 21:57:02.185045   25697 start.go:296] duration metric: took 123.111258ms for postStartSetup
	I0912 21:57:02.185107   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetConfigRaw
	I0912 21:57:02.185944   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetIP
	I0912 21:57:02.188845   25697 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 21:57:02.189323   25697 main.go:141] libmachine: (ha-475401-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:31:3a", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:55 +0000 UTC Type:0 Mac:52:54:00:ad:31:3a Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-475401-m02 Clientid:01:52:54:00:ad:31:3a}
	I0912 21:57:02.189349   25697 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 21:57:02.189669   25697 profile.go:143] Saving config to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/config.json ...
	I0912 21:57:02.189901   25697 start.go:128] duration metric: took 21.039650208s to createHost
	I0912 21:57:02.189932   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHHostname
	I0912 21:57:02.192197   25697 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 21:57:02.192685   25697 main.go:141] libmachine: (ha-475401-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:31:3a", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:55 +0000 UTC Type:0 Mac:52:54:00:ad:31:3a Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-475401-m02 Clientid:01:52:54:00:ad:31:3a}
	I0912 21:57:02.192713   25697 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 21:57:02.192886   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHPort
	I0912 21:57:02.193095   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHKeyPath
	I0912 21:57:02.193268   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHKeyPath
	I0912 21:57:02.193420   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHUsername
	I0912 21:57:02.193586   25697 main.go:141] libmachine: Using SSH client type: native
	I0912 21:57:02.193780   25697 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0912 21:57:02.193793   25697 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0912 21:57:02.297929   25697 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726178222.259138843
	
	I0912 21:57:02.297956   25697 fix.go:216] guest clock: 1726178222.259138843
	I0912 21:57:02.297976   25697 fix.go:229] Guest: 2024-09-12 21:57:02.259138843 +0000 UTC Remote: 2024-09-12 21:57:02.18991842 +0000 UTC m=+66.796933930 (delta=69.220423ms)
	I0912 21:57:02.298002   25697 fix.go:200] guest clock delta is within tolerance: 69.220423ms
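The delta reported by the guest-clock check is simply the guest timestamp minus the host-side timestamp, compared against a skew tolerance:

    21:57:02.259138843 (guest) - 21:57:02.189918420 (remote) = 0.069220423 s = 69.220423 ms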
	I0912 21:57:02.298009   25697 start.go:83] releasing machines lock for "ha-475401-m02", held for 21.147859148s
	I0912 21:57:02.298040   25697 main.go:141] libmachine: (ha-475401-m02) Calling .DriverName
	I0912 21:57:02.298310   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetIP
	I0912 21:57:02.301169   25697 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 21:57:02.301574   25697 main.go:141] libmachine: (ha-475401-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:31:3a", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:55 +0000 UTC Type:0 Mac:52:54:00:ad:31:3a Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-475401-m02 Clientid:01:52:54:00:ad:31:3a}
	I0912 21:57:02.301605   25697 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 21:57:02.303680   25697 out.go:177] * Found network options:
	I0912 21:57:02.304732   25697 out.go:177]   - NO_PROXY=192.168.39.203
	W0912 21:57:02.305654   25697 proxy.go:119] fail to check proxy env: Error ip not in block
	I0912 21:57:02.305679   25697 main.go:141] libmachine: (ha-475401-m02) Calling .DriverName
	I0912 21:57:02.306187   25697 main.go:141] libmachine: (ha-475401-m02) Calling .DriverName
	I0912 21:57:02.306366   25697 main.go:141] libmachine: (ha-475401-m02) Calling .DriverName
	I0912 21:57:02.306456   25697 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0912 21:57:02.306494   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHHostname
	W0912 21:57:02.306580   25697 proxy.go:119] fail to check proxy env: Error ip not in block
	I0912 21:57:02.306665   25697 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0912 21:57:02.306689   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHHostname
	I0912 21:57:02.309209   25697 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 21:57:02.309389   25697 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 21:57:02.309562   25697 main.go:141] libmachine: (ha-475401-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:31:3a", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:55 +0000 UTC Type:0 Mac:52:54:00:ad:31:3a Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-475401-m02 Clientid:01:52:54:00:ad:31:3a}
	I0912 21:57:02.309595   25697 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 21:57:02.309698   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHPort
	I0912 21:57:02.309864   25697 main.go:141] libmachine: (ha-475401-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:31:3a", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:55 +0000 UTC Type:0 Mac:52:54:00:ad:31:3a Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-475401-m02 Clientid:01:52:54:00:ad:31:3a}
	I0912 21:57:02.309887   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHKeyPath
	I0912 21:57:02.309892   25697 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 21:57:02.309986   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHPort
	I0912 21:57:02.310055   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHUsername
	I0912 21:57:02.310154   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHKeyPath
	I0912 21:57:02.310264   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHUsername
	I0912 21:57:02.310268   25697 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401-m02/id_rsa Username:docker}
	I0912 21:57:02.310469   25697 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401-m02/id_rsa Username:docker}
	I0912 21:57:02.533536   25697 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0912 21:57:02.541876   25697 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0912 21:57:02.541936   25697 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0912 21:57:02.557398   25697 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0912 21:57:02.557440   25697 start.go:495] detecting cgroup driver to use...
	I0912 21:57:02.557514   25697 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0912 21:57:02.576591   25697 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0912 21:57:02.593730   25697 docker.go:217] disabling cri-docker service (if available) ...
	I0912 21:57:02.593803   25697 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0912 21:57:02.610020   25697 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0912 21:57:02.628187   25697 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0912 21:57:02.766943   25697 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0912 21:57:02.932626   25697 docker.go:233] disabling docker service ...
	I0912 21:57:02.932685   25697 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0912 21:57:02.946722   25697 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0912 21:57:02.959680   25697 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0912 21:57:03.085801   25697 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0912 21:57:03.211950   25697 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0912 21:57:03.224755   25697 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0912 21:57:03.241879   25697 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0912 21:57:03.241948   25697 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 21:57:03.251810   25697 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0912 21:57:03.251876   25697 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 21:57:03.262573   25697 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 21:57:03.273089   25697 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 21:57:03.283322   25697 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0912 21:57:03.293496   25697 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 21:57:03.304580   25697 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 21:57:03.321868   25697 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 21:57:03.332457   25697 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0912 21:57:03.342938   25697 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0912 21:57:03.343001   25697 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0912 21:57:03.354986   25697 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0912 21:57:03.365096   25697 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 21:57:03.487874   25697 ssh_runner.go:195] Run: sudo systemctl restart crio
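The sequence of sed invocations above edits the CRI-O drop-in before the restart. A sketch of the keys they leave behind in /etc/crio/crio.conf.d/02-crio.conf, reconstructed from those commands (an approximation of the touched settings, not the full file):

    pause_image = "registry.k8s.io/pause:3.10"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]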
	I0912 21:57:03.584656   25697 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0912 21:57:03.584724   25697 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0912 21:57:03.591205   25697 start.go:563] Will wait 60s for crictl version
	I0912 21:57:03.591274   25697 ssh_runner.go:195] Run: which crictl
	I0912 21:57:03.595283   25697 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0912 21:57:03.632020   25697 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0912 21:57:03.632105   25697 ssh_runner.go:195] Run: crio --version
	I0912 21:57:03.659839   25697 ssh_runner.go:195] Run: crio --version
	I0912 21:57:03.689535   25697 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0912 21:57:03.690747   25697 out.go:177]   - env NO_PROXY=192.168.39.203
	I0912 21:57:03.691759   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetIP
	I0912 21:57:03.694337   25697 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 21:57:03.694692   25697 main.go:141] libmachine: (ha-475401-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:31:3a", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:55 +0000 UTC Type:0 Mac:52:54:00:ad:31:3a Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-475401-m02 Clientid:01:52:54:00:ad:31:3a}
	I0912 21:57:03.694717   25697 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 21:57:03.695027   25697 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0912 21:57:03.698979   25697 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0912 21:57:03.712051   25697 mustload.go:65] Loading cluster: ha-475401
	I0912 21:57:03.712303   25697 config.go:182] Loaded profile config "ha-475401": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 21:57:03.712566   25697 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:57:03.712592   25697 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:57:03.726938   25697 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44199
	I0912 21:57:03.727353   25697 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:57:03.727820   25697 main.go:141] libmachine: Using API Version  1
	I0912 21:57:03.727835   25697 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:57:03.728158   25697 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:57:03.728354   25697 main.go:141] libmachine: (ha-475401) Calling .GetState
	I0912 21:57:03.730112   25697 host.go:66] Checking if "ha-475401" exists ...
	I0912 21:57:03.730449   25697 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:57:03.730481   25697 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:57:03.744800   25697 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35309
	I0912 21:57:03.745195   25697 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:57:03.745637   25697 main.go:141] libmachine: Using API Version  1
	I0912 21:57:03.745660   25697 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:57:03.745972   25697 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:57:03.746177   25697 main.go:141] libmachine: (ha-475401) Calling .DriverName
	I0912 21:57:03.746442   25697 certs.go:68] Setting up /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401 for IP: 192.168.39.222
	I0912 21:57:03.746459   25697 certs.go:194] generating shared ca certs ...
	I0912 21:57:03.746476   25697 certs.go:226] acquiring lock for ca certs: {Name:mk5e19f2c2757edc3fcb6b43a39efee0885b349c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:57:03.746621   25697 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.key
	I0912 21:57:03.746684   25697 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.key
	I0912 21:57:03.746697   25697 certs.go:256] generating profile certs ...
	I0912 21:57:03.746791   25697 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/client.key
	I0912 21:57:03.746821   25697 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.key.8675e998
	I0912 21:57:03.746833   25697 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.crt.8675e998 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.203 192.168.39.222 192.168.39.254]
	I0912 21:57:03.895425   25697 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.crt.8675e998 ...
	I0912 21:57:03.895452   25697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.crt.8675e998: {Name:mk2a12f91c910d3f115f9f1364d04711b2cb2665 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:57:03.895639   25697 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.key.8675e998 ...
	I0912 21:57:03.895660   25697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.key.8675e998: {Name:mk196ca5f3a89070abdf1cfc1ff4bafff02be87c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:57:03.895752   25697 certs.go:381] copying /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.crt.8675e998 -> /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.crt
	I0912 21:57:03.895903   25697 certs.go:385] copying /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.key.8675e998 -> /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.key
	I0912 21:57:03.896068   25697 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/proxy-client.key
	I0912 21:57:03.896087   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0912 21:57:03.896105   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0912 21:57:03.896124   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0912 21:57:03.896142   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0912 21:57:03.896164   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0912 21:57:03.896185   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0912 21:57:03.896202   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0912 21:57:03.896220   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0912 21:57:03.896277   25697 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083.pem (1338 bytes)
	W0912 21:57:03.896313   25697 certs.go:480] ignoring /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083_empty.pem, impossibly tiny 0 bytes
	I0912 21:57:03.896327   25697 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem (1679 bytes)
	I0912 21:57:03.896366   25697 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem (1082 bytes)
	I0912 21:57:03.896397   25697 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem (1123 bytes)
	I0912 21:57:03.896432   25697 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem (1679 bytes)
	I0912 21:57:03.896486   25697 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem (1708 bytes)
	I0912 21:57:03.896522   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem -> /usr/share/ca-certificates/130832.pem
	I0912 21:57:03.896542   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0912 21:57:03.896559   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083.pem -> /usr/share/ca-certificates/13083.pem
	I0912 21:57:03.896596   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHHostname
	I0912 21:57:03.899717   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:57:03.900035   25697 main.go:141] libmachine: (ha-475401) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:0e:dd", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:09 +0000 UTC Type:0 Mac:52:54:00:b0:0e:dd Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-475401 Clientid:01:52:54:00:b0:0e:dd}
	I0912 21:57:03.900059   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined IP address 192.168.39.203 and MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:57:03.900183   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHPort
	I0912 21:57:03.900390   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHKeyPath
	I0912 21:57:03.900571   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHUsername
	I0912 21:57:03.900698   25697 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401/id_rsa Username:docker}
	I0912 21:57:03.974008   25697 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0912 21:57:03.979459   25697 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0912 21:57:03.990042   25697 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0912 21:57:03.994120   25697 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0912 21:57:04.004288   25697 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0912 21:57:04.008162   25697 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0912 21:57:04.018126   25697 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0912 21:57:04.022032   25697 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0912 21:57:04.031919   25697 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0912 21:57:04.035907   25697 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0912 21:57:04.046398   25697 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0912 21:57:04.050375   25697 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0912 21:57:04.060540   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0912 21:57:04.085043   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0912 21:57:04.108605   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0912 21:57:04.132995   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0912 21:57:04.157865   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0912 21:57:04.182540   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0912 21:57:04.206114   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0912 21:57:04.229754   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0912 21:57:04.253438   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem --> /usr/share/ca-certificates/130832.pem (1708 bytes)
	I0912 21:57:04.277210   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0912 21:57:04.301244   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083.pem --> /usr/share/ca-certificates/13083.pem (1338 bytes)
	I0912 21:57:04.327386   25697 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0912 21:57:04.344517   25697 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0912 21:57:04.361799   25697 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0912 21:57:04.377630   25697 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0912 21:57:04.394094   25697 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0912 21:57:04.410872   25697 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0912 21:57:04.426569   25697 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0912 21:57:04.442369   25697 ssh_runner.go:195] Run: openssl version
	I0912 21:57:04.447699   25697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0912 21:57:04.458193   25697 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0912 21:57:04.462315   25697 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 12 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0912 21:57:04.462376   25697 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0912 21:57:04.467859   25697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0912 21:57:04.479011   25697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13083.pem && ln -fs /usr/share/ca-certificates/13083.pem /etc/ssl/certs/13083.pem"
	I0912 21:57:04.489740   25697 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13083.pem
	I0912 21:57:04.494133   25697 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 12 21:47 /usr/share/ca-certificates/13083.pem
	I0912 21:57:04.494196   25697 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13083.pem
	I0912 21:57:04.500017   25697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13083.pem /etc/ssl/certs/51391683.0"
	I0912 21:57:04.512533   25697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130832.pem && ln -fs /usr/share/ca-certificates/130832.pem /etc/ssl/certs/130832.pem"
	I0912 21:57:04.525896   25697 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130832.pem
	I0912 21:57:04.530625   25697 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 12 21:47 /usr/share/ca-certificates/130832.pem
	I0912 21:57:04.530686   25697 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130832.pem
	I0912 21:57:04.536234   25697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130832.pem /etc/ssl/certs/3ec20f2e.0"
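	(For reference, the openssl/ln pairs above follow the standard OpenSSL hashed-symlink layout: each CA certificate under /etc/ssl/certs is also reachable through a symlink named after its subject hash, e.g. b5213941.0. A minimal sketch of the same step done by hand, assuming a certificate already copied to /usr/share/ca-certificates/minikubeCA.pem; these commands are illustrative and not part of the test log:

	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # subject hash, e.g. b5213941
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/${HASH}.0)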
	I0912 21:57:04.546650   25697 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0912 21:57:04.551285   25697 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0912 21:57:04.551332   25697 kubeadm.go:934] updating node {m02 192.168.39.222 8443 v1.31.1 crio true true} ...
	I0912 21:57:04.551424   25697 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-475401-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.222
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-475401 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0912 21:57:04.551448   25697 kube-vip.go:115] generating kube-vip config ...
	I0912 21:57:04.551481   25697 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0912 21:57:04.570344   25697 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0912 21:57:04.570406   25697 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
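	(The manifest above runs kube-vip as a static pod on the new control-plane node: it holds a leader-election lease (plndr-cp-lock), binds the HA virtual IP 192.168.39.254 on eth0, and load-balances API traffic on port 8443 across the control planes. Two quick checks that the VIP is actually serving, run on a control-plane node; illustrative commands only, not taken from the test log:

	    ip addr show eth0 | grep 192.168.39.254      # the current lease holder has the VIP bound on eth0
	    curl -k https://192.168.39.254:8443/healthz  # the apiserver answers through the VIP)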
	I0912 21:57:04.570459   25697 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0912 21:57:04.581554   25697 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0912 21:57:04.581631   25697 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0912 21:57:04.592474   25697 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0912 21:57:04.592505   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0912 21:57:04.592553   25697 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19616-5891/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I0912 21:57:04.592586   25697 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0912 21:57:04.592609   25697 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19616-5891/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I0912 21:57:04.596858   25697 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0912 21:57:04.596892   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0912 21:57:05.686519   25697 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 21:57:05.701145   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0912 21:57:05.701235   25697 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0912 21:57:05.705856   25697 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0912 21:57:05.705885   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I0912 21:57:06.107758   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0912 21:57:06.107829   25697 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0912 21:57:06.112751   25697 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0912 21:57:06.112793   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0912 21:57:06.355603   25697 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0912 21:57:06.364464   25697 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0912 21:57:06.380238   25697 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0912 21:57:06.396012   25697 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0912 21:57:06.412331   25697 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0912 21:57:06.416180   25697 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0912 21:57:06.428760   25697 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 21:57:06.551324   25697 ssh_runner.go:195] Run: sudo systemctl start kubelet
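	(At this point the kubelet unit file, its kubeadm drop-in 10-kubeadm.conf, and the kube-vip manifest have been written to m02 and the kubelet has been started. A sketch of how that could be confirmed over SSH on the node; assumed checks, not part of the run:

	    systemctl cat kubelet                        # unit plus the 10-kubeadm.conf drop-in
	    systemctl is-active kubelet
	    ls /etc/kubernetes/manifests/kube-vip.yaml)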
	I0912 21:57:06.567711   25697 host.go:66] Checking if "ha-475401" exists ...
	I0912 21:57:06.568062   25697 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:57:06.568090   25697 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:57:06.583279   25697 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35363
	I0912 21:57:06.583771   25697 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:57:06.584254   25697 main.go:141] libmachine: Using API Version  1
	I0912 21:57:06.584277   25697 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:57:06.584594   25697 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:57:06.584807   25697 main.go:141] libmachine: (ha-475401) Calling .DriverName
	I0912 21:57:06.584969   25697 start.go:317] joinCluster: &{Name:ha-475401 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-475401 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.203 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.222 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 21:57:06.585063   25697 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0912 21:57:06.585078   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHHostname
	I0912 21:57:06.588502   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:57:06.588985   25697 main.go:141] libmachine: (ha-475401) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:0e:dd", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:09 +0000 UTC Type:0 Mac:52:54:00:b0:0e:dd Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-475401 Clientid:01:52:54:00:b0:0e:dd}
	I0912 21:57:06.589024   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined IP address 192.168.39.203 and MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:57:06.589204   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHPort
	I0912 21:57:06.589401   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHKeyPath
	I0912 21:57:06.589570   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHUsername
	I0912 21:57:06.589742   25697 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401/id_rsa Username:docker}
	I0912 21:57:06.746389   25697 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.222 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0912 21:57:06.746432   25697 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token c37vl9.mq5q1jgfq9gk00ux --discovery-token-ca-cert-hash sha256:e9285e6e7599a58febe9d174fa57ffa69a9b4bf818d01b703e61fc8c784ff29f --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-475401-m02 --control-plane --apiserver-advertise-address=192.168.39.222 --apiserver-bind-port=8443"
	I0912 21:57:28.657053   25697 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token c37vl9.mq5q1jgfq9gk00ux --discovery-token-ca-cert-hash sha256:e9285e6e7599a58febe9d174fa57ffa69a9b4bf818d01b703e61fc8c784ff29f --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-475401-m02 --control-plane --apiserver-advertise-address=192.168.39.222 --apiserver-bind-port=8443": (21.910594329s)
	I0912 21:57:28.657091   25697 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0912 21:57:29.215977   25697 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-475401-m02 minikube.k8s.io/updated_at=2024_09_12T21_57_29_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f6bc674a17941874d4e5b792b09c1791d30622b8 minikube.k8s.io/name=ha-475401 minikube.k8s.io/primary=false
	I0912 21:57:29.341536   25697 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-475401-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0912 21:57:29.458668   25697 start.go:319] duration metric: took 22.873693207s to joinCluster
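	(After the join, m02 is labeled with the minikube metadata and its control-plane NoSchedule taint is removed, so the node can also schedule ordinary workloads in this two-node HA topology. Cluster membership can be spot-checked with kubectl; an illustration only, the test itself verifies this through the API calls that follow:

	    kubectl get nodes -o wide                                  # ha-475401-m02 should be listed
	    kubectl -n kube-system get pods -l component=etcd -o wide  # one etcd member per control-plane node)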
	I0912 21:57:29.458747   25697 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.222 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0912 21:57:29.459041   25697 config.go:182] Loaded profile config "ha-475401": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 21:57:29.460132   25697 out.go:177] * Verifying Kubernetes components...
	I0912 21:57:29.461469   25697 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 21:57:29.787500   25697 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0912 21:57:29.822464   25697 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19616-5891/kubeconfig
	I0912 21:57:29.822795   25697 kapi.go:59] client config for ha-475401: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/client.crt", KeyFile:"/home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/client.key", CAFile:"/home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f30300), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0912 21:57:29.822874   25697 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.203:8443
	I0912 21:57:29.823186   25697 node_ready.go:35] waiting up to 6m0s for node "ha-475401-m02" to be "Ready" ...
	I0912 21:57:29.823295   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:57:29.823306   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:29.823317   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:29.823324   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:29.833517   25697 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0912 21:57:30.324064   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:57:30.324115   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:30.324128   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:30.324132   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:30.332420   25697 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0912 21:57:30.823976   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:57:30.824002   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:30.824014   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:30.824022   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:30.829128   25697 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0912 21:57:31.324376   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:57:31.324402   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:31.324409   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:31.324413   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:31.327769   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:57:31.823392   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:57:31.823414   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:31.823434   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:31.823437   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:31.826701   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:57:31.827130   25697 node_ready.go:53] node "ha-475401-m02" has status "Ready":"False"
	I0912 21:57:32.323505   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:57:32.323526   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:32.323533   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:32.323536   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:32.326546   25697 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 21:57:32.823471   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:57:32.823544   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:32.823562   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:32.823572   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:32.827544   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:57:33.323437   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:57:33.323459   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:33.323467   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:33.323469   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:33.326900   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:57:33.824361   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:57:33.824394   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:33.824406   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:33.824410   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:33.834947   25697 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0912 21:57:33.835794   25697 node_ready.go:53] node "ha-475401-m02" has status "Ready":"False"
	I0912 21:57:34.323728   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:57:34.323755   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:34.323767   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:34.323775   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:34.326981   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:57:34.824025   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:57:34.824051   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:34.824062   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:34.824070   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:34.827392   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:57:35.323421   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:57:35.323449   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:35.323460   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:35.323466   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:35.326841   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:57:35.824036   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:57:35.824059   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:35.824070   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:35.824076   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:35.827441   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:57:36.323766   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:57:36.323791   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:36.323800   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:36.323805   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:36.482302   25697 round_trippers.go:574] Response Status: 200 OK in 158 milliseconds
	I0912 21:57:36.482871   25697 node_ready.go:53] node "ha-475401-m02" has status "Ready":"False"
	I0912 21:57:36.823392   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:57:36.823420   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:36.823432   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:36.823437   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:36.826336   25697 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 21:57:37.324382   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:57:37.324411   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:37.324421   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:37.324425   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:37.327434   25697 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 21:57:37.823377   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:57:37.823401   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:37.823429   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:37.823436   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:37.827078   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:57:38.324234   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:57:38.324258   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:38.324266   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:38.324272   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:38.328320   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:57:38.823978   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:57:38.824005   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:38.824017   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:38.824022   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:38.827274   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:57:38.827691   25697 node_ready.go:53] node "ha-475401-m02" has status "Ready":"False"
	I0912 21:57:39.324159   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:57:39.324187   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:39.324199   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:39.324208   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:39.327462   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:57:39.823473   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:57:39.823496   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:39.823501   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:39.823506   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:39.827008   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:57:40.323736   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:57:40.323764   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:40.323772   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:40.323776   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:40.326901   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:57:40.823867   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:57:40.823896   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:40.823904   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:40.823907   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:40.827569   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:57:40.828061   25697 node_ready.go:53] node "ha-475401-m02" has status "Ready":"False"
	I0912 21:57:41.323504   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:57:41.323528   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:41.323538   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:41.323542   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:41.326788   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:57:41.824174   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:57:41.824197   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:41.824204   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:41.824208   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:41.827525   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:57:42.324366   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:57:42.324391   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:42.324401   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:42.324408   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:42.328824   25697 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0912 21:57:42.824063   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:57:42.824086   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:42.824094   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:42.824099   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:42.826890   25697 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 21:57:43.323826   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:57:43.323849   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:43.323858   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:43.323863   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:43.327133   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:57:43.327637   25697 node_ready.go:53] node "ha-475401-m02" has status "Ready":"False"
	I0912 21:57:43.824000   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:57:43.824024   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:43.824031   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:43.824035   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:43.827390   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:57:44.323365   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:57:44.323388   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:44.323394   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:44.323397   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:44.327224   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:57:44.824198   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:57:44.824220   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:44.824230   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:44.824234   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:44.828384   25697 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0912 21:57:45.324372   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:57:45.324400   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:45.324410   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:45.324416   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:45.327948   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:57:45.328685   25697 node_ready.go:53] node "ha-475401-m02" has status "Ready":"False"
	I0912 21:57:45.824183   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:57:45.824212   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:45.824227   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:45.824232   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:45.827918   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:57:46.324352   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:57:46.324373   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:46.324381   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:46.324384   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:46.328046   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:57:46.823401   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:57:46.823428   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:46.823436   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:46.823440   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:46.826851   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:57:47.323570   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:57:47.323598   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:47.323609   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:47.323617   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:47.326794   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:57:47.823591   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:57:47.823616   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:47.823625   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:47.823629   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:47.826970   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:57:47.827787   25697 node_ready.go:49] node "ha-475401-m02" has status "Ready":"True"
	I0912 21:57:47.827807   25697 node_ready.go:38] duration metric: took 18.004595935s for node "ha-475401-m02" to be "Ready" ...
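	(The GET loop above is minikube's own readiness wait against the node object. An approximately equivalent check expressed with kubectl; illustrative only, not the mechanism the test uses:

	    kubectl wait --for=condition=Ready node/ha-475401-m02 --timeout=6m)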
	I0912 21:57:47.827817   25697 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 21:57:47.827891   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods
	I0912 21:57:47.827902   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:47.827912   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:47.827920   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:47.832287   25697 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0912 21:57:47.838612   25697 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-pzsv8" in "kube-system" namespace to be "Ready" ...
	I0912 21:57:47.838684   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-pzsv8
	I0912 21:57:47.838693   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:47.838700   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:47.838704   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:47.841359   25697 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 21:57:47.841979   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401
	I0912 21:57:47.841995   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:47.842002   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:47.842007   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:47.844385   25697 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 21:57:47.844960   25697 pod_ready.go:93] pod "coredns-7c65d6cfc9-pzsv8" in "kube-system" namespace has status "Ready":"True"
	I0912 21:57:47.844981   25697 pod_ready.go:82] duration metric: took 6.34685ms for pod "coredns-7c65d6cfc9-pzsv8" in "kube-system" namespace to be "Ready" ...
	I0912 21:57:47.844994   25697 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-xhdj7" in "kube-system" namespace to be "Ready" ...
	I0912 21:57:47.845046   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-xhdj7
	I0912 21:57:47.845053   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:47.845060   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:47.845065   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:47.847572   25697 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 21:57:47.848298   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401
	I0912 21:57:47.848317   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:47.848344   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:47.848349   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:47.850691   25697 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 21:57:47.851203   25697 pod_ready.go:93] pod "coredns-7c65d6cfc9-xhdj7" in "kube-system" namespace has status "Ready":"True"
	I0912 21:57:47.851224   25697 pod_ready.go:82] duration metric: took 6.218717ms for pod "coredns-7c65d6cfc9-xhdj7" in "kube-system" namespace to be "Ready" ...
	I0912 21:57:47.851237   25697 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-475401" in "kube-system" namespace to be "Ready" ...
	I0912 21:57:47.851294   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods/etcd-ha-475401
	I0912 21:57:47.851308   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:47.851318   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:47.851341   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:47.853481   25697 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 21:57:47.854113   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401
	I0912 21:57:47.854130   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:47.854140   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:47.854145   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:47.856201   25697 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 21:57:47.856694   25697 pod_ready.go:93] pod "etcd-ha-475401" in "kube-system" namespace has status "Ready":"True"
	I0912 21:57:47.856712   25697 pod_ready.go:82] duration metric: took 5.468365ms for pod "etcd-ha-475401" in "kube-system" namespace to be "Ready" ...
	I0912 21:57:47.856722   25697 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-475401-m02" in "kube-system" namespace to be "Ready" ...
	I0912 21:57:47.856769   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods/etcd-ha-475401-m02
	I0912 21:57:47.856779   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:47.856786   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:47.856791   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:47.859024   25697 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 21:57:47.859583   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:57:47.859596   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:47.859603   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:47.859608   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:47.861874   25697 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 21:57:47.862378   25697 pod_ready.go:93] pod "etcd-ha-475401-m02" in "kube-system" namespace has status "Ready":"True"
	I0912 21:57:47.862395   25697 pod_ready.go:82] duration metric: took 5.666663ms for pod "etcd-ha-475401-m02" in "kube-system" namespace to be "Ready" ...
	I0912 21:57:47.862409   25697 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-475401" in "kube-system" namespace to be "Ready" ...
	I0912 21:57:48.023695   25697 request.go:632] Waited for 161.233751ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-475401
	I0912 21:57:48.023764   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-475401
	I0912 21:57:48.023770   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:48.023778   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:48.023783   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:48.026897   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:57:48.223840   25697 request.go:632] Waited for 196.314299ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.203:8443/api/v1/nodes/ha-475401
	I0912 21:57:48.223905   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401
	I0912 21:57:48.223909   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:48.223916   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:48.223920   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:48.226979   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:57:48.227551   25697 pod_ready.go:93] pod "kube-apiserver-ha-475401" in "kube-system" namespace has status "Ready":"True"
	I0912 21:57:48.227577   25697 pod_ready.go:82] duration metric: took 365.161357ms for pod "kube-apiserver-ha-475401" in "kube-system" namespace to be "Ready" ...
	I0912 21:57:48.227587   25697 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-475401-m02" in "kube-system" namespace to be "Ready" ...
	I0912 21:57:48.424610   25697 request.go:632] Waited for 196.950952ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-475401-m02
	I0912 21:57:48.424700   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-475401-m02
	I0912 21:57:48.424709   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:48.424720   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:48.424730   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:48.428368   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:57:48.624374   25697 request.go:632] Waited for 195.389533ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:57:48.624435   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:57:48.624440   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:48.624447   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:48.624452   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:48.627638   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:57:48.628242   25697 pod_ready.go:93] pod "kube-apiserver-ha-475401-m02" in "kube-system" namespace has status "Ready":"True"
	I0912 21:57:48.628263   25697 pod_ready.go:82] duration metric: took 400.668927ms for pod "kube-apiserver-ha-475401-m02" in "kube-system" namespace to be "Ready" ...
	I0912 21:57:48.628272   25697 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-475401" in "kube-system" namespace to be "Ready" ...
	I0912 21:57:48.824119   25697 request.go:632] Waited for 195.789443ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-475401
	I0912 21:57:48.824187   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-475401
	I0912 21:57:48.824193   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:48.824202   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:48.824207   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:48.827466   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:57:49.024475   25697 request.go:632] Waited for 196.3798ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.203:8443/api/v1/nodes/ha-475401
	I0912 21:57:49.024522   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401
	I0912 21:57:49.024527   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:49.024534   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:49.024539   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:49.027875   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:57:49.028452   25697 pod_ready.go:93] pod "kube-controller-manager-ha-475401" in "kube-system" namespace has status "Ready":"True"
	I0912 21:57:49.028471   25697 pod_ready.go:82] duration metric: took 400.192567ms for pod "kube-controller-manager-ha-475401" in "kube-system" namespace to be "Ready" ...
	I0912 21:57:49.028479   25697 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-475401-m02" in "kube-system" namespace to be "Ready" ...
	I0912 21:57:49.224430   25697 request.go:632] Waited for 195.868752ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-475401-m02
	I0912 21:57:49.224506   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-475401-m02
	I0912 21:57:49.224524   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:49.224535   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:49.224543   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:49.228098   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:57:49.424118   25697 request.go:632] Waited for 195.345067ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:57:49.424187   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:57:49.424193   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:49.424200   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:49.424204   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:49.427270   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:57:49.427807   25697 pod_ready.go:93] pod "kube-controller-manager-ha-475401-m02" in "kube-system" namespace has status "Ready":"True"
	I0912 21:57:49.427825   25697 pod_ready.go:82] duration metric: took 399.339766ms for pod "kube-controller-manager-ha-475401-m02" in "kube-system" namespace to be "Ready" ...
	I0912 21:57:49.427834   25697 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4bk97" in "kube-system" namespace to be "Ready" ...
	I0912 21:57:49.623957   25697 request.go:632] Waited for 196.060695ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4bk97
	I0912 21:57:49.624034   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4bk97
	I0912 21:57:49.624048   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:49.624057   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:49.624062   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:49.626942   25697 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 21:57:49.823782   25697 request.go:632] Waited for 196.256746ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.203:8443/api/v1/nodes/ha-475401
	I0912 21:57:49.823834   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401
	I0912 21:57:49.823840   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:49.823846   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:49.823851   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:49.827823   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:57:49.828409   25697 pod_ready.go:93] pod "kube-proxy-4bk97" in "kube-system" namespace has status "Ready":"True"
	I0912 21:57:49.828426   25697 pod_ready.go:82] duration metric: took 400.586426ms for pod "kube-proxy-4bk97" in "kube-system" namespace to be "Ready" ...
	I0912 21:57:49.828436   25697 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-68h98" in "kube-system" namespace to be "Ready" ...
	I0912 21:57:50.024520   25697 request.go:632] Waited for 196.02544ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods/kube-proxy-68h98
	I0912 21:57:50.024577   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods/kube-proxy-68h98
	I0912 21:57:50.024582   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:50.024589   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:50.024604   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:50.028066   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:57:50.224042   25697 request.go:632] Waited for 195.348875ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:57:50.224120   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:57:50.224126   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:50.224132   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:50.224135   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:50.227651   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:57:50.228099   25697 pod_ready.go:93] pod "kube-proxy-68h98" in "kube-system" namespace has status "Ready":"True"
	I0912 21:57:50.228119   25697 pod_ready.go:82] duration metric: took 399.676133ms for pod "kube-proxy-68h98" in "kube-system" namespace to be "Ready" ...
	I0912 21:57:50.228129   25697 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-475401" in "kube-system" namespace to be "Ready" ...
	I0912 21:57:50.424324   25697 request.go:632] Waited for 196.110611ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-475401
	I0912 21:57:50.424387   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-475401
	I0912 21:57:50.424393   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:50.424400   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:50.424406   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:50.428055   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:57:50.624133   25697 request.go:632] Waited for 195.389452ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.203:8443/api/v1/nodes/ha-475401
	I0912 21:57:50.624189   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401
	I0912 21:57:50.624195   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:50.624202   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:50.624205   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:50.627552   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:57:50.628154   25697 pod_ready.go:93] pod "kube-scheduler-ha-475401" in "kube-system" namespace has status "Ready":"True"
	I0912 21:57:50.628174   25697 pod_ready.go:82] duration metric: took 400.036802ms for pod "kube-scheduler-ha-475401" in "kube-system" namespace to be "Ready" ...
	I0912 21:57:50.628188   25697 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-475401-m02" in "kube-system" namespace to be "Ready" ...
	I0912 21:57:50.824225   25697 request.go:632] Waited for 195.956305ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-475401-m02
	I0912 21:57:50.824304   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-475401-m02
	I0912 21:57:50.824310   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:50.824318   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:50.824323   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:50.827545   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:57:51.024479   25697 request.go:632] Waited for 196.355742ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:57:51.024536   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:57:51.024543   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:51.024554   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:51.024560   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:51.027674   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:57:51.028271   25697 pod_ready.go:93] pod "kube-scheduler-ha-475401-m02" in "kube-system" namespace has status "Ready":"True"
	I0912 21:57:51.028290   25697 pod_ready.go:82] duration metric: took 400.093807ms for pod "kube-scheduler-ha-475401-m02" in "kube-system" namespace to be "Ready" ...
	I0912 21:57:51.028304   25697 pod_ready.go:39] duration metric: took 3.200473413s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
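
Note: the block above is minikube polling each system-critical pod's Ready condition through the apiserver, and the "Waited ... due to client-side throttling" lines come from client-go's built-in client-side rate limiter spacing out the GETs. A rough standalone sketch of the same readiness check with client-go (illustrative only, not minikube's pod_ready.go; the kubeconfig path and pod name are placeholders taken from the log):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Placeholder kubeconfig path; minikube builds its client from the profile instead.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(6 * time.Minute) // mirrors the "waiting up to 6m0s" in the log
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-proxy-4bk97", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}
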
	I0912 21:57:51.028333   25697 api_server.go:52] waiting for apiserver process to appear ...
	I0912 21:57:51.028397   25697 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 21:57:51.043159   25697 api_server.go:72] duration metric: took 21.584379256s to wait for apiserver process to appear ...
	I0912 21:57:51.043180   25697 api_server.go:88] waiting for apiserver healthz status ...
	I0912 21:57:51.043199   25697 api_server.go:253] Checking apiserver healthz at https://192.168.39.203:8443/healthz ...
	I0912 21:57:51.047434   25697 api_server.go:279] https://192.168.39.203:8443/healthz returned 200:
	ok
	I0912 21:57:51.047492   25697 round_trippers.go:463] GET https://192.168.39.203:8443/version
	I0912 21:57:51.047498   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:51.047505   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:51.047511   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:51.048504   25697 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0912 21:57:51.048587   25697 api_server.go:141] control plane version: v1.31.1
	I0912 21:57:51.048602   25697 api_server.go:131] duration metric: took 5.41647ms to wait for apiserver health ...
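
Note: the healthz/version exchange above is two plain HTTPS GETs against the control-plane endpoint, first expecting the literal body "ok", then reading the version. A minimal sketch of that probe (illustrative; the address comes from the log, and InsecureSkipVerify stands in for trusting the cluster CA, which the real client does):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	host := "https://192.168.39.203:8443" // endpoint shown in the log above
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // assumption: skip CA verification for the sketch
	}}

	resp, err := client.Get(host + "/healthz")
	if err != nil {
		panic(err)
	}
	body, _ := io.ReadAll(resp.Body)
	resp.Body.Close()
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, body) // expect: 200 ok

	resp, err = client.Get(host + "/version")
	if err != nil {
		panic(err)
	}
	ver, _ := io.ReadAll(resp.Body)
	resp.Body.Close()
	fmt.Printf("version: %s\n", ver) // JSON containing gitVersion v1.31.1 per the log
}
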
	I0912 21:57:51.048610   25697 system_pods.go:43] waiting for kube-system pods to appear ...
	I0912 21:57:51.224407   25697 request.go:632] Waited for 175.739293ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods
	I0912 21:57:51.224462   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods
	I0912 21:57:51.224477   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:51.224497   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:51.224504   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:51.229164   25697 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0912 21:57:51.233148   25697 system_pods.go:59] 17 kube-system pods found
	I0912 21:57:51.233176   25697 system_pods.go:61] "coredns-7c65d6cfc9-pzsv8" [7acde6a5-dc08-4dda-89ef-07ed97df387e] Running
	I0912 21:57:51.233181   25697 system_pods.go:61] "coredns-7c65d6cfc9-xhdj7" [d964d6f0-d544-4cef-8151-08e5e1c76dce] Running
	I0912 21:57:51.233185   25697 system_pods.go:61] "etcd-ha-475401" [174b5dde-143c-4f15-abb4-2c8376d9c0aa] Running
	I0912 21:57:51.233189   25697 system_pods.go:61] "etcd-ha-475401-m02" [bac8cf55-1bf0-4696-9da2-3ca4c6bc9c54] Running
	I0912 21:57:51.233192   25697 system_pods.go:61] "kindnet-cbfm5" [e0f3daaf-250f-4614-bd8d-61e8fe544c1a] Running
	I0912 21:57:51.233195   25697 system_pods.go:61] "kindnet-k4q6l" [6a445756-2595-4d49-8aea-719cb0aa312c] Running
	I0912 21:57:51.233198   25697 system_pods.go:61] "kube-apiserver-ha-475401" [afb6df04-142d-4026-b4fb-2067bac9613d] Running
	I0912 21:57:51.233202   25697 system_pods.go:61] "kube-apiserver-ha-475401-m02" [ff70254a-357a-47d3-9733-3cded316a2b1] Running
	I0912 21:57:51.233208   25697 system_pods.go:61] "kube-controller-manager-ha-475401" [bf286c1d-42de-4eb9-b235-30581692256b] Running
	I0912 21:57:51.233214   25697 system_pods.go:61] "kube-controller-manager-ha-475401-m02" [87d98823-b5aa-4c7e-835e-978465fec19d] Running
	I0912 21:57:51.233217   25697 system_pods.go:61] "kube-proxy-4bk97" [a2af5486-4276-48a8-98ef-6fad7ae9976d] Running
	I0912 21:57:51.233222   25697 system_pods.go:61] "kube-proxy-68h98" [f216ed62-cdc6-40e9-bb4d-e6962596eb3c] Running
	I0912 21:57:51.233226   25697 system_pods.go:61] "kube-scheduler-ha-475401" [3403b9e5-adb3-4028-aedd-1101d94a421c] Running
	I0912 21:57:51.233229   25697 system_pods.go:61] "kube-scheduler-ha-475401-m02" [fbe552c1-e8a7-4bb2-a1c9-c5d40f4ad77c] Running
	I0912 21:57:51.233232   25697 system_pods.go:61] "kube-vip-ha-475401" [775b4ded-905c-412e-9c92-5ce3ff148380] Running
	I0912 21:57:51.233235   25697 system_pods.go:61] "kube-vip-ha-475401-m02" [0f1626f2-f90c-4920-b726-b1d492c805d6] Running
	I0912 21:57:51.233238   25697 system_pods.go:61] "storage-provisioner" [7fc8738b-56e8-4024-afe7-b552c79dd3f2] Running
	I0912 21:57:51.233243   25697 system_pods.go:74] duration metric: took 184.628871ms to wait for pod list to return data ...
	I0912 21:57:51.233253   25697 default_sa.go:34] waiting for default service account to be created ...
	I0912 21:57:51.424651   25697 request.go:632] Waited for 191.329327ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.203:8443/api/v1/namespaces/default/serviceaccounts
	I0912 21:57:51.424709   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/namespaces/default/serviceaccounts
	I0912 21:57:51.424716   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:51.424723   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:51.424729   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:51.428062   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:57:51.428262   25697 default_sa.go:45] found service account: "default"
	I0912 21:57:51.428276   25697 default_sa.go:55] duration metric: took 195.017428ms for default service account to be created ...
	I0912 21:57:51.428283   25697 system_pods.go:116] waiting for k8s-apps to be running ...
	I0912 21:57:51.623916   25697 request.go:632] Waited for 195.558331ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods
	I0912 21:57:51.623972   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods
	I0912 21:57:51.623980   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:51.623989   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:51.623994   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:51.628142   25697 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0912 21:57:51.632305   25697 system_pods.go:86] 17 kube-system pods found
	I0912 21:57:51.632338   25697 system_pods.go:89] "coredns-7c65d6cfc9-pzsv8" [7acde6a5-dc08-4dda-89ef-07ed97df387e] Running
	I0912 21:57:51.632346   25697 system_pods.go:89] "coredns-7c65d6cfc9-xhdj7" [d964d6f0-d544-4cef-8151-08e5e1c76dce] Running
	I0912 21:57:51.632353   25697 system_pods.go:89] "etcd-ha-475401" [174b5dde-143c-4f15-abb4-2c8376d9c0aa] Running
	I0912 21:57:51.632358   25697 system_pods.go:89] "etcd-ha-475401-m02" [bac8cf55-1bf0-4696-9da2-3ca4c6bc9c54] Running
	I0912 21:57:51.632364   25697 system_pods.go:89] "kindnet-cbfm5" [e0f3daaf-250f-4614-bd8d-61e8fe544c1a] Running
	I0912 21:57:51.632369   25697 system_pods.go:89] "kindnet-k4q6l" [6a445756-2595-4d49-8aea-719cb0aa312c] Running
	I0912 21:57:51.632375   25697 system_pods.go:89] "kube-apiserver-ha-475401" [afb6df04-142d-4026-b4fb-2067bac9613d] Running
	I0912 21:57:51.632381   25697 system_pods.go:89] "kube-apiserver-ha-475401-m02" [ff70254a-357a-47d3-9733-3cded316a2b1] Running
	I0912 21:57:51.632388   25697 system_pods.go:89] "kube-controller-manager-ha-475401" [bf286c1d-42de-4eb9-b235-30581692256b] Running
	I0912 21:57:51.632395   25697 system_pods.go:89] "kube-controller-manager-ha-475401-m02" [87d98823-b5aa-4c7e-835e-978465fec19d] Running
	I0912 21:57:51.632404   25697 system_pods.go:89] "kube-proxy-4bk97" [a2af5486-4276-48a8-98ef-6fad7ae9976d] Running
	I0912 21:57:51.632411   25697 system_pods.go:89] "kube-proxy-68h98" [f216ed62-cdc6-40e9-bb4d-e6962596eb3c] Running
	I0912 21:57:51.632417   25697 system_pods.go:89] "kube-scheduler-ha-475401" [3403b9e5-adb3-4028-aedd-1101d94a421c] Running
	I0912 21:57:51.632423   25697 system_pods.go:89] "kube-scheduler-ha-475401-m02" [fbe552c1-e8a7-4bb2-a1c9-c5d40f4ad77c] Running
	I0912 21:57:51.632429   25697 system_pods.go:89] "kube-vip-ha-475401" [775b4ded-905c-412e-9c92-5ce3ff148380] Running
	I0912 21:57:51.632437   25697 system_pods.go:89] "kube-vip-ha-475401-m02" [0f1626f2-f90c-4920-b726-b1d492c805d6] Running
	I0912 21:57:51.632444   25697 system_pods.go:89] "storage-provisioner" [7fc8738b-56e8-4024-afe7-b552c79dd3f2] Running
	I0912 21:57:51.632453   25697 system_pods.go:126] duration metric: took 204.164222ms to wait for k8s-apps to be running ...
	I0912 21:57:51.632462   25697 system_svc.go:44] waiting for kubelet service to be running ....
	I0912 21:57:51.632512   25697 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 21:57:51.647575   25697 system_svc.go:56] duration metric: took 15.104684ms WaitForService to wait for kubelet
	I0912 21:57:51.647624   25697 kubeadm.go:582] duration metric: took 22.188845767s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 21:57:51.647646   25697 node_conditions.go:102] verifying NodePressure condition ...
	I0912 21:57:51.824082   25697 request.go:632] Waited for 176.361682ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.203:8443/api/v1/nodes
	I0912 21:57:51.824148   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes
	I0912 21:57:51.824154   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:51.824161   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:51.824165   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:51.827548   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:57:51.828398   25697 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0912 21:57:51.828423   25697 node_conditions.go:123] node cpu capacity is 2
	I0912 21:57:51.828435   25697 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0912 21:57:51.828438   25697 node_conditions.go:123] node cpu capacity is 2
	I0912 21:57:51.828443   25697 node_conditions.go:105] duration metric: took 180.791468ms to run NodePressure ...
	I0912 21:57:51.828454   25697 start.go:241] waiting for startup goroutines ...
	I0912 21:57:51.828475   25697 start.go:255] writing updated cluster config ...
	I0912 21:57:51.830711   25697 out.go:201] 
	I0912 21:57:51.832815   25697 config.go:182] Loaded profile config "ha-475401": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 21:57:51.832998   25697 profile.go:143] Saving config to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/config.json ...
	I0912 21:57:51.834854   25697 out.go:177] * Starting "ha-475401-m03" control-plane node in "ha-475401" cluster
	I0912 21:57:51.835855   25697 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0912 21:57:51.835876   25697 cache.go:56] Caching tarball of preloaded images
	I0912 21:57:51.835962   25697 preload.go:172] Found /home/jenkins/minikube-integration/19616-5891/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0912 21:57:51.835972   25697 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0912 21:57:51.836050   25697 profile.go:143] Saving config to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/config.json ...
	I0912 21:57:51.836200   25697 start.go:360] acquireMachinesLock for ha-475401-m03: {Name:mkbb0a9e58b1349e86a63b6069c42d4248d92c3b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 21:57:51.836241   25697 start.go:364] duration metric: took 23.587µs to acquireMachinesLock for "ha-475401-m03"
	I0912 21:57:51.836263   25697 start.go:93] Provisioning new machine with config: &{Name:ha-475401 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-475401 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.203 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.222 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0912 21:57:51.836398   25697 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0912 21:57:51.838525   25697 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0912 21:57:51.838626   25697 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:57:51.838662   25697 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:57:51.853763   25697 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40411
	I0912 21:57:51.854148   25697 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:57:51.854771   25697 main.go:141] libmachine: Using API Version  1
	I0912 21:57:51.854800   25697 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:57:51.855192   25697 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:57:51.855420   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetMachineName
	I0912 21:57:51.855603   25697 main.go:141] libmachine: (ha-475401-m03) Calling .DriverName
	I0912 21:57:51.855816   25697 start.go:159] libmachine.API.Create for "ha-475401" (driver="kvm2")
	I0912 21:57:51.855843   25697 client.go:168] LocalClient.Create starting
	I0912 21:57:51.855869   25697 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem
	I0912 21:57:51.855906   25697 main.go:141] libmachine: Decoding PEM data...
	I0912 21:57:51.855922   25697 main.go:141] libmachine: Parsing certificate...
	I0912 21:57:51.855965   25697 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem
	I0912 21:57:51.855984   25697 main.go:141] libmachine: Decoding PEM data...
	I0912 21:57:51.855995   25697 main.go:141] libmachine: Parsing certificate...
	I0912 21:57:51.856009   25697 main.go:141] libmachine: Running pre-create checks...
	I0912 21:57:51.856014   25697 main.go:141] libmachine: (ha-475401-m03) Calling .PreCreateCheck
	I0912 21:57:51.856186   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetConfigRaw
	I0912 21:57:51.856600   25697 main.go:141] libmachine: Creating machine...
	I0912 21:57:51.856627   25697 main.go:141] libmachine: (ha-475401-m03) Calling .Create
	I0912 21:57:51.856771   25697 main.go:141] libmachine: (ha-475401-m03) Creating KVM machine...
	I0912 21:57:51.858042   25697 main.go:141] libmachine: (ha-475401-m03) DBG | found existing default KVM network
	I0912 21:57:51.858204   25697 main.go:141] libmachine: (ha-475401-m03) DBG | found existing private KVM network mk-ha-475401
	I0912 21:57:51.858336   25697 main.go:141] libmachine: (ha-475401-m03) Setting up store path in /home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401-m03 ...
	I0912 21:57:51.858361   25697 main.go:141] libmachine: (ha-475401-m03) Building disk image from file:///home/jenkins/minikube-integration/19616-5891/.minikube/cache/iso/amd64/minikube-v1.34.0-1726156389-19616-amd64.iso
	I0912 21:57:51.858418   25697 main.go:141] libmachine: (ha-475401-m03) DBG | I0912 21:57:51.858325   26470 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19616-5891/.minikube
	I0912 21:57:51.858497   25697 main.go:141] libmachine: (ha-475401-m03) Downloading /home/jenkins/minikube-integration/19616-5891/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19616-5891/.minikube/cache/iso/amd64/minikube-v1.34.0-1726156389-19616-amd64.iso...
	I0912 21:57:52.089539   25697 main.go:141] libmachine: (ha-475401-m03) DBG | I0912 21:57:52.089395   26470 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401-m03/id_rsa...
	I0912 21:57:52.277087   25697 main.go:141] libmachine: (ha-475401-m03) DBG | I0912 21:57:52.276977   26470 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401-m03/ha-475401-m03.rawdisk...
	I0912 21:57:52.277109   25697 main.go:141] libmachine: (ha-475401-m03) DBG | Writing magic tar header
	I0912 21:57:52.277119   25697 main.go:141] libmachine: (ha-475401-m03) DBG | Writing SSH key tar header
	I0912 21:57:52.277127   25697 main.go:141] libmachine: (ha-475401-m03) DBG | I0912 21:57:52.277104   26470 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401-m03 ...
	I0912 21:57:52.277208   25697 main.go:141] libmachine: (ha-475401-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401-m03
	I0912 21:57:52.277266   25697 main.go:141] libmachine: (ha-475401-m03) Setting executable bit set on /home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401-m03 (perms=drwx------)
	I0912 21:57:52.277290   25697 main.go:141] libmachine: (ha-475401-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19616-5891/.minikube/machines
	I0912 21:57:52.277306   25697 main.go:141] libmachine: (ha-475401-m03) Setting executable bit set on /home/jenkins/minikube-integration/19616-5891/.minikube/machines (perms=drwxr-xr-x)
	I0912 21:57:52.277324   25697 main.go:141] libmachine: (ha-475401-m03) Setting executable bit set on /home/jenkins/minikube-integration/19616-5891/.minikube (perms=drwxr-xr-x)
	I0912 21:57:52.277333   25697 main.go:141] libmachine: (ha-475401-m03) Setting executable bit set on /home/jenkins/minikube-integration/19616-5891 (perms=drwxrwxr-x)
	I0912 21:57:52.277343   25697 main.go:141] libmachine: (ha-475401-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19616-5891/.minikube
	I0912 21:57:52.277359   25697 main.go:141] libmachine: (ha-475401-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19616-5891
	I0912 21:57:52.277370   25697 main.go:141] libmachine: (ha-475401-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0912 21:57:52.277383   25697 main.go:141] libmachine: (ha-475401-m03) DBG | Checking permissions on dir: /home/jenkins
	I0912 21:57:52.277395   25697 main.go:141] libmachine: (ha-475401-m03) DBG | Checking permissions on dir: /home
	I0912 21:57:52.277410   25697 main.go:141] libmachine: (ha-475401-m03) DBG | Skipping /home - not owner
	I0912 21:57:52.277427   25697 main.go:141] libmachine: (ha-475401-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0912 21:57:52.277441   25697 main.go:141] libmachine: (ha-475401-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0912 21:57:52.277452   25697 main.go:141] libmachine: (ha-475401-m03) Creating domain...
	I0912 21:57:52.278379   25697 main.go:141] libmachine: (ha-475401-m03) define libvirt domain using xml: 
	I0912 21:57:52.278401   25697 main.go:141] libmachine: (ha-475401-m03) <domain type='kvm'>
	I0912 21:57:52.278410   25697 main.go:141] libmachine: (ha-475401-m03)   <name>ha-475401-m03</name>
	I0912 21:57:52.278427   25697 main.go:141] libmachine: (ha-475401-m03)   <memory unit='MiB'>2200</memory>
	I0912 21:57:52.278440   25697 main.go:141] libmachine: (ha-475401-m03)   <vcpu>2</vcpu>
	I0912 21:57:52.278452   25697 main.go:141] libmachine: (ha-475401-m03)   <features>
	I0912 21:57:52.278466   25697 main.go:141] libmachine: (ha-475401-m03)     <acpi/>
	I0912 21:57:52.278475   25697 main.go:141] libmachine: (ha-475401-m03)     <apic/>
	I0912 21:57:52.278481   25697 main.go:141] libmachine: (ha-475401-m03)     <pae/>
	I0912 21:57:52.278488   25697 main.go:141] libmachine: (ha-475401-m03)     
	I0912 21:57:52.278494   25697 main.go:141] libmachine: (ha-475401-m03)   </features>
	I0912 21:57:52.278506   25697 main.go:141] libmachine: (ha-475401-m03)   <cpu mode='host-passthrough'>
	I0912 21:57:52.278535   25697 main.go:141] libmachine: (ha-475401-m03)   
	I0912 21:57:52.278555   25697 main.go:141] libmachine: (ha-475401-m03)   </cpu>
	I0912 21:57:52.278573   25697 main.go:141] libmachine: (ha-475401-m03)   <os>
	I0912 21:57:52.278585   25697 main.go:141] libmachine: (ha-475401-m03)     <type>hvm</type>
	I0912 21:57:52.278599   25697 main.go:141] libmachine: (ha-475401-m03)     <boot dev='cdrom'/>
	I0912 21:57:52.278610   25697 main.go:141] libmachine: (ha-475401-m03)     <boot dev='hd'/>
	I0912 21:57:52.278623   25697 main.go:141] libmachine: (ha-475401-m03)     <bootmenu enable='no'/>
	I0912 21:57:52.278637   25697 main.go:141] libmachine: (ha-475401-m03)   </os>
	I0912 21:57:52.278654   25697 main.go:141] libmachine: (ha-475401-m03)   <devices>
	I0912 21:57:52.278665   25697 main.go:141] libmachine: (ha-475401-m03)     <disk type='file' device='cdrom'>
	I0912 21:57:52.278679   25697 main.go:141] libmachine: (ha-475401-m03)       <source file='/home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401-m03/boot2docker.iso'/>
	I0912 21:57:52.278692   25697 main.go:141] libmachine: (ha-475401-m03)       <target dev='hdc' bus='scsi'/>
	I0912 21:57:52.278706   25697 main.go:141] libmachine: (ha-475401-m03)       <readonly/>
	I0912 21:57:52.278721   25697 main.go:141] libmachine: (ha-475401-m03)     </disk>
	I0912 21:57:52.278735   25697 main.go:141] libmachine: (ha-475401-m03)     <disk type='file' device='disk'>
	I0912 21:57:52.278748   25697 main.go:141] libmachine: (ha-475401-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0912 21:57:52.278765   25697 main.go:141] libmachine: (ha-475401-m03)       <source file='/home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401-m03/ha-475401-m03.rawdisk'/>
	I0912 21:57:52.278776   25697 main.go:141] libmachine: (ha-475401-m03)       <target dev='hda' bus='virtio'/>
	I0912 21:57:52.278788   25697 main.go:141] libmachine: (ha-475401-m03)     </disk>
	I0912 21:57:52.278803   25697 main.go:141] libmachine: (ha-475401-m03)     <interface type='network'>
	I0912 21:57:52.278824   25697 main.go:141] libmachine: (ha-475401-m03)       <source network='mk-ha-475401'/>
	I0912 21:57:52.278834   25697 main.go:141] libmachine: (ha-475401-m03)       <model type='virtio'/>
	I0912 21:57:52.278847   25697 main.go:141] libmachine: (ha-475401-m03)     </interface>
	I0912 21:57:52.278859   25697 main.go:141] libmachine: (ha-475401-m03)     <interface type='network'>
	I0912 21:57:52.278891   25697 main.go:141] libmachine: (ha-475401-m03)       <source network='default'/>
	I0912 21:57:52.278913   25697 main.go:141] libmachine: (ha-475401-m03)       <model type='virtio'/>
	I0912 21:57:52.278927   25697 main.go:141] libmachine: (ha-475401-m03)     </interface>
	I0912 21:57:52.278937   25697 main.go:141] libmachine: (ha-475401-m03)     <serial type='pty'>
	I0912 21:57:52.278948   25697 main.go:141] libmachine: (ha-475401-m03)       <target port='0'/>
	I0912 21:57:52.278958   25697 main.go:141] libmachine: (ha-475401-m03)     </serial>
	I0912 21:57:52.278967   25697 main.go:141] libmachine: (ha-475401-m03)     <console type='pty'>
	I0912 21:57:52.278979   25697 main.go:141] libmachine: (ha-475401-m03)       <target type='serial' port='0'/>
	I0912 21:57:52.279009   25697 main.go:141] libmachine: (ha-475401-m03)     </console>
	I0912 21:57:52.279030   25697 main.go:141] libmachine: (ha-475401-m03)     <rng model='virtio'>
	I0912 21:57:52.279047   25697 main.go:141] libmachine: (ha-475401-m03)       <backend model='random'>/dev/random</backend>
	I0912 21:57:52.279062   25697 main.go:141] libmachine: (ha-475401-m03)     </rng>
	I0912 21:57:52.279072   25697 main.go:141] libmachine: (ha-475401-m03)     
	I0912 21:57:52.279082   25697 main.go:141] libmachine: (ha-475401-m03)     
	I0912 21:57:52.279094   25697 main.go:141] libmachine: (ha-475401-m03)   </devices>
	I0912 21:57:52.279104   25697 main.go:141] libmachine: (ha-475401-m03) </domain>
	I0912 21:57:52.279117   25697 main.go:141] libmachine: (ha-475401-m03) 
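
Note: the XML printed above is the libvirt domain definition for the new m03 node. As an illustration of what "define libvirt domain using xml" followed by "Creating domain..." amounts to, here is a sketch using the libvirt Go bindings; this is an assumption for clarity, not minikube's kvm2 driver code, and the XML file name is hypothetical:

package main

import (
	"os"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	xml, err := os.ReadFile("ha-475401-m03.xml") // hypothetical file holding the domain XML shown above
	if err != nil {
		panic(err)
	}
	conn, err := libvirt.NewConnect("qemu:///system") // matches KVMQemuURI in the config dump
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	dom, err := conn.DomainDefineXML(string(xml)) // register the domain with libvirt
	if err != nil {
		panic(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil { // boot the VM ("Creating domain...")
		panic(err)
	}
}
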
	I0912 21:57:52.287182   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined MAC address 52:54:00:ae:8e:80 in network default
	I0912 21:57:52.287812   25697 main.go:141] libmachine: (ha-475401-m03) Ensuring networks are active...
	I0912 21:57:52.287833   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 21:57:52.288627   25697 main.go:141] libmachine: (ha-475401-m03) Ensuring network default is active
	I0912 21:57:52.289015   25697 main.go:141] libmachine: (ha-475401-m03) Ensuring network mk-ha-475401 is active
	I0912 21:57:52.289406   25697 main.go:141] libmachine: (ha-475401-m03) Getting domain xml...
	I0912 21:57:52.290192   25697 main.go:141] libmachine: (ha-475401-m03) Creating domain...
	I0912 21:57:53.523717   25697 main.go:141] libmachine: (ha-475401-m03) Waiting to get IP...
	I0912 21:57:53.524447   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 21:57:53.524851   25697 main.go:141] libmachine: (ha-475401-m03) DBG | unable to find current IP address of domain ha-475401-m03 in network mk-ha-475401
	I0912 21:57:53.524880   25697 main.go:141] libmachine: (ha-475401-m03) DBG | I0912 21:57:53.524829   26470 retry.go:31] will retry after 211.066146ms: waiting for machine to come up
	I0912 21:57:53.737191   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 21:57:53.737821   25697 main.go:141] libmachine: (ha-475401-m03) DBG | unable to find current IP address of domain ha-475401-m03 in network mk-ha-475401
	I0912 21:57:53.737850   25697 main.go:141] libmachine: (ha-475401-m03) DBG | I0912 21:57:53.737780   26470 retry.go:31] will retry after 360.564631ms: waiting for machine to come up
	I0912 21:57:54.100437   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 21:57:54.100792   25697 main.go:141] libmachine: (ha-475401-m03) DBG | unable to find current IP address of domain ha-475401-m03 in network mk-ha-475401
	I0912 21:57:54.100819   25697 main.go:141] libmachine: (ha-475401-m03) DBG | I0912 21:57:54.100749   26470 retry.go:31] will retry after 315.401499ms: waiting for machine to come up
	I0912 21:57:54.417313   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 21:57:54.417784   25697 main.go:141] libmachine: (ha-475401-m03) DBG | unable to find current IP address of domain ha-475401-m03 in network mk-ha-475401
	I0912 21:57:54.417816   25697 main.go:141] libmachine: (ha-475401-m03) DBG | I0912 21:57:54.417729   26470 retry.go:31] will retry after 561.902073ms: waiting for machine to come up
	I0912 21:57:54.981430   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 21:57:54.981899   25697 main.go:141] libmachine: (ha-475401-m03) DBG | unable to find current IP address of domain ha-475401-m03 in network mk-ha-475401
	I0912 21:57:54.981926   25697 main.go:141] libmachine: (ha-475401-m03) DBG | I0912 21:57:54.981879   26470 retry.go:31] will retry after 546.742528ms: waiting for machine to come up
	I0912 21:57:55.530751   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 21:57:55.531432   25697 main.go:141] libmachine: (ha-475401-m03) DBG | unable to find current IP address of domain ha-475401-m03 in network mk-ha-475401
	I0912 21:57:55.531470   25697 main.go:141] libmachine: (ha-475401-m03) DBG | I0912 21:57:55.531370   26470 retry.go:31] will retry after 939.461689ms: waiting for machine to come up
	I0912 21:57:56.472480   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 21:57:56.472969   25697 main.go:141] libmachine: (ha-475401-m03) DBG | unable to find current IP address of domain ha-475401-m03 in network mk-ha-475401
	I0912 21:57:56.472991   25697 main.go:141] libmachine: (ha-475401-m03) DBG | I0912 21:57:56.472923   26470 retry.go:31] will retry after 1.083765874s: waiting for machine to come up
	I0912 21:57:57.557895   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 21:57:57.558280   25697 main.go:141] libmachine: (ha-475401-m03) DBG | unable to find current IP address of domain ha-475401-m03 in network mk-ha-475401
	I0912 21:57:57.558304   25697 main.go:141] libmachine: (ha-475401-m03) DBG | I0912 21:57:57.558229   26470 retry.go:31] will retry after 1.425560523s: waiting for machine to come up
	I0912 21:57:58.985681   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 21:57:58.986215   25697 main.go:141] libmachine: (ha-475401-m03) DBG | unable to find current IP address of domain ha-475401-m03 in network mk-ha-475401
	I0912 21:57:58.986250   25697 main.go:141] libmachine: (ha-475401-m03) DBG | I0912 21:57:58.986177   26470 retry.go:31] will retry after 1.198470508s: waiting for machine to come up
	I0912 21:58:00.186460   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 21:58:00.186938   25697 main.go:141] libmachine: (ha-475401-m03) DBG | unable to find current IP address of domain ha-475401-m03 in network mk-ha-475401
	I0912 21:58:00.186961   25697 main.go:141] libmachine: (ha-475401-m03) DBG | I0912 21:58:00.186891   26470 retry.go:31] will retry after 1.42291773s: waiting for machine to come up
	I0912 21:58:01.611174   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 21:58:01.611610   25697 main.go:141] libmachine: (ha-475401-m03) DBG | unable to find current IP address of domain ha-475401-m03 in network mk-ha-475401
	I0912 21:58:01.611640   25697 main.go:141] libmachine: (ha-475401-m03) DBG | I0912 21:58:01.611558   26470 retry.go:31] will retry after 2.337610423s: waiting for machine to come up
	I0912 21:58:03.950802   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 21:58:03.951256   25697 main.go:141] libmachine: (ha-475401-m03) DBG | unable to find current IP address of domain ha-475401-m03 in network mk-ha-475401
	I0912 21:58:03.951316   25697 main.go:141] libmachine: (ha-475401-m03) DBG | I0912 21:58:03.951238   26470 retry.go:31] will retry after 3.426956904s: waiting for machine to come up
	I0912 21:58:07.379354   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 21:58:07.379817   25697 main.go:141] libmachine: (ha-475401-m03) DBG | unable to find current IP address of domain ha-475401-m03 in network mk-ha-475401
	I0912 21:58:07.379845   25697 main.go:141] libmachine: (ha-475401-m03) DBG | I0912 21:58:07.379772   26470 retry.go:31] will retry after 3.544851931s: waiting for machine to come up
	I0912 21:58:10.926683   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 21:58:10.927197   25697 main.go:141] libmachine: (ha-475401-m03) DBG | unable to find current IP address of domain ha-475401-m03 in network mk-ha-475401
	I0912 21:58:10.927220   25697 main.go:141] libmachine: (ha-475401-m03) DBG | I0912 21:58:10.927155   26470 retry.go:31] will retry after 4.917848564s: waiting for machine to come up
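
Note: the repeated "will retry after ..." lines above are minikube's retry helper backing off while the freshly booted guest acquires a DHCP lease, so an IP can be matched to the domain's MAC address. A generic sketch of that wait-with-growing-backoff pattern (not the actual retry.go; lookupIP is a hypothetical stand-in for the driver's DHCP-lease query):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a hypothetical stand-in for asking libvirt for the lease of the VM's MAC.
func lookupIP() (string, error) {
	return "", errors.New("unable to find current IP address")
}

func main() {
	backoff := 200 * time.Millisecond
	deadline := time.Now().Add(3 * time.Minute)
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			fmt.Println("found IP:", ip)
			return
		}
		// Add jitter and grow the delay, similar to the increasing waits in the log.
		wait := backoff + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		backoff *= 2
	}
	fmt.Println("timed out waiting for an IP")
}
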
	I0912 21:58:15.846630   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 21:58:15.847012   25697 main.go:141] libmachine: (ha-475401-m03) Found IP for machine: 192.168.39.113
	I0912 21:58:15.847031   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has current primary IP address 192.168.39.113 and MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 21:58:15.847037   25697 main.go:141] libmachine: (ha-475401-m03) Reserving static IP address...
	I0912 21:58:15.847432   25697 main.go:141] libmachine: (ha-475401-m03) DBG | unable to find host DHCP lease matching {name: "ha-475401-m03", mac: "52:54:00:21:aa:da", ip: "192.168.39.113"} in network mk-ha-475401
	I0912 21:58:15.924112   25697 main.go:141] libmachine: (ha-475401-m03) DBG | Getting to WaitForSSH function...
	I0912 21:58:15.924145   25697 main.go:141] libmachine: (ha-475401-m03) Reserved static IP address: 192.168.39.113
	I0912 21:58:15.924157   25697 main.go:141] libmachine: (ha-475401-m03) Waiting for SSH to be available...
	I0912 21:58:15.927256   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 21:58:15.927739   25697 main.go:141] libmachine: (ha-475401-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:aa:da", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:58:06 +0000 UTC Type:0 Mac:52:54:00:21:aa:da Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:minikube Clientid:01:52:54:00:21:aa:da}
	I0912 21:58:15.927769   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined IP address 192.168.39.113 and MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 21:58:15.927945   25697 main.go:141] libmachine: (ha-475401-m03) DBG | Using SSH client type: external
	I0912 21:58:15.927977   25697 main.go:141] libmachine: (ha-475401-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401-m03/id_rsa (-rw-------)
	I0912 21:58:15.928007   25697 main.go:141] libmachine: (ha-475401-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.113 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0912 21:58:15.928021   25697 main.go:141] libmachine: (ha-475401-m03) DBG | About to run SSH command:
	I0912 21:58:15.928034   25697 main.go:141] libmachine: (ha-475401-m03) DBG | exit 0
	I0912 21:58:16.054077   25697 main.go:141] libmachine: (ha-475401-m03) DBG | SSH cmd err, output: <nil>: 
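
Note: the probe above shells out to ssh with the freshly generated key and runs "exit 0" until the guest answers, which is what marks the machine as reachable. Roughly the same check can be written with golang.org/x/crypto/ssh; this is a sketch under assumptions, with the key path and address taken from the log and host-key checking disabled just as the logged ssh options do:

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyPath := "/home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401-m03/id_rsa"
	pemBytes, err := os.ReadFile(keyPath)
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(pemBytes)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no in the log
		Timeout:         10 * time.Second,
	}

	// Keep dialing until "exit 0" succeeds, i.e. SSH is actually usable.
	for {
		client, err := ssh.Dial("tcp", "192.168.39.113:22", cfg)
		if err != nil {
			time.Sleep(2 * time.Second)
			continue
		}
		session, err := client.NewSession()
		if err == nil {
			err = session.Run("exit 0")
			session.Close()
		}
		client.Close()
		if err == nil {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(2 * time.Second)
	}
}
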
	I0912 21:58:16.054379   25697 main.go:141] libmachine: (ha-475401-m03) KVM machine creation complete!
	I0912 21:58:16.054692   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetConfigRaw
	I0912 21:58:16.055215   25697 main.go:141] libmachine: (ha-475401-m03) Calling .DriverName
	I0912 21:58:16.055409   25697 main.go:141] libmachine: (ha-475401-m03) Calling .DriverName
	I0912 21:58:16.055558   25697 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0912 21:58:16.055574   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetState
	I0912 21:58:16.056828   25697 main.go:141] libmachine: Detecting operating system of created instance...
	I0912 21:58:16.056849   25697 main.go:141] libmachine: Waiting for SSH to be available...
	I0912 21:58:16.056858   25697 main.go:141] libmachine: Getting to WaitForSSH function...
	I0912 21:58:16.056924   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHHostname
	I0912 21:58:16.058994   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 21:58:16.059438   25697 main.go:141] libmachine: (ha-475401-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:aa:da", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:58:06 +0000 UTC Type:0 Mac:52:54:00:21:aa:da Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:ha-475401-m03 Clientid:01:52:54:00:21:aa:da}
	I0912 21:58:16.059464   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined IP address 192.168.39.113 and MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 21:58:16.059632   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHPort
	I0912 21:58:16.059837   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHKeyPath
	I0912 21:58:16.060050   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHKeyPath
	I0912 21:58:16.060226   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHUsername
	I0912 21:58:16.060439   25697 main.go:141] libmachine: Using SSH client type: native
	I0912 21:58:16.060662   25697 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.113 22 <nil> <nil>}
	I0912 21:58:16.060675   25697 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0912 21:58:16.164954   25697 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0912 21:58:16.164978   25697 main.go:141] libmachine: Detecting the provisioner...
	I0912 21:58:16.164989   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHHostname
	I0912 21:58:16.168451   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 21:58:16.168868   25697 main.go:141] libmachine: (ha-475401-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:aa:da", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:58:06 +0000 UTC Type:0 Mac:52:54:00:21:aa:da Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:ha-475401-m03 Clientid:01:52:54:00:21:aa:da}
	I0912 21:58:16.168972   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined IP address 192.168.39.113 and MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 21:58:16.169138   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHPort
	I0912 21:58:16.169365   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHKeyPath
	I0912 21:58:16.169539   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHKeyPath
	I0912 21:58:16.169766   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHUsername
	I0912 21:58:16.169947   25697 main.go:141] libmachine: Using SSH client type: native
	I0912 21:58:16.170192   25697 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.113 22 <nil> <nil>}
	I0912 21:58:16.170213   25697 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0912 21:58:16.278282   25697 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0912 21:58:16.278355   25697 main.go:141] libmachine: found compatible host: buildroot
	I0912 21:58:16.278363   25697 main.go:141] libmachine: Provisioning with buildroot...
	I0912 21:58:16.278375   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetMachineName
	I0912 21:58:16.278665   25697 buildroot.go:166] provisioning hostname "ha-475401-m03"
	I0912 21:58:16.278691   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetMachineName
	I0912 21:58:16.278907   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHHostname
	I0912 21:58:16.281861   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 21:58:16.282229   25697 main.go:141] libmachine: (ha-475401-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:aa:da", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:58:06 +0000 UTC Type:0 Mac:52:54:00:21:aa:da Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:ha-475401-m03 Clientid:01:52:54:00:21:aa:da}
	I0912 21:58:16.282257   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined IP address 192.168.39.113 and MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 21:58:16.282442   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHPort
	I0912 21:58:16.282649   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHKeyPath
	I0912 21:58:16.282806   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHKeyPath
	I0912 21:58:16.282957   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHUsername
	I0912 21:58:16.283131   25697 main.go:141] libmachine: Using SSH client type: native
	I0912 21:58:16.283286   25697 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.113 22 <nil> <nil>}
	I0912 21:58:16.283300   25697 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-475401-m03 && echo "ha-475401-m03" | sudo tee /etc/hostname
	I0912 21:58:16.401183   25697 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-475401-m03
	
	I0912 21:58:16.401213   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHHostname
	I0912 21:58:16.404093   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 21:58:16.404465   25697 main.go:141] libmachine: (ha-475401-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:aa:da", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:58:06 +0000 UTC Type:0 Mac:52:54:00:21:aa:da Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:ha-475401-m03 Clientid:01:52:54:00:21:aa:da}
	I0912 21:58:16.404492   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined IP address 192.168.39.113 and MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 21:58:16.404761   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHPort
	I0912 21:58:16.404983   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHKeyPath
	I0912 21:58:16.405145   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHKeyPath
	I0912 21:58:16.405321   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHUsername
	I0912 21:58:16.405500   25697 main.go:141] libmachine: Using SSH client type: native
	I0912 21:58:16.405723   25697 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.113 22 <nil> <nil>}
	I0912 21:58:16.405750   25697 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-475401-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-475401-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-475401-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0912 21:58:16.518333   25697 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0912 21:58:16.518369   25697 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19616-5891/.minikube CaCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19616-5891/.minikube}
	I0912 21:58:16.518388   25697 buildroot.go:174] setting up certificates
	I0912 21:58:16.518399   25697 provision.go:84] configureAuth start
	I0912 21:58:16.518410   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetMachineName
	I0912 21:58:16.518683   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetIP
	I0912 21:58:16.521322   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 21:58:16.521671   25697 main.go:141] libmachine: (ha-475401-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:aa:da", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:58:06 +0000 UTC Type:0 Mac:52:54:00:21:aa:da Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:ha-475401-m03 Clientid:01:52:54:00:21:aa:da}
	I0912 21:58:16.521721   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined IP address 192.168.39.113 and MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 21:58:16.521858   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHHostname
	I0912 21:58:16.524548   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 21:58:16.524936   25697 main.go:141] libmachine: (ha-475401-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:aa:da", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:58:06 +0000 UTC Type:0 Mac:52:54:00:21:aa:da Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:ha-475401-m03 Clientid:01:52:54:00:21:aa:da}
	I0912 21:58:16.524959   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined IP address 192.168.39.113 and MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 21:58:16.525079   25697 provision.go:143] copyHostCerts
	I0912 21:58:16.525109   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem
	I0912 21:58:16.525147   25697 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem, removing ...
	I0912 21:58:16.525157   25697 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem
	I0912 21:58:16.525244   25697 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem (1082 bytes)
	I0912 21:58:16.525336   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem
	I0912 21:58:16.525364   25697 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem, removing ...
	I0912 21:58:16.525375   25697 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem
	I0912 21:58:16.525413   25697 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem (1123 bytes)
	I0912 21:58:16.525474   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem
	I0912 21:58:16.525499   25697 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem, removing ...
	I0912 21:58:16.525511   25697 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem
	I0912 21:58:16.525542   25697 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem (1679 bytes)
	I0912 21:58:16.525604   25697 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem org=jenkins.ha-475401-m03 san=[127.0.0.1 192.168.39.113 ha-475401-m03 localhost minikube]
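
Note: the line above records minikube signing a per-machine server certificate against its local CA with the listed SANs. A self-contained sketch of the same idea using crypto/x509 (illustrative only: it generates a throwaway CA in-process instead of loading ca.pem/ca-key.pem, and key sizes and serials are simplified):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA; the real flow loads ca.pem / ca-key.pem from .minikube/certs.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SANs shown in the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-475401-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-475401-m03", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.113")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER}) // would be written out as server.pem
}
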
	I0912 21:58:16.670619   25697 provision.go:177] copyRemoteCerts
	I0912 21:58:16.670682   25697 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0912 21:58:16.670708   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHHostname
	I0912 21:58:16.673631   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 21:58:16.673988   25697 main.go:141] libmachine: (ha-475401-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:aa:da", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:58:06 +0000 UTC Type:0 Mac:52:54:00:21:aa:da Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:ha-475401-m03 Clientid:01:52:54:00:21:aa:da}
	I0912 21:58:16.674015   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined IP address 192.168.39.113 and MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 21:58:16.674220   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHPort
	I0912 21:58:16.674409   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHKeyPath
	I0912 21:58:16.674603   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHUsername
	I0912 21:58:16.674740   25697 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401-m03/id_rsa Username:docker}
	I0912 21:58:16.756476   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0912 21:58:16.756559   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0912 21:58:16.782422   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0912 21:58:16.782506   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0912 21:58:16.806050   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0912 21:58:16.806128   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0912 21:58:16.829300   25697 provision.go:87] duration metric: took 310.887198ms to configureAuth
	I0912 21:58:16.829334   25697 buildroot.go:189] setting minikube options for container-runtime
	I0912 21:58:16.829561   25697 config.go:182] Loaded profile config "ha-475401": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 21:58:16.829649   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHHostname
	I0912 21:58:16.832440   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 21:58:16.832782   25697 main.go:141] libmachine: (ha-475401-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:aa:da", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:58:06 +0000 UTC Type:0 Mac:52:54:00:21:aa:da Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:ha-475401-m03 Clientid:01:52:54:00:21:aa:da}
	I0912 21:58:16.832812   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined IP address 192.168.39.113 and MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 21:58:16.832974   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHPort
	I0912 21:58:16.833170   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHKeyPath
	I0912 21:58:16.833335   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHKeyPath
	I0912 21:58:16.833465   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHUsername
	I0912 21:58:16.833695   25697 main.go:141] libmachine: Using SSH client type: native
	I0912 21:58:16.833872   25697 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.113 22 <nil> <nil>}
	I0912 21:58:16.833892   25697 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0912 21:58:17.065353   25697 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0912 21:58:17.065383   25697 main.go:141] libmachine: Checking connection to Docker...
	I0912 21:58:17.065393   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetURL
	I0912 21:58:17.066775   25697 main.go:141] libmachine: (ha-475401-m03) DBG | Using libvirt version 6000000
	I0912 21:58:17.069139   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 21:58:17.069522   25697 main.go:141] libmachine: (ha-475401-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:aa:da", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:58:06 +0000 UTC Type:0 Mac:52:54:00:21:aa:da Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:ha-475401-m03 Clientid:01:52:54:00:21:aa:da}
	I0912 21:58:17.069553   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined IP address 192.168.39.113 and MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 21:58:17.069803   25697 main.go:141] libmachine: Docker is up and running!
	I0912 21:58:17.069820   25697 main.go:141] libmachine: Reticulating splines...
	I0912 21:58:17.069828   25697 client.go:171] duration metric: took 25.213978015s to LocalClient.Create
	I0912 21:58:17.069850   25697 start.go:167] duration metric: took 25.214034971s to libmachine.API.Create "ha-475401"
	I0912 21:58:17.069856   25697 start.go:293] postStartSetup for "ha-475401-m03" (driver="kvm2")
	I0912 21:58:17.069867   25697 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0912 21:58:17.069895   25697 main.go:141] libmachine: (ha-475401-m03) Calling .DriverName
	I0912 21:58:17.070147   25697 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0912 21:58:17.070176   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHHostname
	I0912 21:58:17.072998   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 21:58:17.073456   25697 main.go:141] libmachine: (ha-475401-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:aa:da", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:58:06 +0000 UTC Type:0 Mac:52:54:00:21:aa:da Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:ha-475401-m03 Clientid:01:52:54:00:21:aa:da}
	I0912 21:58:17.073487   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined IP address 192.168.39.113 and MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 21:58:17.073701   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHPort
	I0912 21:58:17.073888   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHKeyPath
	I0912 21:58:17.074057   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHUsername
	I0912 21:58:17.074312   25697 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401-m03/id_rsa Username:docker}
	I0912 21:58:17.156708   25697 ssh_runner.go:195] Run: cat /etc/os-release
	I0912 21:58:17.160870   25697 info.go:137] Remote host: Buildroot 2023.02.9
	I0912 21:58:17.160898   25697 filesync.go:126] Scanning /home/jenkins/minikube-integration/19616-5891/.minikube/addons for local assets ...
	I0912 21:58:17.160963   25697 filesync.go:126] Scanning /home/jenkins/minikube-integration/19616-5891/.minikube/files for local assets ...
	I0912 21:58:17.161063   25697 filesync.go:149] local asset: /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem -> 130832.pem in /etc/ssl/certs
	I0912 21:58:17.161073   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem -> /etc/ssl/certs/130832.pem
	I0912 21:58:17.161161   25697 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0912 21:58:17.171742   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem --> /etc/ssl/certs/130832.pem (1708 bytes)
	I0912 21:58:17.195821   25697 start.go:296] duration metric: took 125.954434ms for postStartSetup
	I0912 21:58:17.195873   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetConfigRaw
	I0912 21:58:17.196500   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetIP
	I0912 21:58:17.199379   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 21:58:17.199796   25697 main.go:141] libmachine: (ha-475401-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:aa:da", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:58:06 +0000 UTC Type:0 Mac:52:54:00:21:aa:da Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:ha-475401-m03 Clientid:01:52:54:00:21:aa:da}
	I0912 21:58:17.199825   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined IP address 192.168.39.113 and MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 21:58:17.200060   25697 profile.go:143] Saving config to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/config.json ...
	I0912 21:58:17.200266   25697 start.go:128] duration metric: took 25.363858634s to createHost
	I0912 21:58:17.200287   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHHostname
	I0912 21:58:17.202673   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 21:58:17.203105   25697 main.go:141] libmachine: (ha-475401-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:aa:da", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:58:06 +0000 UTC Type:0 Mac:52:54:00:21:aa:da Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:ha-475401-m03 Clientid:01:52:54:00:21:aa:da}
	I0912 21:58:17.203133   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined IP address 192.168.39.113 and MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 21:58:17.203339   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHPort
	I0912 21:58:17.203536   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHKeyPath
	I0912 21:58:17.203738   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHKeyPath
	I0912 21:58:17.203873   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHUsername
	I0912 21:58:17.204003   25697 main.go:141] libmachine: Using SSH client type: native
	I0912 21:58:17.204198   25697 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.113 22 <nil> <nil>}
	I0912 21:58:17.204209   25697 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0912 21:58:17.310187   25697 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726178297.287216760
	
	I0912 21:58:17.310209   25697 fix.go:216] guest clock: 1726178297.287216760
	I0912 21:58:17.310218   25697 fix.go:229] Guest: 2024-09-12 21:58:17.28721676 +0000 UTC Remote: 2024-09-12 21:58:17.200277487 +0000 UTC m=+141.807292987 (delta=86.939273ms)
	I0912 21:58:17.310239   25697 fix.go:200] guest clock delta is within tolerance: 86.939273ms
	I0912 21:58:17.310245   25697 start.go:83] releasing machines lock for "ha-475401-m03", held for 25.473992567s
	I0912 21:58:17.310263   25697 main.go:141] libmachine: (ha-475401-m03) Calling .DriverName
	I0912 21:58:17.310511   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetIP
	I0912 21:58:17.313579   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 21:58:17.313972   25697 main.go:141] libmachine: (ha-475401-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:aa:da", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:58:06 +0000 UTC Type:0 Mac:52:54:00:21:aa:da Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:ha-475401-m03 Clientid:01:52:54:00:21:aa:da}
	I0912 21:58:17.313999   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined IP address 192.168.39.113 and MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 21:58:17.316436   25697 out.go:177] * Found network options:
	I0912 21:58:17.317820   25697 out.go:177]   - NO_PROXY=192.168.39.203,192.168.39.222
	W0912 21:58:17.319126   25697 proxy.go:119] fail to check proxy env: Error ip not in block
	W0912 21:58:17.319152   25697 proxy.go:119] fail to check proxy env: Error ip not in block
	I0912 21:58:17.319167   25697 main.go:141] libmachine: (ha-475401-m03) Calling .DriverName
	I0912 21:58:17.319737   25697 main.go:141] libmachine: (ha-475401-m03) Calling .DriverName
	I0912 21:58:17.319950   25697 main.go:141] libmachine: (ha-475401-m03) Calling .DriverName
	I0912 21:58:17.320055   25697 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0912 21:58:17.320093   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHHostname
	W0912 21:58:17.320126   25697 proxy.go:119] fail to check proxy env: Error ip not in block
	W0912 21:58:17.320157   25697 proxy.go:119] fail to check proxy env: Error ip not in block
	I0912 21:58:17.320214   25697 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0912 21:58:17.320229   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHHostname
	I0912 21:58:17.323096   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 21:58:17.323200   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 21:58:17.323521   25697 main.go:141] libmachine: (ha-475401-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:aa:da", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:58:06 +0000 UTC Type:0 Mac:52:54:00:21:aa:da Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:ha-475401-m03 Clientid:01:52:54:00:21:aa:da}
	I0912 21:58:17.323554   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined IP address 192.168.39.113 and MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 21:58:17.323666   25697 main.go:141] libmachine: (ha-475401-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:aa:da", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:58:06 +0000 UTC Type:0 Mac:52:54:00:21:aa:da Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:ha-475401-m03 Clientid:01:52:54:00:21:aa:da}
	I0912 21:58:17.323689   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined IP address 192.168.39.113 and MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 21:58:17.323693   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHPort
	I0912 21:58:17.323884   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHPort
	I0912 21:58:17.323902   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHKeyPath
	I0912 21:58:17.324020   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHKeyPath
	I0912 21:58:17.324163   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHUsername
	I0912 21:58:17.324202   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHUsername
	I0912 21:58:17.324315   25697 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401-m03/id_rsa Username:docker}
	I0912 21:58:17.324392   25697 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401-m03/id_rsa Username:docker}
	I0912 21:58:17.556817   25697 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0912 21:58:17.563194   25697 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0912 21:58:17.563255   25697 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0912 21:58:17.578490   25697 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0912 21:58:17.578526   25697 start.go:495] detecting cgroup driver to use...
	I0912 21:58:17.578592   25697 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0912 21:58:17.594646   25697 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0912 21:58:17.609388   25697 docker.go:217] disabling cri-docker service (if available) ...
	I0912 21:58:17.609463   25697 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0912 21:58:17.623506   25697 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0912 21:58:17.638009   25697 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0912 21:58:17.757171   25697 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0912 21:58:17.919529   25697 docker.go:233] disabling docker service ...
	I0912 21:58:17.919597   25697 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0912 21:58:17.936247   25697 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0912 21:58:17.949251   25697 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0912 21:58:18.080764   25697 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0912 21:58:18.226645   25697 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0912 21:58:18.240015   25697 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0912 21:58:18.257720   25697 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0912 21:58:18.257771   25697 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 21:58:18.267777   25697 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0912 21:58:18.267845   25697 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 21:58:18.277904   25697 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 21:58:18.287961   25697 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 21:58:18.297816   25697 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0912 21:58:18.307898   25697 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 21:58:18.317481   25697 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 21:58:18.334095   25697 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 21:58:18.344337   25697 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0912 21:58:18.353785   25697 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0912 21:58:18.353844   25697 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0912 21:58:18.366829   25697 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0912 21:58:18.375790   25697 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 21:58:18.502382   25697 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0912 21:58:18.594408   25697 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0912 21:58:18.594491   25697 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0912 21:58:18.599810   25697 start.go:563] Will wait 60s for crictl version
	I0912 21:58:18.599875   25697 ssh_runner.go:195] Run: which crictl
	I0912 21:58:18.603628   25697 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0912 21:58:18.642676   25697 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0912 21:58:18.642748   25697 ssh_runner.go:195] Run: crio --version
	I0912 21:58:18.671226   25697 ssh_runner.go:195] Run: crio --version
	I0912 21:58:18.705784   25697 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0912 21:58:18.707115   25697 out.go:177]   - env NO_PROXY=192.168.39.203
	I0912 21:58:18.708351   25697 out.go:177]   - env NO_PROXY=192.168.39.203,192.168.39.222
	I0912 21:58:18.709381   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetIP
	I0912 21:58:18.712070   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 21:58:18.712384   25697 main.go:141] libmachine: (ha-475401-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:aa:da", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:58:06 +0000 UTC Type:0 Mac:52:54:00:21:aa:da Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:ha-475401-m03 Clientid:01:52:54:00:21:aa:da}
	I0912 21:58:18.712411   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined IP address 192.168.39.113 and MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 21:58:18.712589   25697 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0912 21:58:18.716506   25697 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0912 21:58:18.727915   25697 mustload.go:65] Loading cluster: ha-475401
	I0912 21:58:18.728133   25697 config.go:182] Loaded profile config "ha-475401": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 21:58:18.728389   25697 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:58:18.728424   25697 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:58:18.742999   25697 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34877
	I0912 21:58:18.743408   25697 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:58:18.743901   25697 main.go:141] libmachine: Using API Version  1
	I0912 21:58:18.743924   25697 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:58:18.744231   25697 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:58:18.744428   25697 main.go:141] libmachine: (ha-475401) Calling .GetState
	I0912 21:58:18.746070   25697 host.go:66] Checking if "ha-475401" exists ...
	I0912 21:58:18.746392   25697 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:58:18.746428   25697 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:58:18.762525   25697 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45951
	I0912 21:58:18.762942   25697 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:58:18.763434   25697 main.go:141] libmachine: Using API Version  1
	I0912 21:58:18.763460   25697 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:58:18.763734   25697 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:58:18.763919   25697 main.go:141] libmachine: (ha-475401) Calling .DriverName
	I0912 21:58:18.764061   25697 certs.go:68] Setting up /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401 for IP: 192.168.39.113
	I0912 21:58:18.764070   25697 certs.go:194] generating shared ca certs ...
	I0912 21:58:18.764088   25697 certs.go:226] acquiring lock for ca certs: {Name:mk5e19f2c2757edc3fcb6b43a39efee0885b349c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:58:18.764216   25697 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.key
	I0912 21:58:18.764271   25697 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.key
	I0912 21:58:18.764284   25697 certs.go:256] generating profile certs ...
	I0912 21:58:18.764388   25697 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/client.key
	I0912 21:58:18.764419   25697 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.key.0c18783c
	I0912 21:58:18.764439   25697 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.crt.0c18783c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.203 192.168.39.222 192.168.39.113 192.168.39.254]
	I0912 21:58:18.953177   25697 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.crt.0c18783c ...
	I0912 21:58:18.953215   25697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.crt.0c18783c: {Name:mkf24e0813415b85ef4632a7cc37b1377b0685cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:58:18.953428   25697 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.key.0c18783c ...
	I0912 21:58:18.953449   25697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.key.0c18783c: {Name:mk58abab0883e8bb1ef151ca20853139ede46b08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:58:18.953569   25697 certs.go:381] copying /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.crt.0c18783c -> /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.crt
	I0912 21:58:18.953774   25697 certs.go:385] copying /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.key.0c18783c -> /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.key
	I0912 21:58:18.953910   25697 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/proxy-client.key
	I0912 21:58:18.953926   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0912 21:58:18.953938   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0912 21:58:18.953951   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0912 21:58:18.953964   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0912 21:58:18.953979   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0912 21:58:18.953994   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0912 21:58:18.954012   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0912 21:58:18.954029   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0912 21:58:18.954094   25697 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083.pem (1338 bytes)
	W0912 21:58:18.954128   25697 certs.go:480] ignoring /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083_empty.pem, impossibly tiny 0 bytes
	I0912 21:58:18.954138   25697 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem (1679 bytes)
	I0912 21:58:18.954159   25697 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem (1082 bytes)
	I0912 21:58:18.954183   25697 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem (1123 bytes)
	I0912 21:58:18.954204   25697 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem (1679 bytes)
	I0912 21:58:18.954242   25697 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem (1708 bytes)
	I0912 21:58:18.954270   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem -> /usr/share/ca-certificates/130832.pem
	I0912 21:58:18.954291   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0912 21:58:18.954305   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083.pem -> /usr/share/ca-certificates/13083.pem
	I0912 21:58:18.954347   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHHostname
	I0912 21:58:18.957523   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:58:18.957956   25697 main.go:141] libmachine: (ha-475401) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:0e:dd", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:09 +0000 UTC Type:0 Mac:52:54:00:b0:0e:dd Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-475401 Clientid:01:52:54:00:b0:0e:dd}
	I0912 21:58:18.957979   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined IP address 192.168.39.203 and MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:58:18.958188   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHPort
	I0912 21:58:18.958407   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHKeyPath
	I0912 21:58:18.958570   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHUsername
	I0912 21:58:18.958710   25697 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401/id_rsa Username:docker}
	I0912 21:58:19.033980   25697 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0912 21:58:19.038605   25697 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0912 21:58:19.049584   25697 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0912 21:58:19.054642   25697 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0912 21:58:19.064670   25697 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0912 21:58:19.069722   25697 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0912 21:58:19.080717   25697 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0912 21:58:19.084846   25697 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0912 21:58:19.094482   25697 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0912 21:58:19.098548   25697 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0912 21:58:19.108676   25697 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0912 21:58:19.112618   25697 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0912 21:58:19.123387   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0912 21:58:19.147272   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0912 21:58:19.171949   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0912 21:58:19.194976   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0912 21:58:19.220495   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0912 21:58:19.244949   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0912 21:58:19.271742   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0912 21:58:19.294753   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0912 21:58:19.318268   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem --> /usr/share/ca-certificates/130832.pem (1708 bytes)
	I0912 21:58:19.340684   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0912 21:58:19.365814   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083.pem --> /usr/share/ca-certificates/13083.pem (1338 bytes)
	I0912 21:58:19.389740   25697 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0912 21:58:19.405480   25697 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0912 21:58:19.421765   25697 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0912 21:58:19.437326   25697 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0912 21:58:19.453160   25697 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0912 21:58:19.470517   25697 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0912 21:58:19.486080   25697 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0912 21:58:19.501798   25697 ssh_runner.go:195] Run: openssl version
	I0912 21:58:19.507435   25697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130832.pem && ln -fs /usr/share/ca-certificates/130832.pem /etc/ssl/certs/130832.pem"
	I0912 21:58:19.517604   25697 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130832.pem
	I0912 21:58:19.521723   25697 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 12 21:47 /usr/share/ca-certificates/130832.pem
	I0912 21:58:19.521777   25697 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130832.pem
	I0912 21:58:19.526998   25697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130832.pem /etc/ssl/certs/3ec20f2e.0"
	I0912 21:58:19.537203   25697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0912 21:58:19.547246   25697 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0912 21:58:19.551547   25697 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 12 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0912 21:58:19.551607   25697 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0912 21:58:19.557700   25697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0912 21:58:19.568443   25697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13083.pem && ln -fs /usr/share/ca-certificates/13083.pem /etc/ssl/certs/13083.pem"
	I0912 21:58:19.578990   25697 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13083.pem
	I0912 21:58:19.583232   25697 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 12 21:47 /usr/share/ca-certificates/13083.pem
	I0912 21:58:19.583288   25697 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13083.pem
	I0912 21:58:19.589208   25697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13083.pem /etc/ssl/certs/51391683.0"
	I0912 21:58:19.602048   25697 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0912 21:58:19.606071   25697 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0912 21:58:19.606135   25697 kubeadm.go:934] updating node {m03 192.168.39.113 8443 v1.31.1 crio true true} ...
	I0912 21:58:19.606216   25697 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-475401-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.113
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-475401 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0912 21:58:19.606244   25697 kube-vip.go:115] generating kube-vip config ...
	I0912 21:58:19.606277   25697 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0912 21:58:19.622619   25697 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0912 21:58:19.622681   25697 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0912 21:58:19.622729   25697 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0912 21:58:19.632965   25697 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0912 21:58:19.633019   25697 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0912 21:58:19.642792   25697 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I0912 21:58:19.642844   25697 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 21:58:19.642797   25697 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0912 21:58:19.642797   25697 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I0912 21:58:19.642914   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0912 21:58:19.642924   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0912 21:58:19.642998   25697 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0912 21:58:19.643002   25697 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0912 21:58:19.656783   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0912 21:58:19.656810   25697 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0912 21:58:19.656841   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0912 21:58:19.656883   25697 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0912 21:58:19.656887   25697 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0912 21:58:19.656909   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0912 21:58:19.677837   25697 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0912 21:58:19.677879   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I0912 21:58:20.516897   25697 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0912 21:58:20.527105   25697 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0912 21:58:20.543950   25697 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0912 21:58:20.560138   25697 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0912 21:58:20.576473   25697 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0912 21:58:20.580703   25697 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0912 21:58:20.594636   25697 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 21:58:20.711822   25697 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0912 21:58:20.728173   25697 host.go:66] Checking if "ha-475401" exists ...
	I0912 21:58:20.728605   25697 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:58:20.728646   25697 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:58:20.744851   25697 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35237
	I0912 21:58:20.745236   25697 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:58:20.745710   25697 main.go:141] libmachine: Using API Version  1
	I0912 21:58:20.745733   25697 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:58:20.746032   25697 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:58:20.746269   25697 main.go:141] libmachine: (ha-475401) Calling .DriverName
	I0912 21:58:20.746538   25697 start.go:317] joinCluster: &{Name:ha-475401 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-475401 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.203 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.222 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.113 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 21:58:20.746701   25697 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0912 21:58:20.746722   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHHostname
	I0912 21:58:20.750060   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:58:20.750622   25697 main.go:141] libmachine: (ha-475401) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:0e:dd", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:09 +0000 UTC Type:0 Mac:52:54:00:b0:0e:dd Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-475401 Clientid:01:52:54:00:b0:0e:dd}
	I0912 21:58:20.750652   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined IP address 192.168.39.203 and MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:58:20.750829   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHPort
	I0912 21:58:20.751028   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHKeyPath
	I0912 21:58:20.751180   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHUsername
	I0912 21:58:20.751376   25697 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401/id_rsa Username:docker}
	I0912 21:58:20.916489   25697 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.113 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0912 21:58:20.916544   25697 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 0xcd8r.97jbzfa11jxyn92v --discovery-token-ca-cert-hash sha256:e9285e6e7599a58febe9d174fa57ffa69a9b4bf818d01b703e61fc8c784ff29f --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-475401-m03 --control-plane --apiserver-advertise-address=192.168.39.113 --apiserver-bind-port=8443"
	I0912 21:58:43.506086   25697 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 0xcd8r.97jbzfa11jxyn92v --discovery-token-ca-cert-hash sha256:e9285e6e7599a58febe9d174fa57ffa69a9b4bf818d01b703e61fc8c784ff29f --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-475401-m03 --control-plane --apiserver-advertise-address=192.168.39.113 --apiserver-bind-port=8443": (22.589509925s)
	I0912 21:58:43.506132   25697 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0912 21:58:44.092103   25697 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-475401-m03 minikube.k8s.io/updated_at=2024_09_12T21_58_44_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f6bc674a17941874d4e5b792b09c1791d30622b8 minikube.k8s.io/name=ha-475401 minikube.k8s.io/primary=false
	I0912 21:58:44.209844   25697 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-475401-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0912 21:58:44.316135   25697 start.go:319] duration metric: took 23.569593336s to joinCluster
	I0912 21:58:44.316216   25697 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.113 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0912 21:58:44.316520   25697 config.go:182] Loaded profile config "ha-475401": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 21:58:44.317744   25697 out.go:177] * Verifying Kubernetes components...
	I0912 21:58:44.319169   25697 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 21:58:44.634041   25697 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0912 21:58:44.674413   25697 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19616-5891/kubeconfig
	I0912 21:58:44.674780   25697 kapi.go:59] client config for ha-475401: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/client.crt", KeyFile:"/home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/client.key", CAFile:"/home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f30300), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0912 21:58:44.674888   25697 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.203:8443
	I0912 21:58:44.675253   25697 node_ready.go:35] waiting up to 6m0s for node "ha-475401-m03" to be "Ready" ...
	I0912 21:58:44.675376   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m03
	I0912 21:58:44.675393   25697 round_trippers.go:469] Request Headers:
	I0912 21:58:44.675404   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:58:44.675417   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:58:44.679106   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:58:45.176235   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m03
	I0912 21:58:45.176261   25697 round_trippers.go:469] Request Headers:
	I0912 21:58:45.176274   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:58:45.176278   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:58:45.184905   25697 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0912 21:58:45.675803   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m03
	I0912 21:58:45.675830   25697 round_trippers.go:469] Request Headers:
	I0912 21:58:45.675840   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:58:45.675846   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:58:45.679887   25697 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0912 21:58:46.175582   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m03
	I0912 21:58:46.175607   25697 round_trippers.go:469] Request Headers:
	I0912 21:58:46.175615   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:58:46.175619   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:58:46.179331   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:58:46.676220   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m03
	I0912 21:58:46.676241   25697 round_trippers.go:469] Request Headers:
	I0912 21:58:46.676249   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:58:46.676254   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:58:46.679776   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:58:46.680342   25697 node_ready.go:53] node "ha-475401-m03" has status "Ready":"False"
	I0912 21:58:47.176086   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m03
	I0912 21:58:47.176112   25697 round_trippers.go:469] Request Headers:
	I0912 21:58:47.176124   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:58:47.176131   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:58:47.179842   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:58:47.675675   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m03
	I0912 21:58:47.675697   25697 round_trippers.go:469] Request Headers:
	I0912 21:58:47.675704   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:58:47.675707   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:58:47.679110   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:58:48.175914   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m03
	I0912 21:58:48.175941   25697 round_trippers.go:469] Request Headers:
	I0912 21:58:48.175952   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:58:48.175958   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:58:48.179653   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:58:48.675571   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m03
	I0912 21:58:48.675598   25697 round_trippers.go:469] Request Headers:
	I0912 21:58:48.675606   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:58:48.675613   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:58:48.678914   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:58:49.175531   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m03
	I0912 21:58:49.175560   25697 round_trippers.go:469] Request Headers:
	I0912 21:58:49.175570   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:58:49.175576   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:58:49.179597   25697 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0912 21:58:49.180567   25697 node_ready.go:53] node "ha-475401-m03" has status "Ready":"False"
	I0912 21:58:49.675843   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m03
	I0912 21:58:49.675868   25697 round_trippers.go:469] Request Headers:
	I0912 21:58:49.675879   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:58:49.675883   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:58:49.679316   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:58:50.175514   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m03
	I0912 21:58:50.175535   25697 round_trippers.go:469] Request Headers:
	I0912 21:58:50.175547   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:58:50.175553   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:58:50.179205   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:58:50.676071   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m03
	I0912 21:58:50.676100   25697 round_trippers.go:469] Request Headers:
	I0912 21:58:50.676111   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:58:50.676118   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:58:50.679505   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:58:51.176103   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m03
	I0912 21:58:51.176135   25697 round_trippers.go:469] Request Headers:
	I0912 21:58:51.176143   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:58:51.176147   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:58:51.179884   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:58:51.676344   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m03
	I0912 21:58:51.676382   25697 round_trippers.go:469] Request Headers:
	I0912 21:58:51.676390   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:58:51.676393   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:58:51.680019   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:58:51.680654   25697 node_ready.go:53] node "ha-475401-m03" has status "Ready":"False"
	I0912 21:58:52.176378   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m03
	I0912 21:58:52.176406   25697 round_trippers.go:469] Request Headers:
	I0912 21:58:52.176414   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:58:52.176419   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:58:52.180125   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:58:52.676247   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m03
	I0912 21:58:52.676270   25697 round_trippers.go:469] Request Headers:
	I0912 21:58:52.676279   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:58:52.676282   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:58:52.679812   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:58:53.176107   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m03
	I0912 21:58:53.176131   25697 round_trippers.go:469] Request Headers:
	I0912 21:58:53.176139   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:58:53.176143   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:58:53.179717   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:58:53.675836   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m03
	I0912 21:58:53.675858   25697 round_trippers.go:469] Request Headers:
	I0912 21:58:53.675869   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:58:53.675873   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:58:53.679242   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:58:54.175768   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m03
	I0912 21:58:54.175801   25697 round_trippers.go:469] Request Headers:
	I0912 21:58:54.175809   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:58:54.175815   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:58:54.179360   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:58:54.179907   25697 node_ready.go:53] node "ha-475401-m03" has status "Ready":"False"
	I0912 21:58:54.676422   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m03
	I0912 21:58:54.676445   25697 round_trippers.go:469] Request Headers:
	I0912 21:58:54.676454   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:58:54.676457   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:58:54.680731   25697 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0912 21:58:55.175735   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m03
	I0912 21:58:55.175757   25697 round_trippers.go:469] Request Headers:
	I0912 21:58:55.175765   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:58:55.175770   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:58:55.179554   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:58:55.676326   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m03
	I0912 21:58:55.676349   25697 round_trippers.go:469] Request Headers:
	I0912 21:58:55.676357   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:58:55.676361   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:58:55.680385   25697 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0912 21:58:56.175676   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m03
	I0912 21:58:56.175700   25697 round_trippers.go:469] Request Headers:
	I0912 21:58:56.175708   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:58:56.175711   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:58:56.179627   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:58:56.180300   25697 node_ready.go:53] node "ha-475401-m03" has status "Ready":"False"
	I0912 21:58:56.675674   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m03
	I0912 21:58:56.675697   25697 round_trippers.go:469] Request Headers:
	I0912 21:58:56.675706   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:58:56.675710   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:58:56.679406   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:58:57.176164   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m03
	I0912 21:58:57.176186   25697 round_trippers.go:469] Request Headers:
	I0912 21:58:57.176195   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:58:57.176198   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:58:57.180244   25697 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0912 21:58:57.676163   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m03
	I0912 21:58:57.676187   25697 round_trippers.go:469] Request Headers:
	I0912 21:58:57.676194   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:58:57.676198   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:58:57.680168   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:58:58.176214   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m03
	I0912 21:58:58.176244   25697 round_trippers.go:469] Request Headers:
	I0912 21:58:58.176252   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:58:58.176255   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:58:58.179808   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:58:58.180370   25697 node_ready.go:53] node "ha-475401-m03" has status "Ready":"False"
	I0912 21:58:58.675737   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m03
	I0912 21:58:58.675760   25697 round_trippers.go:469] Request Headers:
	I0912 21:58:58.675769   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:58:58.675777   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:58:58.679027   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:58:59.175818   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m03
	I0912 21:58:59.175842   25697 round_trippers.go:469] Request Headers:
	I0912 21:58:59.175853   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:58:59.175858   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:58:59.179845   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:58:59.675886   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m03
	I0912 21:58:59.675910   25697 round_trippers.go:469] Request Headers:
	I0912 21:58:59.675918   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:58:59.675922   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:58:59.679576   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:59:00.176363   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m03
	I0912 21:59:00.176387   25697 round_trippers.go:469] Request Headers:
	I0912 21:59:00.176396   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:59:00.176400   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:59:00.180070   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:59:00.180791   25697 node_ready.go:53] node "ha-475401-m03" has status "Ready":"False"
	I0912 21:59:00.676010   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m03
	I0912 21:59:00.676033   25697 round_trippers.go:469] Request Headers:
	I0912 21:59:00.676041   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:59:00.676045   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:59:00.679249   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:59:01.175824   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m03
	I0912 21:59:01.175850   25697 round_trippers.go:469] Request Headers:
	I0912 21:59:01.175858   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:59:01.175863   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:59:01.179430   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:59:01.676207   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m03
	I0912 21:59:01.676230   25697 round_trippers.go:469] Request Headers:
	I0912 21:59:01.676236   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:59:01.676240   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:59:01.680352   25697 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0912 21:59:02.176034   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m03
	I0912 21:59:02.176068   25697 round_trippers.go:469] Request Headers:
	I0912 21:59:02.176079   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:59:02.176084   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:59:02.182766   25697 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0912 21:59:02.183891   25697 node_ready.go:53] node "ha-475401-m03" has status "Ready":"False"
	I0912 21:59:02.676131   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m03
	I0912 21:59:02.676155   25697 round_trippers.go:469] Request Headers:
	I0912 21:59:02.676167   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:59:02.676172   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:59:02.680118   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:59:02.680837   25697 node_ready.go:49] node "ha-475401-m03" has status "Ready":"True"
	I0912 21:59:02.680861   25697 node_ready.go:38] duration metric: took 18.005582322s for node "ha-475401-m03" to be "Ready" ...
	I0912 21:59:02.680876   25697 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 21:59:02.680956   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods
	I0912 21:59:02.680969   25697 round_trippers.go:469] Request Headers:
	I0912 21:59:02.680980   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:59:02.680989   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:59:02.686922   25697 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0912 21:59:02.694423   25697 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-pzsv8" in "kube-system" namespace to be "Ready" ...
	I0912 21:59:02.694507   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-pzsv8
	I0912 21:59:02.694516   25697 round_trippers.go:469] Request Headers:
	I0912 21:59:02.694523   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:59:02.694526   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:59:02.697802   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:59:02.698501   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401
	I0912 21:59:02.698516   25697 round_trippers.go:469] Request Headers:
	I0912 21:59:02.698523   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:59:02.698526   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:59:02.701276   25697 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 21:59:02.701946   25697 pod_ready.go:93] pod "coredns-7c65d6cfc9-pzsv8" in "kube-system" namespace has status "Ready":"True"
	I0912 21:59:02.701969   25697 pod_ready.go:82] duration metric: took 7.516721ms for pod "coredns-7c65d6cfc9-pzsv8" in "kube-system" namespace to be "Ready" ...
	I0912 21:59:02.701982   25697 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-xhdj7" in "kube-system" namespace to be "Ready" ...
	I0912 21:59:02.702048   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-xhdj7
	I0912 21:59:02.702059   25697 round_trippers.go:469] Request Headers:
	I0912 21:59:02.702069   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:59:02.702077   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:59:02.704853   25697 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 21:59:02.705503   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401
	I0912 21:59:02.705520   25697 round_trippers.go:469] Request Headers:
	I0912 21:59:02.705527   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:59:02.705530   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:59:02.707979   25697 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 21:59:02.708387   25697 pod_ready.go:93] pod "coredns-7c65d6cfc9-xhdj7" in "kube-system" namespace has status "Ready":"True"
	I0912 21:59:02.708403   25697 pod_ready.go:82] duration metric: took 6.41346ms for pod "coredns-7c65d6cfc9-xhdj7" in "kube-system" namespace to be "Ready" ...
	I0912 21:59:02.708414   25697 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-475401" in "kube-system" namespace to be "Ready" ...
	I0912 21:59:02.708468   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods/etcd-ha-475401
	I0912 21:59:02.708477   25697 round_trippers.go:469] Request Headers:
	I0912 21:59:02.708487   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:59:02.708496   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:59:02.711155   25697 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 21:59:02.711915   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401
	I0912 21:59:02.711934   25697 round_trippers.go:469] Request Headers:
	I0912 21:59:02.711944   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:59:02.711951   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:59:02.715161   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:59:02.715731   25697 pod_ready.go:93] pod "etcd-ha-475401" in "kube-system" namespace has status "Ready":"True"
	I0912 21:59:02.715752   25697 pod_ready.go:82] duration metric: took 7.329765ms for pod "etcd-ha-475401" in "kube-system" namespace to be "Ready" ...
	I0912 21:59:02.715765   25697 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-475401-m02" in "kube-system" namespace to be "Ready" ...
	I0912 21:59:02.715842   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods/etcd-ha-475401-m02
	I0912 21:59:02.715854   25697 round_trippers.go:469] Request Headers:
	I0912 21:59:02.715864   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:59:02.715874   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:59:02.718893   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:59:02.719400   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:59:02.719415   25697 round_trippers.go:469] Request Headers:
	I0912 21:59:02.719422   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:59:02.719426   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:59:02.722428   25697 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 21:59:02.722853   25697 pod_ready.go:93] pod "etcd-ha-475401-m02" in "kube-system" namespace has status "Ready":"True"
	I0912 21:59:02.722869   25697 pod_ready.go:82] duration metric: took 7.097106ms for pod "etcd-ha-475401-m02" in "kube-system" namespace to be "Ready" ...
	I0912 21:59:02.722879   25697 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-475401-m03" in "kube-system" namespace to be "Ready" ...
	I0912 21:59:02.876254   25697 request.go:632] Waited for 153.314803ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods/etcd-ha-475401-m03
	I0912 21:59:02.876341   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods/etcd-ha-475401-m03
	I0912 21:59:02.876346   25697 round_trippers.go:469] Request Headers:
	I0912 21:59:02.876354   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:59:02.876361   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:59:02.883992   25697 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0912 21:59:03.076941   25697 request.go:632] Waited for 192.395637ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.203:8443/api/v1/nodes/ha-475401-m03
	I0912 21:59:03.077030   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m03
	I0912 21:59:03.077043   25697 round_trippers.go:469] Request Headers:
	I0912 21:59:03.077052   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:59:03.077060   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:59:03.081099   25697 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0912 21:59:03.081567   25697 pod_ready.go:93] pod "etcd-ha-475401-m03" in "kube-system" namespace has status "Ready":"True"
	I0912 21:59:03.081597   25697 pod_ready.go:82] duration metric: took 358.710237ms for pod "etcd-ha-475401-m03" in "kube-system" namespace to be "Ready" ...
	I0912 21:59:03.081630   25697 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-475401" in "kube-system" namespace to be "Ready" ...
	I0912 21:59:03.276994   25697 request.go:632] Waited for 195.296905ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-475401
	I0912 21:59:03.277081   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-475401
	I0912 21:59:03.277091   25697 round_trippers.go:469] Request Headers:
	I0912 21:59:03.277098   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:59:03.277103   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:59:03.280354   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:59:03.476350   25697 request.go:632] Waited for 195.302508ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.203:8443/api/v1/nodes/ha-475401
	I0912 21:59:03.476410   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401
	I0912 21:59:03.476417   25697 round_trippers.go:469] Request Headers:
	I0912 21:59:03.476424   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:59:03.476432   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:59:03.480094   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:59:03.480496   25697 pod_ready.go:93] pod "kube-apiserver-ha-475401" in "kube-system" namespace has status "Ready":"True"
	I0912 21:59:03.480516   25697 pod_ready.go:82] duration metric: took 398.879405ms for pod "kube-apiserver-ha-475401" in "kube-system" namespace to be "Ready" ...
	I0912 21:59:03.480526   25697 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-475401-m02" in "kube-system" namespace to be "Ready" ...
	I0912 21:59:03.676749   25697 request.go:632] Waited for 196.161829ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-475401-m02
	I0912 21:59:03.676829   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-475401-m02
	I0912 21:59:03.676835   25697 round_trippers.go:469] Request Headers:
	I0912 21:59:03.676842   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:59:03.676846   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:59:03.680709   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:59:03.876958   25697 request.go:632] Waited for 195.535486ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:59:03.877012   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:59:03.877023   25697 round_trippers.go:469] Request Headers:
	I0912 21:59:03.877035   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:59:03.877043   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:59:03.880284   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:59:03.880989   25697 pod_ready.go:93] pod "kube-apiserver-ha-475401-m02" in "kube-system" namespace has status "Ready":"True"
	I0912 21:59:03.881029   25697 pod_ready.go:82] duration metric: took 400.490543ms for pod "kube-apiserver-ha-475401-m02" in "kube-system" namespace to be "Ready" ...
	I0912 21:59:03.881048   25697 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-475401-m03" in "kube-system" namespace to be "Ready" ...
	I0912 21:59:04.077000   25697 request.go:632] Waited for 195.868605ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-475401-m03
	I0912 21:59:04.077079   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-475401-m03
	I0912 21:59:04.077088   25697 round_trippers.go:469] Request Headers:
	I0912 21:59:04.077098   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:59:04.077103   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:59:04.080433   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:59:04.276584   25697 request.go:632] Waited for 195.431475ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.203:8443/api/v1/nodes/ha-475401-m03
	I0912 21:59:04.276643   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m03
	I0912 21:59:04.276649   25697 round_trippers.go:469] Request Headers:
	I0912 21:59:04.276656   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:59:04.276660   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:59:04.280579   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:59:04.281147   25697 pod_ready.go:93] pod "kube-apiserver-ha-475401-m03" in "kube-system" namespace has status "Ready":"True"
	I0912 21:59:04.281165   25697 pod_ready.go:82] duration metric: took 400.103498ms for pod "kube-apiserver-ha-475401-m03" in "kube-system" namespace to be "Ready" ...
	I0912 21:59:04.281175   25697 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-475401" in "kube-system" namespace to be "Ready" ...
	I0912 21:59:04.476450   25697 request.go:632] Waited for 195.211156ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-475401
	I0912 21:59:04.476537   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-475401
	I0912 21:59:04.476542   25697 round_trippers.go:469] Request Headers:
	I0912 21:59:04.476552   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:59:04.476561   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:59:04.479975   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:59:04.677118   25697 request.go:632] Waited for 196.396416ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.203:8443/api/v1/nodes/ha-475401
	I0912 21:59:04.677188   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401
	I0912 21:59:04.677195   25697 round_trippers.go:469] Request Headers:
	I0912 21:59:04.677210   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:59:04.677219   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:59:04.681081   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:59:04.681749   25697 pod_ready.go:93] pod "kube-controller-manager-ha-475401" in "kube-system" namespace has status "Ready":"True"
	I0912 21:59:04.681769   25697 pod_ready.go:82] duration metric: took 400.585863ms for pod "kube-controller-manager-ha-475401" in "kube-system" namespace to be "Ready" ...
	I0912 21:59:04.681779   25697 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-475401-m02" in "kube-system" namespace to be "Ready" ...
	I0912 21:59:04.876939   25697 request.go:632] Waited for 195.094177ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-475401-m02
	I0912 21:59:04.877029   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-475401-m02
	I0912 21:59:04.877036   25697 round_trippers.go:469] Request Headers:
	I0912 21:59:04.877047   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:59:04.877052   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:59:04.881728   25697 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0912 21:59:05.076794   25697 request.go:632] Waited for 194.366008ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:59:05.076851   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:59:05.076858   25697 round_trippers.go:469] Request Headers:
	I0912 21:59:05.076865   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:59:05.076868   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:59:05.080228   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:59:05.080921   25697 pod_ready.go:93] pod "kube-controller-manager-ha-475401-m02" in "kube-system" namespace has status "Ready":"True"
	I0912 21:59:05.080941   25697 pod_ready.go:82] duration metric: took 399.152206ms for pod "kube-controller-manager-ha-475401-m02" in "kube-system" namespace to be "Ready" ...
	I0912 21:59:05.080950   25697 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-475401-m03" in "kube-system" namespace to be "Ready" ...
	I0912 21:59:05.277142   25697 request.go:632] Waited for 196.109144ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-475401-m03
	I0912 21:59:05.277204   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-475401-m03
	I0912 21:59:05.277211   25697 round_trippers.go:469] Request Headers:
	I0912 21:59:05.277220   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:59:05.277227   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:59:05.280732   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:59:05.476710   25697 request.go:632] Waited for 195.280166ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.203:8443/api/v1/nodes/ha-475401-m03
	I0912 21:59:05.476794   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m03
	I0912 21:59:05.476807   25697 round_trippers.go:469] Request Headers:
	I0912 21:59:05.476817   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:59:05.476822   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:59:05.480907   25697 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0912 21:59:05.481324   25697 pod_ready.go:93] pod "kube-controller-manager-ha-475401-m03" in "kube-system" namespace has status "Ready":"True"
	I0912 21:59:05.481339   25697 pod_ready.go:82] duration metric: took 400.382916ms for pod "kube-controller-manager-ha-475401-m03" in "kube-system" namespace to be "Ready" ...
	I0912 21:59:05.481350   25697 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4bk97" in "kube-system" namespace to be "Ready" ...
	I0912 21:59:05.676872   25697 request.go:632] Waited for 195.440769ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4bk97
	I0912 21:59:05.676939   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4bk97
	I0912 21:59:05.676944   25697 round_trippers.go:469] Request Headers:
	I0912 21:59:05.676952   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:59:05.676957   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:59:05.680652   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:59:05.876730   25697 request.go:632] Waited for 195.460613ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.203:8443/api/v1/nodes/ha-475401
	I0912 21:59:05.876786   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401
	I0912 21:59:05.876792   25697 round_trippers.go:469] Request Headers:
	I0912 21:59:05.876800   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:59:05.876805   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:59:05.881785   25697 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0912 21:59:05.882467   25697 pod_ready.go:93] pod "kube-proxy-4bk97" in "kube-system" namespace has status "Ready":"True"
	I0912 21:59:05.882485   25697 pod_ready.go:82] duration metric: took 401.124997ms for pod "kube-proxy-4bk97" in "kube-system" namespace to be "Ready" ...
	I0912 21:59:05.882494   25697 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5f8z5" in "kube-system" namespace to be "Ready" ...
	I0912 21:59:06.076701   25697 request.go:632] Waited for 194.127157ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5f8z5
	I0912 21:59:06.076754   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5f8z5
	I0912 21:59:06.076760   25697 round_trippers.go:469] Request Headers:
	I0912 21:59:06.076767   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:59:06.076773   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:59:06.080288   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:59:06.276558   25697 request.go:632] Waited for 195.363461ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.203:8443/api/v1/nodes/ha-475401-m03
	I0912 21:59:06.276613   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m03
	I0912 21:59:06.276619   25697 round_trippers.go:469] Request Headers:
	I0912 21:59:06.276626   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:59:06.276629   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:59:06.280083   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:59:06.280881   25697 pod_ready.go:93] pod "kube-proxy-5f8z5" in "kube-system" namespace has status "Ready":"True"
	I0912 21:59:06.280898   25697 pod_ready.go:82] duration metric: took 398.398398ms for pod "kube-proxy-5f8z5" in "kube-system" namespace to be "Ready" ...
	I0912 21:59:06.280911   25697 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-68h98" in "kube-system" namespace to be "Ready" ...
	I0912 21:59:06.476939   25697 request.go:632] Waited for 195.914135ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods/kube-proxy-68h98
	I0912 21:59:06.477007   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods/kube-proxy-68h98
	I0912 21:59:06.477054   25697 round_trippers.go:469] Request Headers:
	I0912 21:59:06.477075   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:59:06.477082   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:59:06.484776   25697 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0912 21:59:06.677078   25697 request.go:632] Waited for 191.25254ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:59:06.677159   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:59:06.677167   25697 round_trippers.go:469] Request Headers:
	I0912 21:59:06.677174   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:59:06.677181   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:59:06.680468   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:59:06.681141   25697 pod_ready.go:93] pod "kube-proxy-68h98" in "kube-system" namespace has status "Ready":"True"
	I0912 21:59:06.681159   25697 pod_ready.go:82] duration metric: took 400.242392ms for pod "kube-proxy-68h98" in "kube-system" namespace to be "Ready" ...
	I0912 21:59:06.681168   25697 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-475401" in "kube-system" namespace to be "Ready" ...
	I0912 21:59:06.876743   25697 request.go:632] Waited for 195.498455ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-475401
	I0912 21:59:06.876808   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-475401
	I0912 21:59:06.876815   25697 round_trippers.go:469] Request Headers:
	I0912 21:59:06.876826   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:59:06.876832   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:59:06.880459   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:59:07.076364   25697 request.go:632] Waited for 195.346788ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.203:8443/api/v1/nodes/ha-475401
	I0912 21:59:07.076454   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401
	I0912 21:59:07.076467   25697 round_trippers.go:469] Request Headers:
	I0912 21:59:07.076480   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:59:07.076493   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:59:07.080104   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:59:07.080763   25697 pod_ready.go:93] pod "kube-scheduler-ha-475401" in "kube-system" namespace has status "Ready":"True"
	I0912 21:59:07.080787   25697 pod_ready.go:82] duration metric: took 399.611316ms for pod "kube-scheduler-ha-475401" in "kube-system" namespace to be "Ready" ...
	I0912 21:59:07.080802   25697 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-475401-m02" in "kube-system" namespace to be "Ready" ...
	I0912 21:59:07.276802   25697 request.go:632] Waited for 195.91086ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-475401-m02
	I0912 21:59:07.276867   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-475401-m02
	I0912 21:59:07.276872   25697 round_trippers.go:469] Request Headers:
	I0912 21:59:07.276880   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:59:07.276884   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:59:07.280548   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:59:07.476526   25697 request.go:632] Waited for 195.363073ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:59:07.476584   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:59:07.476591   25697 round_trippers.go:469] Request Headers:
	I0912 21:59:07.476600   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:59:07.476604   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:59:07.479767   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:59:07.480265   25697 pod_ready.go:93] pod "kube-scheduler-ha-475401-m02" in "kube-system" namespace has status "Ready":"True"
	I0912 21:59:07.480281   25697 pod_ready.go:82] duration metric: took 399.471583ms for pod "kube-scheduler-ha-475401-m02" in "kube-system" namespace to be "Ready" ...
	I0912 21:59:07.480291   25697 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-475401-m03" in "kube-system" namespace to be "Ready" ...
	I0912 21:59:07.676470   25697 request.go:632] Waited for 196.120749ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-475401-m03
	I0912 21:59:07.676538   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-475401-m03
	I0912 21:59:07.676544   25697 round_trippers.go:469] Request Headers:
	I0912 21:59:07.676551   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:59:07.676556   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:59:07.679917   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:59:07.877062   25697 request.go:632] Waited for 196.383558ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.203:8443/api/v1/nodes/ha-475401-m03
	I0912 21:59:07.877130   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m03
	I0912 21:59:07.877138   25697 round_trippers.go:469] Request Headers:
	I0912 21:59:07.877150   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:59:07.877159   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:59:07.880654   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:59:07.881172   25697 pod_ready.go:93] pod "kube-scheduler-ha-475401-m03" in "kube-system" namespace has status "Ready":"True"
	I0912 21:59:07.881190   25697 pod_ready.go:82] duration metric: took 400.893675ms for pod "kube-scheduler-ha-475401-m03" in "kube-system" namespace to be "Ready" ...
	I0912 21:59:07.881202   25697 pod_ready.go:39] duration metric: took 5.20031508s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 21:59:07.881215   25697 api_server.go:52] waiting for apiserver process to appear ...
	I0912 21:59:07.881262   25697 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 21:59:07.900785   25697 api_server.go:72] duration metric: took 23.584524322s to wait for apiserver process to appear ...
	I0912 21:59:07.900817   25697 api_server.go:88] waiting for apiserver healthz status ...
	I0912 21:59:07.900840   25697 api_server.go:253] Checking apiserver healthz at https://192.168.39.203:8443/healthz ...
	I0912 21:59:07.907798   25697 api_server.go:279] https://192.168.39.203:8443/healthz returned 200:
	ok
	I0912 21:59:07.907875   25697 round_trippers.go:463] GET https://192.168.39.203:8443/version
	I0912 21:59:07.907884   25697 round_trippers.go:469] Request Headers:
	I0912 21:59:07.907896   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:59:07.907906   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:59:07.909010   25697 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0912 21:59:07.909071   25697 api_server.go:141] control plane version: v1.31.1
	I0912 21:59:07.909086   25697 api_server.go:131] duration metric: took 8.262894ms to wait for apiserver health ...
	I0912 21:59:07.909100   25697 system_pods.go:43] waiting for kube-system pods to appear ...
	I0912 21:59:08.076517   25697 request.go:632] Waited for 167.326131ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods
	I0912 21:59:08.076589   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods
	I0912 21:59:08.076606   25697 round_trippers.go:469] Request Headers:
	I0912 21:59:08.076618   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:59:08.076627   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:59:08.082348   25697 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0912 21:59:08.088554   25697 system_pods.go:59] 24 kube-system pods found
	I0912 21:59:08.088580   25697 system_pods.go:61] "coredns-7c65d6cfc9-pzsv8" [7acde6a5-dc08-4dda-89ef-07ed97df387e] Running
	I0912 21:59:08.088585   25697 system_pods.go:61] "coredns-7c65d6cfc9-xhdj7" [d964d6f0-d544-4cef-8151-08e5e1c76dce] Running
	I0912 21:59:08.088589   25697 system_pods.go:61] "etcd-ha-475401" [174b5dde-143c-4f15-abb4-2c8376d9c0aa] Running
	I0912 21:59:08.088592   25697 system_pods.go:61] "etcd-ha-475401-m02" [bac8cf55-1bf0-4696-9da2-3ca4c6bc9c54] Running
	I0912 21:59:08.088595   25697 system_pods.go:61] "etcd-ha-475401-m03" [8724e34b-d305-4597-bca2-c66fac3b4600] Running
	I0912 21:59:08.088598   25697 system_pods.go:61] "kindnet-bh5lg" [ee20dbb3-9e3e-4ad6-b3f2-1ec4523b46ca] Running
	I0912 21:59:08.088601   25697 system_pods.go:61] "kindnet-cbfm5" [e0f3daaf-250f-4614-bd8d-61e8fe544c1a] Running
	I0912 21:59:08.088605   25697 system_pods.go:61] "kindnet-k4q6l" [6a445756-2595-4d49-8aea-719cb0aa312c] Running
	I0912 21:59:08.088607   25697 system_pods.go:61] "kube-apiserver-ha-475401" [afb6df04-142d-4026-b4fb-2067bac9613d] Running
	I0912 21:59:08.088611   25697 system_pods.go:61] "kube-apiserver-ha-475401-m02" [ff70254a-357a-47d3-9733-3cded316a2b1] Running
	I0912 21:59:08.088613   25697 system_pods.go:61] "kube-apiserver-ha-475401-m03" [c5bb8141-1cf2-4c9d-9388-25ab86dcdb4f] Running
	I0912 21:59:08.088616   25697 system_pods.go:61] "kube-controller-manager-ha-475401" [bf286c1d-42de-4eb9-b235-30581692256b] Running
	I0912 21:59:08.088619   25697 system_pods.go:61] "kube-controller-manager-ha-475401-m02" [87d98823-b5aa-4c7e-835e-978465fec19d] Running
	I0912 21:59:08.088622   25697 system_pods.go:61] "kube-controller-manager-ha-475401-m03" [75509e84-31f0-4d4f-8fc9-17fa80060318] Running
	I0912 21:59:08.088625   25697 system_pods.go:61] "kube-proxy-4bk97" [a2af5486-4276-48a8-98ef-6fad7ae9976d] Running
	I0912 21:59:08.088628   25697 system_pods.go:61] "kube-proxy-5f8z5" [cbd76149-2de8-4f4b-9b54-b71cc0c60cab] Running
	I0912 21:59:08.088631   25697 system_pods.go:61] "kube-proxy-68h98" [f216ed62-cdc6-40e9-bb4d-e6962596eb3c] Running
	I0912 21:59:08.088636   25697 system_pods.go:61] "kube-scheduler-ha-475401" [3403b9e5-adb3-4028-aedd-1101d94a421c] Running
	I0912 21:59:08.088641   25697 system_pods.go:61] "kube-scheduler-ha-475401-m02" [fbe552c1-e8a7-4bb2-a1c9-c5d40f4ad77c] Running
	I0912 21:59:08.088644   25697 system_pods.go:61] "kube-scheduler-ha-475401-m03" [e9d051b7-cba8-4054-b17b-5e4fb66e2690] Running
	I0912 21:59:08.088647   25697 system_pods.go:61] "kube-vip-ha-475401" [775b4ded-905c-412e-9c92-5ce3ff148380] Running
	I0912 21:59:08.088652   25697 system_pods.go:61] "kube-vip-ha-475401-m02" [0f1626f2-f90c-4920-b726-b1d492c805d6] Running
	I0912 21:59:08.088655   25697 system_pods.go:61] "kube-vip-ha-475401-m03" [21ade4a0-8d41-4938-a0cf-19d917b591de] Running
	I0912 21:59:08.088660   25697 system_pods.go:61] "storage-provisioner" [7fc8738b-56e8-4024-afe7-b552c79dd3f2] Running
	I0912 21:59:08.088666   25697 system_pods.go:74] duration metric: took 179.557191ms to wait for pod list to return data ...
	I0912 21:59:08.088676   25697 default_sa.go:34] waiting for default service account to be created ...
	I0912 21:59:08.277093   25697 request.go:632] Waited for 188.347544ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.203:8443/api/v1/namespaces/default/serviceaccounts
	I0912 21:59:08.277147   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/namespaces/default/serviceaccounts
	I0912 21:59:08.277152   25697 round_trippers.go:469] Request Headers:
	I0912 21:59:08.277159   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:59:08.277164   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:59:08.281215   25697 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0912 21:59:08.281325   25697 default_sa.go:45] found service account: "default"
	I0912 21:59:08.281337   25697 default_sa.go:55] duration metric: took 192.654062ms for default service account to be created ...
	I0912 21:59:08.281345   25697 system_pods.go:116] waiting for k8s-apps to be running ...
	I0912 21:59:08.476798   25697 request.go:632] Waited for 195.373202ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods
	I0912 21:59:08.476849   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods
	I0912 21:59:08.476854   25697 round_trippers.go:469] Request Headers:
	I0912 21:59:08.476861   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:59:08.476865   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:59:08.486585   25697 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0912 21:59:08.493343   25697 system_pods.go:86] 24 kube-system pods found
	I0912 21:59:08.493375   25697 system_pods.go:89] "coredns-7c65d6cfc9-pzsv8" [7acde6a5-dc08-4dda-89ef-07ed97df387e] Running
	I0912 21:59:08.493381   25697 system_pods.go:89] "coredns-7c65d6cfc9-xhdj7" [d964d6f0-d544-4cef-8151-08e5e1c76dce] Running
	I0912 21:59:08.493385   25697 system_pods.go:89] "etcd-ha-475401" [174b5dde-143c-4f15-abb4-2c8376d9c0aa] Running
	I0912 21:59:08.493389   25697 system_pods.go:89] "etcd-ha-475401-m02" [bac8cf55-1bf0-4696-9da2-3ca4c6bc9c54] Running
	I0912 21:59:08.493392   25697 system_pods.go:89] "etcd-ha-475401-m03" [8724e34b-d305-4597-bca2-c66fac3b4600] Running
	I0912 21:59:08.493395   25697 system_pods.go:89] "kindnet-bh5lg" [ee20dbb3-9e3e-4ad6-b3f2-1ec4523b46ca] Running
	I0912 21:59:08.493399   25697 system_pods.go:89] "kindnet-cbfm5" [e0f3daaf-250f-4614-bd8d-61e8fe544c1a] Running
	I0912 21:59:08.493402   25697 system_pods.go:89] "kindnet-k4q6l" [6a445756-2595-4d49-8aea-719cb0aa312c] Running
	I0912 21:59:08.493405   25697 system_pods.go:89] "kube-apiserver-ha-475401" [afb6df04-142d-4026-b4fb-2067bac9613d] Running
	I0912 21:59:08.493409   25697 system_pods.go:89] "kube-apiserver-ha-475401-m02" [ff70254a-357a-47d3-9733-3cded316a2b1] Running
	I0912 21:59:08.493412   25697 system_pods.go:89] "kube-apiserver-ha-475401-m03" [c5bb8141-1cf2-4c9d-9388-25ab86dcdb4f] Running
	I0912 21:59:08.493416   25697 system_pods.go:89] "kube-controller-manager-ha-475401" [bf286c1d-42de-4eb9-b235-30581692256b] Running
	I0912 21:59:08.493420   25697 system_pods.go:89] "kube-controller-manager-ha-475401-m02" [87d98823-b5aa-4c7e-835e-978465fec19d] Running
	I0912 21:59:08.493423   25697 system_pods.go:89] "kube-controller-manager-ha-475401-m03" [75509e84-31f0-4d4f-8fc9-17fa80060318] Running
	I0912 21:59:08.493426   25697 system_pods.go:89] "kube-proxy-4bk97" [a2af5486-4276-48a8-98ef-6fad7ae9976d] Running
	I0912 21:59:08.493429   25697 system_pods.go:89] "kube-proxy-5f8z5" [cbd76149-2de8-4f4b-9b54-b71cc0c60cab] Running
	I0912 21:59:08.493435   25697 system_pods.go:89] "kube-proxy-68h98" [f216ed62-cdc6-40e9-bb4d-e6962596eb3c] Running
	I0912 21:59:08.493440   25697 system_pods.go:89] "kube-scheduler-ha-475401" [3403b9e5-adb3-4028-aedd-1101d94a421c] Running
	I0912 21:59:08.493443   25697 system_pods.go:89] "kube-scheduler-ha-475401-m02" [fbe552c1-e8a7-4bb2-a1c9-c5d40f4ad77c] Running
	I0912 21:59:08.493446   25697 system_pods.go:89] "kube-scheduler-ha-475401-m03" [e9d051b7-cba8-4054-b17b-5e4fb66e2690] Running
	I0912 21:59:08.493449   25697 system_pods.go:89] "kube-vip-ha-475401" [775b4ded-905c-412e-9c92-5ce3ff148380] Running
	I0912 21:59:08.493452   25697 system_pods.go:89] "kube-vip-ha-475401-m02" [0f1626f2-f90c-4920-b726-b1d492c805d6] Running
	I0912 21:59:08.493454   25697 system_pods.go:89] "kube-vip-ha-475401-m03" [21ade4a0-8d41-4938-a0cf-19d917b591de] Running
	I0912 21:59:08.493457   25697 system_pods.go:89] "storage-provisioner" [7fc8738b-56e8-4024-afe7-b552c79dd3f2] Running
	I0912 21:59:08.493464   25697 system_pods.go:126] duration metric: took 212.113521ms to wait for k8s-apps to be running ...
	I0912 21:59:08.493473   25697 system_svc.go:44] waiting for kubelet service to be running ....
	I0912 21:59:08.493523   25697 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 21:59:08.511997   25697 system_svc.go:56] duration metric: took 18.515662ms WaitForService to wait for kubelet
	I0912 21:59:08.512026   25697 kubeadm.go:582] duration metric: took 24.195769965s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 21:59:08.512052   25697 node_conditions.go:102] verifying NodePressure condition ...
	I0912 21:59:08.676468   25697 request.go:632] Waited for 164.35084ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.203:8443/api/v1/nodes
	I0912 21:59:08.676536   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes
	I0912 21:59:08.676557   25697 round_trippers.go:469] Request Headers:
	I0912 21:59:08.676572   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:59:08.676579   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:59:08.680857   25697 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0912 21:59:08.682202   25697 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0912 21:59:08.682227   25697 node_conditions.go:123] node cpu capacity is 2
	I0912 21:59:08.682237   25697 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0912 21:59:08.682240   25697 node_conditions.go:123] node cpu capacity is 2
	I0912 21:59:08.682243   25697 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0912 21:59:08.682246   25697 node_conditions.go:123] node cpu capacity is 2
	I0912 21:59:08.682250   25697 node_conditions.go:105] duration metric: took 170.192806ms to run NodePressure ...
	I0912 21:59:08.682261   25697 start.go:241] waiting for startup goroutines ...
	I0912 21:59:08.682280   25697 start.go:255] writing updated cluster config ...
	I0912 21:59:08.682550   25697 ssh_runner.go:195] Run: rm -f paused
	I0912 21:59:08.733681   25697 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0912 21:59:08.736942   25697 out.go:177] * Done! kubectl is now configured to use "ha-475401" cluster and "default" namespace by default
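	Note: the readiness checks logged above (kube-system pods running, default service account present, kubelet service active, NodePressure/node capacity) can be spot-checked by hand against the same cluster. The commands below are a minimal sketch, assuming the kubeconfig context and minikube profile are both named "ha-475401" as reported in the final log line; they are illustrative only and are not part of the test harness.
	
	    # list the kube-system pods the waiter polls for
	    kubectl --context ha-475401 get pods -n kube-system
	    # confirm the default service account exists
	    kubectl --context ha-475401 get serviceaccount default -n default
	    # check the kubelet unit inside the VM (mirrors the ssh_runner systemctl call above)
	    minikube -p ha-475401 ssh -- sudo systemctl is-active kubelet
	    # inspect node conditions and capacity used for the NodePressure check
	    kubectl --context ha-475401 describe nodes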
	
	
	==> CRI-O <==
	Sep 12 22:02:45 ha-475401 crio[656]: time="2024-09-12 22:02:45.868849113Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=22fe32a6-57ec-40b8-984b-b9540966163b name=/runtime.v1.RuntimeService/Version
	Sep 12 22:02:45 ha-475401 crio[656]: time="2024-09-12 22:02:45.869832792Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=da763c97-9023-4529-a66b-e7e92827b44b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 22:02:45 ha-475401 crio[656]: time="2024-09-12 22:02:45.870331018Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726178565870305584,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=da763c97-9023-4529-a66b-e7e92827b44b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 22:02:45 ha-475401 crio[656]: time="2024-09-12 22:02:45.870862735Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=eb9e8459-cd5b-4888-bacd-742fb0f3680e name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 22:02:45 ha-475401 crio[656]: time="2024-09-12 22:02:45.870939219Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=eb9e8459-cd5b-4888-bacd-742fb0f3680e name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 22:02:45 ha-475401 crio[656]: time="2024-09-12 22:02:45.871228048Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:607e14e475ce353a0c9320c836a95978697f03e1195ee9311626f95f6748ce11,PodSandboxId:7fe4fd6a828e2ed0ea467efedd36329caff9bec0107156b6b5ad3e033d3d6ee2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726178353035924958,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-l2hdm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8ab651ae-e8a0-438a-8bf6-4462c8304466,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b36db608ba8cd77ee7893c00e7e8801981eb2c1fa6b48980fbc8a3dea7306e4,PodSandboxId:8b265e5bc94933908af2b3710bd8e4b4b8b5b8b26929977b5d1c91118fb80c39,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726178214407187415,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xhdj7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d964d6f0-d544-4cef-8151-08e5e1c76dce,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f56ac218b5509f77f667fc3bdb07a21ae743c376589c8833f500d1addfc99f73,PodSandboxId:2fdeb0043962218a23323f08bd2bce3402618bc908240f83e1f614c312ae6edd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726178214365699631,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-pzsv8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
7acde6a5-dc08-4dda-89ef-07ed97df387e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cb8597aada82577ac9a68667aa703860b73cd7a7d2608f2f1710afeea8755bc,PodSandboxId:66384e83c1a7ece3371a965ab3ba97a9715da38bb436ed7d556b4dfcb0e4c6fc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1726178213383885747,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fc8738b-56e8-4024-afe7-b552c79dd3f2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38d31aa5dc4105508066466c3ec1760275d6df1b5a41215ea8624bdecb7f44e8,PodSandboxId:ef4f45d37668b0d37bad9a63974b5000a180e5d1f5e3234d34691005d5d78c8e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17261782
01877218074,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cbfm5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f3daaf-250f-4614-bd8d-61e8fe544c1a,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0891cec467fda03cc10ec8bf4db216ce7cae379bd093917e008b90cc96d90c49,PodSandboxId:d58e93f3f447d46fb0688a7d4ee4eb52c19c0b36bde29b81c50d0a1c5e3d700b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726178201594663883,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4bk97,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2af5486-4276-48a8-98ef-6fad7ae9976d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9d65acd179a43f2673f87f9d146fe7e0cf6a8a26a4bf7c898a5ca3b30b2f939,PodSandboxId:b023c361d20d02f35081a9b9e5203352210f95fc28ab966cfc29bafeb1aaa513,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726178192961279069,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 352a7403576a810ca909a82e8b665d77,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df088d2d1a92a20915c4eb7c56ddd1b9b1567da26947b41d293391935823e69f,PodSandboxId:98ca9fd003ad441e2b5d9efc189c2704700ac511f3b30e63ae59bcbfb23c084c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726178190341555582,Labels:map[string]string{io.kubernetes.container.name: kub
e-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a77994c747e48492b9028f572619aa8,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4cfa11556cf34ac2b5bb874421c929c31a0f68b70515fa122f1c3acc67b601f4,PodSandboxId:aa3f11d134c2cbeca4f824ca6bc6a108e48bfaed54aa4e31af088ec691cb4038,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726178190304329774,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 980ac58ccfb719847553bfe344364a50,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5008665ceb8c09f53ef64d7621c9910a82d94cc7e8fb4c534ff1065d8b9dc1a9,PodSandboxId:e980e3980d971549e1c17972cb82f745cca7c01aad06c39efaf3dfb9b5ec0cd9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726178190273726647,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name:
etcd-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 456eb783a38fcb8ea8f7852ac4b9e481,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17a4293d12cac1604693dea12017381d2df6f0c1ced577d1d846d40e66520818,PodSandboxId:17b7717a92942308ddac497161435755ad7b877133e7375a315c4f572e019c47,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726178190295080607,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-475401,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc71727dab4c45bcae218296d690a83a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=eb9e8459-cd5b-4888-bacd-742fb0f3680e name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 22:02:45 ha-475401 crio[656]: time="2024-09-12 22:02:45.912645733Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3ccca6e8-4c65-4be6-b953-b8601a5fcf04 name=/runtime.v1.RuntimeService/Version
	Sep 12 22:02:45 ha-475401 crio[656]: time="2024-09-12 22:02:45.912737968Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3ccca6e8-4c65-4be6-b953-b8601a5fcf04 name=/runtime.v1.RuntimeService/Version
	Sep 12 22:02:45 ha-475401 crio[656]: time="2024-09-12 22:02:45.913859966Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7064e8b4-a5fc-4349-af49-13588a747cd1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 22:02:45 ha-475401 crio[656]: time="2024-09-12 22:02:45.914349928Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726178565914325447,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7064e8b4-a5fc-4349-af49-13588a747cd1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 22:02:45 ha-475401 crio[656]: time="2024-09-12 22:02:45.915204312Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4e766f12-6bff-461b-822f-4b96cea3707a name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 22:02:45 ha-475401 crio[656]: time="2024-09-12 22:02:45.915267226Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4e766f12-6bff-461b-822f-4b96cea3707a name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 22:02:45 ha-475401 crio[656]: time="2024-09-12 22:02:45.915505233Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:607e14e475ce353a0c9320c836a95978697f03e1195ee9311626f95f6748ce11,PodSandboxId:7fe4fd6a828e2ed0ea467efedd36329caff9bec0107156b6b5ad3e033d3d6ee2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726178353035924958,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-l2hdm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8ab651ae-e8a0-438a-8bf6-4462c8304466,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b36db608ba8cd77ee7893c00e7e8801981eb2c1fa6b48980fbc8a3dea7306e4,PodSandboxId:8b265e5bc94933908af2b3710bd8e4b4b8b5b8b26929977b5d1c91118fb80c39,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726178214407187415,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xhdj7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d964d6f0-d544-4cef-8151-08e5e1c76dce,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f56ac218b5509f77f667fc3bdb07a21ae743c376589c8833f500d1addfc99f73,PodSandboxId:2fdeb0043962218a23323f08bd2bce3402618bc908240f83e1f614c312ae6edd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726178214365699631,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-pzsv8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
7acde6a5-dc08-4dda-89ef-07ed97df387e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cb8597aada82577ac9a68667aa703860b73cd7a7d2608f2f1710afeea8755bc,PodSandboxId:66384e83c1a7ece3371a965ab3ba97a9715da38bb436ed7d556b4dfcb0e4c6fc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1726178213383885747,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fc8738b-56e8-4024-afe7-b552c79dd3f2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38d31aa5dc4105508066466c3ec1760275d6df1b5a41215ea8624bdecb7f44e8,PodSandboxId:ef4f45d37668b0d37bad9a63974b5000a180e5d1f5e3234d34691005d5d78c8e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17261782
01877218074,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cbfm5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f3daaf-250f-4614-bd8d-61e8fe544c1a,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0891cec467fda03cc10ec8bf4db216ce7cae379bd093917e008b90cc96d90c49,PodSandboxId:d58e93f3f447d46fb0688a7d4ee4eb52c19c0b36bde29b81c50d0a1c5e3d700b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726178201594663883,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4bk97,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2af5486-4276-48a8-98ef-6fad7ae9976d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9d65acd179a43f2673f87f9d146fe7e0cf6a8a26a4bf7c898a5ca3b30b2f939,PodSandboxId:b023c361d20d02f35081a9b9e5203352210f95fc28ab966cfc29bafeb1aaa513,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726178192961279069,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 352a7403576a810ca909a82e8b665d77,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df088d2d1a92a20915c4eb7c56ddd1b9b1567da26947b41d293391935823e69f,PodSandboxId:98ca9fd003ad441e2b5d9efc189c2704700ac511f3b30e63ae59bcbfb23c084c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726178190341555582,Labels:map[string]string{io.kubernetes.container.name: kub
e-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a77994c747e48492b9028f572619aa8,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4cfa11556cf34ac2b5bb874421c929c31a0f68b70515fa122f1c3acc67b601f4,PodSandboxId:aa3f11d134c2cbeca4f824ca6bc6a108e48bfaed54aa4e31af088ec691cb4038,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726178190304329774,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 980ac58ccfb719847553bfe344364a50,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5008665ceb8c09f53ef64d7621c9910a82d94cc7e8fb4c534ff1065d8b9dc1a9,PodSandboxId:e980e3980d971549e1c17972cb82f745cca7c01aad06c39efaf3dfb9b5ec0cd9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726178190273726647,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name:
etcd-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 456eb783a38fcb8ea8f7852ac4b9e481,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17a4293d12cac1604693dea12017381d2df6f0c1ced577d1d846d40e66520818,PodSandboxId:17b7717a92942308ddac497161435755ad7b877133e7375a315c4f572e019c47,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726178190295080607,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-475401,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc71727dab4c45bcae218296d690a83a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4e766f12-6bff-461b-822f-4b96cea3707a name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 22:02:45 ha-475401 crio[656]: time="2024-09-12 22:02:45.947984764Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=fac875c5-65f2-4c33-be3d-014920ea33c7 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 12 22:02:45 ha-475401 crio[656]: time="2024-09-12 22:02:45.948402681Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:7fe4fd6a828e2ed0ea467efedd36329caff9bec0107156b6b5ad3e033d3d6ee2,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-l2hdm,Uid:8ab651ae-e8a0-438a-8bf6-4462c8304466,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726178349973174937,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-l2hdm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8ab651ae-e8a0-438a-8bf6-4462c8304466,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-12T21:59:09.652945962Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8b265e5bc94933908af2b3710bd8e4b4b8b5b8b26929977b5d1c91118fb80c39,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-xhdj7,Uid:d964d6f0-d544-4cef-8151-08e5e1c76dce,Namespace:kube-system,Attempt:0,},State:S
ANDBOX_READY,CreatedAt:1726178214172601414,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-xhdj7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d964d6f0-d544-4cef-8151-08e5e1c76dce,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-12T21:56:52.965572808Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2fdeb0043962218a23323f08bd2bce3402618bc908240f83e1f614c312ae6edd,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-pzsv8,Uid:7acde6a5-dc08-4dda-89ef-07ed97df387e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726178214165828617,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-pzsv8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7acde6a5-dc08-4dda-89ef-07ed97df387e,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2
024-09-12T21:56:52.959466832Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:66384e83c1a7ece3371a965ab3ba97a9715da38bb436ed7d556b4dfcb0e4c6fc,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:7fc8738b-56e8-4024-afe7-b552c79dd3f2,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726178213277919991,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fc8738b-56e8-4024-afe7-b552c79dd3f2,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"im
age\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-12T21:56:52.968730435Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ef4f45d37668b0d37bad9a63974b5000a180e5d1f5e3234d34691005d5d78c8e,Metadata:&PodSandboxMetadata{Name:kindnet-cbfm5,Uid:e0f3daaf-250f-4614-bd8d-61e8fe544c1a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726178201506933282,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-cbfm5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f3daaf-250f-4614-bd8d-61e8fe544c1a,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotati
ons:map[string]string{kubernetes.io/config.seen: 2024-09-12T21:56:41.193359736Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d58e93f3f447d46fb0688a7d4ee4eb52c19c0b36bde29b81c50d0a1c5e3d700b,Metadata:&PodSandboxMetadata{Name:kube-proxy-4bk97,Uid:a2af5486-4276-48a8-98ef-6fad7ae9976d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726178201480986781,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-4bk97,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2af5486-4276-48a8-98ef-6fad7ae9976d,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-12T21:56:41.169316322Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:98ca9fd003ad441e2b5d9efc189c2704700ac511f3b30e63ae59bcbfb23c084c,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-475401,Uid:6a77994c747e48492b9028f572619aa8,Namespace:kube-system,Attempt:0
,},State:SANDBOX_READY,CreatedAt:1726178190109560988,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a77994c747e48492b9028f572619aa8,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.203:8443,kubernetes.io/config.hash: 6a77994c747e48492b9028f572619aa8,kubernetes.io/config.seen: 2024-09-12T21:56:29.620495627Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b023c361d20d02f35081a9b9e5203352210f95fc28ab966cfc29bafeb1aaa513,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-475401,Uid:352a7403576a810ca909a82e8b665d77,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726178190107385091,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 352a7403576
a810ca909a82e8b665d77,},Annotations:map[string]string{kubernetes.io/config.hash: 352a7403576a810ca909a82e8b665d77,kubernetes.io/config.seen: 2024-09-12T21:56:29.620492514Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e980e3980d971549e1c17972cb82f745cca7c01aad06c39efaf3dfb9b5ec0cd9,Metadata:&PodSandboxMetadata{Name:etcd-ha-475401,Uid:456eb783a38fcb8ea8f7852ac4b9e481,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726178190103684920,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 456eb783a38fcb8ea8f7852ac4b9e481,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.203:2379,kubernetes.io/config.hash: 456eb783a38fcb8ea8f7852ac4b9e481,kubernetes.io/config.seen: 2024-09-12T21:56:29.620494346Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:aa3f11d134c2cbeca4f8
24ca6bc6a108e48bfaed54aa4e31af088ec691cb4038,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-475401,Uid:980ac58ccfb719847553bfe344364a50,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726178190089451293,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 980ac58ccfb719847553bfe344364a50,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 980ac58ccfb719847553bfe344364a50,kubernetes.io/config.seen: 2024-09-12T21:56:29.620486106Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:17b7717a92942308ddac497161435755ad7b877133e7375a315c4f572e019c47,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-475401,Uid:dc71727dab4c45bcae218296d690a83a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726178190085057134,Labels:map[string]string{component: kube-scheduler,io.kub
ernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc71727dab4c45bcae218296d690a83a,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: dc71727dab4c45bcae218296d690a83a,kubernetes.io/config.seen: 2024-09-12T21:56:29.620491290Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=fac875c5-65f2-4c33-be3d-014920ea33c7 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 12 22:02:45 ha-475401 crio[656]: time="2024-09-12 22:02:45.949534849Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a82723ec-c94b-4241-b989-812c20133690 name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 22:02:45 ha-475401 crio[656]: time="2024-09-12 22:02:45.949597955Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a82723ec-c94b-4241-b989-812c20133690 name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 22:02:45 ha-475401 crio[656]: time="2024-09-12 22:02:45.950492567Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:607e14e475ce353a0c9320c836a95978697f03e1195ee9311626f95f6748ce11,PodSandboxId:7fe4fd6a828e2ed0ea467efedd36329caff9bec0107156b6b5ad3e033d3d6ee2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726178353035924958,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-l2hdm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8ab651ae-e8a0-438a-8bf6-4462c8304466,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b36db608ba8cd77ee7893c00e7e8801981eb2c1fa6b48980fbc8a3dea7306e4,PodSandboxId:8b265e5bc94933908af2b3710bd8e4b4b8b5b8b26929977b5d1c91118fb80c39,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726178214407187415,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xhdj7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d964d6f0-d544-4cef-8151-08e5e1c76dce,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f56ac218b5509f77f667fc3bdb07a21ae743c376589c8833f500d1addfc99f73,PodSandboxId:2fdeb0043962218a23323f08bd2bce3402618bc908240f83e1f614c312ae6edd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726178214365699631,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-pzsv8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
7acde6a5-dc08-4dda-89ef-07ed97df387e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cb8597aada82577ac9a68667aa703860b73cd7a7d2608f2f1710afeea8755bc,PodSandboxId:66384e83c1a7ece3371a965ab3ba97a9715da38bb436ed7d556b4dfcb0e4c6fc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1726178213383885747,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fc8738b-56e8-4024-afe7-b552c79dd3f2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38d31aa5dc4105508066466c3ec1760275d6df1b5a41215ea8624bdecb7f44e8,PodSandboxId:ef4f45d37668b0d37bad9a63974b5000a180e5d1f5e3234d34691005d5d78c8e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17261782
01877218074,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cbfm5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f3daaf-250f-4614-bd8d-61e8fe544c1a,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0891cec467fda03cc10ec8bf4db216ce7cae379bd093917e008b90cc96d90c49,PodSandboxId:d58e93f3f447d46fb0688a7d4ee4eb52c19c0b36bde29b81c50d0a1c5e3d700b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726178201594663883,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4bk97,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2af5486-4276-48a8-98ef-6fad7ae9976d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9d65acd179a43f2673f87f9d146fe7e0cf6a8a26a4bf7c898a5ca3b30b2f939,PodSandboxId:b023c361d20d02f35081a9b9e5203352210f95fc28ab966cfc29bafeb1aaa513,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726178192961279069,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 352a7403576a810ca909a82e8b665d77,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df088d2d1a92a20915c4eb7c56ddd1b9b1567da26947b41d293391935823e69f,PodSandboxId:98ca9fd003ad441e2b5d9efc189c2704700ac511f3b30e63ae59bcbfb23c084c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726178190341555582,Labels:map[string]string{io.kubernetes.container.name: kub
e-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a77994c747e48492b9028f572619aa8,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4cfa11556cf34ac2b5bb874421c929c31a0f68b70515fa122f1c3acc67b601f4,PodSandboxId:aa3f11d134c2cbeca4f824ca6bc6a108e48bfaed54aa4e31af088ec691cb4038,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726178190304329774,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 980ac58ccfb719847553bfe344364a50,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5008665ceb8c09f53ef64d7621c9910a82d94cc7e8fb4c534ff1065d8b9dc1a9,PodSandboxId:e980e3980d971549e1c17972cb82f745cca7c01aad06c39efaf3dfb9b5ec0cd9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726178190273726647,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name:
etcd-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 456eb783a38fcb8ea8f7852ac4b9e481,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17a4293d12cac1604693dea12017381d2df6f0c1ced577d1d846d40e66520818,PodSandboxId:17b7717a92942308ddac497161435755ad7b877133e7375a315c4f572e019c47,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726178190295080607,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-475401,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc71727dab4c45bcae218296d690a83a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a82723ec-c94b-4241-b989-812c20133690 name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 22:02:45 ha-475401 crio[656]: time="2024-09-12 22:02:45.962142424Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a40b73f0-d25d-4e8e-8c02-d0acc7f7c8fa name=/runtime.v1.RuntimeService/Version
	Sep 12 22:02:45 ha-475401 crio[656]: time="2024-09-12 22:02:45.962272967Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a40b73f0-d25d-4e8e-8c02-d0acc7f7c8fa name=/runtime.v1.RuntimeService/Version
	Sep 12 22:02:45 ha-475401 crio[656]: time="2024-09-12 22:02:45.964295574Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6cb0f64c-3233-4e12-8699-5648de4a8320 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 22:02:45 ha-475401 crio[656]: time="2024-09-12 22:02:45.965243708Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726178565965163640,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6cb0f64c-3233-4e12-8699-5648de4a8320 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 22:02:45 ha-475401 crio[656]: time="2024-09-12 22:02:45.966074002Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=736e35b3-b101-446e-b6dd-58aa08aa7680 name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 22:02:45 ha-475401 crio[656]: time="2024-09-12 22:02:45.966328272Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=736e35b3-b101-446e-b6dd-58aa08aa7680 name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 22:02:45 ha-475401 crio[656]: time="2024-09-12 22:02:45.967296464Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:607e14e475ce353a0c9320c836a95978697f03e1195ee9311626f95f6748ce11,PodSandboxId:7fe4fd6a828e2ed0ea467efedd36329caff9bec0107156b6b5ad3e033d3d6ee2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726178353035924958,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-l2hdm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8ab651ae-e8a0-438a-8bf6-4462c8304466,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b36db608ba8cd77ee7893c00e7e8801981eb2c1fa6b48980fbc8a3dea7306e4,PodSandboxId:8b265e5bc94933908af2b3710bd8e4b4b8b5b8b26929977b5d1c91118fb80c39,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726178214407187415,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xhdj7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d964d6f0-d544-4cef-8151-08e5e1c76dce,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f56ac218b5509f77f667fc3bdb07a21ae743c376589c8833f500d1addfc99f73,PodSandboxId:2fdeb0043962218a23323f08bd2bce3402618bc908240f83e1f614c312ae6edd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726178214365699631,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-pzsv8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
7acde6a5-dc08-4dda-89ef-07ed97df387e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cb8597aada82577ac9a68667aa703860b73cd7a7d2608f2f1710afeea8755bc,PodSandboxId:66384e83c1a7ece3371a965ab3ba97a9715da38bb436ed7d556b4dfcb0e4c6fc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1726178213383885747,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fc8738b-56e8-4024-afe7-b552c79dd3f2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38d31aa5dc4105508066466c3ec1760275d6df1b5a41215ea8624bdecb7f44e8,PodSandboxId:ef4f45d37668b0d37bad9a63974b5000a180e5d1f5e3234d34691005d5d78c8e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17261782
01877218074,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cbfm5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f3daaf-250f-4614-bd8d-61e8fe544c1a,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0891cec467fda03cc10ec8bf4db216ce7cae379bd093917e008b90cc96d90c49,PodSandboxId:d58e93f3f447d46fb0688a7d4ee4eb52c19c0b36bde29b81c50d0a1c5e3d700b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726178201594663883,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4bk97,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2af5486-4276-48a8-98ef-6fad7ae9976d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9d65acd179a43f2673f87f9d146fe7e0cf6a8a26a4bf7c898a5ca3b30b2f939,PodSandboxId:b023c361d20d02f35081a9b9e5203352210f95fc28ab966cfc29bafeb1aaa513,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726178192961279069,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 352a7403576a810ca909a82e8b665d77,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df088d2d1a92a20915c4eb7c56ddd1b9b1567da26947b41d293391935823e69f,PodSandboxId:98ca9fd003ad441e2b5d9efc189c2704700ac511f3b30e63ae59bcbfb23c084c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726178190341555582,Labels:map[string]string{io.kubernetes.container.name: kub
e-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a77994c747e48492b9028f572619aa8,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4cfa11556cf34ac2b5bb874421c929c31a0f68b70515fa122f1c3acc67b601f4,PodSandboxId:aa3f11d134c2cbeca4f824ca6bc6a108e48bfaed54aa4e31af088ec691cb4038,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726178190304329774,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 980ac58ccfb719847553bfe344364a50,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5008665ceb8c09f53ef64d7621c9910a82d94cc7e8fb4c534ff1065d8b9dc1a9,PodSandboxId:e980e3980d971549e1c17972cb82f745cca7c01aad06c39efaf3dfb9b5ec0cd9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726178190273726647,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name:
etcd-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 456eb783a38fcb8ea8f7852ac4b9e481,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17a4293d12cac1604693dea12017381d2df6f0c1ced577d1d846d40e66520818,PodSandboxId:17b7717a92942308ddac497161435755ad7b877133e7375a315c4f572e019c47,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726178190295080607,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-475401,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc71727dab4c45bcae218296d690a83a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=736e35b3-b101-446e-b6dd-58aa08aa7680 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	607e14e475ce3       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   7fe4fd6a828e2       busybox-7dff88458-l2hdm
	9b36db608ba8c       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   8b265e5bc9493       coredns-7c65d6cfc9-xhdj7
	f56ac218b5509       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   2fdeb00439622       coredns-7c65d6cfc9-pzsv8
	7cb8597aada82       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       0                   66384e83c1a7e       storage-provisioner
	38d31aa5dc410       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      6 minutes ago       Running             kindnet-cni               0                   ef4f45d37668b       kindnet-cbfm5
	0891cec467fda       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      6 minutes ago       Running             kube-proxy                0                   d58e93f3f447d       kube-proxy-4bk97
	e9d65acd179a4       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago       Running             kube-vip                  0                   b023c361d20d0       kube-vip-ha-475401
	df088d2d1a92a       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      6 minutes ago       Running             kube-apiserver            0                   98ca9fd003ad4       kube-apiserver-ha-475401
	4cfa11556cf34       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      6 minutes ago       Running             kube-controller-manager   0                   aa3f11d134c2c       kube-controller-manager-ha-475401
	17a4293d12cac       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      6 minutes ago       Running             kube-scheduler            0                   17b7717a92942       kube-scheduler-ha-475401
	5008665ceb8c0       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   e980e3980d971       etcd-ha-475401
	
	
	==> coredns [9b36db608ba8cd77ee7893c00e7e8801981eb2c1fa6b48980fbc8a3dea7306e4] <==
	[INFO] 10.244.1.2:38411 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001653266s
	[INFO] 10.244.3.2:56375 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004343685s
	[INFO] 10.244.3.2:54377 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000172651s
	[INFO] 10.244.3.2:43180 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000159789s
	[INFO] 10.244.0.4:37709 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00025493s
	[INFO] 10.244.0.4:58355 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001670657s
	[INFO] 10.244.0.4:38422 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000110468s
	[INFO] 10.244.1.2:46631 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000172109s
	[INFO] 10.244.1.2:34300 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000148188s
	[INFO] 10.244.1.2:48603 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001490904s
	[INFO] 10.244.1.2:53797 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000095174s
	[INFO] 10.244.3.2:58169 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000290075s
	[INFO] 10.244.3.2:32925 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000114361s
	[INFO] 10.244.0.4:36730 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135132s
	[INFO] 10.244.0.4:34478 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000076546s
	[INFO] 10.244.1.2:55703 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000157241s
	[INFO] 10.244.1.2:60121 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000228732s
	[INFO] 10.244.1.2:38242 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000131949s
	[INFO] 10.244.3.2:38185 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132157s
	[INFO] 10.244.3.2:36830 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000264113s
	[INFO] 10.244.3.2:49645 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000196302s
	[INFO] 10.244.0.4:60935 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000119291s
	[INFO] 10.244.1.2:60943 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000082071s
	[INFO] 10.244.1.2:49207 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00009839s
	[INFO] 10.244.1.2:41020 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000060198s
	
	
	==> coredns [f56ac218b5509f77f667fc3bdb07a21ae743c376589c8833f500d1addfc99f73] <==
	[INFO] 10.244.1.2:46592 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000089614s
	[INFO] 10.244.3.2:46869 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000163193s
	[INFO] 10.244.3.2:43702 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000341814s
	[INFO] 10.244.3.2:48838 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.007572196s
	[INFO] 10.244.3.2:58405 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000145303s
	[INFO] 10.244.3.2:57228 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000229422s
	[INFO] 10.244.0.4:42574 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00013812s
	[INFO] 10.244.0.4:39901 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001988121s
	[INFO] 10.244.0.4:50914 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00026063s
	[INFO] 10.244.0.4:38018 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000084673s
	[INFO] 10.244.0.4:49421 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000097844s
	[INFO] 10.244.1.2:35174 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000112144s
	[INFO] 10.244.1.2:45641 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001742655s
	[INFO] 10.244.1.2:42943 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000126184s
	[INFO] 10.244.1.2:48539 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000090774s
	[INFO] 10.244.3.2:42645 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115681s
	[INFO] 10.244.3.2:42854 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000129882s
	[INFO] 10.244.0.4:47863 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000135193s
	[INFO] 10.244.0.4:54893 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000107279s
	[INFO] 10.244.1.2:50095 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000200409s
	[INFO] 10.244.3.2:36127 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000178104s
	[INFO] 10.244.0.4:56439 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000119423s
	[INFO] 10.244.0.4:57332 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000122479s
	[INFO] 10.244.0.4:54257 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000113812s
	[INFO] 10.244.1.2:47781 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000122756s
	
	
	==> describe nodes <==
	Name:               ha-475401
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-475401
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f6bc674a17941874d4e5b792b09c1791d30622b8
	                    minikube.k8s.io/name=ha-475401
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_12T21_56_37_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 12 Sep 2024 21:56:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-475401
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 12 Sep 2024 22:02:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 12 Sep 2024 21:59:40 +0000   Thu, 12 Sep 2024 21:56:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 12 Sep 2024 21:59:40 +0000   Thu, 12 Sep 2024 21:56:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 12 Sep 2024 21:59:40 +0000   Thu, 12 Sep 2024 21:56:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 12 Sep 2024 21:59:40 +0000   Thu, 12 Sep 2024 21:56:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.203
	  Hostname:    ha-475401
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a21f28b923154b09a761fb2715e95e75
	  System UUID:                a21f28b9-2315-4b09-a761-fb2715e95e75
	  Boot ID:                    719d19bb-1949-4b62-be49-e032ba422c36
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-l2hdm              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m37s
	  kube-system                 coredns-7c65d6cfc9-pzsv8             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m5s
	  kube-system                 coredns-7c65d6cfc9-xhdj7             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m5s
	  kube-system                 etcd-ha-475401                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m12s
	  kube-system                 kindnet-cbfm5                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m5s
	  kube-system                 kube-apiserver-ha-475401             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m10s
	  kube-system                 kube-controller-manager-ha-475401    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m10s
	  kube-system                 kube-proxy-4bk97                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m5s
	  kube-system                 kube-scheduler-ha-475401             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m10s
	  kube-system                 kube-vip-ha-475401                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m12s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m4s                   kube-proxy       
	  Normal  NodeHasSufficientPID     6m17s (x3 over 6m17s)  kubelet          Node ha-475401 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 6m17s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m17s (x4 over 6m17s)  kubelet          Node ha-475401 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m17s (x3 over 6m17s)  kubelet          Node ha-475401 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 6m10s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m10s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m10s                  kubelet          Node ha-475401 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m10s                  kubelet          Node ha-475401 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m10s                  kubelet          Node ha-475401 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m6s                   node-controller  Node ha-475401 event: Registered Node ha-475401 in Controller
	  Normal  NodeReady                5m54s                  kubelet          Node ha-475401 status is now: NodeReady
	  Normal  RegisteredNode           5m12s                  node-controller  Node ha-475401 event: Registered Node ha-475401 in Controller
	  Normal  RegisteredNode           3m57s                  node-controller  Node ha-475401 event: Registered Node ha-475401 in Controller
	
	
	Name:               ha-475401-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-475401-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f6bc674a17941874d4e5b792b09c1791d30622b8
	                    minikube.k8s.io/name=ha-475401
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_12T21_57_29_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 12 Sep 2024 21:57:26 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-475401-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 12 Sep 2024 22:00:20 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Thu, 12 Sep 2024 21:59:29 +0000   Thu, 12 Sep 2024 22:01:00 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Thu, 12 Sep 2024 21:59:29 +0000   Thu, 12 Sep 2024 22:01:00 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Thu, 12 Sep 2024 21:59:29 +0000   Thu, 12 Sep 2024 22:01:00 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Thu, 12 Sep 2024 21:59:29 +0000   Thu, 12 Sep 2024 22:01:00 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.222
	  Hostname:    ha-475401-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 5e177a4c02d5494a80aacc759f5d8434
	  System UUID:                5e177a4c-02d5-494a-80aa-cc759f5d8434
	  Boot ID:                    f35a4238-f901-4ec4-9e96-2614c319a75c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-t7gjx                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m37s
	  kube-system                 etcd-ha-475401-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m18s
	  kube-system                 kindnet-k4q6l                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m20s
	  kube-system                 kube-apiserver-ha-475401-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m19s
	  kube-system                 kube-controller-manager-ha-475401-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m14s
	  kube-system                 kube-proxy-68h98                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m20s
	  kube-system                 kube-scheduler-ha-475401-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m18s
	  kube-system                 kube-vip-ha-475401-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m16s                  kube-proxy       
	  Normal  CIDRAssignmentFailed     5m20s                  cidrAllocator    Node ha-475401-m02 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientMemory  5m20s (x8 over 5m20s)  kubelet          Node ha-475401-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m20s (x8 over 5m20s)  kubelet          Node ha-475401-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m20s (x7 over 5m20s)  kubelet          Node ha-475401-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m20s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m16s                  node-controller  Node ha-475401-m02 event: Registered Node ha-475401-m02 in Controller
	  Normal  RegisteredNode           5m12s                  node-controller  Node ha-475401-m02 event: Registered Node ha-475401-m02 in Controller
	  Normal  RegisteredNode           3m57s                  node-controller  Node ha-475401-m02 event: Registered Node ha-475401-m02 in Controller
	  Normal  NodeNotReady             106s                   node-controller  Node ha-475401-m02 status is now: NodeNotReady
	
	
	Name:               ha-475401-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-475401-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f6bc674a17941874d4e5b792b09c1791d30622b8
	                    minikube.k8s.io/name=ha-475401
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_12T21_58_44_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 12 Sep 2024 21:58:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-475401-m03
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 12 Sep 2024 22:02:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 12 Sep 2024 21:59:42 +0000   Thu, 12 Sep 2024 21:58:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 12 Sep 2024 21:59:42 +0000   Thu, 12 Sep 2024 21:58:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 12 Sep 2024 21:59:42 +0000   Thu, 12 Sep 2024 21:58:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 12 Sep 2024 21:59:42 +0000   Thu, 12 Sep 2024 21:59:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.113
	  Hostname:    ha-475401-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 28cd0b17595342b5a867ee3ae4e5e5f6
	  System UUID:                28cd0b17-5953-42b5-a867-ee3ae4e5e5f6
	  Boot ID:                    91d84a4f-cdff-4c08-9b34-e4ce726e8b2c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-gb2hg                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m37s
	  kube-system                 etcd-ha-475401-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m4s
	  kube-system                 kindnet-bh5lg                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m6s
	  kube-system                 kube-apiserver-ha-475401-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-controller-manager-ha-475401-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-proxy-5f8z5                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m6s
	  kube-system                 kube-scheduler-ha-475401-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m59s
	  kube-system                 kube-vip-ha-475401-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m1s                 kube-proxy       
	  Normal  CIDRAssignmentFailed     4m6s                 cidrAllocator    Node ha-475401-m03 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientMemory  4m6s (x8 over 4m6s)  kubelet          Node ha-475401-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m6s (x8 over 4m6s)  kubelet          Node ha-475401-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m6s (x7 over 4m6s)  kubelet          Node ha-475401-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m6s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m2s                 node-controller  Node ha-475401-m03 event: Registered Node ha-475401-m03 in Controller
	  Normal  RegisteredNode           4m1s                 node-controller  Node ha-475401-m03 event: Registered Node ha-475401-m03 in Controller
	  Normal  RegisteredNode           3m57s                node-controller  Node ha-475401-m03 event: Registered Node ha-475401-m03 in Controller
	
	
	Name:               ha-475401-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-475401-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f6bc674a17941874d4e5b792b09c1791d30622b8
	                    minikube.k8s.io/name=ha-475401
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_12T21_59_45_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 12 Sep 2024 21:59:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-475401-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 12 Sep 2024 22:02:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 12 Sep 2024 22:00:15 +0000   Thu, 12 Sep 2024 21:59:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 12 Sep 2024 22:00:15 +0000   Thu, 12 Sep 2024 21:59:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 12 Sep 2024 22:00:15 +0000   Thu, 12 Sep 2024 21:59:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 12 Sep 2024 22:00:15 +0000   Thu, 12 Sep 2024 22:00:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.76
	  Hostname:    ha-475401-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9864edb6a0d14b6abd1a66cf5ac88479
	  System UUID:                9864edb6-a0d1-4b6a-bd1a-66cf5ac88479
	  Boot ID:                    75fc7899-e81c-48a9-bb6d-88d5b2ac6d2d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.4.0/24
	PodCIDRs:                     10.244.4.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-2bvcz       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m1s
	  kube-system                 kube-proxy-bmv9m    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m55s                kube-proxy       
	  Normal  CIDRAssignmentFailed     3m1s                 cidrAllocator    Node ha-475401-m04 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientMemory  3m1s (x2 over 3m1s)  kubelet          Node ha-475401-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m1s (x2 over 3m1s)  kubelet          Node ha-475401-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m1s (x2 over 3m1s)  kubelet          Node ha-475401-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m1s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m57s                node-controller  Node ha-475401-m04 event: Registered Node ha-475401-m04 in Controller
	  Normal  RegisteredNode           2m57s                node-controller  Node ha-475401-m04 event: Registered Node ha-475401-m04 in Controller
	  Normal  RegisteredNode           2m56s                node-controller  Node ha-475401-m04 event: Registered Node ha-475401-m04 in Controller
	  Normal  NodeReady                2m41s                kubelet          Node ha-475401-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Sep12 21:55] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051358] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038808] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Sep12 21:56] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.929148] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +4.546825] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.020585] systemd-fstab-generator[579]: Ignoring "noauto" option for root device
	[  +0.056709] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063471] systemd-fstab-generator[591]: Ignoring "noauto" option for root device
	[  +0.182960] systemd-fstab-generator[605]: Ignoring "noauto" option for root device
	[  +0.109592] systemd-fstab-generator[617]: Ignoring "noauto" option for root device
	[  +0.292147] systemd-fstab-generator[647]: Ignoring "noauto" option for root device
	[  +3.769780] systemd-fstab-generator[744]: Ignoring "noauto" option for root device
	[  +5.095538] systemd-fstab-generator[881]: Ignoring "noauto" option for root device
	[  +0.058539] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.038747] systemd-fstab-generator[1299]: Ignoring "noauto" option for root device
	[  +0.092804] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.235155] kauditd_printk_skb: 21 callbacks suppressed
	[ +11.799100] kauditd_printk_skb: 38 callbacks suppressed
	[Sep12 21:57] kauditd_printk_skb: 28 callbacks suppressed
	
	
	==> etcd [5008665ceb8c09f53ef64d7621c9910a82d94cc7e8fb4c534ff1065d8b9dc1a9] <==
	{"level":"warn","ts":"2024-09-12T22:02:46.203024Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"28dd8e6bbca035f5","from":"28dd8e6bbca035f5","remote-peer-id":"9e92fe3b0574f1dd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-12T22:02:46.242390Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"28dd8e6bbca035f5","from":"28dd8e6bbca035f5","remote-peer-id":"9e92fe3b0574f1dd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-12T22:02:46.282031Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"28dd8e6bbca035f5","from":"28dd8e6bbca035f5","remote-peer-id":"9e92fe3b0574f1dd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-12T22:02:46.294374Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"28dd8e6bbca035f5","from":"28dd8e6bbca035f5","remote-peer-id":"9e92fe3b0574f1dd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-12T22:02:46.299858Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"28dd8e6bbca035f5","from":"28dd8e6bbca035f5","remote-peer-id":"9e92fe3b0574f1dd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-12T22:02:46.301494Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"28dd8e6bbca035f5","from":"28dd8e6bbca035f5","remote-peer-id":"9e92fe3b0574f1dd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-12T22:02:46.302623Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"28dd8e6bbca035f5","from":"28dd8e6bbca035f5","remote-peer-id":"9e92fe3b0574f1dd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-12T22:02:46.305830Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"28dd8e6bbca035f5","from":"28dd8e6bbca035f5","remote-peer-id":"9e92fe3b0574f1dd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-12T22:02:46.314294Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"28dd8e6bbca035f5","from":"28dd8e6bbca035f5","remote-peer-id":"9e92fe3b0574f1dd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-12T22:02:46.325783Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"28dd8e6bbca035f5","from":"28dd8e6bbca035f5","remote-peer-id":"9e92fe3b0574f1dd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-12T22:02:46.337166Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"28dd8e6bbca035f5","from":"28dd8e6bbca035f5","remote-peer-id":"9e92fe3b0574f1dd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-12T22:02:46.342642Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"28dd8e6bbca035f5","from":"28dd8e6bbca035f5","remote-peer-id":"9e92fe3b0574f1dd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-12T22:02:46.342911Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"28dd8e6bbca035f5","from":"28dd8e6bbca035f5","remote-peer-id":"9e92fe3b0574f1dd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-12T22:02:46.347587Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"28dd8e6bbca035f5","from":"28dd8e6bbca035f5","remote-peer-id":"9e92fe3b0574f1dd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-12T22:02:46.354526Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"28dd8e6bbca035f5","from":"28dd8e6bbca035f5","remote-peer-id":"9e92fe3b0574f1dd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-12T22:02:46.364878Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"28dd8e6bbca035f5","from":"28dd8e6bbca035f5","remote-peer-id":"9e92fe3b0574f1dd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-12T22:02:46.374550Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"28dd8e6bbca035f5","from":"28dd8e6bbca035f5","remote-peer-id":"9e92fe3b0574f1dd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-12T22:02:46.379467Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"28dd8e6bbca035f5","from":"28dd8e6bbca035f5","remote-peer-id":"9e92fe3b0574f1dd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-12T22:02:46.382998Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"28dd8e6bbca035f5","from":"28dd8e6bbca035f5","remote-peer-id":"9e92fe3b0574f1dd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-12T22:02:46.387345Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"28dd8e6bbca035f5","from":"28dd8e6bbca035f5","remote-peer-id":"9e92fe3b0574f1dd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-12T22:02:46.394681Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"28dd8e6bbca035f5","from":"28dd8e6bbca035f5","remote-peer-id":"9e92fe3b0574f1dd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-12T22:02:46.404255Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"28dd8e6bbca035f5","from":"28dd8e6bbca035f5","remote-peer-id":"9e92fe3b0574f1dd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-12T22:02:46.442594Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"28dd8e6bbca035f5","from":"28dd8e6bbca035f5","remote-peer-id":"9e92fe3b0574f1dd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-12T22:02:46.472167Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"28dd8e6bbca035f5","from":"28dd8e6bbca035f5","remote-peer-id":"9e92fe3b0574f1dd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-12T22:02:46.473807Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"28dd8e6bbca035f5","from":"28dd8e6bbca035f5","remote-peer-id":"9e92fe3b0574f1dd","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 22:02:46 up 6 min,  0 users,  load average: 0.44, 0.32, 0.15
	Linux ha-475401 5.10.207 #1 SMP Thu Sep 12 19:03:33 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [38d31aa5dc4105508066466c3ec1760275d6df1b5a41215ea8624bdecb7f44e8] <==
	I0912 22:02:12.854054       1 main.go:322] Node ha-475401-m03 has CIDR [10.244.3.0/24] 
	I0912 22:02:22.858245       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0912 22:02:22.858294       1 main.go:299] handling current node
	I0912 22:02:22.858313       1 main.go:295] Handling node with IPs: map[192.168.39.222:{}]
	I0912 22:02:22.858318       1 main.go:322] Node ha-475401-m02 has CIDR [10.244.1.0/24] 
	I0912 22:02:22.858437       1 main.go:295] Handling node with IPs: map[192.168.39.113:{}]
	I0912 22:02:22.858456       1 main.go:322] Node ha-475401-m03 has CIDR [10.244.3.0/24] 
	I0912 22:02:22.858510       1 main.go:295] Handling node with IPs: map[192.168.39.76:{}]
	I0912 22:02:22.858525       1 main.go:322] Node ha-475401-m04 has CIDR [10.244.4.0/24] 
	I0912 22:02:32.859653       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0912 22:02:32.859832       1 main.go:299] handling current node
	I0912 22:02:32.859961       1 main.go:295] Handling node with IPs: map[192.168.39.222:{}]
	I0912 22:02:32.859999       1 main.go:322] Node ha-475401-m02 has CIDR [10.244.1.0/24] 
	I0912 22:02:32.860313       1 main.go:295] Handling node with IPs: map[192.168.39.113:{}]
	I0912 22:02:32.860352       1 main.go:322] Node ha-475401-m03 has CIDR [10.244.3.0/24] 
	I0912 22:02:32.860432       1 main.go:295] Handling node with IPs: map[192.168.39.76:{}]
	I0912 22:02:32.860451       1 main.go:322] Node ha-475401-m04 has CIDR [10.244.4.0/24] 
	I0912 22:02:42.854508       1 main.go:295] Handling node with IPs: map[192.168.39.222:{}]
	I0912 22:02:42.854714       1 main.go:322] Node ha-475401-m02 has CIDR [10.244.1.0/24] 
	I0912 22:02:42.854993       1 main.go:295] Handling node with IPs: map[192.168.39.113:{}]
	I0912 22:02:42.855019       1 main.go:322] Node ha-475401-m03 has CIDR [10.244.3.0/24] 
	I0912 22:02:42.855155       1 main.go:295] Handling node with IPs: map[192.168.39.76:{}]
	I0912 22:02:42.855185       1 main.go:322] Node ha-475401-m04 has CIDR [10.244.4.0/24] 
	I0912 22:02:42.855256       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0912 22:02:42.855276       1 main.go:299] handling current node
	
	
	==> kube-apiserver [df088d2d1a92a20915c4eb7c56ddd1b9b1567da26947b41d293391935823e69f] <==
	W0912 21:56:35.357461       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.203]
	I0912 21:56:35.359440       1 controller.go:615] quota admission added evaluator for: endpoints
	I0912 21:56:35.365339       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0912 21:56:35.381525       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0912 21:56:36.555399       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0912 21:56:36.573443       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0912 21:56:36.587621       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0912 21:56:40.903282       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0912 21:56:41.130589       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0912 21:59:14.641227       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45930: use of closed network connection
	E0912 21:59:14.824837       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45956: use of closed network connection
	E0912 21:59:15.017466       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45974: use of closed network connection
	E0912 21:59:15.217984       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46000: use of closed network connection
	E0912 21:59:15.419617       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46006: use of closed network connection
	E0912 21:59:15.613852       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46022: use of closed network connection
	E0912 21:59:15.788040       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46036: use of closed network connection
	E0912 21:59:15.970968       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46056: use of closed network connection
	E0912 21:59:16.162364       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46086: use of closed network connection
	E0912 21:59:16.481705       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46106: use of closed network connection
	E0912 21:59:16.664271       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46118: use of closed network connection
	E0912 21:59:16.857842       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46144: use of closed network connection
	E0912 21:59:17.034650       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46156: use of closed network connection
	E0912 21:59:17.211549       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46184: use of closed network connection
	E0912 21:59:17.374238       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46212: use of closed network connection
	W0912 22:00:45.362151       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.113 192.168.39.203]
	
	
	==> kube-controller-manager [4cfa11556cf34ac2b5bb874421c929c31a0f68b70515fa122f1c3acc67b601f4] <==
	I0912 21:59:45.408505       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-475401-m04" podCIDRs=["10.244.4.0/24"]
	I0912 21:59:45.408564       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-475401-m04"
	I0912 21:59:45.408593       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-475401-m04"
	I0912 21:59:45.433781       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-475401-m04"
	I0912 21:59:45.667788       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-475401-m04"
	I0912 21:59:46.032045       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-475401-m04"
	I0912 21:59:49.356349       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-475401-m04"
	I0912 21:59:49.631388       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-475401-m04"
	I0912 21:59:49.671688       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-475401-m04"
	I0912 21:59:50.824020       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-475401-m04"
	I0912 21:59:50.824451       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-475401-m04"
	I0912 21:59:50.963958       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-475401-m04"
	I0912 21:59:55.640919       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-475401-m04"
	I0912 22:00:05.461083       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-475401-m04"
	I0912 22:00:05.461773       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-475401-m04"
	I0912 22:00:05.479787       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-475401-m04"
	I0912 22:00:05.838733       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-475401-m04"
	I0912 22:00:15.943687       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-475401-m04"
	I0912 22:01:00.863660       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-475401-m04"
	I0912 22:01:00.864991       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-475401-m02"
	I0912 22:01:00.886598       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-475401-m02"
	I0912 22:01:00.923247       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="12.618002ms"
	I0912 22:01:00.923639       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="95.014µs"
	I0912 22:01:04.362480       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-475401-m02"
	I0912 22:01:06.158719       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-475401-m02"
	
	
	==> kube-proxy [0891cec467fda03cc10ec8bf4db216ce7cae379bd093917e008b90cc96d90c49] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0912 21:56:41.912206       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0912 21:56:41.930592       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.203"]
	E0912 21:56:41.930824       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0912 21:56:41.968340       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0912 21:56:41.968379       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0912 21:56:41.968403       1 server_linux.go:169] "Using iptables Proxier"
	I0912 21:56:41.971058       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0912 21:56:41.971979       1 server.go:483] "Version info" version="v1.31.1"
	I0912 21:56:41.972047       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0912 21:56:41.974515       1 config.go:199] "Starting service config controller"
	I0912 21:56:41.975031       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0912 21:56:41.975346       1 config.go:105] "Starting endpoint slice config controller"
	I0912 21:56:41.975390       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0912 21:56:41.976593       1 config.go:328] "Starting node config controller"
	I0912 21:56:41.976636       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0912 21:56:42.075847       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0912 21:56:42.076026       1 shared_informer.go:320] Caches are synced for service config
	I0912 21:56:42.077390       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [17a4293d12cac1604693dea12017381d2df6f0c1ced577d1d846d40e66520818] <==
	W0912 21:56:34.795279       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0912 21:56:34.795388       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0912 21:56:37.025887       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0912 21:58:40.723691       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-bh5lg\": pod kindnet-bh5lg is already assigned to node \"ha-475401-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-bh5lg" node="ha-475401-m03"
	E0912 21:58:40.723871       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod ee20dbb3-9e3e-4ad6-b3f2-1ec4523b46ca(kube-system/kindnet-bh5lg) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-bh5lg"
	E0912 21:58:40.723922       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-bh5lg\": pod kindnet-bh5lg is already assigned to node \"ha-475401-m03\"" pod="kube-system/kindnet-bh5lg"
	I0912 21:58:40.723960       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-bh5lg" node="ha-475401-m03"
	E0912 21:59:09.626808       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-gb2hg\": pod busybox-7dff88458-gb2hg is already assigned to node \"ha-475401-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-gb2hg" node="ha-475401-m02"
	E0912 21:59:09.626992       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-gb2hg\": pod busybox-7dff88458-gb2hg is already assigned to node \"ha-475401-m03\"" pod="default/busybox-7dff88458-gb2hg"
	E0912 21:59:09.679559       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-l2hdm\": pod busybox-7dff88458-l2hdm is already assigned to node \"ha-475401\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-l2hdm" node="ha-475401"
	E0912 21:59:09.679624       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 8ab651ae-e8a0-438a-8bf6-4462c8304466(default/busybox-7dff88458-l2hdm) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-l2hdm"
	E0912 21:59:09.679642       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-l2hdm\": pod busybox-7dff88458-l2hdm is already assigned to node \"ha-475401\"" pod="default/busybox-7dff88458-l2hdm"
	I0912 21:59:09.679663       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-l2hdm" node="ha-475401"
	E0912 21:59:09.680271       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-t7gjx\": pod busybox-7dff88458-t7gjx is already assigned to node \"ha-475401-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-t7gjx" node="ha-475401-m02"
	E0912 21:59:09.680327       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 8634b0f8-3ad9-4f13-bc5d-4c6c05db092f(default/busybox-7dff88458-t7gjx) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-t7gjx"
	E0912 21:59:09.680345       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-t7gjx\": pod busybox-7dff88458-t7gjx is already assigned to node \"ha-475401-m02\"" pod="default/busybox-7dff88458-t7gjx"
	I0912 21:59:09.680365       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-t7gjx" node="ha-475401-m02"
	E0912 21:59:45.487339       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-fvw4x\": pod kube-proxy-fvw4x is already assigned to node \"ha-475401-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-fvw4x" node="ha-475401-m04"
	E0912 21:59:45.491176       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 21f2175a-f898-4059-ae91-9df7019f8cdb(kube-system/kube-proxy-fvw4x) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-fvw4x"
	E0912 21:59:45.492064       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-fvw4x\": pod kube-proxy-fvw4x is already assigned to node \"ha-475401-m04\"" pod="kube-system/kube-proxy-fvw4x"
	E0912 21:59:45.490969       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-2bvcz\": pod kindnet-2bvcz is already assigned to node \"ha-475401-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-2bvcz" node="ha-475401-m04"
	E0912 21:59:45.493554       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod d40bd7a6-62a0-4e2d-b6eb-2ec57e8eea0f(kube-system/kindnet-2bvcz) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-2bvcz"
	E0912 21:59:45.493577       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-2bvcz\": pod kindnet-2bvcz is already assigned to node \"ha-475401-m04\"" pod="kube-system/kindnet-2bvcz"
	I0912 21:59:45.493620       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-2bvcz" node="ha-475401-m04"
	I0912 21:59:45.493727       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-fvw4x" node="ha-475401-m04"
	
	
	==> kubelet <==
	Sep 12 22:01:36 ha-475401 kubelet[1305]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 12 22:01:36 ha-475401 kubelet[1305]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 12 22:01:36 ha-475401 kubelet[1305]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 12 22:01:36 ha-475401 kubelet[1305]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 12 22:01:36 ha-475401 kubelet[1305]: E0912 22:01:36.613037    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726178496612742526,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 22:01:36 ha-475401 kubelet[1305]: E0912 22:01:36.613215    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726178496612742526,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 22:01:46 ha-475401 kubelet[1305]: E0912 22:01:46.614439    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726178506614068423,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 22:01:46 ha-475401 kubelet[1305]: E0912 22:01:46.614482    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726178506614068423,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 22:01:56 ha-475401 kubelet[1305]: E0912 22:01:56.616344    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726178516615738493,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 22:01:56 ha-475401 kubelet[1305]: E0912 22:01:56.616394    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726178516615738493,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 22:02:06 ha-475401 kubelet[1305]: E0912 22:02:06.618245    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726178526617175698,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 22:02:06 ha-475401 kubelet[1305]: E0912 22:02:06.618968    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726178526617175698,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 22:02:16 ha-475401 kubelet[1305]: E0912 22:02:16.620240    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726178536619974087,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 22:02:16 ha-475401 kubelet[1305]: E0912 22:02:16.620266    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726178536619974087,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 22:02:26 ha-475401 kubelet[1305]: E0912 22:02:26.621988    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726178546621750376,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 22:02:26 ha-475401 kubelet[1305]: E0912 22:02:26.622014    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726178546621750376,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 22:02:36 ha-475401 kubelet[1305]: E0912 22:02:36.500695    1305 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 12 22:02:36 ha-475401 kubelet[1305]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 12 22:02:36 ha-475401 kubelet[1305]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 12 22:02:36 ha-475401 kubelet[1305]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 12 22:02:36 ha-475401 kubelet[1305]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 12 22:02:36 ha-475401 kubelet[1305]: E0912 22:02:36.624871    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726178556624300735,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 22:02:36 ha-475401 kubelet[1305]: E0912 22:02:36.625027    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726178556624300735,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 22:02:46 ha-475401 kubelet[1305]: E0912 22:02:46.627517    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726178566627012645,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 22:02:46 ha-475401 kubelet[1305]: E0912 22:02:46.627565    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726178566627012645,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-475401 -n ha-475401
helpers_test.go:261: (dbg) Run:  kubectl --context ha-475401 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.76s)

x
+
TestMultiControlPlane/serial/RestartSecondaryNode (61.18s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-475401 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-475401 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-475401 status -v=7 --alsologtostderr: exit status 3 (3.192090776s)

-- stdout --
	ha-475401
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-475401-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-475401-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-475401-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0912 22:02:50.948065   30470 out.go:345] Setting OutFile to fd 1 ...
	I0912 22:02:50.948283   30470 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 22:02:50.948300   30470 out.go:358] Setting ErrFile to fd 2...
	I0912 22:02:50.948305   30470 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 22:02:50.948478   30470 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-5891/.minikube/bin
	I0912 22:02:50.948633   30470 out.go:352] Setting JSON to false
	I0912 22:02:50.948661   30470 mustload.go:65] Loading cluster: ha-475401
	I0912 22:02:50.948711   30470 notify.go:220] Checking for updates...
	I0912 22:02:50.949168   30470 config.go:182] Loaded profile config "ha-475401": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 22:02:50.949193   30470 status.go:255] checking status of ha-475401 ...
	I0912 22:02:50.949769   30470 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:02:50.949830   30470 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:02:50.967999   30470 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33521
	I0912 22:02:50.968645   30470 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:02:50.969690   30470 main.go:141] libmachine: Using API Version  1
	I0912 22:02:50.969719   30470 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:02:50.970103   30470 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:02:50.970350   30470 main.go:141] libmachine: (ha-475401) Calling .GetState
	I0912 22:02:50.972060   30470 status.go:330] ha-475401 host status = "Running" (err=<nil>)
	I0912 22:02:50.972078   30470 host.go:66] Checking if "ha-475401" exists ...
	I0912 22:02:50.972376   30470 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:02:50.972430   30470 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:02:50.988429   30470 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45799
	I0912 22:02:50.988824   30470 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:02:50.989343   30470 main.go:141] libmachine: Using API Version  1
	I0912 22:02:50.989378   30470 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:02:50.989746   30470 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:02:50.989945   30470 main.go:141] libmachine: (ha-475401) Calling .GetIP
	I0912 22:02:50.992791   30470 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 22:02:50.993222   30470 main.go:141] libmachine: (ha-475401) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:0e:dd", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:09 +0000 UTC Type:0 Mac:52:54:00:b0:0e:dd Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-475401 Clientid:01:52:54:00:b0:0e:dd}
	I0912 22:02:50.993260   30470 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined IP address 192.168.39.203 and MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 22:02:50.993442   30470 host.go:66] Checking if "ha-475401" exists ...
	I0912 22:02:50.993808   30470 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:02:50.993851   30470 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:02:51.008566   30470 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46617
	I0912 22:02:51.009033   30470 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:02:51.009561   30470 main.go:141] libmachine: Using API Version  1
	I0912 22:02:51.009585   30470 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:02:51.009892   30470 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:02:51.010096   30470 main.go:141] libmachine: (ha-475401) Calling .DriverName
	I0912 22:02:51.010303   30470 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0912 22:02:51.010333   30470 main.go:141] libmachine: (ha-475401) Calling .GetSSHHostname
	I0912 22:02:51.013293   30470 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 22:02:51.013736   30470 main.go:141] libmachine: (ha-475401) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:0e:dd", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:09 +0000 UTC Type:0 Mac:52:54:00:b0:0e:dd Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-475401 Clientid:01:52:54:00:b0:0e:dd}
	I0912 22:02:51.013759   30470 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined IP address 192.168.39.203 and MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 22:02:51.013923   30470 main.go:141] libmachine: (ha-475401) Calling .GetSSHPort
	I0912 22:02:51.014121   30470 main.go:141] libmachine: (ha-475401) Calling .GetSSHKeyPath
	I0912 22:02:51.014312   30470 main.go:141] libmachine: (ha-475401) Calling .GetSSHUsername
	I0912 22:02:51.014472   30470 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401/id_rsa Username:docker}
	I0912 22:02:51.096908   30470 ssh_runner.go:195] Run: systemctl --version
	I0912 22:02:51.103483   30470 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 22:02:51.118548   30470 kubeconfig.go:125] found "ha-475401" server: "https://192.168.39.254:8443"
	I0912 22:02:51.118589   30470 api_server.go:166] Checking apiserver status ...
	I0912 22:02:51.118632   30470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 22:02:51.134025   30470 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1139/cgroup
	W0912 22:02:51.144524   30470 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1139/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0912 22:02:51.144574   30470 ssh_runner.go:195] Run: ls
	I0912 22:02:51.148990   30470 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0912 22:02:51.153365   30470 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0912 22:02:51.153386   30470 status.go:422] ha-475401 apiserver status = Running (err=<nil>)
	I0912 22:02:51.153395   30470 status.go:257] ha-475401 status: &{Name:ha-475401 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0912 22:02:51.153410   30470 status.go:255] checking status of ha-475401-m02 ...
	I0912 22:02:51.153719   30470 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:02:51.153762   30470 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:02:51.169072   30470 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33901
	I0912 22:02:51.169506   30470 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:02:51.170011   30470 main.go:141] libmachine: Using API Version  1
	I0912 22:02:51.170032   30470 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:02:51.170407   30470 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:02:51.170601   30470 main.go:141] libmachine: (ha-475401-m02) Calling .GetState
	I0912 22:02:51.171978   30470 status.go:330] ha-475401-m02 host status = "Running" (err=<nil>)
	I0912 22:02:51.171996   30470 host.go:66] Checking if "ha-475401-m02" exists ...
	I0912 22:02:51.172279   30470 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:02:51.172310   30470 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:02:51.187257   30470 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44057
	I0912 22:02:51.187658   30470 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:02:51.188057   30470 main.go:141] libmachine: Using API Version  1
	I0912 22:02:51.188087   30470 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:02:51.188425   30470 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:02:51.188576   30470 main.go:141] libmachine: (ha-475401-m02) Calling .GetIP
	I0912 22:02:51.191828   30470 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 22:02:51.192252   30470 main.go:141] libmachine: (ha-475401-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:31:3a", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:55 +0000 UTC Type:0 Mac:52:54:00:ad:31:3a Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-475401-m02 Clientid:01:52:54:00:ad:31:3a}
	I0912 22:02:51.192274   30470 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 22:02:51.192417   30470 host.go:66] Checking if "ha-475401-m02" exists ...
	I0912 22:02:51.192705   30470 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:02:51.192737   30470 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:02:51.208185   30470 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41281
	I0912 22:02:51.208684   30470 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:02:51.209154   30470 main.go:141] libmachine: Using API Version  1
	I0912 22:02:51.209172   30470 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:02:51.209482   30470 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:02:51.209702   30470 main.go:141] libmachine: (ha-475401-m02) Calling .DriverName
	I0912 22:02:51.209895   30470 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0912 22:02:51.209914   30470 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHHostname
	I0912 22:02:51.212463   30470 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 22:02:51.212913   30470 main.go:141] libmachine: (ha-475401-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:31:3a", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:55 +0000 UTC Type:0 Mac:52:54:00:ad:31:3a Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-475401-m02 Clientid:01:52:54:00:ad:31:3a}
	I0912 22:02:51.212936   30470 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 22:02:51.213063   30470 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHPort
	I0912 22:02:51.213246   30470 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHKeyPath
	I0912 22:02:51.213392   30470 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHUsername
	I0912 22:02:51.213543   30470 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401-m02/id_rsa Username:docker}
	W0912 22:02:53.753893   30470 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.222:22: connect: no route to host
	W0912 22:02:53.753988   30470 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.222:22: connect: no route to host
	E0912 22:02:53.754003   30470 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.222:22: connect: no route to host
	I0912 22:02:53.754020   30470 status.go:257] ha-475401-m02 status: &{Name:ha-475401-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0912 22:02:53.754036   30470 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.222:22: connect: no route to host
	I0912 22:02:53.754046   30470 status.go:255] checking status of ha-475401-m03 ...
	I0912 22:02:53.754335   30470 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:02:53.754377   30470 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:02:53.770161   30470 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38673
	I0912 22:02:53.770525   30470 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:02:53.770982   30470 main.go:141] libmachine: Using API Version  1
	I0912 22:02:53.771008   30470 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:02:53.771297   30470 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:02:53.771614   30470 main.go:141] libmachine: (ha-475401-m03) Calling .GetState
	I0912 22:02:53.773207   30470 status.go:330] ha-475401-m03 host status = "Running" (err=<nil>)
	I0912 22:02:53.773224   30470 host.go:66] Checking if "ha-475401-m03" exists ...
	I0912 22:02:53.773540   30470 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:02:53.773607   30470 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:02:53.788580   30470 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36127
	I0912 22:02:53.789027   30470 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:02:53.789458   30470 main.go:141] libmachine: Using API Version  1
	I0912 22:02:53.789482   30470 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:02:53.789860   30470 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:02:53.790057   30470 main.go:141] libmachine: (ha-475401-m03) Calling .GetIP
	I0912 22:02:53.793143   30470 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 22:02:53.793646   30470 main.go:141] libmachine: (ha-475401-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:aa:da", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:58:06 +0000 UTC Type:0 Mac:52:54:00:21:aa:da Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:ha-475401-m03 Clientid:01:52:54:00:21:aa:da}
	I0912 22:02:53.793674   30470 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined IP address 192.168.39.113 and MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 22:02:53.793839   30470 host.go:66] Checking if "ha-475401-m03" exists ...
	I0912 22:02:53.794147   30470 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:02:53.794181   30470 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:02:53.809793   30470 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42149
	I0912 22:02:53.810243   30470 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:02:53.810726   30470 main.go:141] libmachine: Using API Version  1
	I0912 22:02:53.810747   30470 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:02:53.811047   30470 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:02:53.811213   30470 main.go:141] libmachine: (ha-475401-m03) Calling .DriverName
	I0912 22:02:53.811378   30470 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0912 22:02:53.811398   30470 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHHostname
	I0912 22:02:53.814443   30470 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 22:02:53.814917   30470 main.go:141] libmachine: (ha-475401-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:aa:da", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:58:06 +0000 UTC Type:0 Mac:52:54:00:21:aa:da Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:ha-475401-m03 Clientid:01:52:54:00:21:aa:da}
	I0912 22:02:53.814946   30470 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined IP address 192.168.39.113 and MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 22:02:53.815084   30470 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHPort
	I0912 22:02:53.815255   30470 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHKeyPath
	I0912 22:02:53.815424   30470 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHUsername
	I0912 22:02:53.815565   30470 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401-m03/id_rsa Username:docker}
	I0912 22:02:53.892746   30470 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 22:02:53.907784   30470 kubeconfig.go:125] found "ha-475401" server: "https://192.168.39.254:8443"
	I0912 22:02:53.907818   30470 api_server.go:166] Checking apiserver status ...
	I0912 22:02:53.907853   30470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 22:02:53.921232   30470 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1370/cgroup
	W0912 22:02:53.930139   30470 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1370/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0912 22:02:53.930199   30470 ssh_runner.go:195] Run: ls
	I0912 22:02:53.934854   30470 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0912 22:02:53.941150   30470 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0912 22:02:53.941182   30470 status.go:422] ha-475401-m03 apiserver status = Running (err=<nil>)
	I0912 22:02:53.941192   30470 status.go:257] ha-475401-m03 status: &{Name:ha-475401-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0912 22:02:53.941210   30470 status.go:255] checking status of ha-475401-m04 ...
	I0912 22:02:53.941639   30470 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:02:53.941679   30470 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:02:53.957126   30470 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46085
	I0912 22:02:53.957596   30470 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:02:53.958091   30470 main.go:141] libmachine: Using API Version  1
	I0912 22:02:53.958113   30470 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:02:53.958468   30470 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:02:53.958680   30470 main.go:141] libmachine: (ha-475401-m04) Calling .GetState
	I0912 22:02:53.960341   30470 status.go:330] ha-475401-m04 host status = "Running" (err=<nil>)
	I0912 22:02:53.960356   30470 host.go:66] Checking if "ha-475401-m04" exists ...
	I0912 22:02:53.960735   30470 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:02:53.960782   30470 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:02:53.976087   30470 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41165
	I0912 22:02:53.976591   30470 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:02:53.977031   30470 main.go:141] libmachine: Using API Version  1
	I0912 22:02:53.977062   30470 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:02:53.977431   30470 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:02:53.977598   30470 main.go:141] libmachine: (ha-475401-m04) Calling .GetIP
	I0912 22:02:53.980805   30470 main.go:141] libmachine: (ha-475401-m04) DBG | domain ha-475401-m04 has defined MAC address 52:54:00:cd:b0:d3 in network mk-ha-475401
	I0912 22:02:53.981318   30470 main.go:141] libmachine: (ha-475401-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:b0:d3", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:59:32 +0000 UTC Type:0 Mac:52:54:00:cd:b0:d3 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-475401-m04 Clientid:01:52:54:00:cd:b0:d3}
	I0912 22:02:53.981348   30470 main.go:141] libmachine: (ha-475401-m04) DBG | domain ha-475401-m04 has defined IP address 192.168.39.76 and MAC address 52:54:00:cd:b0:d3 in network mk-ha-475401
	I0912 22:02:53.981468   30470 host.go:66] Checking if "ha-475401-m04" exists ...
	I0912 22:02:53.981860   30470 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:02:53.981906   30470 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:02:53.996909   30470 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34737
	I0912 22:02:53.997302   30470 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:02:53.997759   30470 main.go:141] libmachine: Using API Version  1
	I0912 22:02:53.997787   30470 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:02:53.998171   30470 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:02:53.998345   30470 main.go:141] libmachine: (ha-475401-m04) Calling .DriverName
	I0912 22:02:53.998539   30470 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0912 22:02:53.998555   30470 main.go:141] libmachine: (ha-475401-m04) Calling .GetSSHHostname
	I0912 22:02:54.001519   30470 main.go:141] libmachine: (ha-475401-m04) DBG | domain ha-475401-m04 has defined MAC address 52:54:00:cd:b0:d3 in network mk-ha-475401
	I0912 22:02:54.002197   30470 main.go:141] libmachine: (ha-475401-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:b0:d3", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:59:32 +0000 UTC Type:0 Mac:52:54:00:cd:b0:d3 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-475401-m04 Clientid:01:52:54:00:cd:b0:d3}
	I0912 22:02:54.002238   30470 main.go:141] libmachine: (ha-475401-m04) DBG | domain ha-475401-m04 has defined IP address 192.168.39.76 and MAC address 52:54:00:cd:b0:d3 in network mk-ha-475401
	I0912 22:02:54.002384   30470 main.go:141] libmachine: (ha-475401-m04) Calling .GetSSHPort
	I0912 22:02:54.002551   30470 main.go:141] libmachine: (ha-475401-m04) Calling .GetSSHKeyPath
	I0912 22:02:54.002693   30470 main.go:141] libmachine: (ha-475401-m04) Calling .GetSSHUsername
	I0912 22:02:54.002820   30470 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401-m04/id_rsa Username:docker}
	I0912 22:02:54.084726   30470 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 22:02:54.098038   30470 status.go:257] ha-475401-m04 status: &{Name:ha-475401-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-475401 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-475401 status -v=7 --alsologtostderr: exit status 3 (2.581320878s)

-- stdout --
	ha-475401
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-475401-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-475401-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-475401-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0912 22:02:54.655457   30561 out.go:345] Setting OutFile to fd 1 ...
	I0912 22:02:54.655719   30561 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 22:02:54.655729   30561 out.go:358] Setting ErrFile to fd 2...
	I0912 22:02:54.655734   30561 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 22:02:54.655951   30561 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-5891/.minikube/bin
	I0912 22:02:54.656149   30561 out.go:352] Setting JSON to false
	I0912 22:02:54.656185   30561 mustload.go:65] Loading cluster: ha-475401
	I0912 22:02:54.656366   30561 notify.go:220] Checking for updates...
	I0912 22:02:54.656635   30561 config.go:182] Loaded profile config "ha-475401": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 22:02:54.656656   30561 status.go:255] checking status of ha-475401 ...
	I0912 22:02:54.657141   30561 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:02:54.657183   30561 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:02:54.673008   30561 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42259
	I0912 22:02:54.673399   30561 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:02:54.673965   30561 main.go:141] libmachine: Using API Version  1
	I0912 22:02:54.673987   30561 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:02:54.674301   30561 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:02:54.674501   30561 main.go:141] libmachine: (ha-475401) Calling .GetState
	I0912 22:02:54.676020   30561 status.go:330] ha-475401 host status = "Running" (err=<nil>)
	I0912 22:02:54.676038   30561 host.go:66] Checking if "ha-475401" exists ...
	I0912 22:02:54.676324   30561 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:02:54.676371   30561 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:02:54.691830   30561 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35633
	I0912 22:02:54.692197   30561 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:02:54.692620   30561 main.go:141] libmachine: Using API Version  1
	I0912 22:02:54.692644   30561 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:02:54.692944   30561 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:02:54.693138   30561 main.go:141] libmachine: (ha-475401) Calling .GetIP
	I0912 22:02:54.695889   30561 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 22:02:54.696264   30561 main.go:141] libmachine: (ha-475401) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:0e:dd", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:09 +0000 UTC Type:0 Mac:52:54:00:b0:0e:dd Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-475401 Clientid:01:52:54:00:b0:0e:dd}
	I0912 22:02:54.696305   30561 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined IP address 192.168.39.203 and MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 22:02:54.696399   30561 host.go:66] Checking if "ha-475401" exists ...
	I0912 22:02:54.696695   30561 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:02:54.696733   30561 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:02:54.711299   30561 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40085
	I0912 22:02:54.711697   30561 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:02:54.712113   30561 main.go:141] libmachine: Using API Version  1
	I0912 22:02:54.712138   30561 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:02:54.712438   30561 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:02:54.712619   30561 main.go:141] libmachine: (ha-475401) Calling .DriverName
	I0912 22:02:54.712788   30561 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0912 22:02:54.712817   30561 main.go:141] libmachine: (ha-475401) Calling .GetSSHHostname
	I0912 22:02:54.715241   30561 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 22:02:54.715617   30561 main.go:141] libmachine: (ha-475401) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:0e:dd", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:09 +0000 UTC Type:0 Mac:52:54:00:b0:0e:dd Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-475401 Clientid:01:52:54:00:b0:0e:dd}
	I0912 22:02:54.715655   30561 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined IP address 192.168.39.203 and MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 22:02:54.715725   30561 main.go:141] libmachine: (ha-475401) Calling .GetSSHPort
	I0912 22:02:54.715888   30561 main.go:141] libmachine: (ha-475401) Calling .GetSSHKeyPath
	I0912 22:02:54.716027   30561 main.go:141] libmachine: (ha-475401) Calling .GetSSHUsername
	I0912 22:02:54.716124   30561 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401/id_rsa Username:docker}
	I0912 22:02:54.796945   30561 ssh_runner.go:195] Run: systemctl --version
	I0912 22:02:54.804309   30561 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 22:02:54.819029   30561 kubeconfig.go:125] found "ha-475401" server: "https://192.168.39.254:8443"
	I0912 22:02:54.819066   30561 api_server.go:166] Checking apiserver status ...
	I0912 22:02:54.819109   30561 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 22:02:54.833372   30561 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1139/cgroup
	W0912 22:02:54.842926   30561 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1139/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0912 22:02:54.842985   30561 ssh_runner.go:195] Run: ls
	I0912 22:02:54.847658   30561 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0912 22:02:54.852390   30561 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0912 22:02:54.852423   30561 status.go:422] ha-475401 apiserver status = Running (err=<nil>)
	I0912 22:02:54.852436   30561 status.go:257] ha-475401 status: &{Name:ha-475401 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0912 22:02:54.852452   30561 status.go:255] checking status of ha-475401-m02 ...
	I0912 22:02:54.852920   30561 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:02:54.852989   30561 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:02:54.869591   30561 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36815
	I0912 22:02:54.869989   30561 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:02:54.870450   30561 main.go:141] libmachine: Using API Version  1
	I0912 22:02:54.870472   30561 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:02:54.870822   30561 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:02:54.871037   30561 main.go:141] libmachine: (ha-475401-m02) Calling .GetState
	I0912 22:02:54.872579   30561 status.go:330] ha-475401-m02 host status = "Running" (err=<nil>)
	I0912 22:02:54.872598   30561 host.go:66] Checking if "ha-475401-m02" exists ...
	I0912 22:02:54.872910   30561 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:02:54.872954   30561 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:02:54.888412   30561 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44273
	I0912 22:02:54.888816   30561 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:02:54.889264   30561 main.go:141] libmachine: Using API Version  1
	I0912 22:02:54.889285   30561 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:02:54.889605   30561 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:02:54.889858   30561 main.go:141] libmachine: (ha-475401-m02) Calling .GetIP
	I0912 22:02:54.892705   30561 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 22:02:54.893146   30561 main.go:141] libmachine: (ha-475401-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:31:3a", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:55 +0000 UTC Type:0 Mac:52:54:00:ad:31:3a Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-475401-m02 Clientid:01:52:54:00:ad:31:3a}
	I0912 22:02:54.893173   30561 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 22:02:54.893290   30561 host.go:66] Checking if "ha-475401-m02" exists ...
	I0912 22:02:54.893582   30561 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:02:54.893648   30561 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:02:54.909110   30561 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41417
	I0912 22:02:54.909563   30561 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:02:54.910018   30561 main.go:141] libmachine: Using API Version  1
	I0912 22:02:54.910036   30561 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:02:54.910305   30561 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:02:54.910519   30561 main.go:141] libmachine: (ha-475401-m02) Calling .DriverName
	I0912 22:02:54.910713   30561 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0912 22:02:54.910731   30561 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHHostname
	I0912 22:02:54.913513   30561 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 22:02:54.913921   30561 main.go:141] libmachine: (ha-475401-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:31:3a", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:55 +0000 UTC Type:0 Mac:52:54:00:ad:31:3a Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-475401-m02 Clientid:01:52:54:00:ad:31:3a}
	I0912 22:02:54.913956   30561 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 22:02:54.914111   30561 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHPort
	I0912 22:02:54.914267   30561 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHKeyPath
	I0912 22:02:54.914407   30561 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHUsername
	I0912 22:02:54.914530   30561 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401-m02/id_rsa Username:docker}
	W0912 22:02:56.825972   30561 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.222:22: connect: no route to host
	W0912 22:02:56.826095   30561 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.222:22: connect: no route to host
	E0912 22:02:56.826126   30561 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.222:22: connect: no route to host
	I0912 22:02:56.826137   30561 status.go:257] ha-475401-m02 status: &{Name:ha-475401-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0912 22:02:56.826160   30561 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.222:22: connect: no route to host
	I0912 22:02:56.826172   30561 status.go:255] checking status of ha-475401-m03 ...
	I0912 22:02:56.826507   30561 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:02:56.826554   30561 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:02:56.842477   30561 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32999
	I0912 22:02:56.843022   30561 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:02:56.843528   30561 main.go:141] libmachine: Using API Version  1
	I0912 22:02:56.843561   30561 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:02:56.843880   30561 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:02:56.844058   30561 main.go:141] libmachine: (ha-475401-m03) Calling .GetState
	I0912 22:02:56.845602   30561 status.go:330] ha-475401-m03 host status = "Running" (err=<nil>)
	I0912 22:02:56.845637   30561 host.go:66] Checking if "ha-475401-m03" exists ...
	I0912 22:02:56.845935   30561 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:02:56.845983   30561 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:02:56.861633   30561 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46137
	I0912 22:02:56.862201   30561 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:02:56.862700   30561 main.go:141] libmachine: Using API Version  1
	I0912 22:02:56.862720   30561 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:02:56.863002   30561 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:02:56.863204   30561 main.go:141] libmachine: (ha-475401-m03) Calling .GetIP
	I0912 22:02:56.865999   30561 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 22:02:56.866450   30561 main.go:141] libmachine: (ha-475401-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:aa:da", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:58:06 +0000 UTC Type:0 Mac:52:54:00:21:aa:da Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:ha-475401-m03 Clientid:01:52:54:00:21:aa:da}
	I0912 22:02:56.866476   30561 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined IP address 192.168.39.113 and MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 22:02:56.866697   30561 host.go:66] Checking if "ha-475401-m03" exists ...
	I0912 22:02:56.867071   30561 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:02:56.867112   30561 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:02:56.882038   30561 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43423
	I0912 22:02:56.882414   30561 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:02:56.882866   30561 main.go:141] libmachine: Using API Version  1
	I0912 22:02:56.882895   30561 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:02:56.883162   30561 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:02:56.883371   30561 main.go:141] libmachine: (ha-475401-m03) Calling .DriverName
	I0912 22:02:56.883560   30561 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0912 22:02:56.883577   30561 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHHostname
	I0912 22:02:56.886399   30561 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 22:02:56.886803   30561 main.go:141] libmachine: (ha-475401-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:aa:da", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:58:06 +0000 UTC Type:0 Mac:52:54:00:21:aa:da Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:ha-475401-m03 Clientid:01:52:54:00:21:aa:da}
	I0912 22:02:56.886827   30561 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined IP address 192.168.39.113 and MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 22:02:56.887010   30561 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHPort
	I0912 22:02:56.887217   30561 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHKeyPath
	I0912 22:02:56.887371   30561 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHUsername
	I0912 22:02:56.887528   30561 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401-m03/id_rsa Username:docker}
	I0912 22:02:56.977361   30561 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 22:02:56.996871   30561 kubeconfig.go:125] found "ha-475401" server: "https://192.168.39.254:8443"
	I0912 22:02:56.996905   30561 api_server.go:166] Checking apiserver status ...
	I0912 22:02:56.996940   30561 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 22:02:57.013148   30561 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1370/cgroup
	W0912 22:02:57.027800   30561 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1370/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0912 22:02:57.027866   30561 ssh_runner.go:195] Run: ls
	I0912 22:02:57.032371   30561 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0912 22:02:57.038920   30561 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0912 22:02:57.038942   30561 status.go:422] ha-475401-m03 apiserver status = Running (err=<nil>)
	I0912 22:02:57.038950   30561 status.go:257] ha-475401-m03 status: &{Name:ha-475401-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0912 22:02:57.038965   30561 status.go:255] checking status of ha-475401-m04 ...
	I0912 22:02:57.039239   30561 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:02:57.039273   30561 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:02:57.054657   30561 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40173
	I0912 22:02:57.055045   30561 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:02:57.055482   30561 main.go:141] libmachine: Using API Version  1
	I0912 22:02:57.055506   30561 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:02:57.055805   30561 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:02:57.056007   30561 main.go:141] libmachine: (ha-475401-m04) Calling .GetState
	I0912 22:02:57.058134   30561 status.go:330] ha-475401-m04 host status = "Running" (err=<nil>)
	I0912 22:02:57.058150   30561 host.go:66] Checking if "ha-475401-m04" exists ...
	I0912 22:02:57.058438   30561 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:02:57.058479   30561 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:02:57.073479   30561 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45471
	I0912 22:02:57.073877   30561 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:02:57.074356   30561 main.go:141] libmachine: Using API Version  1
	I0912 22:02:57.074373   30561 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:02:57.074724   30561 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:02:57.074921   30561 main.go:141] libmachine: (ha-475401-m04) Calling .GetIP
	I0912 22:02:57.077968   30561 main.go:141] libmachine: (ha-475401-m04) DBG | domain ha-475401-m04 has defined MAC address 52:54:00:cd:b0:d3 in network mk-ha-475401
	I0912 22:02:57.078316   30561 main.go:141] libmachine: (ha-475401-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:b0:d3", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:59:32 +0000 UTC Type:0 Mac:52:54:00:cd:b0:d3 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-475401-m04 Clientid:01:52:54:00:cd:b0:d3}
	I0912 22:02:57.078339   30561 main.go:141] libmachine: (ha-475401-m04) DBG | domain ha-475401-m04 has defined IP address 192.168.39.76 and MAC address 52:54:00:cd:b0:d3 in network mk-ha-475401
	I0912 22:02:57.078482   30561 host.go:66] Checking if "ha-475401-m04" exists ...
	I0912 22:02:57.078773   30561 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:02:57.078806   30561 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:02:57.093936   30561 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43069
	I0912 22:02:57.094453   30561 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:02:57.094998   30561 main.go:141] libmachine: Using API Version  1
	I0912 22:02:57.095024   30561 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:02:57.095336   30561 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:02:57.095553   30561 main.go:141] libmachine: (ha-475401-m04) Calling .DriverName
	I0912 22:02:57.095758   30561 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0912 22:02:57.095785   30561 main.go:141] libmachine: (ha-475401-m04) Calling .GetSSHHostname
	I0912 22:02:57.098490   30561 main.go:141] libmachine: (ha-475401-m04) DBG | domain ha-475401-m04 has defined MAC address 52:54:00:cd:b0:d3 in network mk-ha-475401
	I0912 22:02:57.098880   30561 main.go:141] libmachine: (ha-475401-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:b0:d3", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:59:32 +0000 UTC Type:0 Mac:52:54:00:cd:b0:d3 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-475401-m04 Clientid:01:52:54:00:cd:b0:d3}
	I0912 22:02:57.098916   30561 main.go:141] libmachine: (ha-475401-m04) DBG | domain ha-475401-m04 has defined IP address 192.168.39.76 and MAC address 52:54:00:cd:b0:d3 in network mk-ha-475401
	I0912 22:02:57.099094   30561 main.go:141] libmachine: (ha-475401-m04) Calling .GetSSHPort
	I0912 22:02:57.099413   30561 main.go:141] libmachine: (ha-475401-m04) Calling .GetSSHKeyPath
	I0912 22:02:57.099540   30561 main.go:141] libmachine: (ha-475401-m04) Calling .GetSSHUsername
	I0912 22:02:57.099693   30561 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401-m04/id_rsa Username:docker}
	I0912 22:02:57.180241   30561 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 22:02:57.193428   30561 status.go:257] ha-475401-m04 status: &{Name:ha-475401-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
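Note on the log above: the lines "Checking apiserver healthz at https://192.168.39.254:8443/healthz ..." followed by "returned 200: ok" are what allow the status command to report "apiserver: Running" for the reachable control-plane nodes even though the freezer-cgroup lookup fails. The Go sketch below only illustrates that kind of healthz probe; it is not minikube's actual implementation, and the URL, timeout, and InsecureSkipVerify setting are assumptions made so the example is self-contained (a real client would trust the cluster CA instead).

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// checkAPIServerHealthz performs a GET against the apiserver /healthz
// endpoint and treats any non-200 response (or transport error) as "not running".
func checkAPIServerHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// TLS verification is skipped only for this illustrative probe.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d", resp.StatusCode)
	}
	return nil
}

func main() {
	// Endpoint taken from the log above; assumed reachable for illustration only.
	if err := checkAPIServerHealthz("https://192.168.39.254:8443/healthz"); err != nil {
		fmt.Println("apiserver status = Stopped:", err)
		return
	}
	fmt.Println("apiserver status = Running")
}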
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-475401 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-475401 status -v=7 --alsologtostderr: exit status 3 (5.287322316s)

                                                
                                                
-- stdout --
	ha-475401
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-475401-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-475401-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-475401-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0912 22:02:58.064690   30661 out.go:345] Setting OutFile to fd 1 ...
	I0912 22:02:58.064966   30661 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 22:02:58.064976   30661 out.go:358] Setting ErrFile to fd 2...
	I0912 22:02:58.064980   30661 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 22:02:58.065264   30661 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-5891/.minikube/bin
	I0912 22:02:58.065502   30661 out.go:352] Setting JSON to false
	I0912 22:02:58.065536   30661 mustload.go:65] Loading cluster: ha-475401
	I0912 22:02:58.065601   30661 notify.go:220] Checking for updates...
	I0912 22:02:58.066079   30661 config.go:182] Loaded profile config "ha-475401": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 22:02:58.066100   30661 status.go:255] checking status of ha-475401 ...
	I0912 22:02:58.066545   30661 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:02:58.066676   30661 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:02:58.085012   30661 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40721
	I0912 22:02:58.085444   30661 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:02:58.086055   30661 main.go:141] libmachine: Using API Version  1
	I0912 22:02:58.086078   30661 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:02:58.086456   30661 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:02:58.086679   30661 main.go:141] libmachine: (ha-475401) Calling .GetState
	I0912 22:02:58.088252   30661 status.go:330] ha-475401 host status = "Running" (err=<nil>)
	I0912 22:02:58.088270   30661 host.go:66] Checking if "ha-475401" exists ...
	I0912 22:02:58.088560   30661 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:02:58.088599   30661 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:02:58.103903   30661 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36593
	I0912 22:02:58.104366   30661 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:02:58.104857   30661 main.go:141] libmachine: Using API Version  1
	I0912 22:02:58.104892   30661 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:02:58.105277   30661 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:02:58.105486   30661 main.go:141] libmachine: (ha-475401) Calling .GetIP
	I0912 22:02:58.108440   30661 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 22:02:58.108857   30661 main.go:141] libmachine: (ha-475401) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:0e:dd", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:09 +0000 UTC Type:0 Mac:52:54:00:b0:0e:dd Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-475401 Clientid:01:52:54:00:b0:0e:dd}
	I0912 22:02:58.108886   30661 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined IP address 192.168.39.203 and MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 22:02:58.109021   30661 host.go:66] Checking if "ha-475401" exists ...
	I0912 22:02:58.109300   30661 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:02:58.109340   30661 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:02:58.124698   30661 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41163
	I0912 22:02:58.125191   30661 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:02:58.125802   30661 main.go:141] libmachine: Using API Version  1
	I0912 22:02:58.125825   30661 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:02:58.126138   30661 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:02:58.126347   30661 main.go:141] libmachine: (ha-475401) Calling .DriverName
	I0912 22:02:58.126581   30661 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0912 22:02:58.126612   30661 main.go:141] libmachine: (ha-475401) Calling .GetSSHHostname
	I0912 22:02:58.129485   30661 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 22:02:58.129949   30661 main.go:141] libmachine: (ha-475401) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:0e:dd", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:09 +0000 UTC Type:0 Mac:52:54:00:b0:0e:dd Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-475401 Clientid:01:52:54:00:b0:0e:dd}
	I0912 22:02:58.129977   30661 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined IP address 192.168.39.203 and MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 22:02:58.130085   30661 main.go:141] libmachine: (ha-475401) Calling .GetSSHPort
	I0912 22:02:58.130533   30661 main.go:141] libmachine: (ha-475401) Calling .GetSSHKeyPath
	I0912 22:02:58.130709   30661 main.go:141] libmachine: (ha-475401) Calling .GetSSHUsername
	I0912 22:02:58.130933   30661 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401/id_rsa Username:docker}
	I0912 22:02:58.214374   30661 ssh_runner.go:195] Run: systemctl --version
	I0912 22:02:58.220847   30661 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 22:02:58.235898   30661 kubeconfig.go:125] found "ha-475401" server: "https://192.168.39.254:8443"
	I0912 22:02:58.235935   30661 api_server.go:166] Checking apiserver status ...
	I0912 22:02:58.235971   30661 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 22:02:58.250106   30661 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1139/cgroup
	W0912 22:02:58.259900   30661 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1139/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0912 22:02:58.259975   30661 ssh_runner.go:195] Run: ls
	I0912 22:02:58.264335   30661 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0912 22:02:58.270923   30661 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0912 22:02:58.270954   30661 status.go:422] ha-475401 apiserver status = Running (err=<nil>)
	I0912 22:02:58.270977   30661 status.go:257] ha-475401 status: &{Name:ha-475401 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0912 22:02:58.271004   30661 status.go:255] checking status of ha-475401-m02 ...
	I0912 22:02:58.271436   30661 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:02:58.271474   30661 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:02:58.286429   30661 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34323
	I0912 22:02:58.286930   30661 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:02:58.287473   30661 main.go:141] libmachine: Using API Version  1
	I0912 22:02:58.287488   30661 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:02:58.287829   30661 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:02:58.288038   30661 main.go:141] libmachine: (ha-475401-m02) Calling .GetState
	I0912 22:02:58.289445   30661 status.go:330] ha-475401-m02 host status = "Running" (err=<nil>)
	I0912 22:02:58.289473   30661 host.go:66] Checking if "ha-475401-m02" exists ...
	I0912 22:02:58.289853   30661 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:02:58.289899   30661 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:02:58.305142   30661 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45321
	I0912 22:02:58.305528   30661 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:02:58.306115   30661 main.go:141] libmachine: Using API Version  1
	I0912 22:02:58.306165   30661 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:02:58.306513   30661 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:02:58.306713   30661 main.go:141] libmachine: (ha-475401-m02) Calling .GetIP
	I0912 22:02:58.309710   30661 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 22:02:58.310093   30661 main.go:141] libmachine: (ha-475401-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:31:3a", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:55 +0000 UTC Type:0 Mac:52:54:00:ad:31:3a Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-475401-m02 Clientid:01:52:54:00:ad:31:3a}
	I0912 22:02:58.310120   30661 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 22:02:58.310347   30661 host.go:66] Checking if "ha-475401-m02" exists ...
	I0912 22:02:58.310644   30661 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:02:58.310674   30661 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:02:58.328158   30661 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37823
	I0912 22:02:58.328642   30661 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:02:58.329153   30661 main.go:141] libmachine: Using API Version  1
	I0912 22:02:58.329169   30661 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:02:58.329467   30661 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:02:58.329652   30661 main.go:141] libmachine: (ha-475401-m02) Calling .DriverName
	I0912 22:02:58.329856   30661 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0912 22:02:58.329881   30661 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHHostname
	I0912 22:02:58.332527   30661 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 22:02:58.332915   30661 main.go:141] libmachine: (ha-475401-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:31:3a", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:55 +0000 UTC Type:0 Mac:52:54:00:ad:31:3a Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-475401-m02 Clientid:01:52:54:00:ad:31:3a}
	I0912 22:02:58.332951   30661 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 22:02:58.333035   30661 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHPort
	I0912 22:02:58.333197   30661 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHKeyPath
	I0912 22:02:58.333357   30661 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHUsername
	I0912 22:02:58.333484   30661 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401-m02/id_rsa Username:docker}
	W0912 22:02:59.901935   30661 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.222:22: connect: no route to host
	I0912 22:02:59.901982   30661 retry.go:31] will retry after 262.732524ms: dial tcp 192.168.39.222:22: connect: no route to host
	W0912 22:03:02.969912   30661 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.222:22: connect: no route to host
	W0912 22:03:02.970037   30661 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.222:22: connect: no route to host
	E0912 22:03:02.970063   30661 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.222:22: connect: no route to host
	I0912 22:03:02.970072   30661 status.go:257] ha-475401-m02 status: &{Name:ha-475401-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0912 22:03:02.970094   30661 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.222:22: connect: no route to host
	I0912 22:03:02.970108   30661 status.go:255] checking status of ha-475401-m03 ...
	I0912 22:03:02.970446   30661 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:03:02.970485   30661 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:03:02.985669   30661 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38761
	I0912 22:03:02.986103   30661 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:03:02.986653   30661 main.go:141] libmachine: Using API Version  1
	I0912 22:03:02.986677   30661 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:03:02.987031   30661 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:03:02.987245   30661 main.go:141] libmachine: (ha-475401-m03) Calling .GetState
	I0912 22:03:02.988956   30661 status.go:330] ha-475401-m03 host status = "Running" (err=<nil>)
	I0912 22:03:02.988974   30661 host.go:66] Checking if "ha-475401-m03" exists ...
	I0912 22:03:02.989316   30661 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:03:02.989369   30661 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:03:03.004134   30661 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45875
	I0912 22:03:03.004589   30661 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:03:03.005105   30661 main.go:141] libmachine: Using API Version  1
	I0912 22:03:03.005131   30661 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:03:03.005403   30661 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:03:03.005571   30661 main.go:141] libmachine: (ha-475401-m03) Calling .GetIP
	I0912 22:03:03.008474   30661 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 22:03:03.008841   30661 main.go:141] libmachine: (ha-475401-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:aa:da", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:58:06 +0000 UTC Type:0 Mac:52:54:00:21:aa:da Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:ha-475401-m03 Clientid:01:52:54:00:21:aa:da}
	I0912 22:03:03.008861   30661 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined IP address 192.168.39.113 and MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 22:03:03.009020   30661 host.go:66] Checking if "ha-475401-m03" exists ...
	I0912 22:03:03.009355   30661 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:03:03.009396   30661 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:03:03.024122   30661 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41149
	I0912 22:03:03.024516   30661 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:03:03.025010   30661 main.go:141] libmachine: Using API Version  1
	I0912 22:03:03.025036   30661 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:03:03.025305   30661 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:03:03.025471   30661 main.go:141] libmachine: (ha-475401-m03) Calling .DriverName
	I0912 22:03:03.025665   30661 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0912 22:03:03.025691   30661 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHHostname
	I0912 22:03:03.028511   30661 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 22:03:03.028953   30661 main.go:141] libmachine: (ha-475401-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:aa:da", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:58:06 +0000 UTC Type:0 Mac:52:54:00:21:aa:da Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:ha-475401-m03 Clientid:01:52:54:00:21:aa:da}
	I0912 22:03:03.028978   30661 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined IP address 192.168.39.113 and MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 22:03:03.029102   30661 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHPort
	I0912 22:03:03.029272   30661 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHKeyPath
	I0912 22:03:03.029444   30661 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHUsername
	I0912 22:03:03.029585   30661 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401-m03/id_rsa Username:docker}
	I0912 22:03:03.108834   30661 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 22:03:03.127546   30661 kubeconfig.go:125] found "ha-475401" server: "https://192.168.39.254:8443"
	I0912 22:03:03.127580   30661 api_server.go:166] Checking apiserver status ...
	I0912 22:03:03.127626   30661 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 22:03:03.141798   30661 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1370/cgroup
	W0912 22:03:03.150963   30661 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1370/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0912 22:03:03.151014   30661 ssh_runner.go:195] Run: ls
	I0912 22:03:03.155109   30661 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0912 22:03:03.159457   30661 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0912 22:03:03.159482   30661 status.go:422] ha-475401-m03 apiserver status = Running (err=<nil>)
	I0912 22:03:03.159493   30661 status.go:257] ha-475401-m03 status: &{Name:ha-475401-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0912 22:03:03.159512   30661 status.go:255] checking status of ha-475401-m04 ...
	I0912 22:03:03.159799   30661 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:03:03.159849   30661 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:03:03.175888   30661 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36025
	I0912 22:03:03.176322   30661 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:03:03.176770   30661 main.go:141] libmachine: Using API Version  1
	I0912 22:03:03.176797   30661 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:03:03.177115   30661 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:03:03.177292   30661 main.go:141] libmachine: (ha-475401-m04) Calling .GetState
	I0912 22:03:03.179007   30661 status.go:330] ha-475401-m04 host status = "Running" (err=<nil>)
	I0912 22:03:03.179021   30661 host.go:66] Checking if "ha-475401-m04" exists ...
	I0912 22:03:03.179280   30661 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:03:03.179310   30661 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:03:03.193882   30661 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46843
	I0912 22:03:03.194233   30661 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:03:03.194663   30661 main.go:141] libmachine: Using API Version  1
	I0912 22:03:03.194690   30661 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:03:03.194996   30661 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:03:03.195181   30661 main.go:141] libmachine: (ha-475401-m04) Calling .GetIP
	I0912 22:03:03.197752   30661 main.go:141] libmachine: (ha-475401-m04) DBG | domain ha-475401-m04 has defined MAC address 52:54:00:cd:b0:d3 in network mk-ha-475401
	I0912 22:03:03.198149   30661 main.go:141] libmachine: (ha-475401-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:b0:d3", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:59:32 +0000 UTC Type:0 Mac:52:54:00:cd:b0:d3 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-475401-m04 Clientid:01:52:54:00:cd:b0:d3}
	I0912 22:03:03.198183   30661 main.go:141] libmachine: (ha-475401-m04) DBG | domain ha-475401-m04 has defined IP address 192.168.39.76 and MAC address 52:54:00:cd:b0:d3 in network mk-ha-475401
	I0912 22:03:03.198263   30661 host.go:66] Checking if "ha-475401-m04" exists ...
	I0912 22:03:03.198547   30661 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:03:03.198584   30661 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:03:03.213317   30661 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42439
	I0912 22:03:03.213725   30661 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:03:03.214140   30661 main.go:141] libmachine: Using API Version  1
	I0912 22:03:03.214167   30661 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:03:03.214459   30661 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:03:03.214717   30661 main.go:141] libmachine: (ha-475401-m04) Calling .DriverName
	I0912 22:03:03.214930   30661 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0912 22:03:03.214963   30661 main.go:141] libmachine: (ha-475401-m04) Calling .GetSSHHostname
	I0912 22:03:03.217542   30661 main.go:141] libmachine: (ha-475401-m04) DBG | domain ha-475401-m04 has defined MAC address 52:54:00:cd:b0:d3 in network mk-ha-475401
	I0912 22:03:03.217955   30661 main.go:141] libmachine: (ha-475401-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:b0:d3", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:59:32 +0000 UTC Type:0 Mac:52:54:00:cd:b0:d3 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-475401-m04 Clientid:01:52:54:00:cd:b0:d3}
	I0912 22:03:03.217977   30661 main.go:141] libmachine: (ha-475401-m04) DBG | domain ha-475401-m04 has defined IP address 192.168.39.76 and MAC address 52:54:00:cd:b0:d3 in network mk-ha-475401
	I0912 22:03:03.218128   30661 main.go:141] libmachine: (ha-475401-m04) Calling .GetSSHPort
	I0912 22:03:03.218277   30661 main.go:141] libmachine: (ha-475401-m04) Calling .GetSSHKeyPath
	I0912 22:03:03.218394   30661 main.go:141] libmachine: (ha-475401-m04) Calling .GetSSHUsername
	I0912 22:03:03.218505   30661 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401-m04/id_rsa Username:docker}
	I0912 22:03:03.296554   30661 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 22:03:03.310054   30661 status.go:257] ha-475401-m04 status: &{Name:ha-475401-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
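Note on the log above: the repeated "dial tcp 192.168.39.222:22: connect: no route to host" failures against ha-475401-m02 are what produce the "host: Error" / "kubelet: Nonexistent" lines and the exit status 3, because the status command cannot open an SSH session to the stopped secondary node even after the short retry logged at retry.go:31. The Go sketch below only illustrates that dial-and-retry pattern; the address, attempt count, and backoff are illustrative assumptions, not minikube's settings.

package main

import (
	"fmt"
	"net"
	"time"
)

// dialWithRetry tries to open a TCP connection, sleeping between attempts,
// and returns the last error once all attempts are exhausted.
func dialWithRetry(addr string, attempts int, backoff time.Duration) (net.Conn, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err == nil {
			return conn, nil
		}
		lastErr = err
		fmt.Printf("will retry after %s: %v\n", backoff, err)
		time.Sleep(backoff)
	}
	return nil, fmt.Errorf("giving up after %d attempts: %w", attempts, lastErr)
}

func main() {
	// ha-475401-m02 is stopped in this test, so the dial fails with
	// "no route to host" and the node ends up reported as Host:Error.
	if _, err := dialWithRetry("192.168.39.222:22", 2, 250*time.Millisecond); err != nil {
		fmt.Println("status error:", err)
	}
}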
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-475401 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-475401 status -v=7 --alsologtostderr: exit status 3 (3.735301008s)

                                                
                                                
-- stdout --
	ha-475401
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-475401-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-475401-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-475401-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0912 22:03:06.057502   30778 out.go:345] Setting OutFile to fd 1 ...
	I0912 22:03:06.057857   30778 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 22:03:06.057873   30778 out.go:358] Setting ErrFile to fd 2...
	I0912 22:03:06.057880   30778 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 22:03:06.058319   30778 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-5891/.minikube/bin
	I0912 22:03:06.058656   30778 out.go:352] Setting JSON to false
	I0912 22:03:06.058770   30778 notify.go:220] Checking for updates...
	I0912 22:03:06.058837   30778 mustload.go:65] Loading cluster: ha-475401
	I0912 22:03:06.059570   30778 config.go:182] Loaded profile config "ha-475401": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 22:03:06.059592   30778 status.go:255] checking status of ha-475401 ...
	I0912 22:03:06.060161   30778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:03:06.060216   30778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:03:06.078903   30778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44957
	I0912 22:03:06.079390   30778 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:03:06.079993   30778 main.go:141] libmachine: Using API Version  1
	I0912 22:03:06.080037   30778 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:03:06.080389   30778 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:03:06.080619   30778 main.go:141] libmachine: (ha-475401) Calling .GetState
	I0912 22:03:06.082395   30778 status.go:330] ha-475401 host status = "Running" (err=<nil>)
	I0912 22:03:06.082411   30778 host.go:66] Checking if "ha-475401" exists ...
	I0912 22:03:06.082713   30778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:03:06.082756   30778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:03:06.097788   30778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36491
	I0912 22:03:06.098193   30778 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:03:06.098634   30778 main.go:141] libmachine: Using API Version  1
	I0912 22:03:06.098656   30778 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:03:06.098970   30778 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:03:06.099167   30778 main.go:141] libmachine: (ha-475401) Calling .GetIP
	I0912 22:03:06.101924   30778 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 22:03:06.102594   30778 main.go:141] libmachine: (ha-475401) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:0e:dd", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:09 +0000 UTC Type:0 Mac:52:54:00:b0:0e:dd Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-475401 Clientid:01:52:54:00:b0:0e:dd}
	I0912 22:03:06.102626   30778 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined IP address 192.168.39.203 and MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 22:03:06.102821   30778 host.go:66] Checking if "ha-475401" exists ...
	I0912 22:03:06.103098   30778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:03:06.103130   30778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:03:06.117513   30778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37271
	I0912 22:03:06.117943   30778 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:03:06.118510   30778 main.go:141] libmachine: Using API Version  1
	I0912 22:03:06.118535   30778 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:03:06.118842   30778 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:03:06.119020   30778 main.go:141] libmachine: (ha-475401) Calling .DriverName
	I0912 22:03:06.119202   30778 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0912 22:03:06.119222   30778 main.go:141] libmachine: (ha-475401) Calling .GetSSHHostname
	I0912 22:03:06.121936   30778 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 22:03:06.122370   30778 main.go:141] libmachine: (ha-475401) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:0e:dd", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:09 +0000 UTC Type:0 Mac:52:54:00:b0:0e:dd Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-475401 Clientid:01:52:54:00:b0:0e:dd}
	I0912 22:03:06.122404   30778 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined IP address 192.168.39.203 and MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 22:03:06.122519   30778 main.go:141] libmachine: (ha-475401) Calling .GetSSHPort
	I0912 22:03:06.122716   30778 main.go:141] libmachine: (ha-475401) Calling .GetSSHKeyPath
	I0912 22:03:06.122875   30778 main.go:141] libmachine: (ha-475401) Calling .GetSSHUsername
	I0912 22:03:06.122996   30778 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401/id_rsa Username:docker}
	I0912 22:03:06.205670   30778 ssh_runner.go:195] Run: systemctl --version
	I0912 22:03:06.211833   30778 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 22:03:06.235387   30778 kubeconfig.go:125] found "ha-475401" server: "https://192.168.39.254:8443"
	I0912 22:03:06.235436   30778 api_server.go:166] Checking apiserver status ...
	I0912 22:03:06.235495   30778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 22:03:06.249981   30778 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1139/cgroup
	W0912 22:03:06.260136   30778 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1139/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0912 22:03:06.260200   30778 ssh_runner.go:195] Run: ls
	I0912 22:03:06.264845   30778 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0912 22:03:06.269281   30778 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0912 22:03:06.269307   30778 status.go:422] ha-475401 apiserver status = Running (err=<nil>)
	I0912 22:03:06.269320   30778 status.go:257] ha-475401 status: &{Name:ha-475401 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0912 22:03:06.269341   30778 status.go:255] checking status of ha-475401-m02 ...
	I0912 22:03:06.269693   30778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:03:06.269727   30778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:03:06.284794   30778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36441
	I0912 22:03:06.285270   30778 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:03:06.285753   30778 main.go:141] libmachine: Using API Version  1
	I0912 22:03:06.285804   30778 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:03:06.286128   30778 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:03:06.286303   30778 main.go:141] libmachine: (ha-475401-m02) Calling .GetState
	I0912 22:03:06.287885   30778 status.go:330] ha-475401-m02 host status = "Running" (err=<nil>)
	I0912 22:03:06.287900   30778 host.go:66] Checking if "ha-475401-m02" exists ...
	I0912 22:03:06.288283   30778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:03:06.288323   30778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:03:06.304293   30778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45947
	I0912 22:03:06.304793   30778 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:03:06.305334   30778 main.go:141] libmachine: Using API Version  1
	I0912 22:03:06.305365   30778 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:03:06.305743   30778 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:03:06.305942   30778 main.go:141] libmachine: (ha-475401-m02) Calling .GetIP
	I0912 22:03:06.309450   30778 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 22:03:06.309940   30778 main.go:141] libmachine: (ha-475401-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:31:3a", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:55 +0000 UTC Type:0 Mac:52:54:00:ad:31:3a Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-475401-m02 Clientid:01:52:54:00:ad:31:3a}
	I0912 22:03:06.309975   30778 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 22:03:06.310173   30778 host.go:66] Checking if "ha-475401-m02" exists ...
	I0912 22:03:06.310601   30778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:03:06.310644   30778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:03:06.325724   30778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46399
	I0912 22:03:06.326142   30778 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:03:06.326756   30778 main.go:141] libmachine: Using API Version  1
	I0912 22:03:06.326782   30778 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:03:06.327148   30778 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:03:06.327372   30778 main.go:141] libmachine: (ha-475401-m02) Calling .DriverName
	I0912 22:03:06.327579   30778 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0912 22:03:06.327614   30778 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHHostname
	I0912 22:03:06.330740   30778 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 22:03:06.331190   30778 main.go:141] libmachine: (ha-475401-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:31:3a", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:55 +0000 UTC Type:0 Mac:52:54:00:ad:31:3a Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-475401-m02 Clientid:01:52:54:00:ad:31:3a}
	I0912 22:03:06.331217   30778 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 22:03:06.331506   30778 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHPort
	I0912 22:03:06.331695   30778 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHKeyPath
	I0912 22:03:06.331878   30778 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHUsername
	I0912 22:03:06.332017   30778 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401-m02/id_rsa Username:docker}
	W0912 22:03:09.401958   30778 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.222:22: connect: no route to host
	W0912 22:03:09.402052   30778 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.222:22: connect: no route to host
	E0912 22:03:09.402073   30778 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.222:22: connect: no route to host
	I0912 22:03:09.402084   30778 status.go:257] ha-475401-m02 status: &{Name:ha-475401-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0912 22:03:09.402106   30778 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.222:22: connect: no route to host
	I0912 22:03:09.402116   30778 status.go:255] checking status of ha-475401-m03 ...
	I0912 22:03:09.402470   30778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:03:09.402518   30778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:03:09.417511   30778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42241
	I0912 22:03:09.418030   30778 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:03:09.418544   30778 main.go:141] libmachine: Using API Version  1
	I0912 22:03:09.418565   30778 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:03:09.418877   30778 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:03:09.419072   30778 main.go:141] libmachine: (ha-475401-m03) Calling .GetState
	I0912 22:03:09.420755   30778 status.go:330] ha-475401-m03 host status = "Running" (err=<nil>)
	I0912 22:03:09.420771   30778 host.go:66] Checking if "ha-475401-m03" exists ...
	I0912 22:03:09.421106   30778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:03:09.421143   30778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:03:09.436813   30778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36535
	I0912 22:03:09.437248   30778 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:03:09.437780   30778 main.go:141] libmachine: Using API Version  1
	I0912 22:03:09.437802   30778 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:03:09.438173   30778 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:03:09.438386   30778 main.go:141] libmachine: (ha-475401-m03) Calling .GetIP
	I0912 22:03:09.441962   30778 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 22:03:09.442505   30778 main.go:141] libmachine: (ha-475401-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:aa:da", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:58:06 +0000 UTC Type:0 Mac:52:54:00:21:aa:da Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:ha-475401-m03 Clientid:01:52:54:00:21:aa:da}
	I0912 22:03:09.442531   30778 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined IP address 192.168.39.113 and MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 22:03:09.442777   30778 host.go:66] Checking if "ha-475401-m03" exists ...
	I0912 22:03:09.443102   30778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:03:09.443139   30778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:03:09.458943   30778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35423
	I0912 22:03:09.459460   30778 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:03:09.459980   30778 main.go:141] libmachine: Using API Version  1
	I0912 22:03:09.460005   30778 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:03:09.460324   30778 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:03:09.460493   30778 main.go:141] libmachine: (ha-475401-m03) Calling .DriverName
	I0912 22:03:09.460728   30778 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0912 22:03:09.460751   30778 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHHostname
	I0912 22:03:09.463957   30778 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 22:03:09.464474   30778 main.go:141] libmachine: (ha-475401-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:aa:da", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:58:06 +0000 UTC Type:0 Mac:52:54:00:21:aa:da Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:ha-475401-m03 Clientid:01:52:54:00:21:aa:da}
	I0912 22:03:09.464502   30778 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined IP address 192.168.39.113 and MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 22:03:09.464721   30778 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHPort
	I0912 22:03:09.464908   30778 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHKeyPath
	I0912 22:03:09.465071   30778 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHUsername
	I0912 22:03:09.465237   30778 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401-m03/id_rsa Username:docker}
	I0912 22:03:09.545102   30778 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 22:03:09.559407   30778 kubeconfig.go:125] found "ha-475401" server: "https://192.168.39.254:8443"
	I0912 22:03:09.559445   30778 api_server.go:166] Checking apiserver status ...
	I0912 22:03:09.559489   30778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 22:03:09.575802   30778 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1370/cgroup
	W0912 22:03:09.585379   30778 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1370/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0912 22:03:09.585465   30778 ssh_runner.go:195] Run: ls
	I0912 22:03:09.589878   30778 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0912 22:03:09.595077   30778 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0912 22:03:09.595104   30778 status.go:422] ha-475401-m03 apiserver status = Running (err=<nil>)
	I0912 22:03:09.595112   30778 status.go:257] ha-475401-m03 status: &{Name:ha-475401-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0912 22:03:09.595128   30778 status.go:255] checking status of ha-475401-m04 ...
	I0912 22:03:09.595421   30778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:03:09.595451   30778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:03:09.610908   30778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43563
	I0912 22:03:09.611416   30778 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:03:09.611978   30778 main.go:141] libmachine: Using API Version  1
	I0912 22:03:09.612000   30778 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:03:09.612306   30778 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:03:09.612494   30778 main.go:141] libmachine: (ha-475401-m04) Calling .GetState
	I0912 22:03:09.614276   30778 status.go:330] ha-475401-m04 host status = "Running" (err=<nil>)
	I0912 22:03:09.614292   30778 host.go:66] Checking if "ha-475401-m04" exists ...
	I0912 22:03:09.614579   30778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:03:09.614622   30778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:03:09.629482   30778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43399
	I0912 22:03:09.630055   30778 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:03:09.630617   30778 main.go:141] libmachine: Using API Version  1
	I0912 22:03:09.630642   30778 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:03:09.630951   30778 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:03:09.631150   30778 main.go:141] libmachine: (ha-475401-m04) Calling .GetIP
	I0912 22:03:09.633945   30778 main.go:141] libmachine: (ha-475401-m04) DBG | domain ha-475401-m04 has defined MAC address 52:54:00:cd:b0:d3 in network mk-ha-475401
	I0912 22:03:09.634340   30778 main.go:141] libmachine: (ha-475401-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:b0:d3", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:59:32 +0000 UTC Type:0 Mac:52:54:00:cd:b0:d3 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-475401-m04 Clientid:01:52:54:00:cd:b0:d3}
	I0912 22:03:09.634367   30778 main.go:141] libmachine: (ha-475401-m04) DBG | domain ha-475401-m04 has defined IP address 192.168.39.76 and MAC address 52:54:00:cd:b0:d3 in network mk-ha-475401
	I0912 22:03:09.634509   30778 host.go:66] Checking if "ha-475401-m04" exists ...
	I0912 22:03:09.634828   30778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:03:09.634865   30778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:03:09.649663   30778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36621
	I0912 22:03:09.650126   30778 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:03:09.650586   30778 main.go:141] libmachine: Using API Version  1
	I0912 22:03:09.650606   30778 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:03:09.650902   30778 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:03:09.651184   30778 main.go:141] libmachine: (ha-475401-m04) Calling .DriverName
	I0912 22:03:09.651390   30778 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0912 22:03:09.651412   30778 main.go:141] libmachine: (ha-475401-m04) Calling .GetSSHHostname
	I0912 22:03:09.654239   30778 main.go:141] libmachine: (ha-475401-m04) DBG | domain ha-475401-m04 has defined MAC address 52:54:00:cd:b0:d3 in network mk-ha-475401
	I0912 22:03:09.654705   30778 main.go:141] libmachine: (ha-475401-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:b0:d3", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:59:32 +0000 UTC Type:0 Mac:52:54:00:cd:b0:d3 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-475401-m04 Clientid:01:52:54:00:cd:b0:d3}
	I0912 22:03:09.654735   30778 main.go:141] libmachine: (ha-475401-m04) DBG | domain ha-475401-m04 has defined IP address 192.168.39.76 and MAC address 52:54:00:cd:b0:d3 in network mk-ha-475401
	I0912 22:03:09.654883   30778 main.go:141] libmachine: (ha-475401-m04) Calling .GetSSHPort
	I0912 22:03:09.655122   30778 main.go:141] libmachine: (ha-475401-m04) Calling .GetSSHKeyPath
	I0912 22:03:09.655281   30778 main.go:141] libmachine: (ha-475401-m04) Calling .GetSSHUsername
	I0912 22:03:09.655426   30778 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401-m04/id_rsa Username:docker}
	I0912 22:03:09.737123   30778 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 22:03:09.750968   30778 status.go:257] ha-475401-m04 status: &{Name:ha-475401-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
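The stderr trace above shows the probe order the status command follows for each control-plane node: dial the node over SSH and run df -h /var to confirm storage, check the kubelet with systemctl is-active, then verify the apiserver by requesting /healthz on the cluster VIP (https://192.168.39.254:8443 in this run). Purely as a minimal sketch of that last step, assuming only the VIP seen in this log and not reproducing minikube's own client (the InsecureSkipVerify shortcut stands in for the cluster CA handling the real code does):

	// sketch only: GET /healthz on the HA load-balancer VIP, as the log above does
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// assumption: skip cert verification for the sketch; minikube uses the cluster CA
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://192.168.39.254:8443/healthz")
		if err != nil {
			fmt.Println("apiserver status = Stopped:", err)
			return
		}
		defer resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			fmt.Println("apiserver status = Running (healthz returned 200)")
		} else {
			fmt.Println("apiserver status = Error, healthz returned", resp.StatusCode)
		}
	}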
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-475401 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-475401 status -v=7 --alsologtostderr: exit status 3 (3.720577493s)

                                                
                                                
-- stdout --
	ha-475401
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-475401-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-475401-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-475401-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0912 22:03:12.216718   30878 out.go:345] Setting OutFile to fd 1 ...
	I0912 22:03:12.216809   30878 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 22:03:12.216818   30878 out.go:358] Setting ErrFile to fd 2...
	I0912 22:03:12.216822   30878 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 22:03:12.216994   30878 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-5891/.minikube/bin
	I0912 22:03:12.217133   30878 out.go:352] Setting JSON to false
	I0912 22:03:12.217160   30878 mustload.go:65] Loading cluster: ha-475401
	I0912 22:03:12.217196   30878 notify.go:220] Checking for updates...
	I0912 22:03:12.217546   30878 config.go:182] Loaded profile config "ha-475401": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 22:03:12.217559   30878 status.go:255] checking status of ha-475401 ...
	I0912 22:03:12.217964   30878 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:03:12.218027   30878 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:03:12.236517   30878 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34031
	I0912 22:03:12.236938   30878 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:03:12.237494   30878 main.go:141] libmachine: Using API Version  1
	I0912 22:03:12.237512   30878 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:03:12.237854   30878 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:03:12.238051   30878 main.go:141] libmachine: (ha-475401) Calling .GetState
	I0912 22:03:12.239557   30878 status.go:330] ha-475401 host status = "Running" (err=<nil>)
	I0912 22:03:12.239582   30878 host.go:66] Checking if "ha-475401" exists ...
	I0912 22:03:12.239983   30878 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:03:12.240023   30878 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:03:12.257385   30878 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43555
	I0912 22:03:12.257898   30878 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:03:12.258442   30878 main.go:141] libmachine: Using API Version  1
	I0912 22:03:12.258468   30878 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:03:12.258844   30878 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:03:12.259033   30878 main.go:141] libmachine: (ha-475401) Calling .GetIP
	I0912 22:03:12.261666   30878 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 22:03:12.262056   30878 main.go:141] libmachine: (ha-475401) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:0e:dd", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:09 +0000 UTC Type:0 Mac:52:54:00:b0:0e:dd Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-475401 Clientid:01:52:54:00:b0:0e:dd}
	I0912 22:03:12.262084   30878 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined IP address 192.168.39.203 and MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 22:03:12.262194   30878 host.go:66] Checking if "ha-475401" exists ...
	I0912 22:03:12.262511   30878 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:03:12.262561   30878 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:03:12.279659   30878 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39909
	I0912 22:03:12.280076   30878 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:03:12.280594   30878 main.go:141] libmachine: Using API Version  1
	I0912 22:03:12.280619   30878 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:03:12.280949   30878 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:03:12.281157   30878 main.go:141] libmachine: (ha-475401) Calling .DriverName
	I0912 22:03:12.281354   30878 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0912 22:03:12.281397   30878 main.go:141] libmachine: (ha-475401) Calling .GetSSHHostname
	I0912 22:03:12.284276   30878 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 22:03:12.284700   30878 main.go:141] libmachine: (ha-475401) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:0e:dd", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:09 +0000 UTC Type:0 Mac:52:54:00:b0:0e:dd Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-475401 Clientid:01:52:54:00:b0:0e:dd}
	I0912 22:03:12.284725   30878 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined IP address 192.168.39.203 and MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 22:03:12.284820   30878 main.go:141] libmachine: (ha-475401) Calling .GetSSHPort
	I0912 22:03:12.284981   30878 main.go:141] libmachine: (ha-475401) Calling .GetSSHKeyPath
	I0912 22:03:12.285143   30878 main.go:141] libmachine: (ha-475401) Calling .GetSSHUsername
	I0912 22:03:12.285287   30878 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401/id_rsa Username:docker}
	I0912 22:03:12.369706   30878 ssh_runner.go:195] Run: systemctl --version
	I0912 22:03:12.375572   30878 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 22:03:12.388976   30878 kubeconfig.go:125] found "ha-475401" server: "https://192.168.39.254:8443"
	I0912 22:03:12.389008   30878 api_server.go:166] Checking apiserver status ...
	I0912 22:03:12.389041   30878 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 22:03:12.402039   30878 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1139/cgroup
	W0912 22:03:12.411209   30878 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1139/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0912 22:03:12.411266   30878 ssh_runner.go:195] Run: ls
	I0912 22:03:12.415309   30878 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0912 22:03:12.419270   30878 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0912 22:03:12.419291   30878 status.go:422] ha-475401 apiserver status = Running (err=<nil>)
	I0912 22:03:12.419301   30878 status.go:257] ha-475401 status: &{Name:ha-475401 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0912 22:03:12.419319   30878 status.go:255] checking status of ha-475401-m02 ...
	I0912 22:03:12.419597   30878 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:03:12.419629   30878 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:03:12.434304   30878 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37273
	I0912 22:03:12.434687   30878 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:03:12.435163   30878 main.go:141] libmachine: Using API Version  1
	I0912 22:03:12.435186   30878 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:03:12.435546   30878 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:03:12.435721   30878 main.go:141] libmachine: (ha-475401-m02) Calling .GetState
	I0912 22:03:12.437234   30878 status.go:330] ha-475401-m02 host status = "Running" (err=<nil>)
	I0912 22:03:12.437250   30878 host.go:66] Checking if "ha-475401-m02" exists ...
	I0912 22:03:12.437519   30878 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:03:12.437549   30878 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:03:12.452179   30878 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37805
	I0912 22:03:12.452577   30878 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:03:12.452997   30878 main.go:141] libmachine: Using API Version  1
	I0912 22:03:12.453017   30878 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:03:12.453363   30878 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:03:12.453561   30878 main.go:141] libmachine: (ha-475401-m02) Calling .GetIP
	I0912 22:03:12.456258   30878 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 22:03:12.456691   30878 main.go:141] libmachine: (ha-475401-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:31:3a", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:55 +0000 UTC Type:0 Mac:52:54:00:ad:31:3a Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-475401-m02 Clientid:01:52:54:00:ad:31:3a}
	I0912 22:03:12.456725   30878 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 22:03:12.456869   30878 host.go:66] Checking if "ha-475401-m02" exists ...
	I0912 22:03:12.457163   30878 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:03:12.457220   30878 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:03:12.472841   30878 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44997
	I0912 22:03:12.473241   30878 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:03:12.473716   30878 main.go:141] libmachine: Using API Version  1
	I0912 22:03:12.473733   30878 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:03:12.474011   30878 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:03:12.474252   30878 main.go:141] libmachine: (ha-475401-m02) Calling .DriverName
	I0912 22:03:12.474437   30878 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0912 22:03:12.474459   30878 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHHostname
	I0912 22:03:12.477074   30878 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 22:03:12.477507   30878 main.go:141] libmachine: (ha-475401-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:31:3a", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:55 +0000 UTC Type:0 Mac:52:54:00:ad:31:3a Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-475401-m02 Clientid:01:52:54:00:ad:31:3a}
	I0912 22:03:12.477539   30878 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 22:03:12.477688   30878 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHPort
	I0912 22:03:12.477852   30878 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHKeyPath
	I0912 22:03:12.477961   30878 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHUsername
	I0912 22:03:12.478079   30878 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401-m02/id_rsa Username:docker}
	W0912 22:03:15.545883   30878 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.222:22: connect: no route to host
	W0912 22:03:15.546004   30878 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.222:22: connect: no route to host
	E0912 22:03:15.546023   30878 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.222:22: connect: no route to host
	I0912 22:03:15.546032   30878 status.go:257] ha-475401-m02 status: &{Name:ha-475401-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0912 22:03:15.546049   30878 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.222:22: connect: no route to host
	I0912 22:03:15.546056   30878 status.go:255] checking status of ha-475401-m03 ...
	I0912 22:03:15.546418   30878 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:03:15.546457   30878 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:03:15.561234   30878 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40847
	I0912 22:03:15.561698   30878 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:03:15.562199   30878 main.go:141] libmachine: Using API Version  1
	I0912 22:03:15.562222   30878 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:03:15.562576   30878 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:03:15.562735   30878 main.go:141] libmachine: (ha-475401-m03) Calling .GetState
	I0912 22:03:15.564376   30878 status.go:330] ha-475401-m03 host status = "Running" (err=<nil>)
	I0912 22:03:15.564392   30878 host.go:66] Checking if "ha-475401-m03" exists ...
	I0912 22:03:15.564757   30878 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:03:15.564803   30878 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:03:15.579516   30878 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38667
	I0912 22:03:15.580021   30878 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:03:15.580638   30878 main.go:141] libmachine: Using API Version  1
	I0912 22:03:15.580672   30878 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:03:15.581001   30878 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:03:15.581264   30878 main.go:141] libmachine: (ha-475401-m03) Calling .GetIP
	I0912 22:03:15.584357   30878 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 22:03:15.584829   30878 main.go:141] libmachine: (ha-475401-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:aa:da", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:58:06 +0000 UTC Type:0 Mac:52:54:00:21:aa:da Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:ha-475401-m03 Clientid:01:52:54:00:21:aa:da}
	I0912 22:03:15.584860   30878 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined IP address 192.168.39.113 and MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 22:03:15.585042   30878 host.go:66] Checking if "ha-475401-m03" exists ...
	I0912 22:03:15.585342   30878 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:03:15.585378   30878 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:03:15.600674   30878 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43597
	I0912 22:03:15.601106   30878 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:03:15.601725   30878 main.go:141] libmachine: Using API Version  1
	I0912 22:03:15.601748   30878 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:03:15.602018   30878 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:03:15.602208   30878 main.go:141] libmachine: (ha-475401-m03) Calling .DriverName
	I0912 22:03:15.602420   30878 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0912 22:03:15.602443   30878 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHHostname
	I0912 22:03:15.605439   30878 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 22:03:15.605872   30878 main.go:141] libmachine: (ha-475401-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:aa:da", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:58:06 +0000 UTC Type:0 Mac:52:54:00:21:aa:da Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:ha-475401-m03 Clientid:01:52:54:00:21:aa:da}
	I0912 22:03:15.605901   30878 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined IP address 192.168.39.113 and MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 22:03:15.606070   30878 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHPort
	I0912 22:03:15.606242   30878 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHKeyPath
	I0912 22:03:15.606411   30878 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHUsername
	I0912 22:03:15.606540   30878 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401-m03/id_rsa Username:docker}
	I0912 22:03:15.686227   30878 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 22:03:15.702688   30878 kubeconfig.go:125] found "ha-475401" server: "https://192.168.39.254:8443"
	I0912 22:03:15.702723   30878 api_server.go:166] Checking apiserver status ...
	I0912 22:03:15.702770   30878 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 22:03:15.716953   30878 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1370/cgroup
	W0912 22:03:15.728190   30878 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1370/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0912 22:03:15.728280   30878 ssh_runner.go:195] Run: ls
	I0912 22:03:15.733349   30878 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0912 22:03:15.737930   30878 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0912 22:03:15.737958   30878 status.go:422] ha-475401-m03 apiserver status = Running (err=<nil>)
	I0912 22:03:15.737969   30878 status.go:257] ha-475401-m03 status: &{Name:ha-475401-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0912 22:03:15.737987   30878 status.go:255] checking status of ha-475401-m04 ...
	I0912 22:03:15.738280   30878 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:03:15.738330   30878 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:03:15.753911   30878 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45535
	I0912 22:03:15.754353   30878 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:03:15.754795   30878 main.go:141] libmachine: Using API Version  1
	I0912 22:03:15.754815   30878 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:03:15.755150   30878 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:03:15.755391   30878 main.go:141] libmachine: (ha-475401-m04) Calling .GetState
	I0912 22:03:15.756896   30878 status.go:330] ha-475401-m04 host status = "Running" (err=<nil>)
	I0912 22:03:15.756911   30878 host.go:66] Checking if "ha-475401-m04" exists ...
	I0912 22:03:15.757235   30878 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:03:15.757271   30878 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:03:15.772738   30878 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35643
	I0912 22:03:15.773212   30878 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:03:15.773741   30878 main.go:141] libmachine: Using API Version  1
	I0912 22:03:15.773771   30878 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:03:15.774107   30878 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:03:15.774281   30878 main.go:141] libmachine: (ha-475401-m04) Calling .GetIP
	I0912 22:03:15.777369   30878 main.go:141] libmachine: (ha-475401-m04) DBG | domain ha-475401-m04 has defined MAC address 52:54:00:cd:b0:d3 in network mk-ha-475401
	I0912 22:03:15.777934   30878 main.go:141] libmachine: (ha-475401-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:b0:d3", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:59:32 +0000 UTC Type:0 Mac:52:54:00:cd:b0:d3 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-475401-m04 Clientid:01:52:54:00:cd:b0:d3}
	I0912 22:03:15.777960   30878 main.go:141] libmachine: (ha-475401-m04) DBG | domain ha-475401-m04 has defined IP address 192.168.39.76 and MAC address 52:54:00:cd:b0:d3 in network mk-ha-475401
	I0912 22:03:15.778148   30878 host.go:66] Checking if "ha-475401-m04" exists ...
	I0912 22:03:15.778495   30878 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:03:15.778532   30878 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:03:15.793583   30878 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42697
	I0912 22:03:15.794059   30878 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:03:15.794586   30878 main.go:141] libmachine: Using API Version  1
	I0912 22:03:15.794607   30878 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:03:15.794963   30878 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:03:15.795135   30878 main.go:141] libmachine: (ha-475401-m04) Calling .DriverName
	I0912 22:03:15.795317   30878 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0912 22:03:15.795340   30878 main.go:141] libmachine: (ha-475401-m04) Calling .GetSSHHostname
	I0912 22:03:15.798007   30878 main.go:141] libmachine: (ha-475401-m04) DBG | domain ha-475401-m04 has defined MAC address 52:54:00:cd:b0:d3 in network mk-ha-475401
	I0912 22:03:15.798405   30878 main.go:141] libmachine: (ha-475401-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:b0:d3", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:59:32 +0000 UTC Type:0 Mac:52:54:00:cd:b0:d3 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-475401-m04 Clientid:01:52:54:00:cd:b0:d3}
	I0912 22:03:15.798437   30878 main.go:141] libmachine: (ha-475401-m04) DBG | domain ha-475401-m04 has defined IP address 192.168.39.76 and MAC address 52:54:00:cd:b0:d3 in network mk-ha-475401
	I0912 22:03:15.798572   30878 main.go:141] libmachine: (ha-475401-m04) Calling .GetSSHPort
	I0912 22:03:15.798862   30878 main.go:141] libmachine: (ha-475401-m04) Calling .GetSSHKeyPath
	I0912 22:03:15.799061   30878 main.go:141] libmachine: (ha-475401-m04) Calling .GetSSHUsername
	I0912 22:03:15.799296   30878 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401-m04/id_rsa Username:docker}
	I0912 22:03:15.881338   30878 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 22:03:15.895136   30878 status.go:257] ha-475401-m04 status: &{Name:ha-475401-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
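Both runs above report ha-475401-m02 as host: Error because the SSH dial to 192.168.39.222:22 fails with "no route to host"; kubelet and apiserver are then marked Nonexistent without ever being queried. A small reachability sketch under the same assumption (the IP and port are taken from this log; this is not the harness code):

	// sketch only: check whether the node's SSH port is reachable at all
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		addr := "192.168.39.222:22" // ha-475401-m02 in this run
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err != nil {
			// a "no route to host" here corresponds to Host:Error in the status output
			fmt.Printf("%s unreachable: %v\n", addr, err)
			return
		}
		conn.Close()
		fmt.Printf("%s reachable over TCP/22\n", addr)
	}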
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-475401 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-475401 status -v=7 --alsologtostderr: exit status 3 (3.731469139s)

                                                
                                                
-- stdout --
	ha-475401
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-475401-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-475401-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-475401-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0912 22:03:22.253222   30994 out.go:345] Setting OutFile to fd 1 ...
	I0912 22:03:22.253377   30994 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 22:03:22.253387   30994 out.go:358] Setting ErrFile to fd 2...
	I0912 22:03:22.253394   30994 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 22:03:22.253571   30994 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-5891/.minikube/bin
	I0912 22:03:22.253778   30994 out.go:352] Setting JSON to false
	I0912 22:03:22.253809   30994 mustload.go:65] Loading cluster: ha-475401
	I0912 22:03:22.253917   30994 notify.go:220] Checking for updates...
	I0912 22:03:22.254207   30994 config.go:182] Loaded profile config "ha-475401": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 22:03:22.254225   30994 status.go:255] checking status of ha-475401 ...
	I0912 22:03:22.254599   30994 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:03:22.254669   30994 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:03:22.272750   30994 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46719
	I0912 22:03:22.273152   30994 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:03:22.273909   30994 main.go:141] libmachine: Using API Version  1
	I0912 22:03:22.273949   30994 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:03:22.274289   30994 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:03:22.274488   30994 main.go:141] libmachine: (ha-475401) Calling .GetState
	I0912 22:03:22.275974   30994 status.go:330] ha-475401 host status = "Running" (err=<nil>)
	I0912 22:03:22.275995   30994 host.go:66] Checking if "ha-475401" exists ...
	I0912 22:03:22.276316   30994 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:03:22.276358   30994 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:03:22.291477   30994 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41463
	I0912 22:03:22.291869   30994 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:03:22.292314   30994 main.go:141] libmachine: Using API Version  1
	I0912 22:03:22.292329   30994 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:03:22.292671   30994 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:03:22.292863   30994 main.go:141] libmachine: (ha-475401) Calling .GetIP
	I0912 22:03:22.295681   30994 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 22:03:22.296061   30994 main.go:141] libmachine: (ha-475401) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:0e:dd", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:09 +0000 UTC Type:0 Mac:52:54:00:b0:0e:dd Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-475401 Clientid:01:52:54:00:b0:0e:dd}
	I0912 22:03:22.296093   30994 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined IP address 192.168.39.203 and MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 22:03:22.296253   30994 host.go:66] Checking if "ha-475401" exists ...
	I0912 22:03:22.296545   30994 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:03:22.296594   30994 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:03:22.312993   30994 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41669
	I0912 22:03:22.313386   30994 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:03:22.313959   30994 main.go:141] libmachine: Using API Version  1
	I0912 22:03:22.313985   30994 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:03:22.314440   30994 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:03:22.314653   30994 main.go:141] libmachine: (ha-475401) Calling .DriverName
	I0912 22:03:22.314873   30994 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0912 22:03:22.314901   30994 main.go:141] libmachine: (ha-475401) Calling .GetSSHHostname
	I0912 22:03:22.318224   30994 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 22:03:22.318760   30994 main.go:141] libmachine: (ha-475401) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:0e:dd", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:09 +0000 UTC Type:0 Mac:52:54:00:b0:0e:dd Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-475401 Clientid:01:52:54:00:b0:0e:dd}
	I0912 22:03:22.318787   30994 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined IP address 192.168.39.203 and MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 22:03:22.318885   30994 main.go:141] libmachine: (ha-475401) Calling .GetSSHPort
	I0912 22:03:22.319088   30994 main.go:141] libmachine: (ha-475401) Calling .GetSSHKeyPath
	I0912 22:03:22.319241   30994 main.go:141] libmachine: (ha-475401) Calling .GetSSHUsername
	I0912 22:03:22.319394   30994 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401/id_rsa Username:docker}
	I0912 22:03:22.406084   30994 ssh_runner.go:195] Run: systemctl --version
	I0912 22:03:22.412202   30994 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 22:03:22.426445   30994 kubeconfig.go:125] found "ha-475401" server: "https://192.168.39.254:8443"
	I0912 22:03:22.426477   30994 api_server.go:166] Checking apiserver status ...
	I0912 22:03:22.426506   30994 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 22:03:22.441005   30994 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1139/cgroup
	W0912 22:03:22.450658   30994 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1139/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0912 22:03:22.450738   30994 ssh_runner.go:195] Run: ls
	I0912 22:03:22.455250   30994 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0912 22:03:22.459315   30994 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0912 22:03:22.459343   30994 status.go:422] ha-475401 apiserver status = Running (err=<nil>)
	I0912 22:03:22.459356   30994 status.go:257] ha-475401 status: &{Name:ha-475401 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0912 22:03:22.459375   30994 status.go:255] checking status of ha-475401-m02 ...
	I0912 22:03:22.459685   30994 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:03:22.459721   30994 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:03:22.474814   30994 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38199
	I0912 22:03:22.475227   30994 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:03:22.475680   30994 main.go:141] libmachine: Using API Version  1
	I0912 22:03:22.475705   30994 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:03:22.476042   30994 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:03:22.476207   30994 main.go:141] libmachine: (ha-475401-m02) Calling .GetState
	I0912 22:03:22.477839   30994 status.go:330] ha-475401-m02 host status = "Running" (err=<nil>)
	I0912 22:03:22.477858   30994 host.go:66] Checking if "ha-475401-m02" exists ...
	I0912 22:03:22.478185   30994 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:03:22.478221   30994 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:03:22.492841   30994 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43297
	I0912 22:03:22.493195   30994 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:03:22.493649   30994 main.go:141] libmachine: Using API Version  1
	I0912 22:03:22.493672   30994 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:03:22.493997   30994 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:03:22.494180   30994 main.go:141] libmachine: (ha-475401-m02) Calling .GetIP
	I0912 22:03:22.496837   30994 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 22:03:22.497290   30994 main.go:141] libmachine: (ha-475401-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:31:3a", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:55 +0000 UTC Type:0 Mac:52:54:00:ad:31:3a Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-475401-m02 Clientid:01:52:54:00:ad:31:3a}
	I0912 22:03:22.497315   30994 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 22:03:22.497503   30994 host.go:66] Checking if "ha-475401-m02" exists ...
	I0912 22:03:22.497865   30994 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:03:22.497908   30994 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:03:22.515485   30994 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41961
	I0912 22:03:22.515869   30994 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:03:22.516291   30994 main.go:141] libmachine: Using API Version  1
	I0912 22:03:22.516315   30994 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:03:22.516567   30994 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:03:22.516758   30994 main.go:141] libmachine: (ha-475401-m02) Calling .DriverName
	I0912 22:03:22.516931   30994 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0912 22:03:22.516951   30994 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHHostname
	I0912 22:03:22.519716   30994 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 22:03:22.520115   30994 main.go:141] libmachine: (ha-475401-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:31:3a", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:55 +0000 UTC Type:0 Mac:52:54:00:ad:31:3a Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-475401-m02 Clientid:01:52:54:00:ad:31:3a}
	I0912 22:03:22.520150   30994 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 22:03:22.520307   30994 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHPort
	I0912 22:03:22.520471   30994 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHKeyPath
	I0912 22:03:22.520622   30994 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHUsername
	I0912 22:03:22.520754   30994 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401-m02/id_rsa Username:docker}
	W0912 22:03:25.597885   30994 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.222:22: connect: no route to host
	W0912 22:03:25.597997   30994 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.222:22: connect: no route to host
	E0912 22:03:25.598019   30994 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.222:22: connect: no route to host
	I0912 22:03:25.598029   30994 status.go:257] ha-475401-m02 status: &{Name:ha-475401-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0912 22:03:25.598053   30994 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.222:22: connect: no route to host
	I0912 22:03:25.598066   30994 status.go:255] checking status of ha-475401-m03 ...
	I0912 22:03:25.598379   30994 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:03:25.598427   30994 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:03:25.613901   30994 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45701
	I0912 22:03:25.614318   30994 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:03:25.614837   30994 main.go:141] libmachine: Using API Version  1
	I0912 22:03:25.614860   30994 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:03:25.615248   30994 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:03:25.615477   30994 main.go:141] libmachine: (ha-475401-m03) Calling .GetState
	I0912 22:03:25.616871   30994 status.go:330] ha-475401-m03 host status = "Running" (err=<nil>)
	I0912 22:03:25.616884   30994 host.go:66] Checking if "ha-475401-m03" exists ...
	I0912 22:03:25.617276   30994 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:03:25.617322   30994 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:03:25.633486   30994 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34549
	I0912 22:03:25.633886   30994 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:03:25.634294   30994 main.go:141] libmachine: Using API Version  1
	I0912 22:03:25.634316   30994 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:03:25.634661   30994 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:03:25.634832   30994 main.go:141] libmachine: (ha-475401-m03) Calling .GetIP
	I0912 22:03:25.638005   30994 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 22:03:25.638423   30994 main.go:141] libmachine: (ha-475401-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:aa:da", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:58:06 +0000 UTC Type:0 Mac:52:54:00:21:aa:da Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:ha-475401-m03 Clientid:01:52:54:00:21:aa:da}
	I0912 22:03:25.638442   30994 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined IP address 192.168.39.113 and MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 22:03:25.638651   30994 host.go:66] Checking if "ha-475401-m03" exists ...
	I0912 22:03:25.638932   30994 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:03:25.638967   30994 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:03:25.653819   30994 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35663
	I0912 22:03:25.654205   30994 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:03:25.654757   30994 main.go:141] libmachine: Using API Version  1
	I0912 22:03:25.654781   30994 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:03:25.655047   30994 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:03:25.655240   30994 main.go:141] libmachine: (ha-475401-m03) Calling .DriverName
	I0912 22:03:25.655409   30994 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0912 22:03:25.655431   30994 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHHostname
	I0912 22:03:25.658289   30994 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 22:03:25.658822   30994 main.go:141] libmachine: (ha-475401-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:aa:da", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:58:06 +0000 UTC Type:0 Mac:52:54:00:21:aa:da Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:ha-475401-m03 Clientid:01:52:54:00:21:aa:da}
	I0912 22:03:25.658851   30994 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined IP address 192.168.39.113 and MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 22:03:25.659043   30994 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHPort
	I0912 22:03:25.659220   30994 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHKeyPath
	I0912 22:03:25.659418   30994 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHUsername
	I0912 22:03:25.659561   30994 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401-m03/id_rsa Username:docker}
	I0912 22:03:25.741992   30994 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 22:03:25.756763   30994 kubeconfig.go:125] found "ha-475401" server: "https://192.168.39.254:8443"
	I0912 22:03:25.756790   30994 api_server.go:166] Checking apiserver status ...
	I0912 22:03:25.756820   30994 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 22:03:25.770030   30994 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1370/cgroup
	W0912 22:03:25.779490   30994 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1370/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0912 22:03:25.779572   30994 ssh_runner.go:195] Run: ls
	I0912 22:03:25.783731   30994 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0912 22:03:25.788081   30994 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0912 22:03:25.788103   30994 status.go:422] ha-475401-m03 apiserver status = Running (err=<nil>)
	I0912 22:03:25.788111   30994 status.go:257] ha-475401-m03 status: &{Name:ha-475401-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0912 22:03:25.788132   30994 status.go:255] checking status of ha-475401-m04 ...
	I0912 22:03:25.788429   30994 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:03:25.788462   30994 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:03:25.803127   30994 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40621
	I0912 22:03:25.803505   30994 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:03:25.803974   30994 main.go:141] libmachine: Using API Version  1
	I0912 22:03:25.803991   30994 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:03:25.804303   30994 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:03:25.804502   30994 main.go:141] libmachine: (ha-475401-m04) Calling .GetState
	I0912 22:03:25.806137   30994 status.go:330] ha-475401-m04 host status = "Running" (err=<nil>)
	I0912 22:03:25.806155   30994 host.go:66] Checking if "ha-475401-m04" exists ...
	I0912 22:03:25.806434   30994 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:03:25.806483   30994 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:03:25.821508   30994 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34893
	I0912 22:03:25.821950   30994 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:03:25.822431   30994 main.go:141] libmachine: Using API Version  1
	I0912 22:03:25.822455   30994 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:03:25.822743   30994 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:03:25.822936   30994 main.go:141] libmachine: (ha-475401-m04) Calling .GetIP
	I0912 22:03:25.825566   30994 main.go:141] libmachine: (ha-475401-m04) DBG | domain ha-475401-m04 has defined MAC address 52:54:00:cd:b0:d3 in network mk-ha-475401
	I0912 22:03:25.825968   30994 main.go:141] libmachine: (ha-475401-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:b0:d3", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:59:32 +0000 UTC Type:0 Mac:52:54:00:cd:b0:d3 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-475401-m04 Clientid:01:52:54:00:cd:b0:d3}
	I0912 22:03:25.826002   30994 main.go:141] libmachine: (ha-475401-m04) DBG | domain ha-475401-m04 has defined IP address 192.168.39.76 and MAC address 52:54:00:cd:b0:d3 in network mk-ha-475401
	I0912 22:03:25.826126   30994 host.go:66] Checking if "ha-475401-m04" exists ...
	I0912 22:03:25.826422   30994 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:03:25.826455   30994 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:03:25.840963   30994 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35105
	I0912 22:03:25.841398   30994 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:03:25.841911   30994 main.go:141] libmachine: Using API Version  1
	I0912 22:03:25.841957   30994 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:03:25.842330   30994 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:03:25.842504   30994 main.go:141] libmachine: (ha-475401-m04) Calling .DriverName
	I0912 22:03:25.842697   30994 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0912 22:03:25.842721   30994 main.go:141] libmachine: (ha-475401-m04) Calling .GetSSHHostname
	I0912 22:03:25.845234   30994 main.go:141] libmachine: (ha-475401-m04) DBG | domain ha-475401-m04 has defined MAC address 52:54:00:cd:b0:d3 in network mk-ha-475401
	I0912 22:03:25.845763   30994 main.go:141] libmachine: (ha-475401-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:b0:d3", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:59:32 +0000 UTC Type:0 Mac:52:54:00:cd:b0:d3 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-475401-m04 Clientid:01:52:54:00:cd:b0:d3}
	I0912 22:03:25.845790   30994 main.go:141] libmachine: (ha-475401-m04) DBG | domain ha-475401-m04 has defined IP address 192.168.39.76 and MAC address 52:54:00:cd:b0:d3 in network mk-ha-475401
	I0912 22:03:25.845936   30994 main.go:141] libmachine: (ha-475401-m04) Calling .GetSSHPort
	I0912 22:03:25.846092   30994 main.go:141] libmachine: (ha-475401-m04) Calling .GetSSHKeyPath
	I0912 22:03:25.846242   30994 main.go:141] libmachine: (ha-475401-m04) Calling .GetSSHUsername
	I0912 22:03:25.846363   30994 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401-m04/id_rsa Username:docker}
	I0912 22:03:25.928590   30994 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 22:03:25.942314   30994 status.go:257] ha-475401-m04 status: &{Name:ha-475401-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
E0912 22:03:30.268274   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-475401 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-475401 status -v=7 --alsologtostderr: exit status 7 (629.586252ms)

                                                
                                                
-- stdout --
	ha-475401
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-475401-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-475401-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-475401-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0912 22:03:35.192198   31148 out.go:345] Setting OutFile to fd 1 ...
	I0912 22:03:35.192468   31148 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 22:03:35.192478   31148 out.go:358] Setting ErrFile to fd 2...
	I0912 22:03:35.192482   31148 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 22:03:35.192686   31148 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-5891/.minikube/bin
	I0912 22:03:35.192843   31148 out.go:352] Setting JSON to false
	I0912 22:03:35.192869   31148 mustload.go:65] Loading cluster: ha-475401
	I0912 22:03:35.192913   31148 notify.go:220] Checking for updates...
	I0912 22:03:35.193407   31148 config.go:182] Loaded profile config "ha-475401": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 22:03:35.193427   31148 status.go:255] checking status of ha-475401 ...
	I0912 22:03:35.193888   31148 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:03:35.193948   31148 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:03:35.212233   31148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38711
	I0912 22:03:35.212637   31148 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:03:35.213131   31148 main.go:141] libmachine: Using API Version  1
	I0912 22:03:35.213153   31148 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:03:35.213676   31148 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:03:35.213896   31148 main.go:141] libmachine: (ha-475401) Calling .GetState
	I0912 22:03:35.216018   31148 status.go:330] ha-475401 host status = "Running" (err=<nil>)
	I0912 22:03:35.216040   31148 host.go:66] Checking if "ha-475401" exists ...
	I0912 22:03:35.216361   31148 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:03:35.216394   31148 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:03:35.231434   31148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38405
	I0912 22:03:35.231969   31148 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:03:35.232446   31148 main.go:141] libmachine: Using API Version  1
	I0912 22:03:35.232469   31148 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:03:35.232778   31148 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:03:35.232948   31148 main.go:141] libmachine: (ha-475401) Calling .GetIP
	I0912 22:03:35.235937   31148 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 22:03:35.236434   31148 main.go:141] libmachine: (ha-475401) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:0e:dd", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:09 +0000 UTC Type:0 Mac:52:54:00:b0:0e:dd Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-475401 Clientid:01:52:54:00:b0:0e:dd}
	I0912 22:03:35.236470   31148 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined IP address 192.168.39.203 and MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 22:03:35.236582   31148 host.go:66] Checking if "ha-475401" exists ...
	I0912 22:03:35.236863   31148 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:03:35.236899   31148 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:03:35.251462   31148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36373
	I0912 22:03:35.252012   31148 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:03:35.252613   31148 main.go:141] libmachine: Using API Version  1
	I0912 22:03:35.252636   31148 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:03:35.253010   31148 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:03:35.253191   31148 main.go:141] libmachine: (ha-475401) Calling .DriverName
	I0912 22:03:35.253399   31148 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0912 22:03:35.253431   31148 main.go:141] libmachine: (ha-475401) Calling .GetSSHHostname
	I0912 22:03:35.256206   31148 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 22:03:35.256622   31148 main.go:141] libmachine: (ha-475401) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:0e:dd", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:09 +0000 UTC Type:0 Mac:52:54:00:b0:0e:dd Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-475401 Clientid:01:52:54:00:b0:0e:dd}
	I0912 22:03:35.256650   31148 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined IP address 192.168.39.203 and MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 22:03:35.256825   31148 main.go:141] libmachine: (ha-475401) Calling .GetSSHPort
	I0912 22:03:35.257058   31148 main.go:141] libmachine: (ha-475401) Calling .GetSSHKeyPath
	I0912 22:03:35.257213   31148 main.go:141] libmachine: (ha-475401) Calling .GetSSHUsername
	I0912 22:03:35.257350   31148 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401/id_rsa Username:docker}
	I0912 22:03:35.347500   31148 ssh_runner.go:195] Run: systemctl --version
	I0912 22:03:35.353749   31148 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 22:03:35.369441   31148 kubeconfig.go:125] found "ha-475401" server: "https://192.168.39.254:8443"
	I0912 22:03:35.369477   31148 api_server.go:166] Checking apiserver status ...
	I0912 22:03:35.369514   31148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 22:03:35.384604   31148 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1139/cgroup
	W0912 22:03:35.396651   31148 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1139/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0912 22:03:35.396715   31148 ssh_runner.go:195] Run: ls
	I0912 22:03:35.402056   31148 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0912 22:03:35.406736   31148 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0912 22:03:35.406760   31148 status.go:422] ha-475401 apiserver status = Running (err=<nil>)
	I0912 22:03:35.406770   31148 status.go:257] ha-475401 status: &{Name:ha-475401 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0912 22:03:35.406791   31148 status.go:255] checking status of ha-475401-m02 ...
	I0912 22:03:35.407135   31148 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:03:35.407172   31148 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:03:35.421975   31148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42311
	I0912 22:03:35.422428   31148 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:03:35.422931   31148 main.go:141] libmachine: Using API Version  1
	I0912 22:03:35.422951   31148 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:03:35.423324   31148 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:03:35.423544   31148 main.go:141] libmachine: (ha-475401-m02) Calling .GetState
	I0912 22:03:35.425216   31148 status.go:330] ha-475401-m02 host status = "Stopped" (err=<nil>)
	I0912 22:03:35.425229   31148 status.go:343] host is not running, skipping remaining checks
	I0912 22:03:35.425235   31148 status.go:257] ha-475401-m02 status: &{Name:ha-475401-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0912 22:03:35.425251   31148 status.go:255] checking status of ha-475401-m03 ...
	I0912 22:03:35.425540   31148 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:03:35.425573   31148 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:03:35.440304   31148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43229
	I0912 22:03:35.440640   31148 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:03:35.441123   31148 main.go:141] libmachine: Using API Version  1
	I0912 22:03:35.441149   31148 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:03:35.441473   31148 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:03:35.441740   31148 main.go:141] libmachine: (ha-475401-m03) Calling .GetState
	I0912 22:03:35.443522   31148 status.go:330] ha-475401-m03 host status = "Running" (err=<nil>)
	I0912 22:03:35.443540   31148 host.go:66] Checking if "ha-475401-m03" exists ...
	I0912 22:03:35.443926   31148 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:03:35.443975   31148 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:03:35.461027   31148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37569
	I0912 22:03:35.461437   31148 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:03:35.461975   31148 main.go:141] libmachine: Using API Version  1
	I0912 22:03:35.462002   31148 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:03:35.462292   31148 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:03:35.462475   31148 main.go:141] libmachine: (ha-475401-m03) Calling .GetIP
	I0912 22:03:35.465333   31148 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 22:03:35.465852   31148 main.go:141] libmachine: (ha-475401-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:aa:da", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:58:06 +0000 UTC Type:0 Mac:52:54:00:21:aa:da Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:ha-475401-m03 Clientid:01:52:54:00:21:aa:da}
	I0912 22:03:35.465876   31148 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined IP address 192.168.39.113 and MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 22:03:35.465962   31148 host.go:66] Checking if "ha-475401-m03" exists ...
	I0912 22:03:35.466284   31148 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:03:35.466325   31148 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:03:35.481458   31148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33331
	I0912 22:03:35.481850   31148 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:03:35.482302   31148 main.go:141] libmachine: Using API Version  1
	I0912 22:03:35.482324   31148 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:03:35.482639   31148 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:03:35.482810   31148 main.go:141] libmachine: (ha-475401-m03) Calling .DriverName
	I0912 22:03:35.483024   31148 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0912 22:03:35.483042   31148 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHHostname
	I0912 22:03:35.485654   31148 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 22:03:35.486025   31148 main.go:141] libmachine: (ha-475401-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:aa:da", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:58:06 +0000 UTC Type:0 Mac:52:54:00:21:aa:da Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:ha-475401-m03 Clientid:01:52:54:00:21:aa:da}
	I0912 22:03:35.486050   31148 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined IP address 192.168.39.113 and MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 22:03:35.486198   31148 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHPort
	I0912 22:03:35.486353   31148 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHKeyPath
	I0912 22:03:35.486509   31148 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHUsername
	I0912 22:03:35.486660   31148 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401-m03/id_rsa Username:docker}
	I0912 22:03:35.565795   31148 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 22:03:35.580197   31148 kubeconfig.go:125] found "ha-475401" server: "https://192.168.39.254:8443"
	I0912 22:03:35.580227   31148 api_server.go:166] Checking apiserver status ...
	I0912 22:03:35.580270   31148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 22:03:35.593946   31148 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1370/cgroup
	W0912 22:03:35.603767   31148 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1370/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0912 22:03:35.603839   31148 ssh_runner.go:195] Run: ls
	I0912 22:03:35.608381   31148 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0912 22:03:35.616005   31148 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0912 22:03:35.616034   31148 status.go:422] ha-475401-m03 apiserver status = Running (err=<nil>)
	I0912 22:03:35.616045   31148 status.go:257] ha-475401-m03 status: &{Name:ha-475401-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0912 22:03:35.616065   31148 status.go:255] checking status of ha-475401-m04 ...
	I0912 22:03:35.616415   31148 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:03:35.616451   31148 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:03:35.631989   31148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44807
	I0912 22:03:35.632419   31148 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:03:35.632934   31148 main.go:141] libmachine: Using API Version  1
	I0912 22:03:35.632965   31148 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:03:35.633363   31148 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:03:35.633550   31148 main.go:141] libmachine: (ha-475401-m04) Calling .GetState
	I0912 22:03:35.635474   31148 status.go:330] ha-475401-m04 host status = "Running" (err=<nil>)
	I0912 22:03:35.635490   31148 host.go:66] Checking if "ha-475401-m04" exists ...
	I0912 22:03:35.635892   31148 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:03:35.635943   31148 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:03:35.652123   31148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45377
	I0912 22:03:35.652566   31148 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:03:35.653069   31148 main.go:141] libmachine: Using API Version  1
	I0912 22:03:35.653091   31148 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:03:35.653449   31148 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:03:35.653690   31148 main.go:141] libmachine: (ha-475401-m04) Calling .GetIP
	I0912 22:03:35.656739   31148 main.go:141] libmachine: (ha-475401-m04) DBG | domain ha-475401-m04 has defined MAC address 52:54:00:cd:b0:d3 in network mk-ha-475401
	I0912 22:03:35.657208   31148 main.go:141] libmachine: (ha-475401-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:b0:d3", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:59:32 +0000 UTC Type:0 Mac:52:54:00:cd:b0:d3 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-475401-m04 Clientid:01:52:54:00:cd:b0:d3}
	I0912 22:03:35.657237   31148 main.go:141] libmachine: (ha-475401-m04) DBG | domain ha-475401-m04 has defined IP address 192.168.39.76 and MAC address 52:54:00:cd:b0:d3 in network mk-ha-475401
	I0912 22:03:35.657359   31148 host.go:66] Checking if "ha-475401-m04" exists ...
	I0912 22:03:35.657708   31148 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:03:35.657744   31148 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:03:35.673194   31148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36839
	I0912 22:03:35.673693   31148 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:03:35.674218   31148 main.go:141] libmachine: Using API Version  1
	I0912 22:03:35.674238   31148 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:03:35.674535   31148 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:03:35.674716   31148 main.go:141] libmachine: (ha-475401-m04) Calling .DriverName
	I0912 22:03:35.674889   31148 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0912 22:03:35.674907   31148 main.go:141] libmachine: (ha-475401-m04) Calling .GetSSHHostname
	I0912 22:03:35.677724   31148 main.go:141] libmachine: (ha-475401-m04) DBG | domain ha-475401-m04 has defined MAC address 52:54:00:cd:b0:d3 in network mk-ha-475401
	I0912 22:03:35.678118   31148 main.go:141] libmachine: (ha-475401-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:b0:d3", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:59:32 +0000 UTC Type:0 Mac:52:54:00:cd:b0:d3 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-475401-m04 Clientid:01:52:54:00:cd:b0:d3}
	I0912 22:03:35.678149   31148 main.go:141] libmachine: (ha-475401-m04) DBG | domain ha-475401-m04 has defined IP address 192.168.39.76 and MAC address 52:54:00:cd:b0:d3 in network mk-ha-475401
	I0912 22:03:35.678287   31148 main.go:141] libmachine: (ha-475401-m04) Calling .GetSSHPort
	I0912 22:03:35.678488   31148 main.go:141] libmachine: (ha-475401-m04) Calling .GetSSHKeyPath
	I0912 22:03:35.678646   31148 main.go:141] libmachine: (ha-475401-m04) Calling .GetSSHUsername
	I0912 22:03:35.678809   31148 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401-m04/id_rsa Username:docker}
	I0912 22:03:35.765106   31148 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 22:03:35.779966   31148 status.go:257] ha-475401-m04 status: &{Name:ha-475401-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-475401 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-475401 status -v=7 --alsologtostderr: exit status 7 (615.495723ms)

                                                
                                                
-- stdout --
	ha-475401
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-475401-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-475401-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-475401-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0912 22:03:49.204417   31253 out.go:345] Setting OutFile to fd 1 ...
	I0912 22:03:49.205004   31253 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 22:03:49.205021   31253 out.go:358] Setting ErrFile to fd 2...
	I0912 22:03:49.205029   31253 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 22:03:49.205460   31253 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-5891/.minikube/bin
	I0912 22:03:49.205976   31253 out.go:352] Setting JSON to false
	I0912 22:03:49.206007   31253 mustload.go:65] Loading cluster: ha-475401
	I0912 22:03:49.206182   31253 notify.go:220] Checking for updates...
	I0912 22:03:49.206447   31253 config.go:182] Loaded profile config "ha-475401": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 22:03:49.206463   31253 status.go:255] checking status of ha-475401 ...
	I0912 22:03:49.206904   31253 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:03:49.206972   31253 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:03:49.222060   31253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38669
	I0912 22:03:49.222547   31253 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:03:49.223247   31253 main.go:141] libmachine: Using API Version  1
	I0912 22:03:49.223273   31253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:03:49.223633   31253 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:03:49.223842   31253 main.go:141] libmachine: (ha-475401) Calling .GetState
	I0912 22:03:49.225680   31253 status.go:330] ha-475401 host status = "Running" (err=<nil>)
	I0912 22:03:49.225701   31253 host.go:66] Checking if "ha-475401" exists ...
	I0912 22:03:49.226314   31253 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:03:49.226357   31253 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:03:49.241017   31253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39901
	I0912 22:03:49.241420   31253 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:03:49.241861   31253 main.go:141] libmachine: Using API Version  1
	I0912 22:03:49.241882   31253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:03:49.242165   31253 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:03:49.242371   31253 main.go:141] libmachine: (ha-475401) Calling .GetIP
	I0912 22:03:49.245167   31253 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 22:03:49.245579   31253 main.go:141] libmachine: (ha-475401) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:0e:dd", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:09 +0000 UTC Type:0 Mac:52:54:00:b0:0e:dd Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-475401 Clientid:01:52:54:00:b0:0e:dd}
	I0912 22:03:49.245604   31253 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined IP address 192.168.39.203 and MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 22:03:49.245761   31253 host.go:66] Checking if "ha-475401" exists ...
	I0912 22:03:49.246098   31253 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:03:49.246145   31253 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:03:49.260237   31253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40605
	I0912 22:03:49.260659   31253 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:03:49.261071   31253 main.go:141] libmachine: Using API Version  1
	I0912 22:03:49.261093   31253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:03:49.261402   31253 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:03:49.261582   31253 main.go:141] libmachine: (ha-475401) Calling .DriverName
	I0912 22:03:49.261775   31253 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0912 22:03:49.261796   31253 main.go:141] libmachine: (ha-475401) Calling .GetSSHHostname
	I0912 22:03:49.264883   31253 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 22:03:49.265355   31253 main.go:141] libmachine: (ha-475401) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:0e:dd", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:09 +0000 UTC Type:0 Mac:52:54:00:b0:0e:dd Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-475401 Clientid:01:52:54:00:b0:0e:dd}
	I0912 22:03:49.265398   31253 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined IP address 192.168.39.203 and MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 22:03:49.265548   31253 main.go:141] libmachine: (ha-475401) Calling .GetSSHPort
	I0912 22:03:49.265743   31253 main.go:141] libmachine: (ha-475401) Calling .GetSSHKeyPath
	I0912 22:03:49.265906   31253 main.go:141] libmachine: (ha-475401) Calling .GetSSHUsername
	I0912 22:03:49.266062   31253 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401/id_rsa Username:docker}
	I0912 22:03:49.348879   31253 ssh_runner.go:195] Run: systemctl --version
	I0912 22:03:49.355645   31253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 22:03:49.369553   31253 kubeconfig.go:125] found "ha-475401" server: "https://192.168.39.254:8443"
	I0912 22:03:49.369598   31253 api_server.go:166] Checking apiserver status ...
	I0912 22:03:49.369664   31253 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 22:03:49.383881   31253 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1139/cgroup
	W0912 22:03:49.394310   31253 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1139/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0912 22:03:49.394403   31253 ssh_runner.go:195] Run: ls
	I0912 22:03:49.399313   31253 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0912 22:03:49.405711   31253 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0912 22:03:49.405744   31253 status.go:422] ha-475401 apiserver status = Running (err=<nil>)
	I0912 22:03:49.405756   31253 status.go:257] ha-475401 status: &{Name:ha-475401 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0912 22:03:49.405772   31253 status.go:255] checking status of ha-475401-m02 ...
	I0912 22:03:49.406172   31253 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:03:49.406214   31253 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:03:49.421281   31253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36951
	I0912 22:03:49.421837   31253 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:03:49.422437   31253 main.go:141] libmachine: Using API Version  1
	I0912 22:03:49.422472   31253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:03:49.422870   31253 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:03:49.423106   31253 main.go:141] libmachine: (ha-475401-m02) Calling .GetState
	I0912 22:03:49.424705   31253 status.go:330] ha-475401-m02 host status = "Stopped" (err=<nil>)
	I0912 22:03:49.424720   31253 status.go:343] host is not running, skipping remaining checks
	I0912 22:03:49.424727   31253 status.go:257] ha-475401-m02 status: &{Name:ha-475401-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0912 22:03:49.424744   31253 status.go:255] checking status of ha-475401-m03 ...
	I0912 22:03:49.425015   31253 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:03:49.425049   31253 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:03:49.439868   31253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41957
	I0912 22:03:49.440316   31253 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:03:49.440813   31253 main.go:141] libmachine: Using API Version  1
	I0912 22:03:49.440832   31253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:03:49.441160   31253 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:03:49.441356   31253 main.go:141] libmachine: (ha-475401-m03) Calling .GetState
	I0912 22:03:49.443008   31253 status.go:330] ha-475401-m03 host status = "Running" (err=<nil>)
	I0912 22:03:49.443025   31253 host.go:66] Checking if "ha-475401-m03" exists ...
	I0912 22:03:49.443314   31253 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:03:49.443364   31253 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:03:49.458586   31253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44275
	I0912 22:03:49.459111   31253 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:03:49.459581   31253 main.go:141] libmachine: Using API Version  1
	I0912 22:03:49.459604   31253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:03:49.459909   31253 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:03:49.460118   31253 main.go:141] libmachine: (ha-475401-m03) Calling .GetIP
	I0912 22:03:49.463068   31253 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 22:03:49.463515   31253 main.go:141] libmachine: (ha-475401-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:aa:da", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:58:06 +0000 UTC Type:0 Mac:52:54:00:21:aa:da Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:ha-475401-m03 Clientid:01:52:54:00:21:aa:da}
	I0912 22:03:49.463539   31253 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined IP address 192.168.39.113 and MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 22:03:49.463678   31253 host.go:66] Checking if "ha-475401-m03" exists ...
	I0912 22:03:49.464021   31253 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:03:49.464062   31253 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:03:49.479810   31253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39209
	I0912 22:03:49.480214   31253 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:03:49.480789   31253 main.go:141] libmachine: Using API Version  1
	I0912 22:03:49.480809   31253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:03:49.481164   31253 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:03:49.481388   31253 main.go:141] libmachine: (ha-475401-m03) Calling .DriverName
	I0912 22:03:49.481570   31253 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0912 22:03:49.481596   31253 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHHostname
	I0912 22:03:49.484419   31253 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 22:03:49.484843   31253 main.go:141] libmachine: (ha-475401-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:aa:da", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:58:06 +0000 UTC Type:0 Mac:52:54:00:21:aa:da Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:ha-475401-m03 Clientid:01:52:54:00:21:aa:da}
	I0912 22:03:49.484868   31253 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined IP address 192.168.39.113 and MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 22:03:49.485036   31253 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHPort
	I0912 22:03:49.485305   31253 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHKeyPath
	I0912 22:03:49.485450   31253 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHUsername
	I0912 22:03:49.485599   31253 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401-m03/id_rsa Username:docker}
	I0912 22:03:49.565192   31253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 22:03:49.579122   31253 kubeconfig.go:125] found "ha-475401" server: "https://192.168.39.254:8443"
	I0912 22:03:49.579171   31253 api_server.go:166] Checking apiserver status ...
	I0912 22:03:49.579214   31253 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 22:03:49.593287   31253 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1370/cgroup
	W0912 22:03:49.602872   31253 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1370/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0912 22:03:49.602926   31253 ssh_runner.go:195] Run: ls
	I0912 22:03:49.607667   31253 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0912 22:03:49.617604   31253 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0912 22:03:49.617661   31253 status.go:422] ha-475401-m03 apiserver status = Running (err=<nil>)
	I0912 22:03:49.617673   31253 status.go:257] ha-475401-m03 status: &{Name:ha-475401-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0912 22:03:49.617692   31253 status.go:255] checking status of ha-475401-m04 ...
	I0912 22:03:49.618119   31253 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:03:49.618171   31253 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:03:49.633213   31253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43977
	I0912 22:03:49.633703   31253 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:03:49.634177   31253 main.go:141] libmachine: Using API Version  1
	I0912 22:03:49.634198   31253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:03:49.634569   31253 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:03:49.634763   31253 main.go:141] libmachine: (ha-475401-m04) Calling .GetState
	I0912 22:03:49.636427   31253 status.go:330] ha-475401-m04 host status = "Running" (err=<nil>)
	I0912 22:03:49.636443   31253 host.go:66] Checking if "ha-475401-m04" exists ...
	I0912 22:03:49.636726   31253 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:03:49.636758   31253 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:03:49.652393   31253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35717
	I0912 22:03:49.652819   31253 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:03:49.653291   31253 main.go:141] libmachine: Using API Version  1
	I0912 22:03:49.653325   31253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:03:49.653662   31253 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:03:49.653824   31253 main.go:141] libmachine: (ha-475401-m04) Calling .GetIP
	I0912 22:03:49.656527   31253 main.go:141] libmachine: (ha-475401-m04) DBG | domain ha-475401-m04 has defined MAC address 52:54:00:cd:b0:d3 in network mk-ha-475401
	I0912 22:03:49.657089   31253 main.go:141] libmachine: (ha-475401-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:b0:d3", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:59:32 +0000 UTC Type:0 Mac:52:54:00:cd:b0:d3 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-475401-m04 Clientid:01:52:54:00:cd:b0:d3}
	I0912 22:03:49.657121   31253 main.go:141] libmachine: (ha-475401-m04) DBG | domain ha-475401-m04 has defined IP address 192.168.39.76 and MAC address 52:54:00:cd:b0:d3 in network mk-ha-475401
	I0912 22:03:49.657333   31253 host.go:66] Checking if "ha-475401-m04" exists ...
	I0912 22:03:49.657749   31253 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:03:49.657802   31253 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:03:49.673897   31253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44977
	I0912 22:03:49.674441   31253 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:03:49.675030   31253 main.go:141] libmachine: Using API Version  1
	I0912 22:03:49.675059   31253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:03:49.675422   31253 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:03:49.675656   31253 main.go:141] libmachine: (ha-475401-m04) Calling .DriverName
	I0912 22:03:49.675880   31253 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0912 22:03:49.675901   31253 main.go:141] libmachine: (ha-475401-m04) Calling .GetSSHHostname
	I0912 22:03:49.679117   31253 main.go:141] libmachine: (ha-475401-m04) DBG | domain ha-475401-m04 has defined MAC address 52:54:00:cd:b0:d3 in network mk-ha-475401
	I0912 22:03:49.679588   31253 main.go:141] libmachine: (ha-475401-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:b0:d3", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:59:32 +0000 UTC Type:0 Mac:52:54:00:cd:b0:d3 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-475401-m04 Clientid:01:52:54:00:cd:b0:d3}
	I0912 22:03:49.679643   31253 main.go:141] libmachine: (ha-475401-m04) DBG | domain ha-475401-m04 has defined IP address 192.168.39.76 and MAC address 52:54:00:cd:b0:d3 in network mk-ha-475401
	I0912 22:03:49.679775   31253 main.go:141] libmachine: (ha-475401-m04) Calling .GetSSHPort
	I0912 22:03:49.679942   31253 main.go:141] libmachine: (ha-475401-m04) Calling .GetSSHKeyPath
	I0912 22:03:49.680096   31253 main.go:141] libmachine: (ha-475401-m04) Calling .GetSSHUsername
	I0912 22:03:49.680194   31253 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401-m04/id_rsa Username:docker}
	I0912 22:03:49.761191   31253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 22:03:49.776640   31253 status.go:257] ha-475401-m04 status: &{Name:ha-475401-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-475401 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-475401 -n ha-475401
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-475401 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-475401 logs -n 25: (1.36295254s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-475401 ssh -n                                                                 | ha-475401 | jenkins | v1.34.0 | 12 Sep 24 22:00 UTC | 12 Sep 24 22:00 UTC |
	|         | ha-475401-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-475401 cp ha-475401-m03:/home/docker/cp-test.txt                              | ha-475401 | jenkins | v1.34.0 | 12 Sep 24 22:00 UTC | 12 Sep 24 22:00 UTC |
	|         | ha-475401:/home/docker/cp-test_ha-475401-m03_ha-475401.txt                       |           |         |         |                     |                     |
	| ssh     | ha-475401 ssh -n                                                                 | ha-475401 | jenkins | v1.34.0 | 12 Sep 24 22:00 UTC | 12 Sep 24 22:00 UTC |
	|         | ha-475401-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-475401 ssh -n ha-475401 sudo cat                                              | ha-475401 | jenkins | v1.34.0 | 12 Sep 24 22:00 UTC | 12 Sep 24 22:00 UTC |
	|         | /home/docker/cp-test_ha-475401-m03_ha-475401.txt                                 |           |         |         |                     |                     |
	| cp      | ha-475401 cp ha-475401-m03:/home/docker/cp-test.txt                              | ha-475401 | jenkins | v1.34.0 | 12 Sep 24 22:00 UTC | 12 Sep 24 22:00 UTC |
	|         | ha-475401-m02:/home/docker/cp-test_ha-475401-m03_ha-475401-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-475401 ssh -n                                                                 | ha-475401 | jenkins | v1.34.0 | 12 Sep 24 22:00 UTC | 12 Sep 24 22:00 UTC |
	|         | ha-475401-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-475401 ssh -n ha-475401-m02 sudo cat                                          | ha-475401 | jenkins | v1.34.0 | 12 Sep 24 22:00 UTC | 12 Sep 24 22:00 UTC |
	|         | /home/docker/cp-test_ha-475401-m03_ha-475401-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-475401 cp ha-475401-m03:/home/docker/cp-test.txt                              | ha-475401 | jenkins | v1.34.0 | 12 Sep 24 22:00 UTC | 12 Sep 24 22:00 UTC |
	|         | ha-475401-m04:/home/docker/cp-test_ha-475401-m03_ha-475401-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-475401 ssh -n                                                                 | ha-475401 | jenkins | v1.34.0 | 12 Sep 24 22:00 UTC | 12 Sep 24 22:00 UTC |
	|         | ha-475401-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-475401 ssh -n ha-475401-m04 sudo cat                                          | ha-475401 | jenkins | v1.34.0 | 12 Sep 24 22:00 UTC | 12 Sep 24 22:00 UTC |
	|         | /home/docker/cp-test_ha-475401-m03_ha-475401-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-475401 cp testdata/cp-test.txt                                                | ha-475401 | jenkins | v1.34.0 | 12 Sep 24 22:00 UTC | 12 Sep 24 22:00 UTC |
	|         | ha-475401-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-475401 ssh -n                                                                 | ha-475401 | jenkins | v1.34.0 | 12 Sep 24 22:00 UTC | 12 Sep 24 22:00 UTC |
	|         | ha-475401-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-475401 cp ha-475401-m04:/home/docker/cp-test.txt                              | ha-475401 | jenkins | v1.34.0 | 12 Sep 24 22:00 UTC | 12 Sep 24 22:00 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1750943762/001/cp-test_ha-475401-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-475401 ssh -n                                                                 | ha-475401 | jenkins | v1.34.0 | 12 Sep 24 22:00 UTC | 12 Sep 24 22:00 UTC |
	|         | ha-475401-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-475401 cp ha-475401-m04:/home/docker/cp-test.txt                              | ha-475401 | jenkins | v1.34.0 | 12 Sep 24 22:00 UTC | 12 Sep 24 22:00 UTC |
	|         | ha-475401:/home/docker/cp-test_ha-475401-m04_ha-475401.txt                       |           |         |         |                     |                     |
	| ssh     | ha-475401 ssh -n                                                                 | ha-475401 | jenkins | v1.34.0 | 12 Sep 24 22:00 UTC | 12 Sep 24 22:00 UTC |
	|         | ha-475401-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-475401 ssh -n ha-475401 sudo cat                                              | ha-475401 | jenkins | v1.34.0 | 12 Sep 24 22:00 UTC | 12 Sep 24 22:00 UTC |
	|         | /home/docker/cp-test_ha-475401-m04_ha-475401.txt                                 |           |         |         |                     |                     |
	| cp      | ha-475401 cp ha-475401-m04:/home/docker/cp-test.txt                              | ha-475401 | jenkins | v1.34.0 | 12 Sep 24 22:00 UTC | 12 Sep 24 22:00 UTC |
	|         | ha-475401-m02:/home/docker/cp-test_ha-475401-m04_ha-475401-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-475401 ssh -n                                                                 | ha-475401 | jenkins | v1.34.0 | 12 Sep 24 22:00 UTC | 12 Sep 24 22:00 UTC |
	|         | ha-475401-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-475401 ssh -n ha-475401-m02 sudo cat                                          | ha-475401 | jenkins | v1.34.0 | 12 Sep 24 22:00 UTC | 12 Sep 24 22:00 UTC |
	|         | /home/docker/cp-test_ha-475401-m04_ha-475401-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-475401 cp ha-475401-m04:/home/docker/cp-test.txt                              | ha-475401 | jenkins | v1.34.0 | 12 Sep 24 22:00 UTC | 12 Sep 24 22:00 UTC |
	|         | ha-475401-m03:/home/docker/cp-test_ha-475401-m04_ha-475401-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-475401 ssh -n                                                                 | ha-475401 | jenkins | v1.34.0 | 12 Sep 24 22:00 UTC | 12 Sep 24 22:00 UTC |
	|         | ha-475401-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-475401 ssh -n ha-475401-m03 sudo cat                                          | ha-475401 | jenkins | v1.34.0 | 12 Sep 24 22:00 UTC | 12 Sep 24 22:00 UTC |
	|         | /home/docker/cp-test_ha-475401-m04_ha-475401-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-475401 node stop m02 -v=7                                                     | ha-475401 | jenkins | v1.34.0 | 12 Sep 24 22:00 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-475401 node start m02 -v=7                                                    | ha-475401 | jenkins | v1.34.0 | 12 Sep 24 22:02 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/12 21:55:55
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0912 21:55:55.426662   25697 out.go:345] Setting OutFile to fd 1 ...
	I0912 21:55:55.426769   25697 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 21:55:55.426777   25697 out.go:358] Setting ErrFile to fd 2...
	I0912 21:55:55.426782   25697 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 21:55:55.426970   25697 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-5891/.minikube/bin
	I0912 21:55:55.427570   25697 out.go:352] Setting JSON to false
	I0912 21:55:55.428381   25697 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2297,"bootTime":1726175858,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0912 21:55:55.428435   25697 start.go:139] virtualization: kvm guest
	I0912 21:55:55.430362   25697 out.go:177] * [ha-475401] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0912 21:55:55.431727   25697 notify.go:220] Checking for updates...
	I0912 21:55:55.431746   25697 out.go:177]   - MINIKUBE_LOCATION=19616
	I0912 21:55:55.433411   25697 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 21:55:55.434746   25697 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19616-5891/kubeconfig
	I0912 21:55:55.435913   25697 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19616-5891/.minikube
	I0912 21:55:55.437185   25697 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0912 21:55:55.438546   25697 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 21:55:55.439941   25697 driver.go:394] Setting default libvirt URI to qemu:///system
	I0912 21:55:55.474955   25697 out.go:177] * Using the kvm2 driver based on user configuration
	I0912 21:55:55.475932   25697 start.go:297] selected driver: kvm2
	I0912 21:55:55.475950   25697 start.go:901] validating driver "kvm2" against <nil>
	I0912 21:55:55.475961   25697 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 21:55:55.476675   25697 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 21:55:55.476754   25697 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19616-5891/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0912 21:55:55.491945   25697 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0912 21:55:55.491990   25697 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0912 21:55:55.492245   25697 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 21:55:55.492299   25697 cni.go:84] Creating CNI manager for ""
	I0912 21:55:55.492310   25697 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0912 21:55:55.492317   25697 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0912 21:55:55.492370   25697 start.go:340] cluster config:
	{Name:ha-475401 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-475401 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 21:55:55.492458   25697 iso.go:125] acquiring lock: {Name:mk3ec3c4afd4210b7425f6425f55e7f581d9a5a9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 21:55:55.494210   25697 out.go:177] * Starting "ha-475401" primary control-plane node in "ha-475401" cluster
	I0912 21:55:55.495388   25697 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0912 21:55:55.495421   25697 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0912 21:55:55.495430   25697 cache.go:56] Caching tarball of preloaded images
	I0912 21:55:55.495538   25697 preload.go:172] Found /home/jenkins/minikube-integration/19616-5891/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0912 21:55:55.495551   25697 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0912 21:55:55.495841   25697 profile.go:143] Saving config to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/config.json ...
	I0912 21:55:55.495861   25697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/config.json: {Name:mk01f80c972669e9d15ecf56763c72c858d056e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:55:55.496014   25697 start.go:360] acquireMachinesLock for ha-475401: {Name:mkbb0a9e58b1349e86a63b6069c42d4248d92c3b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 21:55:55.496047   25697 start.go:364] duration metric: took 18.665µs to acquireMachinesLock for "ha-475401"
	I0912 21:55:55.496069   25697 start.go:93] Provisioning new machine with config: &{Name:ha-475401 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-475401 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0912 21:55:55.496154   25697 start.go:125] createHost starting for "" (driver="kvm2")
	I0912 21:55:55.497510   25697 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0912 21:55:55.497690   25697 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:55:55.497732   25697 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:55:55.512119   25697 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44679
	I0912 21:55:55.512575   25697 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:55:55.513086   25697 main.go:141] libmachine: Using API Version  1
	I0912 21:55:55.513105   25697 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:55:55.513393   25697 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:55:55.513561   25697 main.go:141] libmachine: (ha-475401) Calling .GetMachineName
	I0912 21:55:55.513730   25697 main.go:141] libmachine: (ha-475401) Calling .DriverName
	I0912 21:55:55.513887   25697 start.go:159] libmachine.API.Create for "ha-475401" (driver="kvm2")
	I0912 21:55:55.513916   25697 client.go:168] LocalClient.Create starting
	I0912 21:55:55.513951   25697 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem
	I0912 21:55:55.513981   25697 main.go:141] libmachine: Decoding PEM data...
	I0912 21:55:55.513996   25697 main.go:141] libmachine: Parsing certificate...
	I0912 21:55:55.514051   25697 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem
	I0912 21:55:55.514068   25697 main.go:141] libmachine: Decoding PEM data...
	I0912 21:55:55.514083   25697 main.go:141] libmachine: Parsing certificate...
	I0912 21:55:55.514102   25697 main.go:141] libmachine: Running pre-create checks...
	I0912 21:55:55.514110   25697 main.go:141] libmachine: (ha-475401) Calling .PreCreateCheck
	I0912 21:55:55.514450   25697 main.go:141] libmachine: (ha-475401) Calling .GetConfigRaw
	I0912 21:55:55.514824   25697 main.go:141] libmachine: Creating machine...
	I0912 21:55:55.514837   25697 main.go:141] libmachine: (ha-475401) Calling .Create
	I0912 21:55:55.514977   25697 main.go:141] libmachine: (ha-475401) Creating KVM machine...
	I0912 21:55:55.516343   25697 main.go:141] libmachine: (ha-475401) DBG | found existing default KVM network
	I0912 21:55:55.517067   25697 main.go:141] libmachine: (ha-475401) DBG | I0912 21:55:55.516928   25720 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f1e0}
	I0912 21:55:55.517112   25697 main.go:141] libmachine: (ha-475401) DBG | created network xml: 
	I0912 21:55:55.517136   25697 main.go:141] libmachine: (ha-475401) DBG | <network>
	I0912 21:55:55.517146   25697 main.go:141] libmachine: (ha-475401) DBG |   <name>mk-ha-475401</name>
	I0912 21:55:55.517152   25697 main.go:141] libmachine: (ha-475401) DBG |   <dns enable='no'/>
	I0912 21:55:55.517160   25697 main.go:141] libmachine: (ha-475401) DBG |   
	I0912 21:55:55.517176   25697 main.go:141] libmachine: (ha-475401) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0912 21:55:55.517187   25697 main.go:141] libmachine: (ha-475401) DBG |     <dhcp>
	I0912 21:55:55.517195   25697 main.go:141] libmachine: (ha-475401) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0912 21:55:55.517206   25697 main.go:141] libmachine: (ha-475401) DBG |     </dhcp>
	I0912 21:55:55.517213   25697 main.go:141] libmachine: (ha-475401) DBG |   </ip>
	I0912 21:55:55.517223   25697 main.go:141] libmachine: (ha-475401) DBG |   
	I0912 21:55:55.517231   25697 main.go:141] libmachine: (ha-475401) DBG | </network>
	I0912 21:55:55.517244   25697 main.go:141] libmachine: (ha-475401) DBG | 
	I0912 21:55:55.522134   25697 main.go:141] libmachine: (ha-475401) DBG | trying to create private KVM network mk-ha-475401 192.168.39.0/24...
	I0912 21:55:55.589414   25697 main.go:141] libmachine: (ha-475401) Setting up store path in /home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401 ...
	I0912 21:55:55.589450   25697 main.go:141] libmachine: (ha-475401) Building disk image from file:///home/jenkins/minikube-integration/19616-5891/.minikube/cache/iso/amd64/minikube-v1.34.0-1726156389-19616-amd64.iso
	I0912 21:55:55.589460   25697 main.go:141] libmachine: (ha-475401) DBG | private KVM network mk-ha-475401 192.168.39.0/24 created
	I0912 21:55:55.589474   25697 main.go:141] libmachine: (ha-475401) DBG | I0912 21:55:55.589377   25720 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19616-5891/.minikube
	I0912 21:55:55.589532   25697 main.go:141] libmachine: (ha-475401) Downloading /home/jenkins/minikube-integration/19616-5891/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19616-5891/.minikube/cache/iso/amd64/minikube-v1.34.0-1726156389-19616-amd64.iso...
	I0912 21:55:55.831888   25697 main.go:141] libmachine: (ha-475401) DBG | I0912 21:55:55.831762   25720 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401/id_rsa...
	I0912 21:55:55.895303   25697 main.go:141] libmachine: (ha-475401) DBG | I0912 21:55:55.895144   25720 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401/ha-475401.rawdisk...
	I0912 21:55:55.895341   25697 main.go:141] libmachine: (ha-475401) DBG | Writing magic tar header
	I0912 21:55:55.895355   25697 main.go:141] libmachine: (ha-475401) DBG | Writing SSH key tar header
	I0912 21:55:55.895380   25697 main.go:141] libmachine: (ha-475401) DBG | I0912 21:55:55.895305   25720 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401 ...
	I0912 21:55:55.895481   25697 main.go:141] libmachine: (ha-475401) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401
	I0912 21:55:55.895501   25697 main.go:141] libmachine: (ha-475401) Setting executable bit set on /home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401 (perms=drwx------)
	I0912 21:55:55.895511   25697 main.go:141] libmachine: (ha-475401) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19616-5891/.minikube/machines
	I0912 21:55:55.895525   25697 main.go:141] libmachine: (ha-475401) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19616-5891/.minikube
	I0912 21:55:55.895535   25697 main.go:141] libmachine: (ha-475401) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19616-5891
	I0912 21:55:55.895546   25697 main.go:141] libmachine: (ha-475401) Setting executable bit set on /home/jenkins/minikube-integration/19616-5891/.minikube/machines (perms=drwxr-xr-x)
	I0912 21:55:55.895565   25697 main.go:141] libmachine: (ha-475401) Setting executable bit set on /home/jenkins/minikube-integration/19616-5891/.minikube (perms=drwxr-xr-x)
	I0912 21:55:55.895572   25697 main.go:141] libmachine: (ha-475401) Setting executable bit set on /home/jenkins/minikube-integration/19616-5891 (perms=drwxrwxr-x)
	I0912 21:55:55.895580   25697 main.go:141] libmachine: (ha-475401) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0912 21:55:55.895603   25697 main.go:141] libmachine: (ha-475401) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0912 21:55:55.895612   25697 main.go:141] libmachine: (ha-475401) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0912 21:55:55.895623   25697 main.go:141] libmachine: (ha-475401) DBG | Checking permissions on dir: /home/jenkins
	I0912 21:55:55.895632   25697 main.go:141] libmachine: (ha-475401) DBG | Checking permissions on dir: /home
	I0912 21:55:55.895643   25697 main.go:141] libmachine: (ha-475401) DBG | Skipping /home - not owner
	I0912 21:55:55.895658   25697 main.go:141] libmachine: (ha-475401) Creating domain...
	I0912 21:55:55.896804   25697 main.go:141] libmachine: (ha-475401) define libvirt domain using xml: 
	I0912 21:55:55.896825   25697 main.go:141] libmachine: (ha-475401) <domain type='kvm'>
	I0912 21:55:55.896831   25697 main.go:141] libmachine: (ha-475401)   <name>ha-475401</name>
	I0912 21:55:55.896836   25697 main.go:141] libmachine: (ha-475401)   <memory unit='MiB'>2200</memory>
	I0912 21:55:55.896841   25697 main.go:141] libmachine: (ha-475401)   <vcpu>2</vcpu>
	I0912 21:55:55.896845   25697 main.go:141] libmachine: (ha-475401)   <features>
	I0912 21:55:55.896850   25697 main.go:141] libmachine: (ha-475401)     <acpi/>
	I0912 21:55:55.896858   25697 main.go:141] libmachine: (ha-475401)     <apic/>
	I0912 21:55:55.896866   25697 main.go:141] libmachine: (ha-475401)     <pae/>
	I0912 21:55:55.896880   25697 main.go:141] libmachine: (ha-475401)     
	I0912 21:55:55.896892   25697 main.go:141] libmachine: (ha-475401)   </features>
	I0912 21:55:55.896898   25697 main.go:141] libmachine: (ha-475401)   <cpu mode='host-passthrough'>
	I0912 21:55:55.896904   25697 main.go:141] libmachine: (ha-475401)   
	I0912 21:55:55.896908   25697 main.go:141] libmachine: (ha-475401)   </cpu>
	I0912 21:55:55.896916   25697 main.go:141] libmachine: (ha-475401)   <os>
	I0912 21:55:55.896920   25697 main.go:141] libmachine: (ha-475401)     <type>hvm</type>
	I0912 21:55:55.896925   25697 main.go:141] libmachine: (ha-475401)     <boot dev='cdrom'/>
	I0912 21:55:55.896932   25697 main.go:141] libmachine: (ha-475401)     <boot dev='hd'/>
	I0912 21:55:55.896937   25697 main.go:141] libmachine: (ha-475401)     <bootmenu enable='no'/>
	I0912 21:55:55.896941   25697 main.go:141] libmachine: (ha-475401)   </os>
	I0912 21:55:55.896947   25697 main.go:141] libmachine: (ha-475401)   <devices>
	I0912 21:55:55.896955   25697 main.go:141] libmachine: (ha-475401)     <disk type='file' device='cdrom'>
	I0912 21:55:55.896972   25697 main.go:141] libmachine: (ha-475401)       <source file='/home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401/boot2docker.iso'/>
	I0912 21:55:55.896983   25697 main.go:141] libmachine: (ha-475401)       <target dev='hdc' bus='scsi'/>
	I0912 21:55:55.896992   25697 main.go:141] libmachine: (ha-475401)       <readonly/>
	I0912 21:55:55.897002   25697 main.go:141] libmachine: (ha-475401)     </disk>
	I0912 21:55:55.897011   25697 main.go:141] libmachine: (ha-475401)     <disk type='file' device='disk'>
	I0912 21:55:55.897027   25697 main.go:141] libmachine: (ha-475401)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0912 21:55:55.897037   25697 main.go:141] libmachine: (ha-475401)       <source file='/home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401/ha-475401.rawdisk'/>
	I0912 21:55:55.897045   25697 main.go:141] libmachine: (ha-475401)       <target dev='hda' bus='virtio'/>
	I0912 21:55:55.897050   25697 main.go:141] libmachine: (ha-475401)     </disk>
	I0912 21:55:55.897067   25697 main.go:141] libmachine: (ha-475401)     <interface type='network'>
	I0912 21:55:55.897081   25697 main.go:141] libmachine: (ha-475401)       <source network='mk-ha-475401'/>
	I0912 21:55:55.897092   25697 main.go:141] libmachine: (ha-475401)       <model type='virtio'/>
	I0912 21:55:55.897115   25697 main.go:141] libmachine: (ha-475401)     </interface>
	I0912 21:55:55.897133   25697 main.go:141] libmachine: (ha-475401)     <interface type='network'>
	I0912 21:55:55.897140   25697 main.go:141] libmachine: (ha-475401)       <source network='default'/>
	I0912 21:55:55.897151   25697 main.go:141] libmachine: (ha-475401)       <model type='virtio'/>
	I0912 21:55:55.897157   25697 main.go:141] libmachine: (ha-475401)     </interface>
	I0912 21:55:55.897165   25697 main.go:141] libmachine: (ha-475401)     <serial type='pty'>
	I0912 21:55:55.897171   25697 main.go:141] libmachine: (ha-475401)       <target port='0'/>
	I0912 21:55:55.897179   25697 main.go:141] libmachine: (ha-475401)     </serial>
	I0912 21:55:55.897184   25697 main.go:141] libmachine: (ha-475401)     <console type='pty'>
	I0912 21:55:55.897195   25697 main.go:141] libmachine: (ha-475401)       <target type='serial' port='0'/>
	I0912 21:55:55.897206   25697 main.go:141] libmachine: (ha-475401)     </console>
	I0912 21:55:55.897213   25697 main.go:141] libmachine: (ha-475401)     <rng model='virtio'>
	I0912 21:55:55.897219   25697 main.go:141] libmachine: (ha-475401)       <backend model='random'>/dev/random</backend>
	I0912 21:55:55.897226   25697 main.go:141] libmachine: (ha-475401)     </rng>
	I0912 21:55:55.897231   25697 main.go:141] libmachine: (ha-475401)     
	I0912 21:55:55.897237   25697 main.go:141] libmachine: (ha-475401)     
	I0912 21:55:55.897243   25697 main.go:141] libmachine: (ha-475401)   </devices>
	I0912 21:55:55.897249   25697 main.go:141] libmachine: (ha-475401) </domain>
	I0912 21:55:55.897256   25697 main.go:141] libmachine: (ha-475401) 
	I0912 21:55:55.901827   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:f0:76:08 in network default
	I0912 21:55:55.902319   25697 main.go:141] libmachine: (ha-475401) Ensuring networks are active...
	I0912 21:55:55.902338   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:55:55.902959   25697 main.go:141] libmachine: (ha-475401) Ensuring network default is active
	I0912 21:55:55.903259   25697 main.go:141] libmachine: (ha-475401) Ensuring network mk-ha-475401 is active
	I0912 21:55:55.903720   25697 main.go:141] libmachine: (ha-475401) Getting domain xml...
	I0912 21:55:55.904332   25697 main.go:141] libmachine: (ha-475401) Creating domain...
	I0912 21:55:57.113524   25697 main.go:141] libmachine: (ha-475401) Waiting to get IP...
	I0912 21:55:57.114495   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:55:57.114873   25697 main.go:141] libmachine: (ha-475401) DBG | unable to find current IP address of domain ha-475401 in network mk-ha-475401
	I0912 21:55:57.114899   25697 main.go:141] libmachine: (ha-475401) DBG | I0912 21:55:57.114856   25720 retry.go:31] will retry after 262.380002ms: waiting for machine to come up
	I0912 21:55:57.379331   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:55:57.379828   25697 main.go:141] libmachine: (ha-475401) DBG | unable to find current IP address of domain ha-475401 in network mk-ha-475401
	I0912 21:55:57.379851   25697 main.go:141] libmachine: (ha-475401) DBG | I0912 21:55:57.379794   25720 retry.go:31] will retry after 279.039082ms: waiting for machine to come up
	I0912 21:55:57.660446   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:55:57.660904   25697 main.go:141] libmachine: (ha-475401) DBG | unable to find current IP address of domain ha-475401 in network mk-ha-475401
	I0912 21:55:57.660932   25697 main.go:141] libmachine: (ha-475401) DBG | I0912 21:55:57.660865   25720 retry.go:31] will retry after 433.166056ms: waiting for machine to come up
	I0912 21:55:58.095500   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:55:58.096032   25697 main.go:141] libmachine: (ha-475401) DBG | unable to find current IP address of domain ha-475401 in network mk-ha-475401
	I0912 21:55:58.096053   25697 main.go:141] libmachine: (ha-475401) DBG | I0912 21:55:58.095974   25720 retry.go:31] will retry after 436.676456ms: waiting for machine to come up
	I0912 21:55:58.534685   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:55:58.535180   25697 main.go:141] libmachine: (ha-475401) DBG | unable to find current IP address of domain ha-475401 in network mk-ha-475401
	I0912 21:55:58.535217   25697 main.go:141] libmachine: (ha-475401) DBG | I0912 21:55:58.535154   25720 retry.go:31] will retry after 488.410112ms: waiting for machine to come up
	I0912 21:55:59.024853   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:55:59.025250   25697 main.go:141] libmachine: (ha-475401) DBG | unable to find current IP address of domain ha-475401 in network mk-ha-475401
	I0912 21:55:59.025278   25697 main.go:141] libmachine: (ha-475401) DBG | I0912 21:55:59.025201   25720 retry.go:31] will retry after 730.821904ms: waiting for machine to come up
	I0912 21:55:59.757171   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:55:59.757596   25697 main.go:141] libmachine: (ha-475401) DBG | unable to find current IP address of domain ha-475401 in network mk-ha-475401
	I0912 21:55:59.757650   25697 main.go:141] libmachine: (ha-475401) DBG | I0912 21:55:59.757550   25720 retry.go:31] will retry after 816.928099ms: waiting for machine to come up
	I0912 21:56:00.576021   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:56:00.576382   25697 main.go:141] libmachine: (ha-475401) DBG | unable to find current IP address of domain ha-475401 in network mk-ha-475401
	I0912 21:56:00.576407   25697 main.go:141] libmachine: (ha-475401) DBG | I0912 21:56:00.576341   25720 retry.go:31] will retry after 1.205724317s: waiting for machine to come up
	I0912 21:56:01.783914   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:56:01.784370   25697 main.go:141] libmachine: (ha-475401) DBG | unable to find current IP address of domain ha-475401 in network mk-ha-475401
	I0912 21:56:01.784396   25697 main.go:141] libmachine: (ha-475401) DBG | I0912 21:56:01.784312   25720 retry.go:31] will retry after 1.666135319s: waiting for machine to come up
	I0912 21:56:03.451854   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:56:03.452343   25697 main.go:141] libmachine: (ha-475401) DBG | unable to find current IP address of domain ha-475401 in network mk-ha-475401
	I0912 21:56:03.452370   25697 main.go:141] libmachine: (ha-475401) DBG | I0912 21:56:03.452304   25720 retry.go:31] will retry after 1.710937917s: waiting for machine to come up
	I0912 21:56:05.165203   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:56:05.165667   25697 main.go:141] libmachine: (ha-475401) DBG | unable to find current IP address of domain ha-475401 in network mk-ha-475401
	I0912 21:56:05.165694   25697 main.go:141] libmachine: (ha-475401) DBG | I0912 21:56:05.165603   25720 retry.go:31] will retry after 2.153375797s: waiting for machine to come up
	I0912 21:56:07.321799   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:56:07.322124   25697 main.go:141] libmachine: (ha-475401) DBG | unable to find current IP address of domain ha-475401 in network mk-ha-475401
	I0912 21:56:07.322164   25697 main.go:141] libmachine: (ha-475401) DBG | I0912 21:56:07.322099   25720 retry.go:31] will retry after 2.592804257s: waiting for machine to come up
	I0912 21:56:09.916015   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:56:09.916387   25697 main.go:141] libmachine: (ha-475401) DBG | unable to find current IP address of domain ha-475401 in network mk-ha-475401
	I0912 21:56:09.916418   25697 main.go:141] libmachine: (ha-475401) DBG | I0912 21:56:09.916343   25720 retry.go:31] will retry after 3.777795698s: waiting for machine to come up
	I0912 21:56:13.695241   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:56:13.695702   25697 main.go:141] libmachine: (ha-475401) DBG | unable to find current IP address of domain ha-475401 in network mk-ha-475401
	I0912 21:56:13.695725   25697 main.go:141] libmachine: (ha-475401) DBG | I0912 21:56:13.695621   25720 retry.go:31] will retry after 3.991415039s: waiting for machine to come up
	I0912 21:56:17.689719   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:56:17.690320   25697 main.go:141] libmachine: (ha-475401) Found IP for machine: 192.168.39.203
	I0912 21:56:17.690341   25697 main.go:141] libmachine: (ha-475401) Reserving static IP address...
	I0912 21:56:17.690355   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has current primary IP address 192.168.39.203 and MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:56:17.690646   25697 main.go:141] libmachine: (ha-475401) DBG | unable to find host DHCP lease matching {name: "ha-475401", mac: "52:54:00:b0:0e:dd", ip: "192.168.39.203"} in network mk-ha-475401
	I0912 21:56:17.761650   25697 main.go:141] libmachine: (ha-475401) DBG | Getting to WaitForSSH function...
	I0912 21:56:17.761681   25697 main.go:141] libmachine: (ha-475401) Reserved static IP address: 192.168.39.203
	I0912 21:56:17.761695   25697 main.go:141] libmachine: (ha-475401) Waiting for SSH to be available...
	I0912 21:56:17.764659   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:56:17.765119   25697 main.go:141] libmachine: (ha-475401) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:0e:dd", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:09 +0000 UTC Type:0 Mac:52:54:00:b0:0e:dd Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:minikube Clientid:01:52:54:00:b0:0e:dd}
	I0912 21:56:17.765151   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined IP address 192.168.39.203 and MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:56:17.765242   25697 main.go:141] libmachine: (ha-475401) DBG | Using SSH client type: external
	I0912 21:56:17.765270   25697 main.go:141] libmachine: (ha-475401) DBG | Using SSH private key: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401/id_rsa (-rw-------)
	I0912 21:56:17.765295   25697 main.go:141] libmachine: (ha-475401) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.203 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0912 21:56:17.765307   25697 main.go:141] libmachine: (ha-475401) DBG | About to run SSH command:
	I0912 21:56:17.765319   25697 main.go:141] libmachine: (ha-475401) DBG | exit 0
	I0912 21:56:17.889898   25697 main.go:141] libmachine: (ha-475401) DBG | SSH cmd err, output: <nil>: 
	I0912 21:56:17.890164   25697 main.go:141] libmachine: (ha-475401) KVM machine creation complete!
	I0912 21:56:17.890622   25697 main.go:141] libmachine: (ha-475401) Calling .GetConfigRaw
	I0912 21:56:17.891193   25697 main.go:141] libmachine: (ha-475401) Calling .DriverName
	I0912 21:56:17.891397   25697 main.go:141] libmachine: (ha-475401) Calling .DriverName
	I0912 21:56:17.891566   25697 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0912 21:56:17.891581   25697 main.go:141] libmachine: (ha-475401) Calling .GetState
	I0912 21:56:17.893036   25697 main.go:141] libmachine: Detecting operating system of created instance...
	I0912 21:56:17.893063   25697 main.go:141] libmachine: Waiting for SSH to be available...
	I0912 21:56:17.893070   25697 main.go:141] libmachine: Getting to WaitForSSH function...
	I0912 21:56:17.893080   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHHostname
	I0912 21:56:17.895504   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:56:17.895860   25697 main.go:141] libmachine: (ha-475401) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:0e:dd", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:09 +0000 UTC Type:0 Mac:52:54:00:b0:0e:dd Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-475401 Clientid:01:52:54:00:b0:0e:dd}
	I0912 21:56:17.895890   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined IP address 192.168.39.203 and MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:56:17.896007   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHPort
	I0912 21:56:17.896183   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHKeyPath
	I0912 21:56:17.896339   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHKeyPath
	I0912 21:56:17.896572   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHUsername
	I0912 21:56:17.896748   25697 main.go:141] libmachine: Using SSH client type: native
	I0912 21:56:17.896959   25697 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I0912 21:56:17.896973   25697 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0912 21:56:18.004899   25697 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0912 21:56:18.004923   25697 main.go:141] libmachine: Detecting the provisioner...
	I0912 21:56:18.004931   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHHostname
	I0912 21:56:18.008130   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:56:18.008539   25697 main.go:141] libmachine: (ha-475401) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:0e:dd", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:09 +0000 UTC Type:0 Mac:52:54:00:b0:0e:dd Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-475401 Clientid:01:52:54:00:b0:0e:dd}
	I0912 21:56:18.008568   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined IP address 192.168.39.203 and MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:56:18.008798   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHPort
	I0912 21:56:18.009029   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHKeyPath
	I0912 21:56:18.009242   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHKeyPath
	I0912 21:56:18.009355   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHUsername
	I0912 21:56:18.009569   25697 main.go:141] libmachine: Using SSH client type: native
	I0912 21:56:18.009861   25697 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I0912 21:56:18.009880   25697 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0912 21:56:18.118097   25697 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0912 21:56:18.118211   25697 main.go:141] libmachine: found compatible host: buildroot
	I0912 21:56:18.118226   25697 main.go:141] libmachine: Provisioning with buildroot...
	I0912 21:56:18.118236   25697 main.go:141] libmachine: (ha-475401) Calling .GetMachineName
	I0912 21:56:18.118521   25697 buildroot.go:166] provisioning hostname "ha-475401"
	I0912 21:56:18.118548   25697 main.go:141] libmachine: (ha-475401) Calling .GetMachineName
	I0912 21:56:18.118769   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHHostname
	I0912 21:56:18.121122   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:56:18.121476   25697 main.go:141] libmachine: (ha-475401) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:0e:dd", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:09 +0000 UTC Type:0 Mac:52:54:00:b0:0e:dd Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-475401 Clientid:01:52:54:00:b0:0e:dd}
	I0912 21:56:18.121505   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined IP address 192.168.39.203 and MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:56:18.121660   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHPort
	I0912 21:56:18.121818   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHKeyPath
	I0912 21:56:18.121975   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHKeyPath
	I0912 21:56:18.122088   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHUsername
	I0912 21:56:18.122256   25697 main.go:141] libmachine: Using SSH client type: native
	I0912 21:56:18.122463   25697 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I0912 21:56:18.122476   25697 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-475401 && echo "ha-475401" | sudo tee /etc/hostname
	I0912 21:56:18.248698   25697 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-475401
	
	I0912 21:56:18.248725   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHHostname
	I0912 21:56:18.251454   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:56:18.251765   25697 main.go:141] libmachine: (ha-475401) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:0e:dd", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:09 +0000 UTC Type:0 Mac:52:54:00:b0:0e:dd Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-475401 Clientid:01:52:54:00:b0:0e:dd}
	I0912 21:56:18.251786   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined IP address 192.168.39.203 and MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:56:18.251973   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHPort
	I0912 21:56:18.252154   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHKeyPath
	I0912 21:56:18.252329   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHKeyPath
	I0912 21:56:18.252497   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHUsername
	I0912 21:56:18.252644   25697 main.go:141] libmachine: Using SSH client type: native
	I0912 21:56:18.252816   25697 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I0912 21:56:18.252832   25697 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-475401' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-475401/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-475401' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0912 21:56:18.369721   25697 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0912 21:56:18.369756   25697 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19616-5891/.minikube CaCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19616-5891/.minikube}
	I0912 21:56:18.369794   25697 buildroot.go:174] setting up certificates
	I0912 21:56:18.369805   25697 provision.go:84] configureAuth start
	I0912 21:56:18.369816   25697 main.go:141] libmachine: (ha-475401) Calling .GetMachineName
	I0912 21:56:18.370109   25697 main.go:141] libmachine: (ha-475401) Calling .GetIP
	I0912 21:56:18.372804   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:56:18.373272   25697 main.go:141] libmachine: (ha-475401) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:0e:dd", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:09 +0000 UTC Type:0 Mac:52:54:00:b0:0e:dd Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-475401 Clientid:01:52:54:00:b0:0e:dd}
	I0912 21:56:18.373303   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined IP address 192.168.39.203 and MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:56:18.373416   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHHostname
	I0912 21:56:18.377282   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:56:18.377764   25697 main.go:141] libmachine: (ha-475401) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:0e:dd", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:09 +0000 UTC Type:0 Mac:52:54:00:b0:0e:dd Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-475401 Clientid:01:52:54:00:b0:0e:dd}
	I0912 21:56:18.377795   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined IP address 192.168.39.203 and MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:56:18.378056   25697 provision.go:143] copyHostCerts
	I0912 21:56:18.378090   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem
	I0912 21:56:18.378121   25697 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem, removing ...
	I0912 21:56:18.378134   25697 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem
	I0912 21:56:18.378195   25697 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem (1082 bytes)
	I0912 21:56:18.378287   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem
	I0912 21:56:18.378307   25697 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem, removing ...
	I0912 21:56:18.378311   25697 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem
	I0912 21:56:18.378335   25697 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem (1123 bytes)
	I0912 21:56:18.378390   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem
	I0912 21:56:18.378408   25697 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem, removing ...
	I0912 21:56:18.378412   25697 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem
	I0912 21:56:18.378433   25697 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem (1679 bytes)
	I0912 21:56:18.378491   25697 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem org=jenkins.ha-475401 san=[127.0.0.1 192.168.39.203 ha-475401 localhost minikube]
	I0912 21:56:18.503588   25697 provision.go:177] copyRemoteCerts
	I0912 21:56:18.503653   25697 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0912 21:56:18.503674   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHHostname
	I0912 21:56:18.506606   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:56:18.506887   25697 main.go:141] libmachine: (ha-475401) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:0e:dd", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:09 +0000 UTC Type:0 Mac:52:54:00:b0:0e:dd Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-475401 Clientid:01:52:54:00:b0:0e:dd}
	I0912 21:56:18.506908   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined IP address 192.168.39.203 and MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:56:18.507126   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHPort
	I0912 21:56:18.507375   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHKeyPath
	I0912 21:56:18.507562   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHUsername
	I0912 21:56:18.507700   25697 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401/id_rsa Username:docker}
	I0912 21:56:18.591675   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0912 21:56:18.591741   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0912 21:56:18.614225   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0912 21:56:18.614329   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0912 21:56:18.636150   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0912 21:56:18.636239   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0912 21:56:18.658330   25697 provision.go:87] duration metric: took 288.489963ms to configureAuth
	I0912 21:56:18.658358   25697 buildroot.go:189] setting minikube options for container-runtime
	I0912 21:56:18.658525   25697 config.go:182] Loaded profile config "ha-475401": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 21:56:18.658622   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHHostname
	I0912 21:56:18.661238   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:56:18.661570   25697 main.go:141] libmachine: (ha-475401) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:0e:dd", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:09 +0000 UTC Type:0 Mac:52:54:00:b0:0e:dd Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-475401 Clientid:01:52:54:00:b0:0e:dd}
	I0912 21:56:18.661600   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined IP address 192.168.39.203 and MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:56:18.661814   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHPort
	I0912 21:56:18.661997   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHKeyPath
	I0912 21:56:18.662157   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHKeyPath
	I0912 21:56:18.662318   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHUsername
	I0912 21:56:18.662477   25697 main.go:141] libmachine: Using SSH client type: native
	I0912 21:56:18.662692   25697 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I0912 21:56:18.662714   25697 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0912 21:56:18.884522   25697 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0912 21:56:18.884550   25697 main.go:141] libmachine: Checking connection to Docker...
	I0912 21:56:18.884561   25697 main.go:141] libmachine: (ha-475401) Calling .GetURL
	I0912 21:56:18.886145   25697 main.go:141] libmachine: (ha-475401) DBG | Using libvirt version 6000000
	I0912 21:56:18.888482   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:56:18.888916   25697 main.go:141] libmachine: (ha-475401) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:0e:dd", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:09 +0000 UTC Type:0 Mac:52:54:00:b0:0e:dd Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-475401 Clientid:01:52:54:00:b0:0e:dd}
	I0912 21:56:18.888943   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined IP address 192.168.39.203 and MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:56:18.889114   25697 main.go:141] libmachine: Docker is up and running!
	I0912 21:56:18.889135   25697 main.go:141] libmachine: Reticulating splines...
	I0912 21:56:18.889152   25697 client.go:171] duration metric: took 23.375217506s to LocalClient.Create
	I0912 21:56:18.889184   25697 start.go:167] duration metric: took 23.375305381s to libmachine.API.Create "ha-475401"
	I0912 21:56:18.889198   25697 start.go:293] postStartSetup for "ha-475401" (driver="kvm2")
	I0912 21:56:18.889212   25697 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0912 21:56:18.889234   25697 main.go:141] libmachine: (ha-475401) Calling .DriverName
	I0912 21:56:18.889501   25697 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0912 21:56:18.889524   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHHostname
	I0912 21:56:18.891848   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:56:18.892303   25697 main.go:141] libmachine: (ha-475401) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:0e:dd", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:09 +0000 UTC Type:0 Mac:52:54:00:b0:0e:dd Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-475401 Clientid:01:52:54:00:b0:0e:dd}
	I0912 21:56:18.892334   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined IP address 192.168.39.203 and MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:56:18.892459   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHPort
	I0912 21:56:18.892654   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHKeyPath
	I0912 21:56:18.892828   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHUsername
	I0912 21:56:18.893112   25697 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401/id_rsa Username:docker}
	I0912 21:56:18.979832   25697 ssh_runner.go:195] Run: cat /etc/os-release
	I0912 21:56:18.983960   25697 info.go:137] Remote host: Buildroot 2023.02.9
	I0912 21:56:18.983990   25697 filesync.go:126] Scanning /home/jenkins/minikube-integration/19616-5891/.minikube/addons for local assets ...
	I0912 21:56:18.984053   25697 filesync.go:126] Scanning /home/jenkins/minikube-integration/19616-5891/.minikube/files for local assets ...
	I0912 21:56:18.984147   25697 filesync.go:149] local asset: /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem -> 130832.pem in /etc/ssl/certs
	I0912 21:56:18.984162   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem -> /etc/ssl/certs/130832.pem
	I0912 21:56:18.984280   25697 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0912 21:56:18.993245   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem --> /etc/ssl/certs/130832.pem (1708 bytes)
	I0912 21:56:19.016592   25697 start.go:296] duration metric: took 127.381572ms for postStartSetup
	I0912 21:56:19.016651   25697 main.go:141] libmachine: (ha-475401) Calling .GetConfigRaw
	I0912 21:56:19.017231   25697 main.go:141] libmachine: (ha-475401) Calling .GetIP
	I0912 21:56:19.020298   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:56:19.020704   25697 main.go:141] libmachine: (ha-475401) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:0e:dd", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:09 +0000 UTC Type:0 Mac:52:54:00:b0:0e:dd Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-475401 Clientid:01:52:54:00:b0:0e:dd}
	I0912 21:56:19.020728   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined IP address 192.168.39.203 and MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:56:19.020995   25697 profile.go:143] Saving config to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/config.json ...
	I0912 21:56:19.021262   25697 start.go:128] duration metric: took 23.525094952s to createHost
	I0912 21:56:19.021294   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHHostname
	I0912 21:56:19.023952   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:56:19.024332   25697 main.go:141] libmachine: (ha-475401) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:0e:dd", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:09 +0000 UTC Type:0 Mac:52:54:00:b0:0e:dd Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-475401 Clientid:01:52:54:00:b0:0e:dd}
	I0912 21:56:19.024368   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined IP address 192.168.39.203 and MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:56:19.024520   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHPort
	I0912 21:56:19.024766   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHKeyPath
	I0912 21:56:19.024953   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHKeyPath
	I0912 21:56:19.025124   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHUsername
	I0912 21:56:19.025289   25697 main.go:141] libmachine: Using SSH client type: native
	I0912 21:56:19.025497   25697 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I0912 21:56:19.025523   25697 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0912 21:56:19.138475   25697 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726178179.117074189
	
	I0912 21:56:19.138506   25697 fix.go:216] guest clock: 1726178179.117074189
	I0912 21:56:19.138518   25697 fix.go:229] Guest: 2024-09-12 21:56:19.117074189 +0000 UTC Remote: 2024-09-12 21:56:19.021282044 +0000 UTC m=+23.628297545 (delta=95.792145ms)
	I0912 21:56:19.138584   25697 fix.go:200] guest clock delta is within tolerance: 95.792145ms
	I0912 21:56:19.138591   25697 start.go:83] releasing machines lock for "ha-475401", held for 23.642533008s
	I0912 21:56:19.138626   25697 main.go:141] libmachine: (ha-475401) Calling .DriverName
	I0912 21:56:19.138872   25697 main.go:141] libmachine: (ha-475401) Calling .GetIP
	I0912 21:56:19.141330   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:56:19.141745   25697 main.go:141] libmachine: (ha-475401) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:0e:dd", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:09 +0000 UTC Type:0 Mac:52:54:00:b0:0e:dd Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-475401 Clientid:01:52:54:00:b0:0e:dd}
	I0912 21:56:19.141768   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined IP address 192.168.39.203 and MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:56:19.141965   25697 main.go:141] libmachine: (ha-475401) Calling .DriverName
	I0912 21:56:19.142451   25697 main.go:141] libmachine: (ha-475401) Calling .DriverName
	I0912 21:56:19.142627   25697 main.go:141] libmachine: (ha-475401) Calling .DriverName
	I0912 21:56:19.142760   25697 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0912 21:56:19.142801   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHHostname
	I0912 21:56:19.142865   25697 ssh_runner.go:195] Run: cat /version.json
	I0912 21:56:19.142887   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHHostname
	I0912 21:56:19.145672   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:56:19.145757   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:56:19.146060   25697 main.go:141] libmachine: (ha-475401) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:0e:dd", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:09 +0000 UTC Type:0 Mac:52:54:00:b0:0e:dd Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-475401 Clientid:01:52:54:00:b0:0e:dd}
	I0912 21:56:19.146095   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined IP address 192.168.39.203 and MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:56:19.146125   25697 main.go:141] libmachine: (ha-475401) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:0e:dd", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:09 +0000 UTC Type:0 Mac:52:54:00:b0:0e:dd Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-475401 Clientid:01:52:54:00:b0:0e:dd}
	I0912 21:56:19.146140   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined IP address 192.168.39.203 and MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:56:19.146239   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHPort
	I0912 21:56:19.146334   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHPort
	I0912 21:56:19.146421   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHKeyPath
	I0912 21:56:19.146482   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHKeyPath
	I0912 21:56:19.146546   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHUsername
	I0912 21:56:19.146618   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHUsername
	I0912 21:56:19.146702   25697 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401/id_rsa Username:docker}
	I0912 21:56:19.146771   25697 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401/id_rsa Username:docker}
	I0912 21:56:19.255160   25697 ssh_runner.go:195] Run: systemctl --version
	I0912 21:56:19.261110   25697 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0912 21:56:19.417919   25697 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0912 21:56:19.423883   25697 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0912 21:56:19.423963   25697 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0912 21:56:19.439312   25697 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0912 21:56:19.439340   25697 start.go:495] detecting cgroup driver to use...
	I0912 21:56:19.439413   25697 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0912 21:56:19.455027   25697 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0912 21:56:19.468362   25697 docker.go:217] disabling cri-docker service (if available) ...
	I0912 21:56:19.468439   25697 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0912 21:56:19.482395   25697 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0912 21:56:19.496342   25697 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0912 21:56:19.608169   25697 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0912 21:56:19.771980   25697 docker.go:233] disabling docker service ...
	I0912 21:56:19.772052   25697 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0912 21:56:19.786300   25697 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0912 21:56:19.799329   25697 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0912 21:56:19.915146   25697 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0912 21:56:20.029709   25697 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0912 21:56:20.051008   25697 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0912 21:56:20.069222   25697 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0912 21:56:20.069292   25697 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 21:56:20.079515   25697 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0912 21:56:20.079599   25697 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 21:56:20.089733   25697 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 21:56:20.099928   25697 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 21:56:20.110186   25697 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0912 21:56:20.120471   25697 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 21:56:20.130361   25697 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 21:56:20.146228   25697 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 21:56:20.156091   25697 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0912 21:56:20.165021   25697 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0912 21:56:20.165091   25697 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0912 21:56:20.177851   25697 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0912 21:56:20.187561   25697 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 21:56:20.316412   25697 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0912 21:56:20.400784   25697 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0912 21:56:20.400876   25697 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0912 21:56:20.405197   25697 start.go:563] Will wait 60s for crictl version
	I0912 21:56:20.405263   25697 ssh_runner.go:195] Run: which crictl
	I0912 21:56:20.408673   25697 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0912 21:56:20.447077   25697 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0912 21:56:20.447164   25697 ssh_runner.go:195] Run: crio --version
	I0912 21:56:20.472518   25697 ssh_runner.go:195] Run: crio --version
	I0912 21:56:20.500904   25697 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0912 21:56:20.501965   25697 main.go:141] libmachine: (ha-475401) Calling .GetIP
	I0912 21:56:20.504348   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:56:20.504613   25697 main.go:141] libmachine: (ha-475401) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:0e:dd", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:09 +0000 UTC Type:0 Mac:52:54:00:b0:0e:dd Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-475401 Clientid:01:52:54:00:b0:0e:dd}
	I0912 21:56:20.504628   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined IP address 192.168.39.203 and MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:56:20.504808   25697 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0912 21:56:20.508675   25697 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0912 21:56:20.520883   25697 kubeadm.go:883] updating cluster {Name:ha-475401 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:ha-475401 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.203 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0912 21:56:20.521034   25697 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0912 21:56:20.521110   25697 ssh_runner.go:195] Run: sudo crictl images --output json
	I0912 21:56:20.555262   25697 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0912 21:56:20.555337   25697 ssh_runner.go:195] Run: which lz4
	I0912 21:56:20.559092   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0912 21:56:20.559236   25697 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0912 21:56:20.563193   25697 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0912 21:56:20.563233   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0912 21:56:21.749388   25697 crio.go:462] duration metric: took 1.190206408s to copy over tarball
	I0912 21:56:21.749464   25697 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0912 21:56:23.727146   25697 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.977650394s)
	I0912 21:56:23.727182   25697 crio.go:469] duration metric: took 1.97776335s to extract the tarball
	I0912 21:56:23.727190   25697 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0912 21:56:23.763611   25697 ssh_runner.go:195] Run: sudo crictl images --output json
	I0912 21:56:23.808502   25697 crio.go:514] all images are preloaded for cri-o runtime.
	I0912 21:56:23.808525   25697 cache_images.go:84] Images are preloaded, skipping loading
	I0912 21:56:23.808533   25697 kubeadm.go:934] updating node { 192.168.39.203 8443 v1.31.1 crio true true} ...
	I0912 21:56:23.808655   25697 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-475401 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.203
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-475401 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0912 21:56:23.808719   25697 ssh_runner.go:195] Run: crio config
	I0912 21:56:23.850903   25697 cni.go:84] Creating CNI manager for ""
	I0912 21:56:23.850925   25697 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0912 21:56:23.850942   25697 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0912 21:56:23.850961   25697 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.203 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-475401 NodeName:ha-475401 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.203"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.203 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0912 21:56:23.851097   25697 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.203
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-475401"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.203
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.203"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0912 21:56:23.851120   25697 kube-vip.go:115] generating kube-vip config ...
	I0912 21:56:23.851178   25697 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0912 21:56:23.866202   25697 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0912 21:56:23.866308   25697 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0912 21:56:23.866360   25697 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0912 21:56:23.876752   25697 binaries.go:44] Found k8s binaries, skipping transfer
	I0912 21:56:23.876825   25697 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0912 21:56:23.886530   25697 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0912 21:56:23.902835   25697 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0912 21:56:23.918301   25697 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0912 21:56:23.933717   25697 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0912 21:56:23.949114   25697 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0912 21:56:23.953193   25697 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0912 21:56:23.964866   25697 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 21:56:24.092552   25697 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0912 21:56:24.109922   25697 certs.go:68] Setting up /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401 for IP: 192.168.39.203
	I0912 21:56:24.109947   25697 certs.go:194] generating shared ca certs ...
	I0912 21:56:24.109971   25697 certs.go:226] acquiring lock for ca certs: {Name:mk5e19f2c2757edc3fcb6b43a39efee0885b349c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:56:24.110119   25697 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.key
	I0912 21:56:24.110164   25697 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.key
	I0912 21:56:24.110177   25697 certs.go:256] generating profile certs ...
	I0912 21:56:24.110250   25697 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/client.key
	I0912 21:56:24.110269   25697 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/client.crt with IP's: []
	I0912 21:56:24.345938   25697 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/client.crt ...
	I0912 21:56:24.345968   25697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/client.crt: {Name:mka6c1e7d6609a21305a0e1773b35c84f55113cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:56:24.346132   25697 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/client.key ...
	I0912 21:56:24.346145   25697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/client.key: {Name:mkf7e34e888e50ca221094327099d20bcce5f94d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:56:24.346222   25697 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.key.e13b779b
	I0912 21:56:24.346237   25697 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.crt.e13b779b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.203 192.168.39.254]
	I0912 21:56:24.417567   25697 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.crt.e13b779b ...
	I0912 21:56:24.417598   25697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.crt.e13b779b: {Name:mke1d5796526bf531600b3509ec05f11a758e66f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:56:24.417758   25697 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.key.e13b779b ...
	I0912 21:56:24.417772   25697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.key.e13b779b: {Name:mkfd06efc24218b09c0cad8fe026bed479b3b005 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:56:24.417848   25697 certs.go:381] copying /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.crt.e13b779b -> /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.crt
	I0912 21:56:24.417947   25697 certs.go:385] copying /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.key.e13b779b -> /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.key
	I0912 21:56:24.418001   25697 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/proxy-client.key
	I0912 21:56:24.418014   25697 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/proxy-client.crt with IP's: []
	I0912 21:56:24.507416   25697 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/proxy-client.crt ...
	I0912 21:56:24.507447   25697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/proxy-client.crt: {Name:mk5f451a7b7611f8daf526fb4007a4e6d7d89cdc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:56:24.507614   25697 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/proxy-client.key ...
	I0912 21:56:24.507625   25697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/proxy-client.key: {Name:mkd2818606a639c6c5ea27f592bfaf6531f962fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:56:24.507694   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0912 21:56:24.507712   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0912 21:56:24.507723   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0912 21:56:24.507750   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0912 21:56:24.507769   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0912 21:56:24.507783   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0912 21:56:24.507795   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0912 21:56:24.507807   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0912 21:56:24.507867   25697 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083.pem (1338 bytes)
	W0912 21:56:24.507902   25697 certs.go:480] ignoring /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083_empty.pem, impossibly tiny 0 bytes
	I0912 21:56:24.507912   25697 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem (1679 bytes)
	I0912 21:56:24.507939   25697 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem (1082 bytes)
	I0912 21:56:24.507962   25697 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem (1123 bytes)
	I0912 21:56:24.507985   25697 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem (1679 bytes)
	I0912 21:56:24.508022   25697 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem (1708 bytes)
	I0912 21:56:24.508047   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0912 21:56:24.508061   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083.pem -> /usr/share/ca-certificates/13083.pem
	I0912 21:56:24.508075   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem -> /usr/share/ca-certificates/130832.pem
	I0912 21:56:24.508688   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0912 21:56:24.533883   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0912 21:56:24.556190   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0912 21:56:24.578546   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0912 21:56:24.602390   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0912 21:56:24.624976   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0912 21:56:24.646839   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0912 21:56:24.669447   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0912 21:56:24.692696   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0912 21:56:24.715860   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083.pem --> /usr/share/ca-certificates/13083.pem (1338 bytes)
	I0912 21:56:24.737405   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem --> /usr/share/ca-certificates/130832.pem (1708 bytes)
	I0912 21:56:24.759589   25697 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0912 21:56:24.775992   25697 ssh_runner.go:195] Run: openssl version
	I0912 21:56:24.781509   25697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130832.pem && ln -fs /usr/share/ca-certificates/130832.pem /etc/ssl/certs/130832.pem"
	I0912 21:56:24.792384   25697 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130832.pem
	I0912 21:56:24.796871   25697 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 12 21:47 /usr/share/ca-certificates/130832.pem
	I0912 21:56:24.796939   25697 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130832.pem
	I0912 21:56:24.802571   25697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130832.pem /etc/ssl/certs/3ec20f2e.0"
	I0912 21:56:24.812962   25697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0912 21:56:24.823381   25697 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0912 21:56:24.827617   25697 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 12 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0912 21:56:24.827679   25697 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0912 21:56:24.833219   25697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0912 21:56:24.844095   25697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13083.pem && ln -fs /usr/share/ca-certificates/13083.pem /etc/ssl/certs/13083.pem"
	I0912 21:56:24.854896   25697 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13083.pem
	I0912 21:56:24.859782   25697 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 12 21:47 /usr/share/ca-certificates/13083.pem
	I0912 21:56:24.859834   25697 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13083.pem
	I0912 21:56:24.869713   25697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13083.pem /etc/ssl/certs/51391683.0"
	I0912 21:56:24.888503   25697 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0912 21:56:24.897709   25697 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0912 21:56:24.897769   25697 kubeadm.go:392] StartCluster: {Name:ha-475401 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-475401 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.203 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 21:56:24.897834   25697 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0912 21:56:24.897904   25697 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0912 21:56:24.938395   25697 cri.go:89] found id: ""
	I0912 21:56:24.938458   25697 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0912 21:56:24.948312   25697 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0912 21:56:24.957952   25697 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0912 21:56:24.967400   25697 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0912 21:56:24.967424   25697 kubeadm.go:157] found existing configuration files:
	
	I0912 21:56:24.967528   25697 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0912 21:56:24.976891   25697 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0912 21:56:24.976944   25697 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0912 21:56:24.986394   25697 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0912 21:56:24.995316   25697 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0912 21:56:24.995386   25697 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0912 21:56:25.004432   25697 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0912 21:56:25.013241   25697 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0912 21:56:25.013297   25697 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0912 21:56:25.023452   25697 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0912 21:56:25.032567   25697 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0912 21:56:25.032619   25697 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0912 21:56:25.041550   25697 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0912 21:56:25.138604   25697 kubeadm.go:310] W0912 21:56:25.122647     829 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0912 21:56:25.139554   25697 kubeadm.go:310] W0912 21:56:25.124009     829 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0912 21:56:25.242540   25697 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0912 21:56:37.159796   25697 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0912 21:56:37.159846   25697 kubeadm.go:310] [preflight] Running pre-flight checks
	I0912 21:56:37.159933   25697 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0912 21:56:37.160073   25697 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0912 21:56:37.160170   25697 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0912 21:56:37.160237   25697 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0912 21:56:37.161678   25697 out.go:235]   - Generating certificates and keys ...
	I0912 21:56:37.161750   25697 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0912 21:56:37.161820   25697 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0912 21:56:37.161907   25697 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0912 21:56:37.161973   25697 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0912 21:56:37.162059   25697 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0912 21:56:37.162140   25697 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0912 21:56:37.162212   25697 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0912 21:56:37.162358   25697 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-475401 localhost] and IPs [192.168.39.203 127.0.0.1 ::1]
	I0912 21:56:37.162428   25697 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0912 21:56:37.162548   25697 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-475401 localhost] and IPs [192.168.39.203 127.0.0.1 ::1]
	I0912 21:56:37.162604   25697 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0912 21:56:37.162658   25697 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0912 21:56:37.162697   25697 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0912 21:56:37.162768   25697 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0912 21:56:37.162818   25697 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0912 21:56:37.162876   25697 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0912 21:56:37.162942   25697 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0912 21:56:37.163050   25697 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0912 21:56:37.163118   25697 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0912 21:56:37.163197   25697 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0912 21:56:37.163307   25697 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0912 21:56:37.165438   25697 out.go:235]   - Booting up control plane ...
	I0912 21:56:37.165516   25697 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0912 21:56:37.165588   25697 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0912 21:56:37.165666   25697 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0912 21:56:37.165775   25697 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0912 21:56:37.165871   25697 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0912 21:56:37.165910   25697 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0912 21:56:37.166062   25697 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0912 21:56:37.166158   25697 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0912 21:56:37.166208   25697 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.472964ms
	I0912 21:56:37.166289   25697 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0912 21:56:37.166388   25697 kubeadm.go:310] [api-check] The API server is healthy after 6.056268017s
	I0912 21:56:37.166548   25697 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0912 21:56:37.166679   25697 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0912 21:56:37.166744   25697 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0912 21:56:37.166925   25697 kubeadm.go:310] [mark-control-plane] Marking the node ha-475401 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0912 21:56:37.167014   25697 kubeadm.go:310] [bootstrap-token] Using token: wgjm90.cxyrn1xrd6ja5z7v
	I0912 21:56:37.168265   25697 out.go:235]   - Configuring RBAC rules ...
	I0912 21:56:37.168388   25697 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0912 21:56:37.168503   25697 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0912 21:56:37.168701   25697 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0912 21:56:37.168817   25697 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0912 21:56:37.168920   25697 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0912 21:56:37.169013   25697 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0912 21:56:37.169140   25697 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0912 21:56:37.169226   25697 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0912 21:56:37.169269   25697 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0912 21:56:37.169281   25697 kubeadm.go:310] 
	I0912 21:56:37.169345   25697 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0912 21:56:37.169351   25697 kubeadm.go:310] 
	I0912 21:56:37.169454   25697 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0912 21:56:37.169463   25697 kubeadm.go:310] 
	I0912 21:56:37.169503   25697 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0912 21:56:37.169604   25697 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0912 21:56:37.169693   25697 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0912 21:56:37.169703   25697 kubeadm.go:310] 
	I0912 21:56:37.169780   25697 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0912 21:56:37.169793   25697 kubeadm.go:310] 
	I0912 21:56:37.169863   25697 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0912 21:56:37.169872   25697 kubeadm.go:310] 
	I0912 21:56:37.169977   25697 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0912 21:56:37.170060   25697 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0912 21:56:37.170118   25697 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0912 21:56:37.170136   25697 kubeadm.go:310] 
	I0912 21:56:37.170233   25697 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0912 21:56:37.170438   25697 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0912 21:56:37.170452   25697 kubeadm.go:310] 
	I0912 21:56:37.170549   25697 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token wgjm90.cxyrn1xrd6ja5z7v \
	I0912 21:56:37.170669   25697 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e9285e6e7599a58febe9d174fa57ffa69a9b4bf818d01b703e61fc8c784ff29f \
	I0912 21:56:37.170712   25697 kubeadm.go:310] 	--control-plane 
	I0912 21:56:37.170720   25697 kubeadm.go:310] 
	I0912 21:56:37.170789   25697 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0912 21:56:37.170795   25697 kubeadm.go:310] 
	I0912 21:56:37.170860   25697 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token wgjm90.cxyrn1xrd6ja5z7v \
	I0912 21:56:37.170974   25697 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e9285e6e7599a58febe9d174fa57ffa69a9b4bf818d01b703e61fc8c784ff29f 
	I0912 21:56:37.170991   25697 cni.go:84] Creating CNI manager for ""
	I0912 21:56:37.170996   25697 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0912 21:56:37.172523   25697 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0912 21:56:37.173662   25697 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0912 21:56:37.180682   25697 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0912 21:56:37.180701   25697 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0912 21:56:37.198637   25697 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0912 21:56:37.563600   25697 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0912 21:56:37.563674   25697 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:56:37.563687   25697 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-475401 minikube.k8s.io/updated_at=2024_09_12T21_56_37_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f6bc674a17941874d4e5b792b09c1791d30622b8 minikube.k8s.io/name=ha-475401 minikube.k8s.io/primary=true
	I0912 21:56:37.709078   25697 ops.go:34] apiserver oom_adj: -16
	I0912 21:56:37.709172   25697 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:56:38.209596   25697 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:56:38.709514   25697 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:56:39.210061   25697 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:56:39.709424   25697 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:56:40.209572   25697 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:56:40.307756   25697 kubeadm.go:1113] duration metric: took 2.744148458s to wait for elevateKubeSystemPrivileges
	I0912 21:56:40.307800   25697 kubeadm.go:394] duration metric: took 15.410033831s to StartCluster
	I0912 21:56:40.307824   25697 settings.go:142] acquiring lock: {Name:mk9c957feafb8d7ccd833ad0c106ef81ecfe5ba8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:56:40.307902   25697 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19616-5891/kubeconfig
	I0912 21:56:40.308574   25697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/kubeconfig: {Name:mkffb46c3e9d2b8baebc7237b48bf41bccf1a52c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:56:40.308812   25697 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0912 21:56:40.308815   25697 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.203 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0912 21:56:40.308838   25697 start.go:241] waiting for startup goroutines ...
	I0912 21:56:40.308847   25697 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0912 21:56:40.308908   25697 addons.go:69] Setting storage-provisioner=true in profile "ha-475401"
	I0912 21:56:40.308919   25697 addons.go:69] Setting default-storageclass=true in profile "ha-475401"
	I0912 21:56:40.308942   25697 addons.go:234] Setting addon storage-provisioner=true in "ha-475401"
	I0912 21:56:40.308950   25697 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-475401"
	I0912 21:56:40.308980   25697 host.go:66] Checking if "ha-475401" exists ...
	I0912 21:56:40.309024   25697 config.go:182] Loaded profile config "ha-475401": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 21:56:40.309347   25697 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:56:40.309348   25697 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:56:40.309388   25697 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:56:40.309398   25697 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:56:40.325369   25697 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44769
	I0912 21:56:40.325412   25697 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36797
	I0912 21:56:40.325882   25697 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:56:40.325936   25697 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:56:40.326394   25697 main.go:141] libmachine: Using API Version  1
	I0912 21:56:40.326414   25697 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:56:40.326539   25697 main.go:141] libmachine: Using API Version  1
	I0912 21:56:40.326563   25697 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:56:40.326693   25697 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:56:40.326913   25697 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:56:40.327107   25697 main.go:141] libmachine: (ha-475401) Calling .GetState
	I0912 21:56:40.327279   25697 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:56:40.327308   25697 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:56:40.329288   25697 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19616-5891/kubeconfig
	I0912 21:56:40.329695   25697 kapi.go:59] client config for ha-475401: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/client.crt", KeyFile:"/home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/client.key", CAFile:"/home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f30300), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0912 21:56:40.330205   25697 cert_rotation.go:140] Starting client certificate rotation controller
	I0912 21:56:40.330495   25697 addons.go:234] Setting addon default-storageclass=true in "ha-475401"
	I0912 21:56:40.330548   25697 host.go:66] Checking if "ha-475401" exists ...
	I0912 21:56:40.330917   25697 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:56:40.330963   25697 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:56:40.345620   25697 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45305
	I0912 21:56:40.346031   25697 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:56:40.346355   25697 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44751
	I0912 21:56:40.346496   25697 main.go:141] libmachine: Using API Version  1
	I0912 21:56:40.346521   25697 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:56:40.346859   25697 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:56:40.346907   25697 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:56:40.347412   25697 main.go:141] libmachine: Using API Version  1
	I0912 21:56:40.347427   25697 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:56:40.347444   25697 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:56:40.347448   25697 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:56:40.347819   25697 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:56:40.348028   25697 main.go:141] libmachine: (ha-475401) Calling .GetState
	I0912 21:56:40.349807   25697 main.go:141] libmachine: (ha-475401) Calling .DriverName
	I0912 21:56:40.352414   25697 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 21:56:40.354066   25697 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0912 21:56:40.354089   25697 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0912 21:56:40.354111   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHHostname
	I0912 21:56:40.357110   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:56:40.357588   25697 main.go:141] libmachine: (ha-475401) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:0e:dd", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:09 +0000 UTC Type:0 Mac:52:54:00:b0:0e:dd Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-475401 Clientid:01:52:54:00:b0:0e:dd}
	I0912 21:56:40.357632   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined IP address 192.168.39.203 and MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:56:40.357802   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHPort
	I0912 21:56:40.357974   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHKeyPath
	I0912 21:56:40.358148   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHUsername
	I0912 21:56:40.358321   25697 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401/id_rsa Username:docker}
	I0912 21:56:40.363190   25697 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33425
	I0912 21:56:40.363621   25697 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:56:40.364066   25697 main.go:141] libmachine: Using API Version  1
	I0912 21:56:40.364081   25697 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:56:40.364367   25697 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:56:40.364557   25697 main.go:141] libmachine: (ha-475401) Calling .GetState
	I0912 21:56:40.366030   25697 main.go:141] libmachine: (ha-475401) Calling .DriverName
	I0912 21:56:40.366220   25697 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0912 21:56:40.366236   25697 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0912 21:56:40.366259   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHHostname
	I0912 21:56:40.368757   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:56:40.369258   25697 main.go:141] libmachine: (ha-475401) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:0e:dd", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:09 +0000 UTC Type:0 Mac:52:54:00:b0:0e:dd Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-475401 Clientid:01:52:54:00:b0:0e:dd}
	I0912 21:56:40.369288   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined IP address 192.168.39.203 and MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:56:40.369464   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHPort
	I0912 21:56:40.369672   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHKeyPath
	I0912 21:56:40.369824   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHUsername
	I0912 21:56:40.369975   25697 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401/id_rsa Username:docker}
	I0912 21:56:40.418646   25697 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0912 21:56:40.504443   25697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0912 21:56:40.553155   25697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0912 21:56:40.798821   25697 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
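	The step at 21:56:40.418646 rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host-side IP (192.168.39.1). A rough Go equivalent of that text edit, operating on a toy Corefile string rather than the live ConfigMap, is sketched below:

package main

import (
	"fmt"
	"strings"
)

// injectHostRecord inserts a hosts{} block for host.minikube.internal
// immediately before the "forward . /etc/resolv.conf" directive, the same
// edit the sed pipeline in the log performs on the coredns ConfigMap.
func injectHostRecord(corefile, hostIP string) string {
	hostsBlock := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
	var out strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		if strings.Contains(line, "forward . /etc/resolv.conf") {
			out.WriteString(hostsBlock)
		}
		out.WriteString(line)
	}
	return out.String()
}

func main() {
	// Toy Corefile, not the cluster's actual ConfigMap contents.
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n}\n"
	fmt.Print(injectHostRecord(corefile, "192.168.39.1"))
}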
	I0912 21:56:41.114951   25697 main.go:141] libmachine: Making call to close driver server
	I0912 21:56:41.114974   25697 main.go:141] libmachine: (ha-475401) Calling .Close
	I0912 21:56:41.115125   25697 main.go:141] libmachine: Making call to close driver server
	I0912 21:56:41.115147   25697 main.go:141] libmachine: (ha-475401) Calling .Close
	I0912 21:56:41.115272   25697 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:56:41.115304   25697 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:56:41.115313   25697 main.go:141] libmachine: Making call to close driver server
	I0912 21:56:41.115338   25697 main.go:141] libmachine: (ha-475401) Calling .Close
	I0912 21:56:41.115447   25697 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:56:41.115471   25697 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:56:41.115472   25697 main.go:141] libmachine: (ha-475401) DBG | Closing plugin on server side
	I0912 21:56:41.115486   25697 main.go:141] libmachine: Making call to close driver server
	I0912 21:56:41.115495   25697 main.go:141] libmachine: (ha-475401) Calling .Close
	I0912 21:56:41.115589   25697 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:56:41.115633   25697 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:56:41.115695   25697 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:56:41.115705   25697 main.go:141] libmachine: (ha-475401) DBG | Closing plugin on server side
	I0912 21:56:41.115709   25697 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:56:41.115729   25697 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0912 21:56:41.115762   25697 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0912 21:56:41.115884   25697 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0912 21:56:41.115896   25697 round_trippers.go:469] Request Headers:
	I0912 21:56:41.115909   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:56:41.115916   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:56:41.130855   25697 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0912 21:56:41.131735   25697 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0912 21:56:41.131766   25697 round_trippers.go:469] Request Headers:
	I0912 21:56:41.131777   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:56:41.131790   25697 round_trippers.go:473]     Content-Type: application/json
	I0912 21:56:41.131795   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:56:41.136103   25697 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0912 21:56:41.136234   25697 main.go:141] libmachine: Making call to close driver server
	I0912 21:56:41.136250   25697 main.go:141] libmachine: (ha-475401) Calling .Close
	I0912 21:56:41.136517   25697 main.go:141] libmachine: Successfully made call to close driver server
	I0912 21:56:41.136538   25697 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 21:56:41.138796   25697 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0912 21:56:41.140368   25697 addons.go:510] duration metric: took 831.516899ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0912 21:56:41.140403   25697 start.go:246] waiting for cluster config update ...
	I0912 21:56:41.140415   25697 start.go:255] writing updated cluster config ...
	I0912 21:56:41.142210   25697 out.go:201] 
	I0912 21:56:41.144842   25697 config.go:182] Loaded profile config "ha-475401": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 21:56:41.144955   25697 profile.go:143] Saving config to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/config.json ...
	I0912 21:56:41.147500   25697 out.go:177] * Starting "ha-475401-m02" control-plane node in "ha-475401" cluster
	I0912 21:56:41.149381   25697 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0912 21:56:41.149412   25697 cache.go:56] Caching tarball of preloaded images
	I0912 21:56:41.149504   25697 preload.go:172] Found /home/jenkins/minikube-integration/19616-5891/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0912 21:56:41.149518   25697 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0912 21:56:41.149596   25697 profile.go:143] Saving config to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/config.json ...
	I0912 21:56:41.150076   25697 start.go:360] acquireMachinesLock for ha-475401-m02: {Name:mkbb0a9e58b1349e86a63b6069c42d4248d92c3b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 21:56:41.150139   25697 start.go:364] duration metric: took 29.116µs to acquireMachinesLock for "ha-475401-m02"
	I0912 21:56:41.150158   25697 start.go:93] Provisioning new machine with config: &{Name:ha-475401 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-475401 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.203 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0912 21:56:41.150240   25697 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0912 21:56:41.152484   25697 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0912 21:56:41.152578   25697 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:56:41.152601   25697 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:56:41.168550   25697 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36553
	I0912 21:56:41.169110   25697 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:56:41.169745   25697 main.go:141] libmachine: Using API Version  1
	I0912 21:56:41.169770   25697 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:56:41.170098   25697 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:56:41.170301   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetMachineName
	I0912 21:56:41.170467   25697 main.go:141] libmachine: (ha-475401-m02) Calling .DriverName
	I0912 21:56:41.170676   25697 start.go:159] libmachine.API.Create for "ha-475401" (driver="kvm2")
	I0912 21:56:41.170699   25697 client.go:168] LocalClient.Create starting
	I0912 21:56:41.170726   25697 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem
	I0912 21:56:41.170756   25697 main.go:141] libmachine: Decoding PEM data...
	I0912 21:56:41.170780   25697 main.go:141] libmachine: Parsing certificate...
	I0912 21:56:41.170829   25697 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem
	I0912 21:56:41.170852   25697 main.go:141] libmachine: Decoding PEM data...
	I0912 21:56:41.170864   25697 main.go:141] libmachine: Parsing certificate...
	I0912 21:56:41.170884   25697 main.go:141] libmachine: Running pre-create checks...
	I0912 21:56:41.170892   25697 main.go:141] libmachine: (ha-475401-m02) Calling .PreCreateCheck
	I0912 21:56:41.171082   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetConfigRaw
	I0912 21:56:41.171444   25697 main.go:141] libmachine: Creating machine...
	I0912 21:56:41.171457   25697 main.go:141] libmachine: (ha-475401-m02) Calling .Create
	I0912 21:56:41.171601   25697 main.go:141] libmachine: (ha-475401-m02) Creating KVM machine...
	I0912 21:56:41.172840   25697 main.go:141] libmachine: (ha-475401-m02) DBG | found existing default KVM network
	I0912 21:56:41.172966   25697 main.go:141] libmachine: (ha-475401-m02) DBG | found existing private KVM network mk-ha-475401
	I0912 21:56:41.173214   25697 main.go:141] libmachine: (ha-475401-m02) Setting up store path in /home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401-m02 ...
	I0912 21:56:41.173239   25697 main.go:141] libmachine: (ha-475401-m02) Building disk image from file:///home/jenkins/minikube-integration/19616-5891/.minikube/cache/iso/amd64/minikube-v1.34.0-1726156389-19616-amd64.iso
	I0912 21:56:41.173286   25697 main.go:141] libmachine: (ha-475401-m02) DBG | I0912 21:56:41.173199   26067 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19616-5891/.minikube
	I0912 21:56:41.173427   25697 main.go:141] libmachine: (ha-475401-m02) Downloading /home/jenkins/minikube-integration/19616-5891/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19616-5891/.minikube/cache/iso/amd64/minikube-v1.34.0-1726156389-19616-amd64.iso...
	I0912 21:56:41.414393   25697 main.go:141] libmachine: (ha-475401-m02) DBG | I0912 21:56:41.414223   26067 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401-m02/id_rsa...
	I0912 21:56:41.650672   25697 main.go:141] libmachine: (ha-475401-m02) DBG | I0912 21:56:41.650552   26067 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401-m02/ha-475401-m02.rawdisk...
	I0912 21:56:41.650714   25697 main.go:141] libmachine: (ha-475401-m02) DBG | Writing magic tar header
	I0912 21:56:41.650735   25697 main.go:141] libmachine: (ha-475401-m02) DBG | Writing SSH key tar header
	I0912 21:56:41.650746   25697 main.go:141] libmachine: (ha-475401-m02) DBG | I0912 21:56:41.650666   26067 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401-m02 ...
	I0912 21:56:41.650762   25697 main.go:141] libmachine: (ha-475401-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401-m02
	I0912 21:56:41.650822   25697 main.go:141] libmachine: (ha-475401-m02) Setting executable bit set on /home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401-m02 (perms=drwx------)
	I0912 21:56:41.650850   25697 main.go:141] libmachine: (ha-475401-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19616-5891/.minikube/machines
	I0912 21:56:41.650860   25697 main.go:141] libmachine: (ha-475401-m02) Setting executable bit set on /home/jenkins/minikube-integration/19616-5891/.minikube/machines (perms=drwxr-xr-x)
	I0912 21:56:41.650875   25697 main.go:141] libmachine: (ha-475401-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19616-5891/.minikube
	I0912 21:56:41.650889   25697 main.go:141] libmachine: (ha-475401-m02) Setting executable bit set on /home/jenkins/minikube-integration/19616-5891/.minikube (perms=drwxr-xr-x)
	I0912 21:56:41.650895   25697 main.go:141] libmachine: (ha-475401-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19616-5891
	I0912 21:56:41.650902   25697 main.go:141] libmachine: (ha-475401-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0912 21:56:41.650914   25697 main.go:141] libmachine: (ha-475401-m02) DBG | Checking permissions on dir: /home/jenkins
	I0912 21:56:41.650926   25697 main.go:141] libmachine: (ha-475401-m02) DBG | Checking permissions on dir: /home
	I0912 21:56:41.650940   25697 main.go:141] libmachine: (ha-475401-m02) Setting executable bit set on /home/jenkins/minikube-integration/19616-5891 (perms=drwxrwxr-x)
	I0912 21:56:41.650959   25697 main.go:141] libmachine: (ha-475401-m02) DBG | Skipping /home - not owner
	I0912 21:56:41.650988   25697 main.go:141] libmachine: (ha-475401-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0912 21:56:41.651006   25697 main.go:141] libmachine: (ha-475401-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0912 21:56:41.651020   25697 main.go:141] libmachine: (ha-475401-m02) Creating domain...
	I0912 21:56:41.652092   25697 main.go:141] libmachine: (ha-475401-m02) define libvirt domain using xml: 
	I0912 21:56:41.652119   25697 main.go:141] libmachine: (ha-475401-m02) <domain type='kvm'>
	I0912 21:56:41.652128   25697 main.go:141] libmachine: (ha-475401-m02)   <name>ha-475401-m02</name>
	I0912 21:56:41.652137   25697 main.go:141] libmachine: (ha-475401-m02)   <memory unit='MiB'>2200</memory>
	I0912 21:56:41.652147   25697 main.go:141] libmachine: (ha-475401-m02)   <vcpu>2</vcpu>
	I0912 21:56:41.652157   25697 main.go:141] libmachine: (ha-475401-m02)   <features>
	I0912 21:56:41.652169   25697 main.go:141] libmachine: (ha-475401-m02)     <acpi/>
	I0912 21:56:41.652179   25697 main.go:141] libmachine: (ha-475401-m02)     <apic/>
	I0912 21:56:41.652186   25697 main.go:141] libmachine: (ha-475401-m02)     <pae/>
	I0912 21:56:41.652200   25697 main.go:141] libmachine: (ha-475401-m02)     
	I0912 21:56:41.652233   25697 main.go:141] libmachine: (ha-475401-m02)   </features>
	I0912 21:56:41.652257   25697 main.go:141] libmachine: (ha-475401-m02)   <cpu mode='host-passthrough'>
	I0912 21:56:41.652268   25697 main.go:141] libmachine: (ha-475401-m02)   
	I0912 21:56:41.652283   25697 main.go:141] libmachine: (ha-475401-m02)   </cpu>
	I0912 21:56:41.652295   25697 main.go:141] libmachine: (ha-475401-m02)   <os>
	I0912 21:56:41.652309   25697 main.go:141] libmachine: (ha-475401-m02)     <type>hvm</type>
	I0912 21:56:41.652337   25697 main.go:141] libmachine: (ha-475401-m02)     <boot dev='cdrom'/>
	I0912 21:56:41.652348   25697 main.go:141] libmachine: (ha-475401-m02)     <boot dev='hd'/>
	I0912 21:56:41.652359   25697 main.go:141] libmachine: (ha-475401-m02)     <bootmenu enable='no'/>
	I0912 21:56:41.652370   25697 main.go:141] libmachine: (ha-475401-m02)   </os>
	I0912 21:56:41.652383   25697 main.go:141] libmachine: (ha-475401-m02)   <devices>
	I0912 21:56:41.652400   25697 main.go:141] libmachine: (ha-475401-m02)     <disk type='file' device='cdrom'>
	I0912 21:56:41.652417   25697 main.go:141] libmachine: (ha-475401-m02)       <source file='/home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401-m02/boot2docker.iso'/>
	I0912 21:56:41.652429   25697 main.go:141] libmachine: (ha-475401-m02)       <target dev='hdc' bus='scsi'/>
	I0912 21:56:41.652442   25697 main.go:141] libmachine: (ha-475401-m02)       <readonly/>
	I0912 21:56:41.652452   25697 main.go:141] libmachine: (ha-475401-m02)     </disk>
	I0912 21:56:41.652476   25697 main.go:141] libmachine: (ha-475401-m02)     <disk type='file' device='disk'>
	I0912 21:56:41.652490   25697 main.go:141] libmachine: (ha-475401-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0912 21:56:41.652506   25697 main.go:141] libmachine: (ha-475401-m02)       <source file='/home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401-m02/ha-475401-m02.rawdisk'/>
	I0912 21:56:41.652520   25697 main.go:141] libmachine: (ha-475401-m02)       <target dev='hda' bus='virtio'/>
	I0912 21:56:41.652532   25697 main.go:141] libmachine: (ha-475401-m02)     </disk>
	I0912 21:56:41.652547   25697 main.go:141] libmachine: (ha-475401-m02)     <interface type='network'>
	I0912 21:56:41.652560   25697 main.go:141] libmachine: (ha-475401-m02)       <source network='mk-ha-475401'/>
	I0912 21:56:41.652568   25697 main.go:141] libmachine: (ha-475401-m02)       <model type='virtio'/>
	I0912 21:56:41.652577   25697 main.go:141] libmachine: (ha-475401-m02)     </interface>
	I0912 21:56:41.652588   25697 main.go:141] libmachine: (ha-475401-m02)     <interface type='network'>
	I0912 21:56:41.652600   25697 main.go:141] libmachine: (ha-475401-m02)       <source network='default'/>
	I0912 21:56:41.652612   25697 main.go:141] libmachine: (ha-475401-m02)       <model type='virtio'/>
	I0912 21:56:41.652624   25697 main.go:141] libmachine: (ha-475401-m02)     </interface>
	I0912 21:56:41.652637   25697 main.go:141] libmachine: (ha-475401-m02)     <serial type='pty'>
	I0912 21:56:41.652648   25697 main.go:141] libmachine: (ha-475401-m02)       <target port='0'/>
	I0912 21:56:41.652655   25697 main.go:141] libmachine: (ha-475401-m02)     </serial>
	I0912 21:56:41.652666   25697 main.go:141] libmachine: (ha-475401-m02)     <console type='pty'>
	I0912 21:56:41.652673   25697 main.go:141] libmachine: (ha-475401-m02)       <target type='serial' port='0'/>
	I0912 21:56:41.652682   25697 main.go:141] libmachine: (ha-475401-m02)     </console>
	I0912 21:56:41.652690   25697 main.go:141] libmachine: (ha-475401-m02)     <rng model='virtio'>
	I0912 21:56:41.652701   25697 main.go:141] libmachine: (ha-475401-m02)       <backend model='random'>/dev/random</backend>
	I0912 21:56:41.652710   25697 main.go:141] libmachine: (ha-475401-m02)     </rng>
	I0912 21:56:41.652718   25697 main.go:141] libmachine: (ha-475401-m02)     
	I0912 21:56:41.652727   25697 main.go:141] libmachine: (ha-475401-m02)     
	I0912 21:56:41.652744   25697 main.go:141] libmachine: (ha-475401-m02)   </devices>
	I0912 21:56:41.652780   25697 main.go:141] libmachine: (ha-475401-m02) </domain>
	I0912 21:56:41.652794   25697 main.go:141] libmachine: (ha-475401-m02) 
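	The XML above is the libvirt domain definition the kvm2 driver emits for the new VM. A much-reduced sketch of producing a similar definition with text/template follows; the field values are hypothetical and the real template carries more devices and options:

package main

import (
	"os"
	"text/template"
)

// A stripped-down libvirt domain template, loosely modelled on the
// definition logged above. Not minikube's actual template.
const domainTmpl = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMiB}}</memory>
  <vcpu>{{.CPUs}}</vcpu>
  <os>
    <type>hvm</type>
    <boot dev='cdrom'/>
    <boot dev='hd'/>
  </os>
  <devices>
    <disk type='file' device='disk'>
      <source file='{{.DiskPath}}'/>
      <target dev='hda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='{{.Network}}'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
`

type domainConfig struct {
	Name      string
	MemoryMiB int
	CPUs      int
	DiskPath  string
	Network   string
}

func main() {
	cfg := domainConfig{
		Name:      "ha-475401-m02",
		MemoryMiB: 2200,
		CPUs:      2,
		DiskPath:  "/path/to/ha-475401-m02.rawdisk", // placeholder path
		Network:   "mk-ha-475401",
	}
	t := template.Must(template.New("domain").Parse(domainTmpl))
	if err := t.Execute(os.Stdout, cfg); err != nil {
		panic(err)
	}
}

	The rendered XML would then be handed to libvirt (for example with virsh define), which is what the "define libvirt domain using xml" step above does before the driver starts the domain.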
	I0912 21:56:41.659649   25697 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined MAC address 52:54:00:68:a7:8c in network default
	I0912 21:56:41.660258   25697 main.go:141] libmachine: (ha-475401-m02) Ensuring networks are active...
	I0912 21:56:41.660286   25697 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 21:56:41.661071   25697 main.go:141] libmachine: (ha-475401-m02) Ensuring network default is active
	I0912 21:56:41.661395   25697 main.go:141] libmachine: (ha-475401-m02) Ensuring network mk-ha-475401 is active
	I0912 21:56:41.661807   25697 main.go:141] libmachine: (ha-475401-m02) Getting domain xml...
	I0912 21:56:41.662483   25697 main.go:141] libmachine: (ha-475401-m02) Creating domain...
	I0912 21:56:42.897026   25697 main.go:141] libmachine: (ha-475401-m02) Waiting to get IP...
	I0912 21:56:42.897711   25697 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 21:56:42.898094   25697 main.go:141] libmachine: (ha-475401-m02) DBG | unable to find current IP address of domain ha-475401-m02 in network mk-ha-475401
	I0912 21:56:42.898125   25697 main.go:141] libmachine: (ha-475401-m02) DBG | I0912 21:56:42.898073   26067 retry.go:31] will retry after 217.420058ms: waiting for machine to come up
	I0912 21:56:43.117730   25697 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 21:56:43.118102   25697 main.go:141] libmachine: (ha-475401-m02) DBG | unable to find current IP address of domain ha-475401-m02 in network mk-ha-475401
	I0912 21:56:43.118124   25697 main.go:141] libmachine: (ha-475401-m02) DBG | I0912 21:56:43.118079   26067 retry.go:31] will retry after 330.585414ms: waiting for machine to come up
	I0912 21:56:43.450571   25697 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 21:56:43.451040   25697 main.go:141] libmachine: (ha-475401-m02) DBG | unable to find current IP address of domain ha-475401-m02 in network mk-ha-475401
	I0912 21:56:43.451079   25697 main.go:141] libmachine: (ha-475401-m02) DBG | I0912 21:56:43.451003   26067 retry.go:31] will retry after 473.887606ms: waiting for machine to come up
	I0912 21:56:43.926694   25697 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 21:56:43.927123   25697 main.go:141] libmachine: (ha-475401-m02) DBG | unable to find current IP address of domain ha-475401-m02 in network mk-ha-475401
	I0912 21:56:43.927142   25697 main.go:141] libmachine: (ha-475401-m02) DBG | I0912 21:56:43.927090   26067 retry.go:31] will retry after 484.6682ms: waiting for machine to come up
	I0912 21:56:44.413947   25697 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 21:56:44.414506   25697 main.go:141] libmachine: (ha-475401-m02) DBG | unable to find current IP address of domain ha-475401-m02 in network mk-ha-475401
	I0912 21:56:44.414530   25697 main.go:141] libmachine: (ha-475401-m02) DBG | I0912 21:56:44.414427   26067 retry.go:31] will retry after 570.000136ms: waiting for machine to come up
	I0912 21:56:44.986462   25697 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 21:56:44.986909   25697 main.go:141] libmachine: (ha-475401-m02) DBG | unable to find current IP address of domain ha-475401-m02 in network mk-ha-475401
	I0912 21:56:44.986936   25697 main.go:141] libmachine: (ha-475401-m02) DBG | I0912 21:56:44.986849   26067 retry.go:31] will retry after 947.956296ms: waiting for machine to come up
	I0912 21:56:45.936372   25697 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 21:56:45.936840   25697 main.go:141] libmachine: (ha-475401-m02) DBG | unable to find current IP address of domain ha-475401-m02 in network mk-ha-475401
	I0912 21:56:45.936867   25697 main.go:141] libmachine: (ha-475401-m02) DBG | I0912 21:56:45.936791   26067 retry.go:31] will retry after 1.161491429s: waiting for machine to come up
	I0912 21:56:47.099618   25697 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 21:56:47.100130   25697 main.go:141] libmachine: (ha-475401-m02) DBG | unable to find current IP address of domain ha-475401-m02 in network mk-ha-475401
	I0912 21:56:47.100155   25697 main.go:141] libmachine: (ha-475401-m02) DBG | I0912 21:56:47.100079   26067 retry.go:31] will retry after 1.237357696s: waiting for machine to come up
	I0912 21:56:48.338682   25697 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 21:56:48.339181   25697 main.go:141] libmachine: (ha-475401-m02) DBG | unable to find current IP address of domain ha-475401-m02 in network mk-ha-475401
	I0912 21:56:48.339211   25697 main.go:141] libmachine: (ha-475401-m02) DBG | I0912 21:56:48.339138   26067 retry.go:31] will retry after 1.321851998s: waiting for machine to come up
	I0912 21:56:49.662997   25697 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 21:56:49.663569   25697 main.go:141] libmachine: (ha-475401-m02) DBG | unable to find current IP address of domain ha-475401-m02 in network mk-ha-475401
	I0912 21:56:49.663593   25697 main.go:141] libmachine: (ha-475401-m02) DBG | I0912 21:56:49.663528   26067 retry.go:31] will retry after 1.931867868s: waiting for machine to come up
	I0912 21:56:51.596580   25697 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 21:56:51.597156   25697 main.go:141] libmachine: (ha-475401-m02) DBG | unable to find current IP address of domain ha-475401-m02 in network mk-ha-475401
	I0912 21:56:51.597293   25697 main.go:141] libmachine: (ha-475401-m02) DBG | I0912 21:56:51.596987   26067 retry.go:31] will retry after 2.691762052s: waiting for machine to come up
	I0912 21:56:54.291916   25697 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 21:56:54.292477   25697 main.go:141] libmachine: (ha-475401-m02) DBG | unable to find current IP address of domain ha-475401-m02 in network mk-ha-475401
	I0912 21:56:54.292506   25697 main.go:141] libmachine: (ha-475401-m02) DBG | I0912 21:56:54.292427   26067 retry.go:31] will retry after 3.403416956s: waiting for machine to come up
	I0912 21:56:57.698211   25697 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 21:56:57.698615   25697 main.go:141] libmachine: (ha-475401-m02) DBG | unable to find current IP address of domain ha-475401-m02 in network mk-ha-475401
	I0912 21:56:57.698643   25697 main.go:141] libmachine: (ha-475401-m02) DBG | I0912 21:56:57.698559   26067 retry.go:31] will retry after 3.117356745s: waiting for machine to come up
	I0912 21:57:00.819759   25697 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 21:57:00.820426   25697 main.go:141] libmachine: (ha-475401-m02) Found IP for machine: 192.168.39.222
	I0912 21:57:00.820456   25697 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has current primary IP address 192.168.39.222 and MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 21:57:00.820465   25697 main.go:141] libmachine: (ha-475401-m02) Reserving static IP address...
	I0912 21:57:00.820919   25697 main.go:141] libmachine: (ha-475401-m02) DBG | unable to find host DHCP lease matching {name: "ha-475401-m02", mac: "52:54:00:ad:31:3a", ip: "192.168.39.222"} in network mk-ha-475401
	I0912 21:57:00.896337   25697 main.go:141] libmachine: (ha-475401-m02) DBG | Getting to WaitForSSH function...
	I0912 21:57:00.896391   25697 main.go:141] libmachine: (ha-475401-m02) Reserved static IP address: 192.168.39.222
	I0912 21:57:00.896405   25697 main.go:141] libmachine: (ha-475401-m02) Waiting for SSH to be available...
	I0912 21:57:00.899059   25697 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 21:57:00.899473   25697 main.go:141] libmachine: (ha-475401-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:31:3a", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:55 +0000 UTC Type:0 Mac:52:54:00:ad:31:3a Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ad:31:3a}
	I0912 21:57:00.899499   25697 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 21:57:00.899660   25697 main.go:141] libmachine: (ha-475401-m02) DBG | Using SSH client type: external
	I0912 21:57:00.899687   25697 main.go:141] libmachine: (ha-475401-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401-m02/id_rsa (-rw-------)
	I0912 21:57:00.899720   25697 main.go:141] libmachine: (ha-475401-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.222 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0912 21:57:00.899728   25697 main.go:141] libmachine: (ha-475401-m02) DBG | About to run SSH command:
	I0912 21:57:00.899740   25697 main.go:141] libmachine: (ha-475401-m02) DBG | exit 0
	I0912 21:57:01.021499   25697 main.go:141] libmachine: (ha-475401-m02) DBG | SSH cmd err, output: <nil>: 
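	The WaitForSSH step shells out to the system ssh client with host-key checking disabled and the machine's freshly generated key, running "exit 0" until it succeeds. A simplified sketch of that probe, with placeholder user, host, and key values:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady runs "ssh ... exit 0" with options similar to those in the log
// (no host-key checking, a dedicated identity file) and reports whether the
// command succeeded. The driver's real WaitForSSH adds retries and timeouts.
func sshReady(user, host, keyPath string) bool {
	cmd := exec.Command("ssh",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		fmt.Sprintf("%s@%s", user, host),
		"exit", "0")
	return cmd.Run() == nil
}

func main() {
	for !sshReady("docker", "192.168.39.222", "/path/to/id_rsa") { // placeholder key path
		time.Sleep(2 * time.Second)
	}
	fmt.Println("SSH is available")
}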
	I0912 21:57:01.021784   25697 main.go:141] libmachine: (ha-475401-m02) KVM machine creation complete!
	I0912 21:57:01.022111   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetConfigRaw
	I0912 21:57:01.022647   25697 main.go:141] libmachine: (ha-475401-m02) Calling .DriverName
	I0912 21:57:01.022828   25697 main.go:141] libmachine: (ha-475401-m02) Calling .DriverName
	I0912 21:57:01.022982   25697 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0912 21:57:01.022995   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetState
	I0912 21:57:01.024319   25697 main.go:141] libmachine: Detecting operating system of created instance...
	I0912 21:57:01.024333   25697 main.go:141] libmachine: Waiting for SSH to be available...
	I0912 21:57:01.024343   25697 main.go:141] libmachine: Getting to WaitForSSH function...
	I0912 21:57:01.024351   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHHostname
	I0912 21:57:01.027044   25697 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 21:57:01.027459   25697 main.go:141] libmachine: (ha-475401-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:31:3a", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:55 +0000 UTC Type:0 Mac:52:54:00:ad:31:3a Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-475401-m02 Clientid:01:52:54:00:ad:31:3a}
	I0912 21:57:01.027491   25697 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 21:57:01.027621   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHPort
	I0912 21:57:01.027808   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHKeyPath
	I0912 21:57:01.027951   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHKeyPath
	I0912 21:57:01.028202   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHUsername
	I0912 21:57:01.028403   25697 main.go:141] libmachine: Using SSH client type: native
	I0912 21:57:01.028591   25697 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0912 21:57:01.028601   25697 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0912 21:57:01.128848   25697 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0912 21:57:01.128868   25697 main.go:141] libmachine: Detecting the provisioner...
	I0912 21:57:01.128876   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHHostname
	I0912 21:57:01.131443   25697 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 21:57:01.131751   25697 main.go:141] libmachine: (ha-475401-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:31:3a", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:55 +0000 UTC Type:0 Mac:52:54:00:ad:31:3a Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-475401-m02 Clientid:01:52:54:00:ad:31:3a}
	I0912 21:57:01.131781   25697 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 21:57:01.131911   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHPort
	I0912 21:57:01.132097   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHKeyPath
	I0912 21:57:01.132261   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHKeyPath
	I0912 21:57:01.132399   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHUsername
	I0912 21:57:01.132547   25697 main.go:141] libmachine: Using SSH client type: native
	I0912 21:57:01.132786   25697 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0912 21:57:01.132802   25697 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0912 21:57:01.234033   25697 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0912 21:57:01.234093   25697 main.go:141] libmachine: found compatible host: buildroot
	I0912 21:57:01.234102   25697 main.go:141] libmachine: Provisioning with buildroot...
	I0912 21:57:01.234111   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetMachineName
	I0912 21:57:01.234532   25697 buildroot.go:166] provisioning hostname "ha-475401-m02"
	I0912 21:57:01.234563   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetMachineName
	I0912 21:57:01.234770   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHHostname
	I0912 21:57:01.237526   25697 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 21:57:01.237885   25697 main.go:141] libmachine: (ha-475401-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:31:3a", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:55 +0000 UTC Type:0 Mac:52:54:00:ad:31:3a Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-475401-m02 Clientid:01:52:54:00:ad:31:3a}
	I0912 21:57:01.237913   25697 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 21:57:01.238069   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHPort
	I0912 21:57:01.238252   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHKeyPath
	I0912 21:57:01.238432   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHKeyPath
	I0912 21:57:01.238559   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHUsername
	I0912 21:57:01.238719   25697 main.go:141] libmachine: Using SSH client type: native
	I0912 21:57:01.238945   25697 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0912 21:57:01.238962   25697 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-475401-m02 && echo "ha-475401-m02" | sudo tee /etc/hostname
	I0912 21:57:01.356940   25697 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-475401-m02
	
	I0912 21:57:01.356962   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHHostname
	I0912 21:57:01.360119   25697 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 21:57:01.360549   25697 main.go:141] libmachine: (ha-475401-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:31:3a", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:55 +0000 UTC Type:0 Mac:52:54:00:ad:31:3a Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-475401-m02 Clientid:01:52:54:00:ad:31:3a}
	I0912 21:57:01.360576   25697 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 21:57:01.360776   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHPort
	I0912 21:57:01.360977   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHKeyPath
	I0912 21:57:01.361130   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHKeyPath
	I0912 21:57:01.361260   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHUsername
	I0912 21:57:01.361437   25697 main.go:141] libmachine: Using SSH client type: native
	I0912 21:57:01.361755   25697 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0912 21:57:01.361788   25697 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-475401-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-475401-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-475401-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0912 21:57:01.474502   25697 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0912 21:57:01.474531   25697 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19616-5891/.minikube CaCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19616-5891/.minikube}
	I0912 21:57:01.474548   25697 buildroot.go:174] setting up certificates
	I0912 21:57:01.474558   25697 provision.go:84] configureAuth start
	I0912 21:57:01.474568   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetMachineName
	I0912 21:57:01.474846   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetIP
	I0912 21:57:01.477830   25697 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 21:57:01.478312   25697 main.go:141] libmachine: (ha-475401-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:31:3a", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:55 +0000 UTC Type:0 Mac:52:54:00:ad:31:3a Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-475401-m02 Clientid:01:52:54:00:ad:31:3a}
	I0912 21:57:01.478345   25697 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 21:57:01.478493   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHHostname
	I0912 21:57:01.481300   25697 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 21:57:01.481744   25697 main.go:141] libmachine: (ha-475401-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:31:3a", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:55 +0000 UTC Type:0 Mac:52:54:00:ad:31:3a Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-475401-m02 Clientid:01:52:54:00:ad:31:3a}
	I0912 21:57:01.481775   25697 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 21:57:01.481967   25697 provision.go:143] copyHostCerts
	I0912 21:57:01.481995   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem
	I0912 21:57:01.482023   25697 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem, removing ...
	I0912 21:57:01.482033   25697 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem
	I0912 21:57:01.482116   25697 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem (1082 bytes)
	I0912 21:57:01.482210   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem
	I0912 21:57:01.482233   25697 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem, removing ...
	I0912 21:57:01.482242   25697 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem
	I0912 21:57:01.482282   25697 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem (1123 bytes)
	I0912 21:57:01.482385   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem
	I0912 21:57:01.482422   25697 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem, removing ...
	I0912 21:57:01.482433   25697 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem
	I0912 21:57:01.482473   25697 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem (1679 bytes)
	I0912 21:57:01.482538   25697 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem org=jenkins.ha-475401-m02 san=[127.0.0.1 192.168.39.222 ha-475401-m02 localhost minikube]
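	The provisioning step above generates a server certificate signed by the minikube CA, with SANs covering loopback, the node IP, the hostname, and cluster-internal names. A self-contained sketch of the same idea using crypto/x509; a throwaway CA stands in for ca.pem/ca-key.pem and the field choices are illustrative, not minikube's exact ones:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA standing in for minikube's ca.pem / ca-key.pem.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		panic(err)
	}

	// Server certificate with the SANs listed in the log for ha-475401-m02.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-475401-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches the 26280h0m0s CertExpiration in the config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-475401-m02", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.222")},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("server cert: %d DER bytes\n", len(srvDER))
}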
	I0912 21:57:01.677785   25697 provision.go:177] copyRemoteCerts
	I0912 21:57:01.677843   25697 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0912 21:57:01.677865   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHHostname
	I0912 21:57:01.680375   25697 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 21:57:01.680698   25697 main.go:141] libmachine: (ha-475401-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:31:3a", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:55 +0000 UTC Type:0 Mac:52:54:00:ad:31:3a Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-475401-m02 Clientid:01:52:54:00:ad:31:3a}
	I0912 21:57:01.680726   25697 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 21:57:01.680918   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHPort
	I0912 21:57:01.681118   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHKeyPath
	I0912 21:57:01.681278   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHUsername
	I0912 21:57:01.681435   25697 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401-m02/id_rsa Username:docker}
	I0912 21:57:01.763387   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0912 21:57:01.763463   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0912 21:57:01.786565   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0912 21:57:01.786649   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0912 21:57:01.810853   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0912 21:57:01.810938   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0912 21:57:01.833609   25697 provision.go:87] duration metric: took 359.040045ms to configureAuth
	I0912 21:57:01.833652   25697 buildroot.go:189] setting minikube options for container-runtime
	I0912 21:57:01.833847   25697 config.go:182] Loaded profile config "ha-475401": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 21:57:01.833966   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHHostname
	I0912 21:57:01.836717   25697 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 21:57:01.837102   25697 main.go:141] libmachine: (ha-475401-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:31:3a", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:55 +0000 UTC Type:0 Mac:52:54:00:ad:31:3a Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-475401-m02 Clientid:01:52:54:00:ad:31:3a}
	I0912 21:57:01.837133   25697 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 21:57:01.837309   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHPort
	I0912 21:57:01.837554   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHKeyPath
	I0912 21:57:01.837721   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHKeyPath
	I0912 21:57:01.837885   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHUsername
	I0912 21:57:01.838049   25697 main.go:141] libmachine: Using SSH client type: native
	I0912 21:57:01.838242   25697 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0912 21:57:01.838263   25697 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0912 21:57:02.057850   25697 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0912 21:57:02.057886   25697 main.go:141] libmachine: Checking connection to Docker...
	I0912 21:57:02.057897   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetURL
	I0912 21:57:02.059171   25697 main.go:141] libmachine: (ha-475401-m02) DBG | Using libvirt version 6000000
	I0912 21:57:02.061315   25697 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 21:57:02.061692   25697 main.go:141] libmachine: (ha-475401-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:31:3a", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:55 +0000 UTC Type:0 Mac:52:54:00:ad:31:3a Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-475401-m02 Clientid:01:52:54:00:ad:31:3a}
	I0912 21:57:02.061722   25697 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 21:57:02.061848   25697 main.go:141] libmachine: Docker is up and running!
	I0912 21:57:02.061867   25697 main.go:141] libmachine: Reticulating splines...
	I0912 21:57:02.061875   25697 client.go:171] duration metric: took 20.89116902s to LocalClient.Create
	I0912 21:57:02.061904   25697 start.go:167] duration metric: took 20.891228134s to libmachine.API.Create "ha-475401"
	I0912 21:57:02.061918   25697 start.go:293] postStartSetup for "ha-475401-m02" (driver="kvm2")
	I0912 21:57:02.061931   25697 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0912 21:57:02.061972   25697 main.go:141] libmachine: (ha-475401-m02) Calling .DriverName
	I0912 21:57:02.062221   25697 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0912 21:57:02.062252   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHHostname
	I0912 21:57:02.064772   25697 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 21:57:02.065172   25697 main.go:141] libmachine: (ha-475401-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:31:3a", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:55 +0000 UTC Type:0 Mac:52:54:00:ad:31:3a Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-475401-m02 Clientid:01:52:54:00:ad:31:3a}
	I0912 21:57:02.065200   25697 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 21:57:02.065317   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHPort
	I0912 21:57:02.065526   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHKeyPath
	I0912 21:57:02.065724   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHUsername
	I0912 21:57:02.065954   25697 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401-m02/id_rsa Username:docker}
	I0912 21:57:02.148007   25697 ssh_runner.go:195] Run: cat /etc/os-release
	I0912 21:57:02.152089   25697 info.go:137] Remote host: Buildroot 2023.02.9
	I0912 21:57:02.152114   25697 filesync.go:126] Scanning /home/jenkins/minikube-integration/19616-5891/.minikube/addons for local assets ...
	I0912 21:57:02.152194   25697 filesync.go:126] Scanning /home/jenkins/minikube-integration/19616-5891/.minikube/files for local assets ...
	I0912 21:57:02.152264   25697 filesync.go:149] local asset: /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem -> 130832.pem in /etc/ssl/certs
	I0912 21:57:02.152273   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem -> /etc/ssl/certs/130832.pem
	I0912 21:57:02.152362   25697 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0912 21:57:02.161651   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem --> /etc/ssl/certs/130832.pem (1708 bytes)
	I0912 21:57:02.185045   25697 start.go:296] duration metric: took 123.111258ms for postStartSetup
	I0912 21:57:02.185107   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetConfigRaw
	I0912 21:57:02.185944   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetIP
	I0912 21:57:02.188845   25697 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 21:57:02.189323   25697 main.go:141] libmachine: (ha-475401-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:31:3a", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:55 +0000 UTC Type:0 Mac:52:54:00:ad:31:3a Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-475401-m02 Clientid:01:52:54:00:ad:31:3a}
	I0912 21:57:02.189349   25697 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 21:57:02.189669   25697 profile.go:143] Saving config to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/config.json ...
	I0912 21:57:02.189901   25697 start.go:128] duration metric: took 21.039650208s to createHost
	I0912 21:57:02.189932   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHHostname
	I0912 21:57:02.192197   25697 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 21:57:02.192685   25697 main.go:141] libmachine: (ha-475401-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:31:3a", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:55 +0000 UTC Type:0 Mac:52:54:00:ad:31:3a Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-475401-m02 Clientid:01:52:54:00:ad:31:3a}
	I0912 21:57:02.192713   25697 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 21:57:02.192886   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHPort
	I0912 21:57:02.193095   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHKeyPath
	I0912 21:57:02.193268   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHKeyPath
	I0912 21:57:02.193420   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHUsername
	I0912 21:57:02.193586   25697 main.go:141] libmachine: Using SSH client type: native
	I0912 21:57:02.193780   25697 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0912 21:57:02.193793   25697 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0912 21:57:02.297929   25697 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726178222.259138843
	
	I0912 21:57:02.297956   25697 fix.go:216] guest clock: 1726178222.259138843
	I0912 21:57:02.297976   25697 fix.go:229] Guest: 2024-09-12 21:57:02.259138843 +0000 UTC Remote: 2024-09-12 21:57:02.18991842 +0000 UTC m=+66.796933930 (delta=69.220423ms)
	I0912 21:57:02.298002   25697 fix.go:200] guest clock delta is within tolerance: 69.220423ms
	I0912 21:57:02.298009   25697 start.go:83] releasing machines lock for "ha-475401-m02", held for 21.147859148s
	I0912 21:57:02.298040   25697 main.go:141] libmachine: (ha-475401-m02) Calling .DriverName
	I0912 21:57:02.298310   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetIP
	I0912 21:57:02.301169   25697 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 21:57:02.301574   25697 main.go:141] libmachine: (ha-475401-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:31:3a", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:55 +0000 UTC Type:0 Mac:52:54:00:ad:31:3a Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-475401-m02 Clientid:01:52:54:00:ad:31:3a}
	I0912 21:57:02.301605   25697 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 21:57:02.303680   25697 out.go:177] * Found network options:
	I0912 21:57:02.304732   25697 out.go:177]   - NO_PROXY=192.168.39.203
	W0912 21:57:02.305654   25697 proxy.go:119] fail to check proxy env: Error ip not in block
	I0912 21:57:02.305679   25697 main.go:141] libmachine: (ha-475401-m02) Calling .DriverName
	I0912 21:57:02.306187   25697 main.go:141] libmachine: (ha-475401-m02) Calling .DriverName
	I0912 21:57:02.306366   25697 main.go:141] libmachine: (ha-475401-m02) Calling .DriverName
	I0912 21:57:02.306456   25697 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0912 21:57:02.306494   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHHostname
	W0912 21:57:02.306580   25697 proxy.go:119] fail to check proxy env: Error ip not in block
	I0912 21:57:02.306665   25697 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0912 21:57:02.306689   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHHostname
	I0912 21:57:02.309209   25697 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 21:57:02.309389   25697 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 21:57:02.309562   25697 main.go:141] libmachine: (ha-475401-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:31:3a", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:55 +0000 UTC Type:0 Mac:52:54:00:ad:31:3a Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-475401-m02 Clientid:01:52:54:00:ad:31:3a}
	I0912 21:57:02.309595   25697 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 21:57:02.309698   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHPort
	I0912 21:57:02.309864   25697 main.go:141] libmachine: (ha-475401-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:31:3a", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:55 +0000 UTC Type:0 Mac:52:54:00:ad:31:3a Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-475401-m02 Clientid:01:52:54:00:ad:31:3a}
	I0912 21:57:02.309887   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHKeyPath
	I0912 21:57:02.309892   25697 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 21:57:02.309986   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHPort
	I0912 21:57:02.310055   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHUsername
	I0912 21:57:02.310154   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHKeyPath
	I0912 21:57:02.310264   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHUsername
	I0912 21:57:02.310268   25697 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401-m02/id_rsa Username:docker}
	I0912 21:57:02.310469   25697 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401-m02/id_rsa Username:docker}
	I0912 21:57:02.533536   25697 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0912 21:57:02.541876   25697 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0912 21:57:02.541936   25697 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0912 21:57:02.557398   25697 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0912 21:57:02.557440   25697 start.go:495] detecting cgroup driver to use...
	I0912 21:57:02.557514   25697 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0912 21:57:02.576591   25697 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0912 21:57:02.593730   25697 docker.go:217] disabling cri-docker service (if available) ...
	I0912 21:57:02.593803   25697 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0912 21:57:02.610020   25697 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0912 21:57:02.628187   25697 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0912 21:57:02.766943   25697 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0912 21:57:02.932626   25697 docker.go:233] disabling docker service ...
	I0912 21:57:02.932685   25697 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0912 21:57:02.946722   25697 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0912 21:57:02.959680   25697 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0912 21:57:03.085801   25697 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0912 21:57:03.211950   25697 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0912 21:57:03.224755   25697 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0912 21:57:03.241879   25697 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0912 21:57:03.241948   25697 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 21:57:03.251810   25697 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0912 21:57:03.251876   25697 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 21:57:03.262573   25697 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 21:57:03.273089   25697 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 21:57:03.283322   25697 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0912 21:57:03.293496   25697 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 21:57:03.304580   25697 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 21:57:03.321868   25697 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 21:57:03.332457   25697 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0912 21:57:03.342938   25697 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0912 21:57:03.343001   25697 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0912 21:57:03.354986   25697 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0912 21:57:03.365096   25697 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 21:57:03.487874   25697 ssh_runner.go:195] Run: sudo systemctl restart crio
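The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf before CRI-O is restarted. As a hedged sketch (not output captured in this run), the resulting keys can be spot-checked on the node like this; the expected values are taken directly from the sed expressions logged above:

  # Sketch: verify the CRI-O drop-in after the edits above (command is illustrative, not from this run)
  grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
  # Expected, per the sed expressions in this log:
  #   pause_image = "registry.k8s.io/pause:3.10"
  #   cgroup_manager = "cgroupfs"
  #   conmon_cgroup = "pod"
  #   "net.ipv4.ip_unprivileged_port_start=0",   (inside default_sysctls = [...])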
	I0912 21:57:03.584656   25697 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0912 21:57:03.584724   25697 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0912 21:57:03.591205   25697 start.go:563] Will wait 60s for crictl version
	I0912 21:57:03.591274   25697 ssh_runner.go:195] Run: which crictl
	I0912 21:57:03.595283   25697 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0912 21:57:03.632020   25697 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0912 21:57:03.632105   25697 ssh_runner.go:195] Run: crio --version
	I0912 21:57:03.659839   25697 ssh_runner.go:195] Run: crio --version
	I0912 21:57:03.689535   25697 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0912 21:57:03.690747   25697 out.go:177]   - env NO_PROXY=192.168.39.203
	I0912 21:57:03.691759   25697 main.go:141] libmachine: (ha-475401-m02) Calling .GetIP
	I0912 21:57:03.694337   25697 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 21:57:03.694692   25697 main.go:141] libmachine: (ha-475401-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:31:3a", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:55 +0000 UTC Type:0 Mac:52:54:00:ad:31:3a Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-475401-m02 Clientid:01:52:54:00:ad:31:3a}
	I0912 21:57:03.694717   25697 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 21:57:03.695027   25697 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0912 21:57:03.698979   25697 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0912 21:57:03.712051   25697 mustload.go:65] Loading cluster: ha-475401
	I0912 21:57:03.712303   25697 config.go:182] Loaded profile config "ha-475401": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 21:57:03.712566   25697 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:57:03.712592   25697 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:57:03.726938   25697 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44199
	I0912 21:57:03.727353   25697 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:57:03.727820   25697 main.go:141] libmachine: Using API Version  1
	I0912 21:57:03.727835   25697 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:57:03.728158   25697 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:57:03.728354   25697 main.go:141] libmachine: (ha-475401) Calling .GetState
	I0912 21:57:03.730112   25697 host.go:66] Checking if "ha-475401" exists ...
	I0912 21:57:03.730449   25697 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:57:03.730481   25697 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:57:03.744800   25697 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35309
	I0912 21:57:03.745195   25697 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:57:03.745637   25697 main.go:141] libmachine: Using API Version  1
	I0912 21:57:03.745660   25697 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:57:03.745972   25697 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:57:03.746177   25697 main.go:141] libmachine: (ha-475401) Calling .DriverName
	I0912 21:57:03.746442   25697 certs.go:68] Setting up /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401 for IP: 192.168.39.222
	I0912 21:57:03.746459   25697 certs.go:194] generating shared ca certs ...
	I0912 21:57:03.746476   25697 certs.go:226] acquiring lock for ca certs: {Name:mk5e19f2c2757edc3fcb6b43a39efee0885b349c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:57:03.746621   25697 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.key
	I0912 21:57:03.746684   25697 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.key
	I0912 21:57:03.746697   25697 certs.go:256] generating profile certs ...
	I0912 21:57:03.746791   25697 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/client.key
	I0912 21:57:03.746821   25697 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.key.8675e998
	I0912 21:57:03.746833   25697 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.crt.8675e998 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.203 192.168.39.222 192.168.39.254]
	I0912 21:57:03.895425   25697 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.crt.8675e998 ...
	I0912 21:57:03.895452   25697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.crt.8675e998: {Name:mk2a12f91c910d3f115f9f1364d04711b2cb2665 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:57:03.895639   25697 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.key.8675e998 ...
	I0912 21:57:03.895660   25697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.key.8675e998: {Name:mk196ca5f3a89070abdf1cfc1ff4bafff02be87c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:57:03.895752   25697 certs.go:381] copying /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.crt.8675e998 -> /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.crt
	I0912 21:57:03.895903   25697 certs.go:385] copying /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.key.8675e998 -> /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.key
	I0912 21:57:03.896068   25697 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/proxy-client.key
	I0912 21:57:03.896087   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0912 21:57:03.896105   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0912 21:57:03.896124   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0912 21:57:03.896142   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0912 21:57:03.896164   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0912 21:57:03.896185   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0912 21:57:03.896202   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0912 21:57:03.896220   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0912 21:57:03.896277   25697 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083.pem (1338 bytes)
	W0912 21:57:03.896313   25697 certs.go:480] ignoring /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083_empty.pem, impossibly tiny 0 bytes
	I0912 21:57:03.896327   25697 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem (1679 bytes)
	I0912 21:57:03.896366   25697 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem (1082 bytes)
	I0912 21:57:03.896397   25697 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem (1123 bytes)
	I0912 21:57:03.896432   25697 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem (1679 bytes)
	I0912 21:57:03.896486   25697 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem (1708 bytes)
	I0912 21:57:03.896522   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem -> /usr/share/ca-certificates/130832.pem
	I0912 21:57:03.896542   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0912 21:57:03.896559   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083.pem -> /usr/share/ca-certificates/13083.pem
	I0912 21:57:03.896596   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHHostname
	I0912 21:57:03.899717   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:57:03.900035   25697 main.go:141] libmachine: (ha-475401) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:0e:dd", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:09 +0000 UTC Type:0 Mac:52:54:00:b0:0e:dd Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-475401 Clientid:01:52:54:00:b0:0e:dd}
	I0912 21:57:03.900059   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined IP address 192.168.39.203 and MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:57:03.900183   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHPort
	I0912 21:57:03.900390   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHKeyPath
	I0912 21:57:03.900571   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHUsername
	I0912 21:57:03.900698   25697 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401/id_rsa Username:docker}
	I0912 21:57:03.974008   25697 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0912 21:57:03.979459   25697 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0912 21:57:03.990042   25697 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0912 21:57:03.994120   25697 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0912 21:57:04.004288   25697 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0912 21:57:04.008162   25697 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0912 21:57:04.018126   25697 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0912 21:57:04.022032   25697 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0912 21:57:04.031919   25697 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0912 21:57:04.035907   25697 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0912 21:57:04.046398   25697 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0912 21:57:04.050375   25697 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0912 21:57:04.060540   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0912 21:57:04.085043   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0912 21:57:04.108605   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0912 21:57:04.132995   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0912 21:57:04.157865   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0912 21:57:04.182540   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0912 21:57:04.206114   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0912 21:57:04.229754   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0912 21:57:04.253438   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem --> /usr/share/ca-certificates/130832.pem (1708 bytes)
	I0912 21:57:04.277210   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0912 21:57:04.301244   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083.pem --> /usr/share/ca-certificates/13083.pem (1338 bytes)
	I0912 21:57:04.327386   25697 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0912 21:57:04.344517   25697 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0912 21:57:04.361799   25697 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0912 21:57:04.377630   25697 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0912 21:57:04.394094   25697 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0912 21:57:04.410872   25697 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0912 21:57:04.426569   25697 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0912 21:57:04.442369   25697 ssh_runner.go:195] Run: openssl version
	I0912 21:57:04.447699   25697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0912 21:57:04.458193   25697 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0912 21:57:04.462315   25697 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 12 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0912 21:57:04.462376   25697 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0912 21:57:04.467859   25697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0912 21:57:04.479011   25697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13083.pem && ln -fs /usr/share/ca-certificates/13083.pem /etc/ssl/certs/13083.pem"
	I0912 21:57:04.489740   25697 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13083.pem
	I0912 21:57:04.494133   25697 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 12 21:47 /usr/share/ca-certificates/13083.pem
	I0912 21:57:04.494196   25697 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13083.pem
	I0912 21:57:04.500017   25697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13083.pem /etc/ssl/certs/51391683.0"
	I0912 21:57:04.512533   25697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130832.pem && ln -fs /usr/share/ca-certificates/130832.pem /etc/ssl/certs/130832.pem"
	I0912 21:57:04.525896   25697 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130832.pem
	I0912 21:57:04.530625   25697 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 12 21:47 /usr/share/ca-certificates/130832.pem
	I0912 21:57:04.530686   25697 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130832.pem
	I0912 21:57:04.536234   25697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130832.pem /etc/ssl/certs/3ec20f2e.0"
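Each `openssl x509 -hash -noout` / `ln -fs` pair above builds the standard OpenSSL CA-directory layout: the certificate is linked into /etc/ssl/certs under its subject hash plus a `.0` suffix, which is how TLS clients on the node locate it. A minimal sketch of the same step, using the minikube CA as the example (the hash b5213941 matches the symlink checked above; paths are illustrative):

  # Sketch of the hash-symlink convention used above (illustrative, not from this run)
  HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # -> b5213941
  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"  # .0 = first cert with this hash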
	I0912 21:57:04.546650   25697 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0912 21:57:04.551285   25697 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0912 21:57:04.551332   25697 kubeadm.go:934] updating node {m02 192.168.39.222 8443 v1.31.1 crio true true} ...
	I0912 21:57:04.551424   25697 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-475401-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.222
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-475401 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0912 21:57:04.551448   25697 kube-vip.go:115] generating kube-vip config ...
	I0912 21:57:04.551481   25697 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0912 21:57:04.570344   25697 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0912 21:57:04.570406   25697 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
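The generated kube-vip static-pod manifest above makes every control-plane node a candidate to hold the cluster VIP 192.168.39.254: leader election is enabled (vip_leaderelection, lease plndr-cp-lock in kube-system) and API-server load balancing is switched on for port 8443 (lb_enable / lb_port). A hedged sketch for checking which node currently holds the VIP, assuming kubectl is pointed at this cluster (commands are illustrative, not from this run):

  # Sketch: the elected kube-vip leader binds the VIP on eth0 of one control-plane node
  ip addr show dev eth0 | grep 192.168.39.254
  # The election state is visible as the Lease object named in the manifest above
  kubectl -n kube-system get lease plndr-cp-lock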
	I0912 21:57:04.570459   25697 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0912 21:57:04.581554   25697 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0912 21:57:04.581631   25697 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0912 21:57:04.592474   25697 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0912 21:57:04.592505   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0912 21:57:04.592553   25697 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19616-5891/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I0912 21:57:04.592586   25697 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0912 21:57:04.592609   25697 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19616-5891/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I0912 21:57:04.596858   25697 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0912 21:57:04.596892   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0912 21:57:05.686519   25697 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 21:57:05.701145   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0912 21:57:05.701235   25697 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0912 21:57:05.705856   25697 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0912 21:57:05.705885   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I0912 21:57:06.107758   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0912 21:57:06.107829   25697 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0912 21:57:06.112751   25697 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0912 21:57:06.112793   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0912 21:57:06.355603   25697 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0912 21:57:06.364464   25697 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0912 21:57:06.380238   25697 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0912 21:57:06.396012   25697 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0912 21:57:06.412331   25697 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0912 21:57:06.416180   25697 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0912 21:57:06.428760   25697 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 21:57:06.551324   25697 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0912 21:57:06.567711   25697 host.go:66] Checking if "ha-475401" exists ...
	I0912 21:57:06.568062   25697 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:57:06.568090   25697 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:57:06.583279   25697 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35363
	I0912 21:57:06.583771   25697 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:57:06.584254   25697 main.go:141] libmachine: Using API Version  1
	I0912 21:57:06.584277   25697 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:57:06.584594   25697 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:57:06.584807   25697 main.go:141] libmachine: (ha-475401) Calling .DriverName
	I0912 21:57:06.584969   25697 start.go:317] joinCluster: &{Name:ha-475401 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-475401 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.203 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.222 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 21:57:06.585063   25697 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0912 21:57:06.585078   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHHostname
	I0912 21:57:06.588502   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:57:06.588985   25697 main.go:141] libmachine: (ha-475401) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:0e:dd", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:09 +0000 UTC Type:0 Mac:52:54:00:b0:0e:dd Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-475401 Clientid:01:52:54:00:b0:0e:dd}
	I0912 21:57:06.589024   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined IP address 192.168.39.203 and MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:57:06.589204   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHPort
	I0912 21:57:06.589401   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHKeyPath
	I0912 21:57:06.589570   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHUsername
	I0912 21:57:06.589742   25697 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401/id_rsa Username:docker}
	I0912 21:57:06.746389   25697 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.222 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0912 21:57:06.746432   25697 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token c37vl9.mq5q1jgfq9gk00ux --discovery-token-ca-cert-hash sha256:e9285e6e7599a58febe9d174fa57ffa69a9b4bf818d01b703e61fc8c784ff29f --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-475401-m02 --control-plane --apiserver-advertise-address=192.168.39.222 --apiserver-bind-port=8443"
	I0912 21:57:28.657053   25697 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token c37vl9.mq5q1jgfq9gk00ux --discovery-token-ca-cert-hash sha256:e9285e6e7599a58febe9d174fa57ffa69a9b4bf818d01b703e61fc8c784ff29f --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-475401-m02 --control-plane --apiserver-advertise-address=192.168.39.222 --apiserver-bind-port=8443": (21.910594329s)
	I0912 21:57:28.657091   25697 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0912 21:57:29.215977   25697 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-475401-m02 minikube.k8s.io/updated_at=2024_09_12T21_57_29_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f6bc674a17941874d4e5b792b09c1791d30622b8 minikube.k8s.io/name=ha-475401 minikube.k8s.io/primary=false
	I0912 21:57:29.341536   25697 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-475401-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0912 21:57:29.458668   25697 start.go:319] duration metric: took 22.873693207s to joinCluster
	I0912 21:57:29.458747   25697 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.222 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0912 21:57:29.459041   25697 config.go:182] Loaded profile config "ha-475401": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 21:57:29.460132   25697 out.go:177] * Verifying Kubernetes components...
	I0912 21:57:29.461469   25697 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 21:57:29.787500   25697 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0912 21:57:29.822464   25697 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19616-5891/kubeconfig
	I0912 21:57:29.822795   25697 kapi.go:59] client config for ha-475401: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/client.crt", KeyFile:"/home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/client.key", CAFile:"/home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f30300), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0912 21:57:29.822874   25697 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.203:8443
	I0912 21:57:29.823186   25697 node_ready.go:35] waiting up to 6m0s for node "ha-475401-m02" to be "Ready" ...
	I0912 21:57:29.823295   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:57:29.823306   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:29.823317   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:29.823324   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:29.833517   25697 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0912 21:57:30.324064   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:57:30.324115   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:30.324128   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:30.324132   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:30.332420   25697 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0912 21:57:30.823976   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:57:30.824002   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:30.824014   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:30.824022   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:30.829128   25697 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0912 21:57:31.324376   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:57:31.324402   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:31.324409   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:31.324413   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:31.327769   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:57:31.823392   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:57:31.823414   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:31.823434   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:31.823437   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:31.826701   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:57:31.827130   25697 node_ready.go:53] node "ha-475401-m02" has status "Ready":"False"
	I0912 21:57:32.323505   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:57:32.323526   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:32.323533   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:32.323536   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:32.326546   25697 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 21:57:32.823471   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:57:32.823544   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:32.823562   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:32.823572   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:32.827544   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:57:33.323437   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:57:33.323459   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:33.323467   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:33.323469   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:33.326900   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:57:33.824361   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:57:33.824394   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:33.824406   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:33.824410   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:33.834947   25697 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0912 21:57:33.835794   25697 node_ready.go:53] node "ha-475401-m02" has status "Ready":"False"
	I0912 21:57:34.323728   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:57:34.323755   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:34.323767   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:34.323775   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:34.326981   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:57:34.824025   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:57:34.824051   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:34.824062   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:34.824070   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:34.827392   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:57:35.323421   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:57:35.323449   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:35.323460   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:35.323466   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:35.326841   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:57:35.824036   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:57:35.824059   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:35.824070   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:35.824076   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:35.827441   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:57:36.323766   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:57:36.323791   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:36.323800   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:36.323805   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:36.482302   25697 round_trippers.go:574] Response Status: 200 OK in 158 milliseconds
	I0912 21:57:36.482871   25697 node_ready.go:53] node "ha-475401-m02" has status "Ready":"False"
	I0912 21:57:36.823392   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:57:36.823420   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:36.823432   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:36.823437   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:36.826336   25697 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 21:57:37.324382   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:57:37.324411   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:37.324421   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:37.324425   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:37.327434   25697 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 21:57:37.823377   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:57:37.823401   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:37.823429   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:37.823436   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:37.827078   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:57:38.324234   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:57:38.324258   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:38.324266   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:38.324272   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:38.328320   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:57:38.823978   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:57:38.824005   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:38.824017   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:38.824022   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:38.827274   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:57:38.827691   25697 node_ready.go:53] node "ha-475401-m02" has status "Ready":"False"
	I0912 21:57:39.324159   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:57:39.324187   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:39.324199   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:39.324208   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:39.327462   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:57:39.823473   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:57:39.823496   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:39.823501   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:39.823506   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:39.827008   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:57:40.323736   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:57:40.323764   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:40.323772   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:40.323776   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:40.326901   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:57:40.823867   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:57:40.823896   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:40.823904   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:40.823907   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:40.827569   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:57:40.828061   25697 node_ready.go:53] node "ha-475401-m02" has status "Ready":"False"
	I0912 21:57:41.323504   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:57:41.323528   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:41.323538   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:41.323542   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:41.326788   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:57:41.824174   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:57:41.824197   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:41.824204   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:41.824208   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:41.827525   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:57:42.324366   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:57:42.324391   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:42.324401   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:42.324408   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:42.328824   25697 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0912 21:57:42.824063   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:57:42.824086   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:42.824094   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:42.824099   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:42.826890   25697 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 21:57:43.323826   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:57:43.323849   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:43.323858   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:43.323863   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:43.327133   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:57:43.327637   25697 node_ready.go:53] node "ha-475401-m02" has status "Ready":"False"
	I0912 21:57:43.824000   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:57:43.824024   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:43.824031   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:43.824035   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:43.827390   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:57:44.323365   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:57:44.323388   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:44.323394   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:44.323397   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:44.327224   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:57:44.824198   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:57:44.824220   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:44.824230   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:44.824234   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:44.828384   25697 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0912 21:57:45.324372   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:57:45.324400   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:45.324410   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:45.324416   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:45.327948   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:57:45.328685   25697 node_ready.go:53] node "ha-475401-m02" has status "Ready":"False"
	I0912 21:57:45.824183   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:57:45.824212   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:45.824227   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:45.824232   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:45.827918   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:57:46.324352   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:57:46.324373   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:46.324381   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:46.324384   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:46.328046   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:57:46.823401   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:57:46.823428   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:46.823436   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:46.823440   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:46.826851   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:57:47.323570   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:57:47.323598   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:47.323609   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:47.323617   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:47.326794   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:57:47.823591   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:57:47.823616   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:47.823625   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:47.823629   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:47.826970   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:57:47.827787   25697 node_ready.go:49] node "ha-475401-m02" has status "Ready":"True"
	I0912 21:57:47.827807   25697 node_ready.go:38] duration metric: took 18.004595935s for node "ha-475401-m02" to be "Ready" ...
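The loop above polls GET /api/v1/nodes/ha-475401-m02 roughly every 500ms until the node reports Ready (about 18s in this run). A minimal sketch of the same readiness check, assuming client-go and an existing clientset rather than minikube's internal round trippers:

    // nodewait_sketch.go: poll a node's Ready condition (illustrative only;
    // minikube's node_ready.go uses its own REST round trippers).
    package nodewait

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // waitNodeReady polls GET /api/v1/nodes/<name> until NodeReady is True
    // or the timeout elapses.
    func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    		if err == nil {
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    					return nil
    				}
    			}
    		}
    		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence seen in the log
    	}
    	return fmt.Errorf("node %q not Ready within %s", name, timeout)
    }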
	I0912 21:57:47.827817   25697 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 21:57:47.827891   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods
	I0912 21:57:47.827902   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:47.827912   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:47.827920   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:47.832287   25697 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0912 21:57:47.838612   25697 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-pzsv8" in "kube-system" namespace to be "Ready" ...
	I0912 21:57:47.838684   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-pzsv8
	I0912 21:57:47.838693   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:47.838700   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:47.838704   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:47.841359   25697 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 21:57:47.841979   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401
	I0912 21:57:47.841995   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:47.842002   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:47.842007   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:47.844385   25697 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 21:57:47.844960   25697 pod_ready.go:93] pod "coredns-7c65d6cfc9-pzsv8" in "kube-system" namespace has status "Ready":"True"
	I0912 21:57:47.844981   25697 pod_ready.go:82] duration metric: took 6.34685ms for pod "coredns-7c65d6cfc9-pzsv8" in "kube-system" namespace to be "Ready" ...
	I0912 21:57:47.844994   25697 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-xhdj7" in "kube-system" namespace to be "Ready" ...
	I0912 21:57:47.845046   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-xhdj7
	I0912 21:57:47.845053   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:47.845060   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:47.845065   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:47.847572   25697 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 21:57:47.848298   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401
	I0912 21:57:47.848317   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:47.848344   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:47.848349   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:47.850691   25697 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 21:57:47.851203   25697 pod_ready.go:93] pod "coredns-7c65d6cfc9-xhdj7" in "kube-system" namespace has status "Ready":"True"
	I0912 21:57:47.851224   25697 pod_ready.go:82] duration metric: took 6.218717ms for pod "coredns-7c65d6cfc9-xhdj7" in "kube-system" namespace to be "Ready" ...
	I0912 21:57:47.851237   25697 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-475401" in "kube-system" namespace to be "Ready" ...
	I0912 21:57:47.851294   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods/etcd-ha-475401
	I0912 21:57:47.851308   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:47.851318   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:47.851341   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:47.853481   25697 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 21:57:47.854113   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401
	I0912 21:57:47.854130   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:47.854140   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:47.854145   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:47.856201   25697 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 21:57:47.856694   25697 pod_ready.go:93] pod "etcd-ha-475401" in "kube-system" namespace has status "Ready":"True"
	I0912 21:57:47.856712   25697 pod_ready.go:82] duration metric: took 5.468365ms for pod "etcd-ha-475401" in "kube-system" namespace to be "Ready" ...
	I0912 21:57:47.856722   25697 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-475401-m02" in "kube-system" namespace to be "Ready" ...
	I0912 21:57:47.856769   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods/etcd-ha-475401-m02
	I0912 21:57:47.856779   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:47.856786   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:47.856791   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:47.859024   25697 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 21:57:47.859583   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:57:47.859596   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:47.859603   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:47.859608   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:47.861874   25697 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 21:57:47.862378   25697 pod_ready.go:93] pod "etcd-ha-475401-m02" in "kube-system" namespace has status "Ready":"True"
	I0912 21:57:47.862395   25697 pod_ready.go:82] duration metric: took 5.666663ms for pod "etcd-ha-475401-m02" in "kube-system" namespace to be "Ready" ...
	I0912 21:57:47.862409   25697 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-475401" in "kube-system" namespace to be "Ready" ...
	I0912 21:57:48.023695   25697 request.go:632] Waited for 161.233751ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-475401
	I0912 21:57:48.023764   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-475401
	I0912 21:57:48.023770   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:48.023778   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:48.023783   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:48.026897   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:57:48.223840   25697 request.go:632] Waited for 196.314299ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.203:8443/api/v1/nodes/ha-475401
	I0912 21:57:48.223905   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401
	I0912 21:57:48.223909   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:48.223916   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:48.223920   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:48.226979   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:57:48.227551   25697 pod_ready.go:93] pod "kube-apiserver-ha-475401" in "kube-system" namespace has status "Ready":"True"
	I0912 21:57:48.227577   25697 pod_ready.go:82] duration metric: took 365.161357ms for pod "kube-apiserver-ha-475401" in "kube-system" namespace to be "Ready" ...
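The repeated "Waited ... due to client-side throttling, not priority and fairness" lines come from client-go's token-bucket rate limiter: with QPS and Burst left at 0 in the rest.Config shown earlier, the client falls back to the defaults of about 5 requests/s with a burst of 10, so back-to-back pod and node lookups get spaced out by roughly 200ms. A hedged sketch of where those knobs live (the values below are examples, not what minikube configures):

    // clientcfg_sketch.go: raise client-go's client-side rate limits.
    // QPS/Burst values here are illustrative, not minikube's settings.
    package clientcfg

    import (
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // newClient builds a clientset with a larger token bucket so short bursts
    // of GETs (like the pod_ready checks above) are not delayed.
    func newClient(kubeconfig string) (*kubernetes.Clientset, error) {
    	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    	if err != nil {
    		return nil, err
    	}
    	cfg.QPS = 50    // default is 5 when left at 0
    	cfg.Burst = 100 // default is 10 when left at 0
    	return kubernetes.NewForConfig(cfg)
    }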
	I0912 21:57:48.227587   25697 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-475401-m02" in "kube-system" namespace to be "Ready" ...
	I0912 21:57:48.424610   25697 request.go:632] Waited for 196.950952ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-475401-m02
	I0912 21:57:48.424700   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-475401-m02
	I0912 21:57:48.424709   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:48.424720   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:48.424730   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:48.428368   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:57:48.624374   25697 request.go:632] Waited for 195.389533ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:57:48.624435   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:57:48.624440   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:48.624447   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:48.624452   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:48.627638   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:57:48.628242   25697 pod_ready.go:93] pod "kube-apiserver-ha-475401-m02" in "kube-system" namespace has status "Ready":"True"
	I0912 21:57:48.628263   25697 pod_ready.go:82] duration metric: took 400.668927ms for pod "kube-apiserver-ha-475401-m02" in "kube-system" namespace to be "Ready" ...
	I0912 21:57:48.628272   25697 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-475401" in "kube-system" namespace to be "Ready" ...
	I0912 21:57:48.824119   25697 request.go:632] Waited for 195.789443ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-475401
	I0912 21:57:48.824187   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-475401
	I0912 21:57:48.824193   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:48.824202   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:48.824207   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:48.827466   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:57:49.024475   25697 request.go:632] Waited for 196.3798ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.203:8443/api/v1/nodes/ha-475401
	I0912 21:57:49.024522   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401
	I0912 21:57:49.024527   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:49.024534   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:49.024539   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:49.027875   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:57:49.028452   25697 pod_ready.go:93] pod "kube-controller-manager-ha-475401" in "kube-system" namespace has status "Ready":"True"
	I0912 21:57:49.028471   25697 pod_ready.go:82] duration metric: took 400.192567ms for pod "kube-controller-manager-ha-475401" in "kube-system" namespace to be "Ready" ...
	I0912 21:57:49.028479   25697 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-475401-m02" in "kube-system" namespace to be "Ready" ...
	I0912 21:57:49.224430   25697 request.go:632] Waited for 195.868752ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-475401-m02
	I0912 21:57:49.224506   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-475401-m02
	I0912 21:57:49.224524   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:49.224535   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:49.224543   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:49.228098   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:57:49.424118   25697 request.go:632] Waited for 195.345067ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:57:49.424187   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:57:49.424193   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:49.424200   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:49.424204   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:49.427270   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:57:49.427807   25697 pod_ready.go:93] pod "kube-controller-manager-ha-475401-m02" in "kube-system" namespace has status "Ready":"True"
	I0912 21:57:49.427825   25697 pod_ready.go:82] duration metric: took 399.339766ms for pod "kube-controller-manager-ha-475401-m02" in "kube-system" namespace to be "Ready" ...
	I0912 21:57:49.427834   25697 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4bk97" in "kube-system" namespace to be "Ready" ...
	I0912 21:57:49.623957   25697 request.go:632] Waited for 196.060695ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4bk97
	I0912 21:57:49.624034   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4bk97
	I0912 21:57:49.624048   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:49.624057   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:49.624062   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:49.626942   25697 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 21:57:49.823782   25697 request.go:632] Waited for 196.256746ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.203:8443/api/v1/nodes/ha-475401
	I0912 21:57:49.823834   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401
	I0912 21:57:49.823840   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:49.823846   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:49.823851   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:49.827823   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:57:49.828409   25697 pod_ready.go:93] pod "kube-proxy-4bk97" in "kube-system" namespace has status "Ready":"True"
	I0912 21:57:49.828426   25697 pod_ready.go:82] duration metric: took 400.586426ms for pod "kube-proxy-4bk97" in "kube-system" namespace to be "Ready" ...
	I0912 21:57:49.828436   25697 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-68h98" in "kube-system" namespace to be "Ready" ...
	I0912 21:57:50.024520   25697 request.go:632] Waited for 196.02544ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods/kube-proxy-68h98
	I0912 21:57:50.024577   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods/kube-proxy-68h98
	I0912 21:57:50.024582   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:50.024589   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:50.024604   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:50.028066   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:57:50.224042   25697 request.go:632] Waited for 195.348875ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:57:50.224120   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:57:50.224126   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:50.224132   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:50.224135   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:50.227651   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:57:50.228099   25697 pod_ready.go:93] pod "kube-proxy-68h98" in "kube-system" namespace has status "Ready":"True"
	I0912 21:57:50.228119   25697 pod_ready.go:82] duration metric: took 399.676133ms for pod "kube-proxy-68h98" in "kube-system" namespace to be "Ready" ...
	I0912 21:57:50.228129   25697 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-475401" in "kube-system" namespace to be "Ready" ...
	I0912 21:57:50.424324   25697 request.go:632] Waited for 196.110611ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-475401
	I0912 21:57:50.424387   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-475401
	I0912 21:57:50.424393   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:50.424400   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:50.424406   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:50.428055   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:57:50.624133   25697 request.go:632] Waited for 195.389452ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.203:8443/api/v1/nodes/ha-475401
	I0912 21:57:50.624189   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401
	I0912 21:57:50.624195   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:50.624202   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:50.624205   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:50.627552   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:57:50.628154   25697 pod_ready.go:93] pod "kube-scheduler-ha-475401" in "kube-system" namespace has status "Ready":"True"
	I0912 21:57:50.628174   25697 pod_ready.go:82] duration metric: took 400.036802ms for pod "kube-scheduler-ha-475401" in "kube-system" namespace to be "Ready" ...
	I0912 21:57:50.628188   25697 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-475401-m02" in "kube-system" namespace to be "Ready" ...
	I0912 21:57:50.824225   25697 request.go:632] Waited for 195.956305ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-475401-m02
	I0912 21:57:50.824304   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-475401-m02
	I0912 21:57:50.824310   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:50.824318   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:50.824323   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:50.827545   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:57:51.024479   25697 request.go:632] Waited for 196.355742ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:57:51.024536   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:57:51.024543   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:51.024554   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:51.024560   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:51.027674   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:57:51.028271   25697 pod_ready.go:93] pod "kube-scheduler-ha-475401-m02" in "kube-system" namespace has status "Ready":"True"
	I0912 21:57:51.028290   25697 pod_ready.go:82] duration metric: took 400.093807ms for pod "kube-scheduler-ha-475401-m02" in "kube-system" namespace to be "Ready" ...
	I0912 21:57:51.028304   25697 pod_ready.go:39] duration metric: took 3.200473413s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 21:57:51.028333   25697 api_server.go:52] waiting for apiserver process to appear ...
	I0912 21:57:51.028397   25697 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 21:57:51.043159   25697 api_server.go:72] duration metric: took 21.584379256s to wait for apiserver process to appear ...
	I0912 21:57:51.043180   25697 api_server.go:88] waiting for apiserver healthz status ...
	I0912 21:57:51.043199   25697 api_server.go:253] Checking apiserver healthz at https://192.168.39.203:8443/healthz ...
	I0912 21:57:51.047434   25697 api_server.go:279] https://192.168.39.203:8443/healthz returned 200:
	ok
	I0912 21:57:51.047492   25697 round_trippers.go:463] GET https://192.168.39.203:8443/version
	I0912 21:57:51.047498   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:51.047505   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:51.047511   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:51.048504   25697 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0912 21:57:51.048587   25697 api_server.go:141] control plane version: v1.31.1
	I0912 21:57:51.048602   25697 api_server.go:131] duration metric: took 5.41647ms to wait for apiserver health ...
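Once the pods are Ready, the tool confirms an apiserver process exists (the pgrep over SSH above), then probes /healthz and /version over HTTPS. A small sketch of such a health probe, assuming a plain net/http client trusting the cluster CA (minikube reuses its authenticated client config instead; the CA path is the one logged earlier and is only an example here):

    // healthz_sketch.go: probe the apiserver /healthz endpoint directly.
    package healthprobe

    import (
    	"crypto/tls"
    	"crypto/x509"
    	"fmt"
    	"io"
    	"net/http"
    	"os"
    	"time"
    )

    // apiserverHealthy returns nil when GET <host>/healthz answers 200 "ok".
    func apiserverHealthy(host, caFile string) error {
    	caPEM, err := os.ReadFile(caFile)
    	if err != nil {
    		return err
    	}
    	pool := x509.NewCertPool()
    	pool.AppendCertsFromPEM(caPEM)

    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{RootCAs: pool}},
    	}
    	resp, err := client.Get(host + "/healthz")
    	if err != nil {
    		return err
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	if resp.StatusCode != http.StatusOK {
    		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
    	}
    	return nil
    }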
	I0912 21:57:51.048610   25697 system_pods.go:43] waiting for kube-system pods to appear ...
	I0912 21:57:51.224407   25697 request.go:632] Waited for 175.739293ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods
	I0912 21:57:51.224462   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods
	I0912 21:57:51.224477   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:51.224497   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:51.224504   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:51.229164   25697 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0912 21:57:51.233148   25697 system_pods.go:59] 17 kube-system pods found
	I0912 21:57:51.233176   25697 system_pods.go:61] "coredns-7c65d6cfc9-pzsv8" [7acde6a5-dc08-4dda-89ef-07ed97df387e] Running
	I0912 21:57:51.233181   25697 system_pods.go:61] "coredns-7c65d6cfc9-xhdj7" [d964d6f0-d544-4cef-8151-08e5e1c76dce] Running
	I0912 21:57:51.233185   25697 system_pods.go:61] "etcd-ha-475401" [174b5dde-143c-4f15-abb4-2c8376d9c0aa] Running
	I0912 21:57:51.233189   25697 system_pods.go:61] "etcd-ha-475401-m02" [bac8cf55-1bf0-4696-9da2-3ca4c6bc9c54] Running
	I0912 21:57:51.233192   25697 system_pods.go:61] "kindnet-cbfm5" [e0f3daaf-250f-4614-bd8d-61e8fe544c1a] Running
	I0912 21:57:51.233195   25697 system_pods.go:61] "kindnet-k4q6l" [6a445756-2595-4d49-8aea-719cb0aa312c] Running
	I0912 21:57:51.233198   25697 system_pods.go:61] "kube-apiserver-ha-475401" [afb6df04-142d-4026-b4fb-2067bac9613d] Running
	I0912 21:57:51.233202   25697 system_pods.go:61] "kube-apiserver-ha-475401-m02" [ff70254a-357a-47d3-9733-3cded316a2b1] Running
	I0912 21:57:51.233208   25697 system_pods.go:61] "kube-controller-manager-ha-475401" [bf286c1d-42de-4eb9-b235-30581692256b] Running
	I0912 21:57:51.233214   25697 system_pods.go:61] "kube-controller-manager-ha-475401-m02" [87d98823-b5aa-4c7e-835e-978465fec19d] Running
	I0912 21:57:51.233217   25697 system_pods.go:61] "kube-proxy-4bk97" [a2af5486-4276-48a8-98ef-6fad7ae9976d] Running
	I0912 21:57:51.233222   25697 system_pods.go:61] "kube-proxy-68h98" [f216ed62-cdc6-40e9-bb4d-e6962596eb3c] Running
	I0912 21:57:51.233226   25697 system_pods.go:61] "kube-scheduler-ha-475401" [3403b9e5-adb3-4028-aedd-1101d94a421c] Running
	I0912 21:57:51.233229   25697 system_pods.go:61] "kube-scheduler-ha-475401-m02" [fbe552c1-e8a7-4bb2-a1c9-c5d40f4ad77c] Running
	I0912 21:57:51.233232   25697 system_pods.go:61] "kube-vip-ha-475401" [775b4ded-905c-412e-9c92-5ce3ff148380] Running
	I0912 21:57:51.233235   25697 system_pods.go:61] "kube-vip-ha-475401-m02" [0f1626f2-f90c-4920-b726-b1d492c805d6] Running
	I0912 21:57:51.233238   25697 system_pods.go:61] "storage-provisioner" [7fc8738b-56e8-4024-afe7-b552c79dd3f2] Running
	I0912 21:57:51.233243   25697 system_pods.go:74] duration metric: took 184.628871ms to wait for pod list to return data ...
	I0912 21:57:51.233253   25697 default_sa.go:34] waiting for default service account to be created ...
	I0912 21:57:51.424651   25697 request.go:632] Waited for 191.329327ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.203:8443/api/v1/namespaces/default/serviceaccounts
	I0912 21:57:51.424709   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/namespaces/default/serviceaccounts
	I0912 21:57:51.424716   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:51.424723   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:51.424729   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:51.428062   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:57:51.428262   25697 default_sa.go:45] found service account: "default"
	I0912 21:57:51.428276   25697 default_sa.go:55] duration metric: took 195.017428ms for default service account to be created ...
	I0912 21:57:51.428283   25697 system_pods.go:116] waiting for k8s-apps to be running ...
	I0912 21:57:51.623916   25697 request.go:632] Waited for 195.558331ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods
	I0912 21:57:51.623972   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods
	I0912 21:57:51.623980   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:51.623989   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:51.623994   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:51.628142   25697 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0912 21:57:51.632305   25697 system_pods.go:86] 17 kube-system pods found
	I0912 21:57:51.632338   25697 system_pods.go:89] "coredns-7c65d6cfc9-pzsv8" [7acde6a5-dc08-4dda-89ef-07ed97df387e] Running
	I0912 21:57:51.632346   25697 system_pods.go:89] "coredns-7c65d6cfc9-xhdj7" [d964d6f0-d544-4cef-8151-08e5e1c76dce] Running
	I0912 21:57:51.632353   25697 system_pods.go:89] "etcd-ha-475401" [174b5dde-143c-4f15-abb4-2c8376d9c0aa] Running
	I0912 21:57:51.632358   25697 system_pods.go:89] "etcd-ha-475401-m02" [bac8cf55-1bf0-4696-9da2-3ca4c6bc9c54] Running
	I0912 21:57:51.632364   25697 system_pods.go:89] "kindnet-cbfm5" [e0f3daaf-250f-4614-bd8d-61e8fe544c1a] Running
	I0912 21:57:51.632369   25697 system_pods.go:89] "kindnet-k4q6l" [6a445756-2595-4d49-8aea-719cb0aa312c] Running
	I0912 21:57:51.632375   25697 system_pods.go:89] "kube-apiserver-ha-475401" [afb6df04-142d-4026-b4fb-2067bac9613d] Running
	I0912 21:57:51.632381   25697 system_pods.go:89] "kube-apiserver-ha-475401-m02" [ff70254a-357a-47d3-9733-3cded316a2b1] Running
	I0912 21:57:51.632388   25697 system_pods.go:89] "kube-controller-manager-ha-475401" [bf286c1d-42de-4eb9-b235-30581692256b] Running
	I0912 21:57:51.632395   25697 system_pods.go:89] "kube-controller-manager-ha-475401-m02" [87d98823-b5aa-4c7e-835e-978465fec19d] Running
	I0912 21:57:51.632404   25697 system_pods.go:89] "kube-proxy-4bk97" [a2af5486-4276-48a8-98ef-6fad7ae9976d] Running
	I0912 21:57:51.632411   25697 system_pods.go:89] "kube-proxy-68h98" [f216ed62-cdc6-40e9-bb4d-e6962596eb3c] Running
	I0912 21:57:51.632417   25697 system_pods.go:89] "kube-scheduler-ha-475401" [3403b9e5-adb3-4028-aedd-1101d94a421c] Running
	I0912 21:57:51.632423   25697 system_pods.go:89] "kube-scheduler-ha-475401-m02" [fbe552c1-e8a7-4bb2-a1c9-c5d40f4ad77c] Running
	I0912 21:57:51.632429   25697 system_pods.go:89] "kube-vip-ha-475401" [775b4ded-905c-412e-9c92-5ce3ff148380] Running
	I0912 21:57:51.632437   25697 system_pods.go:89] "kube-vip-ha-475401-m02" [0f1626f2-f90c-4920-b726-b1d492c805d6] Running
	I0912 21:57:51.632444   25697 system_pods.go:89] "storage-provisioner" [7fc8738b-56e8-4024-afe7-b552c79dd3f2] Running
	I0912 21:57:51.632453   25697 system_pods.go:126] duration metric: took 204.164222ms to wait for k8s-apps to be running ...
	I0912 21:57:51.632462   25697 system_svc.go:44] waiting for kubelet service to be running ....
	I0912 21:57:51.632512   25697 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 21:57:51.647575   25697 system_svc.go:56] duration metric: took 15.104684ms WaitForService to wait for kubelet
	I0912 21:57:51.647624   25697 kubeadm.go:582] duration metric: took 22.188845767s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
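The kubelet check above boils down to running `sudo systemctl is-active --quiet service kubelet` on the node over SSH and treating exit code 0 as "running". A local-exec sketch of the same idea (the SSH plumbing is omitted):

    // svccheck_sketch.go: check whether a systemd unit is active, mirroring
    // the `systemctl is-active --quiet` call in the log; runs locally, not
    // over SSH as minikube's ssh_runner does.
    package svccheck

    import "os/exec"

    // unitActive returns true when `systemctl is-active --quiet <unit>` exits 0.
    func unitActive(unit string) bool {
    	cmd := exec.Command("sudo", "systemctl", "is-active", "--quiet", unit)
    	return cmd.Run() == nil
    }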
	I0912 21:57:51.647646   25697 node_conditions.go:102] verifying NodePressure condition ...
	I0912 21:57:51.824082   25697 request.go:632] Waited for 176.361682ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.203:8443/api/v1/nodes
	I0912 21:57:51.824148   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes
	I0912 21:57:51.824154   25697 round_trippers.go:469] Request Headers:
	I0912 21:57:51.824161   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:57:51.824165   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:57:51.827548   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:57:51.828398   25697 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0912 21:57:51.828423   25697 node_conditions.go:123] node cpu capacity is 2
	I0912 21:57:51.828435   25697 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0912 21:57:51.828438   25697 node_conditions.go:123] node cpu capacity is 2
	I0912 21:57:51.828443   25697 node_conditions.go:105] duration metric: took 180.791468ms to run NodePressure ...
	I0912 21:57:51.828454   25697 start.go:241] waiting for startup goroutines ...
	I0912 21:57:51.828475   25697 start.go:255] writing updated cluster config ...
	I0912 21:57:51.830711   25697 out.go:201] 
	I0912 21:57:51.832815   25697 config.go:182] Loaded profile config "ha-475401": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 21:57:51.832998   25697 profile.go:143] Saving config to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/config.json ...
	I0912 21:57:51.834854   25697 out.go:177] * Starting "ha-475401-m03" control-plane node in "ha-475401" cluster
	I0912 21:57:51.835855   25697 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0912 21:57:51.835876   25697 cache.go:56] Caching tarball of preloaded images
	I0912 21:57:51.835962   25697 preload.go:172] Found /home/jenkins/minikube-integration/19616-5891/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0912 21:57:51.835972   25697 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0912 21:57:51.836050   25697 profile.go:143] Saving config to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/config.json ...
	I0912 21:57:51.836200   25697 start.go:360] acquireMachinesLock for ha-475401-m03: {Name:mkbb0a9e58b1349e86a63b6069c42d4248d92c3b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 21:57:51.836241   25697 start.go:364] duration metric: took 23.587µs to acquireMachinesLock for "ha-475401-m03"
	I0912 21:57:51.836263   25697 start.go:93] Provisioning new machine with config: &{Name:ha-475401 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-475401 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.203 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.222 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0912 21:57:51.836398   25697 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0912 21:57:51.838525   25697 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0912 21:57:51.838626   25697 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:57:51.838662   25697 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:57:51.853763   25697 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40411
	I0912 21:57:51.854148   25697 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:57:51.854771   25697 main.go:141] libmachine: Using API Version  1
	I0912 21:57:51.854800   25697 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:57:51.855192   25697 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:57:51.855420   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetMachineName
	I0912 21:57:51.855603   25697 main.go:141] libmachine: (ha-475401-m03) Calling .DriverName
	I0912 21:57:51.855816   25697 start.go:159] libmachine.API.Create for "ha-475401" (driver="kvm2")
	I0912 21:57:51.855843   25697 client.go:168] LocalClient.Create starting
	I0912 21:57:51.855869   25697 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem
	I0912 21:57:51.855906   25697 main.go:141] libmachine: Decoding PEM data...
	I0912 21:57:51.855922   25697 main.go:141] libmachine: Parsing certificate...
	I0912 21:57:51.855965   25697 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem
	I0912 21:57:51.855984   25697 main.go:141] libmachine: Decoding PEM data...
	I0912 21:57:51.855995   25697 main.go:141] libmachine: Parsing certificate...
	I0912 21:57:51.856009   25697 main.go:141] libmachine: Running pre-create checks...
	I0912 21:57:51.856014   25697 main.go:141] libmachine: (ha-475401-m03) Calling .PreCreateCheck
	I0912 21:57:51.856186   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetConfigRaw
	I0912 21:57:51.856600   25697 main.go:141] libmachine: Creating machine...
	I0912 21:57:51.856627   25697 main.go:141] libmachine: (ha-475401-m03) Calling .Create
	I0912 21:57:51.856771   25697 main.go:141] libmachine: (ha-475401-m03) Creating KVM machine...
	I0912 21:57:51.858042   25697 main.go:141] libmachine: (ha-475401-m03) DBG | found existing default KVM network
	I0912 21:57:51.858204   25697 main.go:141] libmachine: (ha-475401-m03) DBG | found existing private KVM network mk-ha-475401
	I0912 21:57:51.858336   25697 main.go:141] libmachine: (ha-475401-m03) Setting up store path in /home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401-m03 ...
	I0912 21:57:51.858361   25697 main.go:141] libmachine: (ha-475401-m03) Building disk image from file:///home/jenkins/minikube-integration/19616-5891/.minikube/cache/iso/amd64/minikube-v1.34.0-1726156389-19616-amd64.iso
	I0912 21:57:51.858418   25697 main.go:141] libmachine: (ha-475401-m03) DBG | I0912 21:57:51.858325   26470 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19616-5891/.minikube
	I0912 21:57:51.858497   25697 main.go:141] libmachine: (ha-475401-m03) Downloading /home/jenkins/minikube-integration/19616-5891/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19616-5891/.minikube/cache/iso/amd64/minikube-v1.34.0-1726156389-19616-amd64.iso...
	I0912 21:57:52.089539   25697 main.go:141] libmachine: (ha-475401-m03) DBG | I0912 21:57:52.089395   26470 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401-m03/id_rsa...
	I0912 21:57:52.277087   25697 main.go:141] libmachine: (ha-475401-m03) DBG | I0912 21:57:52.276977   26470 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401-m03/ha-475401-m03.rawdisk...
	I0912 21:57:52.277109   25697 main.go:141] libmachine: (ha-475401-m03) DBG | Writing magic tar header
	I0912 21:57:52.277119   25697 main.go:141] libmachine: (ha-475401-m03) DBG | Writing SSH key tar header
	I0912 21:57:52.277127   25697 main.go:141] libmachine: (ha-475401-m03) DBG | I0912 21:57:52.277104   26470 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401-m03 ...
	I0912 21:57:52.277208   25697 main.go:141] libmachine: (ha-475401-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401-m03
	I0912 21:57:52.277266   25697 main.go:141] libmachine: (ha-475401-m03) Setting executable bit set on /home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401-m03 (perms=drwx------)
	I0912 21:57:52.277290   25697 main.go:141] libmachine: (ha-475401-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19616-5891/.minikube/machines
	I0912 21:57:52.277306   25697 main.go:141] libmachine: (ha-475401-m03) Setting executable bit set on /home/jenkins/minikube-integration/19616-5891/.minikube/machines (perms=drwxr-xr-x)
	I0912 21:57:52.277324   25697 main.go:141] libmachine: (ha-475401-m03) Setting executable bit set on /home/jenkins/minikube-integration/19616-5891/.minikube (perms=drwxr-xr-x)
	I0912 21:57:52.277333   25697 main.go:141] libmachine: (ha-475401-m03) Setting executable bit set on /home/jenkins/minikube-integration/19616-5891 (perms=drwxrwxr-x)
	I0912 21:57:52.277343   25697 main.go:141] libmachine: (ha-475401-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19616-5891/.minikube
	I0912 21:57:52.277359   25697 main.go:141] libmachine: (ha-475401-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19616-5891
	I0912 21:57:52.277370   25697 main.go:141] libmachine: (ha-475401-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0912 21:57:52.277383   25697 main.go:141] libmachine: (ha-475401-m03) DBG | Checking permissions on dir: /home/jenkins
	I0912 21:57:52.277395   25697 main.go:141] libmachine: (ha-475401-m03) DBG | Checking permissions on dir: /home
	I0912 21:57:52.277410   25697 main.go:141] libmachine: (ha-475401-m03) DBG | Skipping /home - not owner
	I0912 21:57:52.277427   25697 main.go:141] libmachine: (ha-475401-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0912 21:57:52.277441   25697 main.go:141] libmachine: (ha-475401-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0912 21:57:52.277452   25697 main.go:141] libmachine: (ha-475401-m03) Creating domain...
	I0912 21:57:52.278379   25697 main.go:141] libmachine: (ha-475401-m03) define libvirt domain using xml: 
	I0912 21:57:52.278401   25697 main.go:141] libmachine: (ha-475401-m03) <domain type='kvm'>
	I0912 21:57:52.278410   25697 main.go:141] libmachine: (ha-475401-m03)   <name>ha-475401-m03</name>
	I0912 21:57:52.278427   25697 main.go:141] libmachine: (ha-475401-m03)   <memory unit='MiB'>2200</memory>
	I0912 21:57:52.278440   25697 main.go:141] libmachine: (ha-475401-m03)   <vcpu>2</vcpu>
	I0912 21:57:52.278452   25697 main.go:141] libmachine: (ha-475401-m03)   <features>
	I0912 21:57:52.278466   25697 main.go:141] libmachine: (ha-475401-m03)     <acpi/>
	I0912 21:57:52.278475   25697 main.go:141] libmachine: (ha-475401-m03)     <apic/>
	I0912 21:57:52.278481   25697 main.go:141] libmachine: (ha-475401-m03)     <pae/>
	I0912 21:57:52.278488   25697 main.go:141] libmachine: (ha-475401-m03)     
	I0912 21:57:52.278494   25697 main.go:141] libmachine: (ha-475401-m03)   </features>
	I0912 21:57:52.278506   25697 main.go:141] libmachine: (ha-475401-m03)   <cpu mode='host-passthrough'>
	I0912 21:57:52.278535   25697 main.go:141] libmachine: (ha-475401-m03)   
	I0912 21:57:52.278555   25697 main.go:141] libmachine: (ha-475401-m03)   </cpu>
	I0912 21:57:52.278573   25697 main.go:141] libmachine: (ha-475401-m03)   <os>
	I0912 21:57:52.278585   25697 main.go:141] libmachine: (ha-475401-m03)     <type>hvm</type>
	I0912 21:57:52.278599   25697 main.go:141] libmachine: (ha-475401-m03)     <boot dev='cdrom'/>
	I0912 21:57:52.278610   25697 main.go:141] libmachine: (ha-475401-m03)     <boot dev='hd'/>
	I0912 21:57:52.278623   25697 main.go:141] libmachine: (ha-475401-m03)     <bootmenu enable='no'/>
	I0912 21:57:52.278637   25697 main.go:141] libmachine: (ha-475401-m03)   </os>
	I0912 21:57:52.278654   25697 main.go:141] libmachine: (ha-475401-m03)   <devices>
	I0912 21:57:52.278665   25697 main.go:141] libmachine: (ha-475401-m03)     <disk type='file' device='cdrom'>
	I0912 21:57:52.278679   25697 main.go:141] libmachine: (ha-475401-m03)       <source file='/home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401-m03/boot2docker.iso'/>
	I0912 21:57:52.278692   25697 main.go:141] libmachine: (ha-475401-m03)       <target dev='hdc' bus='scsi'/>
	I0912 21:57:52.278706   25697 main.go:141] libmachine: (ha-475401-m03)       <readonly/>
	I0912 21:57:52.278721   25697 main.go:141] libmachine: (ha-475401-m03)     </disk>
	I0912 21:57:52.278735   25697 main.go:141] libmachine: (ha-475401-m03)     <disk type='file' device='disk'>
	I0912 21:57:52.278748   25697 main.go:141] libmachine: (ha-475401-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0912 21:57:52.278765   25697 main.go:141] libmachine: (ha-475401-m03)       <source file='/home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401-m03/ha-475401-m03.rawdisk'/>
	I0912 21:57:52.278776   25697 main.go:141] libmachine: (ha-475401-m03)       <target dev='hda' bus='virtio'/>
	I0912 21:57:52.278788   25697 main.go:141] libmachine: (ha-475401-m03)     </disk>
	I0912 21:57:52.278803   25697 main.go:141] libmachine: (ha-475401-m03)     <interface type='network'>
	I0912 21:57:52.278824   25697 main.go:141] libmachine: (ha-475401-m03)       <source network='mk-ha-475401'/>
	I0912 21:57:52.278834   25697 main.go:141] libmachine: (ha-475401-m03)       <model type='virtio'/>
	I0912 21:57:52.278847   25697 main.go:141] libmachine: (ha-475401-m03)     </interface>
	I0912 21:57:52.278859   25697 main.go:141] libmachine: (ha-475401-m03)     <interface type='network'>
	I0912 21:57:52.278891   25697 main.go:141] libmachine: (ha-475401-m03)       <source network='default'/>
	I0912 21:57:52.278913   25697 main.go:141] libmachine: (ha-475401-m03)       <model type='virtio'/>
	I0912 21:57:52.278927   25697 main.go:141] libmachine: (ha-475401-m03)     </interface>
	I0912 21:57:52.278937   25697 main.go:141] libmachine: (ha-475401-m03)     <serial type='pty'>
	I0912 21:57:52.278948   25697 main.go:141] libmachine: (ha-475401-m03)       <target port='0'/>
	I0912 21:57:52.278958   25697 main.go:141] libmachine: (ha-475401-m03)     </serial>
	I0912 21:57:52.278967   25697 main.go:141] libmachine: (ha-475401-m03)     <console type='pty'>
	I0912 21:57:52.278979   25697 main.go:141] libmachine: (ha-475401-m03)       <target type='serial' port='0'/>
	I0912 21:57:52.279009   25697 main.go:141] libmachine: (ha-475401-m03)     </console>
	I0912 21:57:52.279030   25697 main.go:141] libmachine: (ha-475401-m03)     <rng model='virtio'>
	I0912 21:57:52.279047   25697 main.go:141] libmachine: (ha-475401-m03)       <backend model='random'>/dev/random</backend>
	I0912 21:57:52.279062   25697 main.go:141] libmachine: (ha-475401-m03)     </rng>
	I0912 21:57:52.279072   25697 main.go:141] libmachine: (ha-475401-m03)     
	I0912 21:57:52.279082   25697 main.go:141] libmachine: (ha-475401-m03)     
	I0912 21:57:52.279094   25697 main.go:141] libmachine: (ha-475401-m03)   </devices>
	I0912 21:57:52.279104   25697 main.go:141] libmachine: (ha-475401-m03) </domain>
	I0912 21:57:52.279117   25697 main.go:141] libmachine: (ha-475401-m03) 
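	The XML above is handed to libvirt to define and boot the new node's domain. A minimal Go sketch of that step, assuming the libvirt.org/go/libvirt bindings rather than the actual docker-machine-driver-kvm2 code, could look like this:

	// Hypothetical sketch: define and start a KVM domain from XML via the
	// libvirt Go bindings. File name and flow are illustrative only.
	package main

	import (
		"log"
		"os"

		"libvirt.org/go/libvirt"
	)

	func main() {
		xml, err := os.ReadFile("ha-475401-m03.xml") // domain XML as logged above
		if err != nil {
			log.Fatal(err)
		}
		conn, err := libvirt.NewConnect("qemu:///system")
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()

		dom, err := conn.DomainDefineXML(string(xml)) // persistently define the domain
		if err != nil {
			log.Fatal(err)
		}
		defer dom.Free()

		if err := dom.Create(); err != nil { // boot it ("Creating domain..." in the log)
			log.Fatal(err)
		}
		log.Println("domain started")
	}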
	I0912 21:57:52.287182   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined MAC address 52:54:00:ae:8e:80 in network default
	I0912 21:57:52.287812   25697 main.go:141] libmachine: (ha-475401-m03) Ensuring networks are active...
	I0912 21:57:52.287833   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 21:57:52.288627   25697 main.go:141] libmachine: (ha-475401-m03) Ensuring network default is active
	I0912 21:57:52.289015   25697 main.go:141] libmachine: (ha-475401-m03) Ensuring network mk-ha-475401 is active
	I0912 21:57:52.289406   25697 main.go:141] libmachine: (ha-475401-m03) Getting domain xml...
	I0912 21:57:52.290192   25697 main.go:141] libmachine: (ha-475401-m03) Creating domain...
	I0912 21:57:53.523717   25697 main.go:141] libmachine: (ha-475401-m03) Waiting to get IP...
	I0912 21:57:53.524447   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 21:57:53.524851   25697 main.go:141] libmachine: (ha-475401-m03) DBG | unable to find current IP address of domain ha-475401-m03 in network mk-ha-475401
	I0912 21:57:53.524880   25697 main.go:141] libmachine: (ha-475401-m03) DBG | I0912 21:57:53.524829   26470 retry.go:31] will retry after 211.066146ms: waiting for machine to come up
	I0912 21:57:53.737191   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 21:57:53.737821   25697 main.go:141] libmachine: (ha-475401-m03) DBG | unable to find current IP address of domain ha-475401-m03 in network mk-ha-475401
	I0912 21:57:53.737850   25697 main.go:141] libmachine: (ha-475401-m03) DBG | I0912 21:57:53.737780   26470 retry.go:31] will retry after 360.564631ms: waiting for machine to come up
	I0912 21:57:54.100437   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 21:57:54.100792   25697 main.go:141] libmachine: (ha-475401-m03) DBG | unable to find current IP address of domain ha-475401-m03 in network mk-ha-475401
	I0912 21:57:54.100819   25697 main.go:141] libmachine: (ha-475401-m03) DBG | I0912 21:57:54.100749   26470 retry.go:31] will retry after 315.401499ms: waiting for machine to come up
	I0912 21:57:54.417313   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 21:57:54.417784   25697 main.go:141] libmachine: (ha-475401-m03) DBG | unable to find current IP address of domain ha-475401-m03 in network mk-ha-475401
	I0912 21:57:54.417816   25697 main.go:141] libmachine: (ha-475401-m03) DBG | I0912 21:57:54.417729   26470 retry.go:31] will retry after 561.902073ms: waiting for machine to come up
	I0912 21:57:54.981430   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 21:57:54.981899   25697 main.go:141] libmachine: (ha-475401-m03) DBG | unable to find current IP address of domain ha-475401-m03 in network mk-ha-475401
	I0912 21:57:54.981926   25697 main.go:141] libmachine: (ha-475401-m03) DBG | I0912 21:57:54.981879   26470 retry.go:31] will retry after 546.742528ms: waiting for machine to come up
	I0912 21:57:55.530751   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 21:57:55.531432   25697 main.go:141] libmachine: (ha-475401-m03) DBG | unable to find current IP address of domain ha-475401-m03 in network mk-ha-475401
	I0912 21:57:55.531470   25697 main.go:141] libmachine: (ha-475401-m03) DBG | I0912 21:57:55.531370   26470 retry.go:31] will retry after 939.461689ms: waiting for machine to come up
	I0912 21:57:56.472480   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 21:57:56.472969   25697 main.go:141] libmachine: (ha-475401-m03) DBG | unable to find current IP address of domain ha-475401-m03 in network mk-ha-475401
	I0912 21:57:56.472991   25697 main.go:141] libmachine: (ha-475401-m03) DBG | I0912 21:57:56.472923   26470 retry.go:31] will retry after 1.083765874s: waiting for machine to come up
	I0912 21:57:57.557895   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 21:57:57.558280   25697 main.go:141] libmachine: (ha-475401-m03) DBG | unable to find current IP address of domain ha-475401-m03 in network mk-ha-475401
	I0912 21:57:57.558304   25697 main.go:141] libmachine: (ha-475401-m03) DBG | I0912 21:57:57.558229   26470 retry.go:31] will retry after 1.425560523s: waiting for machine to come up
	I0912 21:57:58.985681   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 21:57:58.986215   25697 main.go:141] libmachine: (ha-475401-m03) DBG | unable to find current IP address of domain ha-475401-m03 in network mk-ha-475401
	I0912 21:57:58.986250   25697 main.go:141] libmachine: (ha-475401-m03) DBG | I0912 21:57:58.986177   26470 retry.go:31] will retry after 1.198470508s: waiting for machine to come up
	I0912 21:58:00.186460   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 21:58:00.186938   25697 main.go:141] libmachine: (ha-475401-m03) DBG | unable to find current IP address of domain ha-475401-m03 in network mk-ha-475401
	I0912 21:58:00.186961   25697 main.go:141] libmachine: (ha-475401-m03) DBG | I0912 21:58:00.186891   26470 retry.go:31] will retry after 1.42291773s: waiting for machine to come up
	I0912 21:58:01.611174   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 21:58:01.611610   25697 main.go:141] libmachine: (ha-475401-m03) DBG | unable to find current IP address of domain ha-475401-m03 in network mk-ha-475401
	I0912 21:58:01.611640   25697 main.go:141] libmachine: (ha-475401-m03) DBG | I0912 21:58:01.611558   26470 retry.go:31] will retry after 2.337610423s: waiting for machine to come up
	I0912 21:58:03.950802   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 21:58:03.951256   25697 main.go:141] libmachine: (ha-475401-m03) DBG | unable to find current IP address of domain ha-475401-m03 in network mk-ha-475401
	I0912 21:58:03.951316   25697 main.go:141] libmachine: (ha-475401-m03) DBG | I0912 21:58:03.951238   26470 retry.go:31] will retry after 3.426956904s: waiting for machine to come up
	I0912 21:58:07.379354   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 21:58:07.379817   25697 main.go:141] libmachine: (ha-475401-m03) DBG | unable to find current IP address of domain ha-475401-m03 in network mk-ha-475401
	I0912 21:58:07.379845   25697 main.go:141] libmachine: (ha-475401-m03) DBG | I0912 21:58:07.379772   26470 retry.go:31] will retry after 3.544851931s: waiting for machine to come up
	I0912 21:58:10.926683   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 21:58:10.927197   25697 main.go:141] libmachine: (ha-475401-m03) DBG | unable to find current IP address of domain ha-475401-m03 in network mk-ha-475401
	I0912 21:58:10.927220   25697 main.go:141] libmachine: (ha-475401-m03) DBG | I0912 21:58:10.927155   26470 retry.go:31] will retry after 4.917848564s: waiting for machine to come up
	I0912 21:58:15.846630   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 21:58:15.847012   25697 main.go:141] libmachine: (ha-475401-m03) Found IP for machine: 192.168.39.113
	I0912 21:58:15.847031   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has current primary IP address 192.168.39.113 and MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 21:58:15.847037   25697 main.go:141] libmachine: (ha-475401-m03) Reserving static IP address...
	I0912 21:58:15.847432   25697 main.go:141] libmachine: (ha-475401-m03) DBG | unable to find host DHCP lease matching {name: "ha-475401-m03", mac: "52:54:00:21:aa:da", ip: "192.168.39.113"} in network mk-ha-475401
	I0912 21:58:15.924112   25697 main.go:141] libmachine: (ha-475401-m03) DBG | Getting to WaitForSSH function...
	I0912 21:58:15.924145   25697 main.go:141] libmachine: (ha-475401-m03) Reserved static IP address: 192.168.39.113
	I0912 21:58:15.924157   25697 main.go:141] libmachine: (ha-475401-m03) Waiting for SSH to be available...
	I0912 21:58:15.927256   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 21:58:15.927739   25697 main.go:141] libmachine: (ha-475401-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:aa:da", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:58:06 +0000 UTC Type:0 Mac:52:54:00:21:aa:da Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:minikube Clientid:01:52:54:00:21:aa:da}
	I0912 21:58:15.927769   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined IP address 192.168.39.113 and MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 21:58:15.927945   25697 main.go:141] libmachine: (ha-475401-m03) DBG | Using SSH client type: external
	I0912 21:58:15.927977   25697 main.go:141] libmachine: (ha-475401-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401-m03/id_rsa (-rw-------)
	I0912 21:58:15.928007   25697 main.go:141] libmachine: (ha-475401-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.113 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0912 21:58:15.928021   25697 main.go:141] libmachine: (ha-475401-m03) DBG | About to run SSH command:
	I0912 21:58:15.928034   25697 main.go:141] libmachine: (ha-475401-m03) DBG | exit 0
	I0912 21:58:16.054077   25697 main.go:141] libmachine: (ha-475401-m03) DBG | SSH cmd err, output: <nil>: 
	I0912 21:58:16.054379   25697 main.go:141] libmachine: (ha-475401-m03) KVM machine creation complete!
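	The "will retry after ...: waiting for machine to come up" lines above come from a backoff loop that polls until the new VM obtains a DHCP lease. A rough, self-contained Go sketch of such a loop (lookupIP is a hypothetical stand-in, not minikube's function, and the delays are illustrative) is:

	// Illustrative retry loop with growing, jittered delays, similar in spirit
	// to the retry.go messages in the log above.
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	var errNoLease = errors.New("no DHCP lease yet")

	// lookupIP is a placeholder for querying the libvirt network's DHCP leases.
	func lookupIP() (string, error) { return "", errNoLease }

	func waitForIP(timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := 200 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, err := lookupIP(); err == nil {
				return ip, nil
			}
			wait := delay + time.Duration(rand.Int63n(int64(delay))) // add jitter
			fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
			time.Sleep(wait)
			if delay < 5*time.Second {
				delay *= 2 // back off, but cap the growth
			}
		}
		return "", fmt.Errorf("timed out after %v waiting for an IP", timeout)
	}

	func main() {
		if ip, err := waitForIP(3 * time.Second); err != nil {
			fmt.Println(err)
		} else {
			fmt.Println("found IP:", ip)
		}
	}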
	I0912 21:58:16.054692   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetConfigRaw
	I0912 21:58:16.055215   25697 main.go:141] libmachine: (ha-475401-m03) Calling .DriverName
	I0912 21:58:16.055409   25697 main.go:141] libmachine: (ha-475401-m03) Calling .DriverName
	I0912 21:58:16.055558   25697 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0912 21:58:16.055574   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetState
	I0912 21:58:16.056828   25697 main.go:141] libmachine: Detecting operating system of created instance...
	I0912 21:58:16.056849   25697 main.go:141] libmachine: Waiting for SSH to be available...
	I0912 21:58:16.056858   25697 main.go:141] libmachine: Getting to WaitForSSH function...
	I0912 21:58:16.056924   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHHostname
	I0912 21:58:16.058994   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 21:58:16.059438   25697 main.go:141] libmachine: (ha-475401-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:aa:da", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:58:06 +0000 UTC Type:0 Mac:52:54:00:21:aa:da Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:ha-475401-m03 Clientid:01:52:54:00:21:aa:da}
	I0912 21:58:16.059464   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined IP address 192.168.39.113 and MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 21:58:16.059632   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHPort
	I0912 21:58:16.059837   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHKeyPath
	I0912 21:58:16.060050   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHKeyPath
	I0912 21:58:16.060226   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHUsername
	I0912 21:58:16.060439   25697 main.go:141] libmachine: Using SSH client type: native
	I0912 21:58:16.060662   25697 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.113 22 <nil> <nil>}
	I0912 21:58:16.060675   25697 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0912 21:58:16.164954   25697 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0912 21:58:16.164978   25697 main.go:141] libmachine: Detecting the provisioner...
	I0912 21:58:16.164989   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHHostname
	I0912 21:58:16.168451   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 21:58:16.168868   25697 main.go:141] libmachine: (ha-475401-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:aa:da", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:58:06 +0000 UTC Type:0 Mac:52:54:00:21:aa:da Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:ha-475401-m03 Clientid:01:52:54:00:21:aa:da}
	I0912 21:58:16.168972   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined IP address 192.168.39.113 and MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 21:58:16.169138   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHPort
	I0912 21:58:16.169365   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHKeyPath
	I0912 21:58:16.169539   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHKeyPath
	I0912 21:58:16.169766   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHUsername
	I0912 21:58:16.169947   25697 main.go:141] libmachine: Using SSH client type: native
	I0912 21:58:16.170192   25697 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.113 22 <nil> <nil>}
	I0912 21:58:16.170213   25697 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0912 21:58:16.278282   25697 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0912 21:58:16.278355   25697 main.go:141] libmachine: found compatible host: buildroot
	I0912 21:58:16.278363   25697 main.go:141] libmachine: Provisioning with buildroot...
	I0912 21:58:16.278375   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetMachineName
	I0912 21:58:16.278665   25697 buildroot.go:166] provisioning hostname "ha-475401-m03"
	I0912 21:58:16.278691   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetMachineName
	I0912 21:58:16.278907   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHHostname
	I0912 21:58:16.281861   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 21:58:16.282229   25697 main.go:141] libmachine: (ha-475401-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:aa:da", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:58:06 +0000 UTC Type:0 Mac:52:54:00:21:aa:da Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:ha-475401-m03 Clientid:01:52:54:00:21:aa:da}
	I0912 21:58:16.282257   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined IP address 192.168.39.113 and MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 21:58:16.282442   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHPort
	I0912 21:58:16.282649   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHKeyPath
	I0912 21:58:16.282806   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHKeyPath
	I0912 21:58:16.282957   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHUsername
	I0912 21:58:16.283131   25697 main.go:141] libmachine: Using SSH client type: native
	I0912 21:58:16.283286   25697 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.113 22 <nil> <nil>}
	I0912 21:58:16.283300   25697 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-475401-m03 && echo "ha-475401-m03" | sudo tee /etc/hostname
	I0912 21:58:16.401183   25697 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-475401-m03
	
	I0912 21:58:16.401213   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHHostname
	I0912 21:58:16.404093   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 21:58:16.404465   25697 main.go:141] libmachine: (ha-475401-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:aa:da", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:58:06 +0000 UTC Type:0 Mac:52:54:00:21:aa:da Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:ha-475401-m03 Clientid:01:52:54:00:21:aa:da}
	I0912 21:58:16.404492   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined IP address 192.168.39.113 and MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 21:58:16.404761   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHPort
	I0912 21:58:16.404983   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHKeyPath
	I0912 21:58:16.405145   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHKeyPath
	I0912 21:58:16.405321   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHUsername
	I0912 21:58:16.405500   25697 main.go:141] libmachine: Using SSH client type: native
	I0912 21:58:16.405723   25697 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.113 22 <nil> <nil>}
	I0912 21:58:16.405750   25697 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-475401-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-475401-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-475401-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0912 21:58:16.518333   25697 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0912 21:58:16.518369   25697 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19616-5891/.minikube CaCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19616-5891/.minikube}
	I0912 21:58:16.518388   25697 buildroot.go:174] setting up certificates
	I0912 21:58:16.518399   25697 provision.go:84] configureAuth start
	I0912 21:58:16.518410   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetMachineName
	I0912 21:58:16.518683   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetIP
	I0912 21:58:16.521322   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 21:58:16.521671   25697 main.go:141] libmachine: (ha-475401-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:aa:da", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:58:06 +0000 UTC Type:0 Mac:52:54:00:21:aa:da Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:ha-475401-m03 Clientid:01:52:54:00:21:aa:da}
	I0912 21:58:16.521721   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined IP address 192.168.39.113 and MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 21:58:16.521858   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHHostname
	I0912 21:58:16.524548   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 21:58:16.524936   25697 main.go:141] libmachine: (ha-475401-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:aa:da", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:58:06 +0000 UTC Type:0 Mac:52:54:00:21:aa:da Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:ha-475401-m03 Clientid:01:52:54:00:21:aa:da}
	I0912 21:58:16.524959   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined IP address 192.168.39.113 and MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 21:58:16.525079   25697 provision.go:143] copyHostCerts
	I0912 21:58:16.525109   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem
	I0912 21:58:16.525147   25697 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem, removing ...
	I0912 21:58:16.525157   25697 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem
	I0912 21:58:16.525244   25697 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem (1082 bytes)
	I0912 21:58:16.525336   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem
	I0912 21:58:16.525364   25697 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem, removing ...
	I0912 21:58:16.525375   25697 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem
	I0912 21:58:16.525413   25697 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem (1123 bytes)
	I0912 21:58:16.525474   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem
	I0912 21:58:16.525499   25697 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem, removing ...
	I0912 21:58:16.525511   25697 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem
	I0912 21:58:16.525542   25697 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem (1679 bytes)
	I0912 21:58:16.525604   25697 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem org=jenkins.ha-475401-m03 san=[127.0.0.1 192.168.39.113 ha-475401-m03 localhost minikube]
	I0912 21:58:16.670619   25697 provision.go:177] copyRemoteCerts
	I0912 21:58:16.670682   25697 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0912 21:58:16.670708   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHHostname
	I0912 21:58:16.673631   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 21:58:16.673988   25697 main.go:141] libmachine: (ha-475401-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:aa:da", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:58:06 +0000 UTC Type:0 Mac:52:54:00:21:aa:da Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:ha-475401-m03 Clientid:01:52:54:00:21:aa:da}
	I0912 21:58:16.674015   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined IP address 192.168.39.113 and MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 21:58:16.674220   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHPort
	I0912 21:58:16.674409   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHKeyPath
	I0912 21:58:16.674603   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHUsername
	I0912 21:58:16.674740   25697 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401-m03/id_rsa Username:docker}
	I0912 21:58:16.756476   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0912 21:58:16.756559   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0912 21:58:16.782422   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0912 21:58:16.782506   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0912 21:58:16.806050   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0912 21:58:16.806128   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0912 21:58:16.829300   25697 provision.go:87] duration metric: took 310.887198ms to configureAuth
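	configureAuth copies the host certs and issues a server certificate whose SANs cover the node name, localhost, minikube and the node IPs, as listed in the log. A simplified Go sketch of issuing such a certificate with crypto/x509 (file names and SAN values are illustrative, and the CA key is assumed here to be PKCS#1 RSA) follows:

	// Minimal sketch, not minikube's provision code: sign a server cert with an
	// existing CA and attach DNS/IP SANs like those seen in the log.
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		caPEM, _ := os.ReadFile("ca.pem")
		caKeyPEM, _ := os.ReadFile("ca-key.pem")
		caBlock, _ := pem.Decode(caPEM)
		keyBlock, _ := pem.Decode(caKeyPEM)
		if caBlock == nil || keyBlock == nil {
			log.Fatal("could not decode CA PEM material")
		}
		caCert, err := x509.ParseCertificate(caBlock.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
		if err != nil {
			log.Fatal(err)
		}

		serverKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-475401-m03"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"ha-475401-m03", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.113")},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
		if err != nil {
			log.Fatal(err)
		}
		_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}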
	I0912 21:58:16.829334   25697 buildroot.go:189] setting minikube options for container-runtime
	I0912 21:58:16.829561   25697 config.go:182] Loaded profile config "ha-475401": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 21:58:16.829649   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHHostname
	I0912 21:58:16.832440   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 21:58:16.832782   25697 main.go:141] libmachine: (ha-475401-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:aa:da", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:58:06 +0000 UTC Type:0 Mac:52:54:00:21:aa:da Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:ha-475401-m03 Clientid:01:52:54:00:21:aa:da}
	I0912 21:58:16.832812   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined IP address 192.168.39.113 and MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 21:58:16.832974   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHPort
	I0912 21:58:16.833170   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHKeyPath
	I0912 21:58:16.833335   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHKeyPath
	I0912 21:58:16.833465   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHUsername
	I0912 21:58:16.833695   25697 main.go:141] libmachine: Using SSH client type: native
	I0912 21:58:16.833872   25697 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.113 22 <nil> <nil>}
	I0912 21:58:16.833892   25697 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0912 21:58:17.065353   25697 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0912 21:58:17.065383   25697 main.go:141] libmachine: Checking connection to Docker...
	I0912 21:58:17.065393   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetURL
	I0912 21:58:17.066775   25697 main.go:141] libmachine: (ha-475401-m03) DBG | Using libvirt version 6000000
	I0912 21:58:17.069139   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 21:58:17.069522   25697 main.go:141] libmachine: (ha-475401-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:aa:da", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:58:06 +0000 UTC Type:0 Mac:52:54:00:21:aa:da Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:ha-475401-m03 Clientid:01:52:54:00:21:aa:da}
	I0912 21:58:17.069553   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined IP address 192.168.39.113 and MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 21:58:17.069803   25697 main.go:141] libmachine: Docker is up and running!
	I0912 21:58:17.069820   25697 main.go:141] libmachine: Reticulating splines...
	I0912 21:58:17.069828   25697 client.go:171] duration metric: took 25.213978015s to LocalClient.Create
	I0912 21:58:17.069850   25697 start.go:167] duration metric: took 25.214034971s to libmachine.API.Create "ha-475401"
	I0912 21:58:17.069856   25697 start.go:293] postStartSetup for "ha-475401-m03" (driver="kvm2")
	I0912 21:58:17.069867   25697 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0912 21:58:17.069895   25697 main.go:141] libmachine: (ha-475401-m03) Calling .DriverName
	I0912 21:58:17.070147   25697 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0912 21:58:17.070176   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHHostname
	I0912 21:58:17.072998   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 21:58:17.073456   25697 main.go:141] libmachine: (ha-475401-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:aa:da", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:58:06 +0000 UTC Type:0 Mac:52:54:00:21:aa:da Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:ha-475401-m03 Clientid:01:52:54:00:21:aa:da}
	I0912 21:58:17.073487   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined IP address 192.168.39.113 and MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 21:58:17.073701   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHPort
	I0912 21:58:17.073888   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHKeyPath
	I0912 21:58:17.074057   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHUsername
	I0912 21:58:17.074312   25697 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401-m03/id_rsa Username:docker}
	I0912 21:58:17.156708   25697 ssh_runner.go:195] Run: cat /etc/os-release
	I0912 21:58:17.160870   25697 info.go:137] Remote host: Buildroot 2023.02.9
	I0912 21:58:17.160898   25697 filesync.go:126] Scanning /home/jenkins/minikube-integration/19616-5891/.minikube/addons for local assets ...
	I0912 21:58:17.160963   25697 filesync.go:126] Scanning /home/jenkins/minikube-integration/19616-5891/.minikube/files for local assets ...
	I0912 21:58:17.161063   25697 filesync.go:149] local asset: /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem -> 130832.pem in /etc/ssl/certs
	I0912 21:58:17.161073   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem -> /etc/ssl/certs/130832.pem
	I0912 21:58:17.161161   25697 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0912 21:58:17.171742   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem --> /etc/ssl/certs/130832.pem (1708 bytes)
	I0912 21:58:17.195821   25697 start.go:296] duration metric: took 125.954434ms for postStartSetup
	I0912 21:58:17.195873   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetConfigRaw
	I0912 21:58:17.196500   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetIP
	I0912 21:58:17.199379   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 21:58:17.199796   25697 main.go:141] libmachine: (ha-475401-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:aa:da", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:58:06 +0000 UTC Type:0 Mac:52:54:00:21:aa:da Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:ha-475401-m03 Clientid:01:52:54:00:21:aa:da}
	I0912 21:58:17.199825   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined IP address 192.168.39.113 and MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 21:58:17.200060   25697 profile.go:143] Saving config to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/config.json ...
	I0912 21:58:17.200266   25697 start.go:128] duration metric: took 25.363858634s to createHost
	I0912 21:58:17.200287   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHHostname
	I0912 21:58:17.202673   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 21:58:17.203105   25697 main.go:141] libmachine: (ha-475401-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:aa:da", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:58:06 +0000 UTC Type:0 Mac:52:54:00:21:aa:da Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:ha-475401-m03 Clientid:01:52:54:00:21:aa:da}
	I0912 21:58:17.203133   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined IP address 192.168.39.113 and MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 21:58:17.203339   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHPort
	I0912 21:58:17.203536   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHKeyPath
	I0912 21:58:17.203738   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHKeyPath
	I0912 21:58:17.203873   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHUsername
	I0912 21:58:17.204003   25697 main.go:141] libmachine: Using SSH client type: native
	I0912 21:58:17.204198   25697 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.113 22 <nil> <nil>}
	I0912 21:58:17.204209   25697 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0912 21:58:17.310187   25697 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726178297.287216760
	
	I0912 21:58:17.310209   25697 fix.go:216] guest clock: 1726178297.287216760
	I0912 21:58:17.310218   25697 fix.go:229] Guest: 2024-09-12 21:58:17.28721676 +0000 UTC Remote: 2024-09-12 21:58:17.200277487 +0000 UTC m=+141.807292987 (delta=86.939273ms)
	I0912 21:58:17.310239   25697 fix.go:200] guest clock delta is within tolerance: 86.939273ms
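	The guest clock check runs "date +%s.%N" on the new node and compares the result with the host clock, accepting the node when the absolute delta stays within a tolerance. A tiny Go illustration of that comparison (the tolerance here is an arbitrary example, not minikube's value):

	// Sketch of the clock-skew check implied by the "guest clock delta" lines above.
	package main

	import (
		"fmt"
		"math"
		"time"
	)

	func withinTolerance(guest, host time.Time, tol time.Duration) bool {
		return math.Abs(float64(guest.Sub(host))) <= float64(tol)
	}

	func main() {
		host := time.Now()
		guest := host.Add(87 * time.Millisecond) // roughly the ~86.9ms delta seen in the log
		fmt.Println("within tolerance:", withinTolerance(guest, host, time.Second))
	}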
	I0912 21:58:17.310245   25697 start.go:83] releasing machines lock for "ha-475401-m03", held for 25.473992567s
	I0912 21:58:17.310263   25697 main.go:141] libmachine: (ha-475401-m03) Calling .DriverName
	I0912 21:58:17.310511   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetIP
	I0912 21:58:17.313579   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 21:58:17.313972   25697 main.go:141] libmachine: (ha-475401-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:aa:da", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:58:06 +0000 UTC Type:0 Mac:52:54:00:21:aa:da Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:ha-475401-m03 Clientid:01:52:54:00:21:aa:da}
	I0912 21:58:17.313999   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined IP address 192.168.39.113 and MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 21:58:17.316436   25697 out.go:177] * Found network options:
	I0912 21:58:17.317820   25697 out.go:177]   - NO_PROXY=192.168.39.203,192.168.39.222
	W0912 21:58:17.319126   25697 proxy.go:119] fail to check proxy env: Error ip not in block
	W0912 21:58:17.319152   25697 proxy.go:119] fail to check proxy env: Error ip not in block
	I0912 21:58:17.319167   25697 main.go:141] libmachine: (ha-475401-m03) Calling .DriverName
	I0912 21:58:17.319737   25697 main.go:141] libmachine: (ha-475401-m03) Calling .DriverName
	I0912 21:58:17.319950   25697 main.go:141] libmachine: (ha-475401-m03) Calling .DriverName
	I0912 21:58:17.320055   25697 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0912 21:58:17.320093   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHHostname
	W0912 21:58:17.320126   25697 proxy.go:119] fail to check proxy env: Error ip not in block
	W0912 21:58:17.320157   25697 proxy.go:119] fail to check proxy env: Error ip not in block
	I0912 21:58:17.320214   25697 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0912 21:58:17.320229   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHHostname
	I0912 21:58:17.323096   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 21:58:17.323200   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 21:58:17.323521   25697 main.go:141] libmachine: (ha-475401-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:aa:da", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:58:06 +0000 UTC Type:0 Mac:52:54:00:21:aa:da Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:ha-475401-m03 Clientid:01:52:54:00:21:aa:da}
	I0912 21:58:17.323554   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined IP address 192.168.39.113 and MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 21:58:17.323666   25697 main.go:141] libmachine: (ha-475401-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:aa:da", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:58:06 +0000 UTC Type:0 Mac:52:54:00:21:aa:da Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:ha-475401-m03 Clientid:01:52:54:00:21:aa:da}
	I0912 21:58:17.323689   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined IP address 192.168.39.113 and MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 21:58:17.323693   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHPort
	I0912 21:58:17.323884   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHPort
	I0912 21:58:17.323902   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHKeyPath
	I0912 21:58:17.324020   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHKeyPath
	I0912 21:58:17.324163   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHUsername
	I0912 21:58:17.324202   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHUsername
	I0912 21:58:17.324315   25697 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401-m03/id_rsa Username:docker}
	I0912 21:58:17.324392   25697 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401-m03/id_rsa Username:docker}
	I0912 21:58:17.556817   25697 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0912 21:58:17.563194   25697 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0912 21:58:17.563255   25697 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0912 21:58:17.578490   25697 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0912 21:58:17.578526   25697 start.go:495] detecting cgroup driver to use...
	I0912 21:58:17.578592   25697 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0912 21:58:17.594646   25697 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0912 21:58:17.609388   25697 docker.go:217] disabling cri-docker service (if available) ...
	I0912 21:58:17.609463   25697 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0912 21:58:17.623506   25697 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0912 21:58:17.638009   25697 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0912 21:58:17.757171   25697 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0912 21:58:17.919529   25697 docker.go:233] disabling docker service ...
	I0912 21:58:17.919597   25697 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0912 21:58:17.936247   25697 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0912 21:58:17.949251   25697 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0912 21:58:18.080764   25697 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0912 21:58:18.226645   25697 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0912 21:58:18.240015   25697 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0912 21:58:18.257720   25697 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0912 21:58:18.257771   25697 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 21:58:18.267777   25697 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0912 21:58:18.267845   25697 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 21:58:18.277904   25697 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 21:58:18.287961   25697 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 21:58:18.297816   25697 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0912 21:58:18.307898   25697 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 21:58:18.317481   25697 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 21:58:18.334095   25697 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 21:58:18.344337   25697 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0912 21:58:18.353785   25697 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0912 21:58:18.353844   25697 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0912 21:58:18.366829   25697 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0912 21:58:18.375790   25697 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 21:58:18.502382   25697 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0912 21:58:18.594408   25697 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0912 21:58:18.594491   25697 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0912 21:58:18.599810   25697 start.go:563] Will wait 60s for crictl version
	I0912 21:58:18.599875   25697 ssh_runner.go:195] Run: which crictl
	I0912 21:58:18.603628   25697 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0912 21:58:18.642676   25697 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
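	Before querying crictl, the flow above waits for CRI-O's socket to reappear after the restart ("Will wait 60s for socket path /var/run/crio/crio.sock"). A plain polling sketch of that wait in Go (path and interval are illustrative, not the actual implementation):

	// Illustrative only: poll for a socket path until it exists or a deadline passes.
	package main

	import (
		"fmt"
		"os"
		"time"
	)

	func waitForPath(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("%s did not appear within %v", path, timeout)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}

	func main() {
		if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("socket is ready")
	}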
	I0912 21:58:18.642748   25697 ssh_runner.go:195] Run: crio --version
	I0912 21:58:18.671226   25697 ssh_runner.go:195] Run: crio --version
	I0912 21:58:18.705784   25697 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0912 21:58:18.707115   25697 out.go:177]   - env NO_PROXY=192.168.39.203
	I0912 21:58:18.708351   25697 out.go:177]   - env NO_PROXY=192.168.39.203,192.168.39.222
	I0912 21:58:18.709381   25697 main.go:141] libmachine: (ha-475401-m03) Calling .GetIP
	I0912 21:58:18.712070   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 21:58:18.712384   25697 main.go:141] libmachine: (ha-475401-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:aa:da", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:58:06 +0000 UTC Type:0 Mac:52:54:00:21:aa:da Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:ha-475401-m03 Clientid:01:52:54:00:21:aa:da}
	I0912 21:58:18.712411   25697 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined IP address 192.168.39.113 and MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 21:58:18.712589   25697 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0912 21:58:18.716506   25697 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0912 21:58:18.727915   25697 mustload.go:65] Loading cluster: ha-475401
	I0912 21:58:18.728133   25697 config.go:182] Loaded profile config "ha-475401": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 21:58:18.728389   25697 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:58:18.728424   25697 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:58:18.742999   25697 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34877
	I0912 21:58:18.743408   25697 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:58:18.743901   25697 main.go:141] libmachine: Using API Version  1
	I0912 21:58:18.743924   25697 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:58:18.744231   25697 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:58:18.744428   25697 main.go:141] libmachine: (ha-475401) Calling .GetState
	I0912 21:58:18.746070   25697 host.go:66] Checking if "ha-475401" exists ...
	I0912 21:58:18.746392   25697 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:58:18.746428   25697 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:58:18.762525   25697 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45951
	I0912 21:58:18.762942   25697 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:58:18.763434   25697 main.go:141] libmachine: Using API Version  1
	I0912 21:58:18.763460   25697 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:58:18.763734   25697 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:58:18.763919   25697 main.go:141] libmachine: (ha-475401) Calling .DriverName
	I0912 21:58:18.764061   25697 certs.go:68] Setting up /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401 for IP: 192.168.39.113
	I0912 21:58:18.764070   25697 certs.go:194] generating shared ca certs ...
	I0912 21:58:18.764088   25697 certs.go:226] acquiring lock for ca certs: {Name:mk5e19f2c2757edc3fcb6b43a39efee0885b349c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:58:18.764216   25697 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.key
	I0912 21:58:18.764271   25697 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.key
	I0912 21:58:18.764284   25697 certs.go:256] generating profile certs ...
	I0912 21:58:18.764388   25697 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/client.key
	I0912 21:58:18.764419   25697 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.key.0c18783c
	I0912 21:58:18.764439   25697 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.crt.0c18783c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.203 192.168.39.222 192.168.39.113 192.168.39.254]
	I0912 21:58:18.953177   25697 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.crt.0c18783c ...
	I0912 21:58:18.953215   25697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.crt.0c18783c: {Name:mkf24e0813415b85ef4632a7cc37b1377b0685cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:58:18.953428   25697 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.key.0c18783c ...
	I0912 21:58:18.953449   25697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.key.0c18783c: {Name:mk58abab0883e8bb1ef151ca20853139ede46b08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:58:18.953569   25697 certs.go:381] copying /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.crt.0c18783c -> /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.crt
	I0912 21:58:18.953774   25697 certs.go:385] copying /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.key.0c18783c -> /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.key
	I0912 21:58:18.953910   25697 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/proxy-client.key
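	The apiserver certificate is regenerated here because its SAN list must now cover the new control-plane node (192.168.39.113) as well as the kube-vip VIP (192.168.39.254). A self-contained sketch of issuing a certificate with exactly those IP SANs using Go's crypto/x509; it is self-signed for brevity and is not minikube's crypto.go, which signs with the cluster CA (minikubeCA):

	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Key pair for the apiserver leaf certificate.
		key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(1, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs must cover every address a client may dial: the service IP,
			// localhost, each control-plane node, and the kube-vip VIP.
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"),
				net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"),
				net.ParseIP("192.168.39.203"),
				net.ParseIP("192.168.39.222"),
				net.ParseIP("192.168.39.113"),
				net.ParseIP("192.168.39.254"),
			},
		}
		// Self-signed for brevity; the real cert is signed by the cluster CA.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}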
	I0912 21:58:18.953926   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0912 21:58:18.953938   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0912 21:58:18.953951   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0912 21:58:18.953964   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0912 21:58:18.953979   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0912 21:58:18.953994   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0912 21:58:18.954012   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0912 21:58:18.954029   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0912 21:58:18.954094   25697 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083.pem (1338 bytes)
	W0912 21:58:18.954128   25697 certs.go:480] ignoring /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083_empty.pem, impossibly tiny 0 bytes
	I0912 21:58:18.954138   25697 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem (1679 bytes)
	I0912 21:58:18.954159   25697 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem (1082 bytes)
	I0912 21:58:18.954183   25697 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem (1123 bytes)
	I0912 21:58:18.954204   25697 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem (1679 bytes)
	I0912 21:58:18.954242   25697 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem (1708 bytes)
	I0912 21:58:18.954270   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem -> /usr/share/ca-certificates/130832.pem
	I0912 21:58:18.954291   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0912 21:58:18.954305   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083.pem -> /usr/share/ca-certificates/13083.pem
	I0912 21:58:18.954347   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHHostname
	I0912 21:58:18.957523   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:58:18.957956   25697 main.go:141] libmachine: (ha-475401) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:0e:dd", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:09 +0000 UTC Type:0 Mac:52:54:00:b0:0e:dd Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-475401 Clientid:01:52:54:00:b0:0e:dd}
	I0912 21:58:18.957979   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined IP address 192.168.39.203 and MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:58:18.958188   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHPort
	I0912 21:58:18.958407   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHKeyPath
	I0912 21:58:18.958570   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHUsername
	I0912 21:58:18.958710   25697 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401/id_rsa Username:docker}
	I0912 21:58:19.033980   25697 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0912 21:58:19.038605   25697 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0912 21:58:19.049584   25697 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0912 21:58:19.054642   25697 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0912 21:58:19.064670   25697 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0912 21:58:19.069722   25697 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0912 21:58:19.080717   25697 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0912 21:58:19.084846   25697 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0912 21:58:19.094482   25697 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0912 21:58:19.098548   25697 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0912 21:58:19.108676   25697 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0912 21:58:19.112618   25697 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0912 21:58:19.123387   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0912 21:58:19.147272   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0912 21:58:19.171949   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0912 21:58:19.194976   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0912 21:58:19.220495   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0912 21:58:19.244949   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0912 21:58:19.271742   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0912 21:58:19.294753   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0912 21:58:19.318268   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem --> /usr/share/ca-certificates/130832.pem (1708 bytes)
	I0912 21:58:19.340684   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0912 21:58:19.365814   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083.pem --> /usr/share/ca-certificates/13083.pem (1338 bytes)
	I0912 21:58:19.389740   25697 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0912 21:58:19.405480   25697 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0912 21:58:19.421765   25697 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0912 21:58:19.437326   25697 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0912 21:58:19.453160   25697 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0912 21:58:19.470517   25697 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0912 21:58:19.486080   25697 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0912 21:58:19.501798   25697 ssh_runner.go:195] Run: openssl version
	I0912 21:58:19.507435   25697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130832.pem && ln -fs /usr/share/ca-certificates/130832.pem /etc/ssl/certs/130832.pem"
	I0912 21:58:19.517604   25697 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130832.pem
	I0912 21:58:19.521723   25697 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 12 21:47 /usr/share/ca-certificates/130832.pem
	I0912 21:58:19.521777   25697 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130832.pem
	I0912 21:58:19.526998   25697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130832.pem /etc/ssl/certs/3ec20f2e.0"
	I0912 21:58:19.537203   25697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0912 21:58:19.547246   25697 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0912 21:58:19.551547   25697 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 12 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0912 21:58:19.551607   25697 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0912 21:58:19.557700   25697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0912 21:58:19.568443   25697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13083.pem && ln -fs /usr/share/ca-certificates/13083.pem /etc/ssl/certs/13083.pem"
	I0912 21:58:19.578990   25697 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13083.pem
	I0912 21:58:19.583232   25697 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 12 21:47 /usr/share/ca-certificates/13083.pem
	I0912 21:58:19.583288   25697 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13083.pem
	I0912 21:58:19.589208   25697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13083.pem /etc/ssl/certs/51391683.0"
	I0912 21:58:19.602048   25697 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0912 21:58:19.606071   25697 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0912 21:58:19.606135   25697 kubeadm.go:934] updating node {m03 192.168.39.113 8443 v1.31.1 crio true true} ...
	I0912 21:58:19.606216   25697 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-475401-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.113
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-475401 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0912 21:58:19.606244   25697 kube-vip.go:115] generating kube-vip config ...
	I0912 21:58:19.606277   25697 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0912 21:58:19.622619   25697 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0912 21:58:19.622681   25697 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
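	The manifest above is rendered with this cluster's VIP, port, and interface filled in. A toy text/template rendering of the same idea, with made-up parameter names (not minikube's kube-vip.go types) and a trimmed-down manifest:

	package main

	import (
		"os"
		"text/template"
	)

	// Params holds the handful of values that vary per cluster; the field
	// names here are illustrative, not minikube's.
	type Params struct {
		VIP       string
		Port      string
		Interface string
	}

	const manifest = `apiVersion: v1
	kind: Pod
	metadata:
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - name: kube-vip
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    args: ["manager"]
	    env:
	    - name: address
	      value: "{{ .VIP }}"
	    - name: port
	      value: "{{ .Port }}"
	    - name: vip_interface
	      value: "{{ .Interface }}"
	  hostNetwork: true
	`

	func main() {
		t := template.Must(template.New("kube-vip").Parse(manifest))
		_ = t.Execute(os.Stdout, Params{VIP: "192.168.39.254", Port: "8443", Interface: "eth0"})
	}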
	I0912 21:58:19.622729   25697 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0912 21:58:19.632965   25697 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0912 21:58:19.633019   25697 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0912 21:58:19.642792   25697 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I0912 21:58:19.642844   25697 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 21:58:19.642797   25697 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0912 21:58:19.642797   25697 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I0912 21:58:19.642914   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0912 21:58:19.642924   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0912 21:58:19.642998   25697 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0912 21:58:19.643002   25697 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0912 21:58:19.656783   25697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0912 21:58:19.656810   25697 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0912 21:58:19.656841   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0912 21:58:19.656883   25697 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0912 21:58:19.656887   25697 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0912 21:58:19.656909   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0912 21:58:19.677837   25697 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0912 21:58:19.677879   25697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
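	Each kubelet/kubeadm/kubectl binary is fetched from dl.k8s.io with a checksum reference (?checksum=file:...sha256). A rough sketch of download-and-verify in plain Go, assuming the published .sha256 file holds just the hex digest (illustrative, not minikube's download package):

	package main

	import (
		"crypto/sha256"
		"encoding/hex"
		"fmt"
		"io"
		"net/http"
		"os"
		"strings"
	)

	// fetch downloads url to dst and returns the SHA-256 of what was written.
	func fetch(url, dst string) (string, error) {
		resp, err := http.Get(url)
		if err != nil {
			return "", err
		}
		defer resp.Body.Close()
		out, err := os.Create(dst)
		if err != nil {
			return "", err
		}
		defer out.Close()
		h := sha256.New()
		if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
			return "", err
		}
		return hex.EncodeToString(h.Sum(nil)), nil
	}

	func main() {
		url := "https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl"
		got, err := fetch(url, "kubectl")
		if err != nil {
			panic(err)
		}
		resp, err := http.Get(url + ".sha256")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		want, _ := io.ReadAll(resp.Body)
		if got != strings.TrimSpace(string(want)) {
			panic("checksum mismatch for kubectl")
		}
		fmt.Println("kubectl verified:", got)
	}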
	I0912 21:58:20.516897   25697 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0912 21:58:20.527105   25697 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0912 21:58:20.543950   25697 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0912 21:58:20.560138   25697 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0912 21:58:20.576473   25697 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0912 21:58:20.580703   25697 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0912 21:58:20.594636   25697 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 21:58:20.711822   25697 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0912 21:58:20.728173   25697 host.go:66] Checking if "ha-475401" exists ...
	I0912 21:58:20.728605   25697 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:58:20.728646   25697 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:58:20.744851   25697 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35237
	I0912 21:58:20.745236   25697 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:58:20.745710   25697 main.go:141] libmachine: Using API Version  1
	I0912 21:58:20.745733   25697 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:58:20.746032   25697 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:58:20.746269   25697 main.go:141] libmachine: (ha-475401) Calling .DriverName
	I0912 21:58:20.746538   25697 start.go:317] joinCluster: &{Name:ha-475401 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-475401 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.203 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.222 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.113 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 21:58:20.746701   25697 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0912 21:58:20.746722   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHHostname
	I0912 21:58:20.750060   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:58:20.750622   25697 main.go:141] libmachine: (ha-475401) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:0e:dd", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:09 +0000 UTC Type:0 Mac:52:54:00:b0:0e:dd Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-475401 Clientid:01:52:54:00:b0:0e:dd}
	I0912 21:58:20.750652   25697 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined IP address 192.168.39.203 and MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 21:58:20.750829   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHPort
	I0912 21:58:20.751028   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHKeyPath
	I0912 21:58:20.751180   25697 main.go:141] libmachine: (ha-475401) Calling .GetSSHUsername
	I0912 21:58:20.751376   25697 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401/id_rsa Username:docker}
	I0912 21:58:20.916489   25697 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.113 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0912 21:58:20.916544   25697 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 0xcd8r.97jbzfa11jxyn92v --discovery-token-ca-cert-hash sha256:e9285e6e7599a58febe9d174fa57ffa69a9b4bf818d01b703e61fc8c784ff29f --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-475401-m03 --control-plane --apiserver-advertise-address=192.168.39.113 --apiserver-bind-port=8443"
	I0912 21:58:43.506086   25697 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 0xcd8r.97jbzfa11jxyn92v --discovery-token-ca-cert-hash sha256:e9285e6e7599a58febe9d174fa57ffa69a9b4bf818d01b703e61fc8c784ff29f --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-475401-m03 --control-plane --apiserver-advertise-address=192.168.39.113 --apiserver-bind-port=8443": (22.589509925s)
	I0912 21:58:43.506132   25697 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0912 21:58:44.092103   25697 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-475401-m03 minikube.k8s.io/updated_at=2024_09_12T21_58_44_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f6bc674a17941874d4e5b792b09c1791d30622b8 minikube.k8s.io/name=ha-475401 minikube.k8s.io/primary=false
	I0912 21:58:44.209844   25697 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-475401-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0912 21:58:44.316135   25697 start.go:319] duration metric: took 23.569593336s to joinCluster
	I0912 21:58:44.316216   25697 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.113 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0912 21:58:44.316520   25697 config.go:182] Loaded profile config "ha-475401": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 21:58:44.317744   25697 out.go:177] * Verifying Kubernetes components...
	I0912 21:58:44.319169   25697 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 21:58:44.634041   25697 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0912 21:58:44.674413   25697 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19616-5891/kubeconfig
	I0912 21:58:44.674780   25697 kapi.go:59] client config for ha-475401: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/client.crt", KeyFile:"/home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/client.key", CAFile:"/home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f30300), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0912 21:58:44.674888   25697 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.203:8443
	I0912 21:58:44.675253   25697 node_ready.go:35] waiting up to 6m0s for node "ha-475401-m03" to be "Ready" ...
	I0912 21:58:44.675376   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m03
	I0912 21:58:44.675393   25697 round_trippers.go:469] Request Headers:
	I0912 21:58:44.675404   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:58:44.675417   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:58:44.679106   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:58:45.176235   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m03
	I0912 21:58:45.176261   25697 round_trippers.go:469] Request Headers:
	I0912 21:58:45.176274   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:58:45.176278   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:58:45.184905   25697 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0912 21:58:45.675803   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m03
	I0912 21:58:45.675830   25697 round_trippers.go:469] Request Headers:
	I0912 21:58:45.675840   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:58:45.675846   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:58:45.679887   25697 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0912 21:58:46.175582   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m03
	I0912 21:58:46.175607   25697 round_trippers.go:469] Request Headers:
	I0912 21:58:46.175615   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:58:46.175619   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:58:46.179331   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:58:46.676220   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m03
	I0912 21:58:46.676241   25697 round_trippers.go:469] Request Headers:
	I0912 21:58:46.676249   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:58:46.676254   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:58:46.679776   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:58:46.680342   25697 node_ready.go:53] node "ha-475401-m03" has status "Ready":"False"
	I0912 21:58:47.176086   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m03
	I0912 21:58:47.176112   25697 round_trippers.go:469] Request Headers:
	I0912 21:58:47.176124   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:58:47.176131   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:58:47.179842   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:58:47.675675   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m03
	I0912 21:58:47.675697   25697 round_trippers.go:469] Request Headers:
	I0912 21:58:47.675704   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:58:47.675707   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:58:47.679110   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:58:48.175914   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m03
	I0912 21:58:48.175941   25697 round_trippers.go:469] Request Headers:
	I0912 21:58:48.175952   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:58:48.175958   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:58:48.179653   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:58:48.675571   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m03
	I0912 21:58:48.675598   25697 round_trippers.go:469] Request Headers:
	I0912 21:58:48.675606   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:58:48.675613   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:58:48.678914   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:58:49.175531   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m03
	I0912 21:58:49.175560   25697 round_trippers.go:469] Request Headers:
	I0912 21:58:49.175570   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:58:49.175576   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:58:49.179597   25697 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0912 21:58:49.180567   25697 node_ready.go:53] node "ha-475401-m03" has status "Ready":"False"
	I0912 21:58:49.675843   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m03
	I0912 21:58:49.675868   25697 round_trippers.go:469] Request Headers:
	I0912 21:58:49.675879   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:58:49.675883   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:58:49.679316   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:58:50.175514   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m03
	I0912 21:58:50.175535   25697 round_trippers.go:469] Request Headers:
	I0912 21:58:50.175547   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:58:50.175553   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:58:50.179205   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:58:50.676071   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m03
	I0912 21:58:50.676100   25697 round_trippers.go:469] Request Headers:
	I0912 21:58:50.676111   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:58:50.676118   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:58:50.679505   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:58:51.176103   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m03
	I0912 21:58:51.176135   25697 round_trippers.go:469] Request Headers:
	I0912 21:58:51.176143   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:58:51.176147   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:58:51.179884   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:58:51.676344   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m03
	I0912 21:58:51.676382   25697 round_trippers.go:469] Request Headers:
	I0912 21:58:51.676390   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:58:51.676393   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:58:51.680019   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:58:51.680654   25697 node_ready.go:53] node "ha-475401-m03" has status "Ready":"False"
	I0912 21:58:52.176378   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m03
	I0912 21:58:52.176406   25697 round_trippers.go:469] Request Headers:
	I0912 21:58:52.176414   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:58:52.176419   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:58:52.180125   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:58:52.676247   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m03
	I0912 21:58:52.676270   25697 round_trippers.go:469] Request Headers:
	I0912 21:58:52.676279   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:58:52.676282   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:58:52.679812   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:58:53.176107   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m03
	I0912 21:58:53.176131   25697 round_trippers.go:469] Request Headers:
	I0912 21:58:53.176139   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:58:53.176143   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:58:53.179717   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:58:53.675836   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m03
	I0912 21:58:53.675858   25697 round_trippers.go:469] Request Headers:
	I0912 21:58:53.675869   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:58:53.675873   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:58:53.679242   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:58:54.175768   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m03
	I0912 21:58:54.175801   25697 round_trippers.go:469] Request Headers:
	I0912 21:58:54.175809   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:58:54.175815   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:58:54.179360   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:58:54.179907   25697 node_ready.go:53] node "ha-475401-m03" has status "Ready":"False"
	I0912 21:58:54.676422   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m03
	I0912 21:58:54.676445   25697 round_trippers.go:469] Request Headers:
	I0912 21:58:54.676454   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:58:54.676457   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:58:54.680731   25697 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0912 21:58:55.175735   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m03
	I0912 21:58:55.175757   25697 round_trippers.go:469] Request Headers:
	I0912 21:58:55.175765   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:58:55.175770   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:58:55.179554   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:58:55.676326   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m03
	I0912 21:58:55.676349   25697 round_trippers.go:469] Request Headers:
	I0912 21:58:55.676357   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:58:55.676361   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:58:55.680385   25697 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0912 21:58:56.175676   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m03
	I0912 21:58:56.175700   25697 round_trippers.go:469] Request Headers:
	I0912 21:58:56.175708   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:58:56.175711   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:58:56.179627   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:58:56.180300   25697 node_ready.go:53] node "ha-475401-m03" has status "Ready":"False"
	I0912 21:58:56.675674   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m03
	I0912 21:58:56.675697   25697 round_trippers.go:469] Request Headers:
	I0912 21:58:56.675706   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:58:56.675710   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:58:56.679406   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:58:57.176164   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m03
	I0912 21:58:57.176186   25697 round_trippers.go:469] Request Headers:
	I0912 21:58:57.176195   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:58:57.176198   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:58:57.180244   25697 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0912 21:58:57.676163   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m03
	I0912 21:58:57.676187   25697 round_trippers.go:469] Request Headers:
	I0912 21:58:57.676194   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:58:57.676198   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:58:57.680168   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:58:58.176214   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m03
	I0912 21:58:58.176244   25697 round_trippers.go:469] Request Headers:
	I0912 21:58:58.176252   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:58:58.176255   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:58:58.179808   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:58:58.180370   25697 node_ready.go:53] node "ha-475401-m03" has status "Ready":"False"
	I0912 21:58:58.675737   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m03
	I0912 21:58:58.675760   25697 round_trippers.go:469] Request Headers:
	I0912 21:58:58.675769   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:58:58.675777   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:58:58.679027   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:58:59.175818   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m03
	I0912 21:58:59.175842   25697 round_trippers.go:469] Request Headers:
	I0912 21:58:59.175853   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:58:59.175858   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:58:59.179845   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:58:59.675886   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m03
	I0912 21:58:59.675910   25697 round_trippers.go:469] Request Headers:
	I0912 21:58:59.675918   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:58:59.675922   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:58:59.679576   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:59:00.176363   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m03
	I0912 21:59:00.176387   25697 round_trippers.go:469] Request Headers:
	I0912 21:59:00.176396   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:59:00.176400   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:59:00.180070   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:59:00.180791   25697 node_ready.go:53] node "ha-475401-m03" has status "Ready":"False"
	I0912 21:59:00.676010   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m03
	I0912 21:59:00.676033   25697 round_trippers.go:469] Request Headers:
	I0912 21:59:00.676041   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:59:00.676045   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:59:00.679249   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:59:01.175824   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m03
	I0912 21:59:01.175850   25697 round_trippers.go:469] Request Headers:
	I0912 21:59:01.175858   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:59:01.175863   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:59:01.179430   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:59:01.676207   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m03
	I0912 21:59:01.676230   25697 round_trippers.go:469] Request Headers:
	I0912 21:59:01.676236   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:59:01.676240   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:59:01.680352   25697 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0912 21:59:02.176034   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m03
	I0912 21:59:02.176068   25697 round_trippers.go:469] Request Headers:
	I0912 21:59:02.176079   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:59:02.176084   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:59:02.182766   25697 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0912 21:59:02.183891   25697 node_ready.go:53] node "ha-475401-m03" has status "Ready":"False"
	I0912 21:59:02.676131   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m03
	I0912 21:59:02.676155   25697 round_trippers.go:469] Request Headers:
	I0912 21:59:02.676167   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:59:02.676172   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:59:02.680118   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:59:02.680837   25697 node_ready.go:49] node "ha-475401-m03" has status "Ready":"True"
	I0912 21:59:02.680861   25697 node_ready.go:38] duration metric: took 18.005582322s for node "ha-475401-m03" to be "Ready" ...
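	The repeated GETs above are the readiness poll: minikube re-fetches /api/v1/nodes/ha-475401-m03 roughly every 500ms until the Ready condition reports True (about 18s in this run). A compact version of such a wait loop with client-go, pointed at the test's kubeconfig path; this is a hypothetical sketch (requires client-go in go.mod), not minikube's node_ready.go:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitNodeReady polls the node object until its Ready condition is True.
	func waitNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
			if err == nil {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			time.Sleep(500 * time.Millisecond) // matches the ~500ms poll interval in the log
		}
		return fmt.Errorf("node %s not Ready within %s", name, timeout)
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19616-5891/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		if err := waitNodeReady(cs, "ha-475401-m03", 6*time.Minute); err != nil {
			panic(err)
		}
		fmt.Println("ha-475401-m03 is Ready")
	}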
	I0912 21:59:02.680876   25697 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 21:59:02.680956   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods
	I0912 21:59:02.680969   25697 round_trippers.go:469] Request Headers:
	I0912 21:59:02.680980   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:59:02.680989   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:59:02.686922   25697 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0912 21:59:02.694423   25697 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-pzsv8" in "kube-system" namespace to be "Ready" ...
	I0912 21:59:02.694507   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-pzsv8
	I0912 21:59:02.694516   25697 round_trippers.go:469] Request Headers:
	I0912 21:59:02.694523   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:59:02.694526   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:59:02.697802   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:59:02.698501   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401
	I0912 21:59:02.698516   25697 round_trippers.go:469] Request Headers:
	I0912 21:59:02.698523   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:59:02.698526   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:59:02.701276   25697 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 21:59:02.701946   25697 pod_ready.go:93] pod "coredns-7c65d6cfc9-pzsv8" in "kube-system" namespace has status "Ready":"True"
	I0912 21:59:02.701969   25697 pod_ready.go:82] duration metric: took 7.516721ms for pod "coredns-7c65d6cfc9-pzsv8" in "kube-system" namespace to be "Ready" ...
	I0912 21:59:02.701982   25697 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-xhdj7" in "kube-system" namespace to be "Ready" ...
	I0912 21:59:02.702048   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-xhdj7
	I0912 21:59:02.702059   25697 round_trippers.go:469] Request Headers:
	I0912 21:59:02.702069   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:59:02.702077   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:59:02.704853   25697 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 21:59:02.705503   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401
	I0912 21:59:02.705520   25697 round_trippers.go:469] Request Headers:
	I0912 21:59:02.705527   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:59:02.705530   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:59:02.707979   25697 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 21:59:02.708387   25697 pod_ready.go:93] pod "coredns-7c65d6cfc9-xhdj7" in "kube-system" namespace has status "Ready":"True"
	I0912 21:59:02.708403   25697 pod_ready.go:82] duration metric: took 6.41346ms for pod "coredns-7c65d6cfc9-xhdj7" in "kube-system" namespace to be "Ready" ...
	I0912 21:59:02.708414   25697 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-475401" in "kube-system" namespace to be "Ready" ...
	I0912 21:59:02.708468   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods/etcd-ha-475401
	I0912 21:59:02.708477   25697 round_trippers.go:469] Request Headers:
	I0912 21:59:02.708487   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:59:02.708496   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:59:02.711155   25697 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 21:59:02.711915   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401
	I0912 21:59:02.711934   25697 round_trippers.go:469] Request Headers:
	I0912 21:59:02.711944   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:59:02.711951   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:59:02.715161   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:59:02.715731   25697 pod_ready.go:93] pod "etcd-ha-475401" in "kube-system" namespace has status "Ready":"True"
	I0912 21:59:02.715752   25697 pod_ready.go:82] duration metric: took 7.329765ms for pod "etcd-ha-475401" in "kube-system" namespace to be "Ready" ...
	I0912 21:59:02.715765   25697 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-475401-m02" in "kube-system" namespace to be "Ready" ...
	I0912 21:59:02.715842   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods/etcd-ha-475401-m02
	I0912 21:59:02.715854   25697 round_trippers.go:469] Request Headers:
	I0912 21:59:02.715864   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:59:02.715874   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:59:02.718893   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:59:02.719400   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:59:02.719415   25697 round_trippers.go:469] Request Headers:
	I0912 21:59:02.719422   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:59:02.719426   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:59:02.722428   25697 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 21:59:02.722853   25697 pod_ready.go:93] pod "etcd-ha-475401-m02" in "kube-system" namespace has status "Ready":"True"
	I0912 21:59:02.722869   25697 pod_ready.go:82] duration metric: took 7.097106ms for pod "etcd-ha-475401-m02" in "kube-system" namespace to be "Ready" ...
	I0912 21:59:02.722879   25697 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-475401-m03" in "kube-system" namespace to be "Ready" ...
	I0912 21:59:02.876254   25697 request.go:632] Waited for 153.314803ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods/etcd-ha-475401-m03
	I0912 21:59:02.876341   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods/etcd-ha-475401-m03
	I0912 21:59:02.876346   25697 round_trippers.go:469] Request Headers:
	I0912 21:59:02.876354   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:59:02.876361   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:59:02.883992   25697 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0912 21:59:03.076941   25697 request.go:632] Waited for 192.395637ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.203:8443/api/v1/nodes/ha-475401-m03
	I0912 21:59:03.077030   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m03
	I0912 21:59:03.077043   25697 round_trippers.go:469] Request Headers:
	I0912 21:59:03.077052   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:59:03.077060   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:59:03.081099   25697 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0912 21:59:03.081567   25697 pod_ready.go:93] pod "etcd-ha-475401-m03" in "kube-system" namespace has status "Ready":"True"
	I0912 21:59:03.081597   25697 pod_ready.go:82] duration metric: took 358.710237ms for pod "etcd-ha-475401-m03" in "kube-system" namespace to be "Ready" ...
	I0912 21:59:03.081630   25697 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-475401" in "kube-system" namespace to be "Ready" ...
	I0912 21:59:03.276994   25697 request.go:632] Waited for 195.296905ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-475401
	I0912 21:59:03.277081   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-475401
	I0912 21:59:03.277091   25697 round_trippers.go:469] Request Headers:
	I0912 21:59:03.277098   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:59:03.277103   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:59:03.280354   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:59:03.476350   25697 request.go:632] Waited for 195.302508ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.203:8443/api/v1/nodes/ha-475401
	I0912 21:59:03.476410   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401
	I0912 21:59:03.476417   25697 round_trippers.go:469] Request Headers:
	I0912 21:59:03.476424   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:59:03.476432   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:59:03.480094   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:59:03.480496   25697 pod_ready.go:93] pod "kube-apiserver-ha-475401" in "kube-system" namespace has status "Ready":"True"
	I0912 21:59:03.480516   25697 pod_ready.go:82] duration metric: took 398.879405ms for pod "kube-apiserver-ha-475401" in "kube-system" namespace to be "Ready" ...
	I0912 21:59:03.480526   25697 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-475401-m02" in "kube-system" namespace to be "Ready" ...
	I0912 21:59:03.676749   25697 request.go:632] Waited for 196.161829ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-475401-m02
	I0912 21:59:03.676829   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-475401-m02
	I0912 21:59:03.676835   25697 round_trippers.go:469] Request Headers:
	I0912 21:59:03.676842   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:59:03.676846   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:59:03.680709   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:59:03.876958   25697 request.go:632] Waited for 195.535486ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:59:03.877012   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:59:03.877023   25697 round_trippers.go:469] Request Headers:
	I0912 21:59:03.877035   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:59:03.877043   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:59:03.880284   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:59:03.880989   25697 pod_ready.go:93] pod "kube-apiserver-ha-475401-m02" in "kube-system" namespace has status "Ready":"True"
	I0912 21:59:03.881029   25697 pod_ready.go:82] duration metric: took 400.490543ms for pod "kube-apiserver-ha-475401-m02" in "kube-system" namespace to be "Ready" ...
	I0912 21:59:03.881048   25697 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-475401-m03" in "kube-system" namespace to be "Ready" ...
	I0912 21:59:04.077000   25697 request.go:632] Waited for 195.868605ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-475401-m03
	I0912 21:59:04.077079   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-475401-m03
	I0912 21:59:04.077088   25697 round_trippers.go:469] Request Headers:
	I0912 21:59:04.077098   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:59:04.077103   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:59:04.080433   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:59:04.276584   25697 request.go:632] Waited for 195.431475ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.203:8443/api/v1/nodes/ha-475401-m03
	I0912 21:59:04.276643   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m03
	I0912 21:59:04.276649   25697 round_trippers.go:469] Request Headers:
	I0912 21:59:04.276656   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:59:04.276660   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:59:04.280579   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:59:04.281147   25697 pod_ready.go:93] pod "kube-apiserver-ha-475401-m03" in "kube-system" namespace has status "Ready":"True"
	I0912 21:59:04.281165   25697 pod_ready.go:82] duration metric: took 400.103498ms for pod "kube-apiserver-ha-475401-m03" in "kube-system" namespace to be "Ready" ...
	I0912 21:59:04.281175   25697 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-475401" in "kube-system" namespace to be "Ready" ...
	I0912 21:59:04.476450   25697 request.go:632] Waited for 195.211156ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-475401
	I0912 21:59:04.476537   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-475401
	I0912 21:59:04.476542   25697 round_trippers.go:469] Request Headers:
	I0912 21:59:04.476552   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:59:04.476561   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:59:04.479975   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:59:04.677118   25697 request.go:632] Waited for 196.396416ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.203:8443/api/v1/nodes/ha-475401
	I0912 21:59:04.677188   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401
	I0912 21:59:04.677195   25697 round_trippers.go:469] Request Headers:
	I0912 21:59:04.677210   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:59:04.677219   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:59:04.681081   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:59:04.681749   25697 pod_ready.go:93] pod "kube-controller-manager-ha-475401" in "kube-system" namespace has status "Ready":"True"
	I0912 21:59:04.681769   25697 pod_ready.go:82] duration metric: took 400.585863ms for pod "kube-controller-manager-ha-475401" in "kube-system" namespace to be "Ready" ...
	I0912 21:59:04.681779   25697 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-475401-m02" in "kube-system" namespace to be "Ready" ...
	I0912 21:59:04.876939   25697 request.go:632] Waited for 195.094177ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-475401-m02
	I0912 21:59:04.877029   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-475401-m02
	I0912 21:59:04.877036   25697 round_trippers.go:469] Request Headers:
	I0912 21:59:04.877047   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:59:04.877052   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:59:04.881728   25697 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0912 21:59:05.076794   25697 request.go:632] Waited for 194.366008ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:59:05.076851   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:59:05.076858   25697 round_trippers.go:469] Request Headers:
	I0912 21:59:05.076865   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:59:05.076868   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:59:05.080228   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:59:05.080921   25697 pod_ready.go:93] pod "kube-controller-manager-ha-475401-m02" in "kube-system" namespace has status "Ready":"True"
	I0912 21:59:05.080941   25697 pod_ready.go:82] duration metric: took 399.152206ms for pod "kube-controller-manager-ha-475401-m02" in "kube-system" namespace to be "Ready" ...
	I0912 21:59:05.080950   25697 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-475401-m03" in "kube-system" namespace to be "Ready" ...
	I0912 21:59:05.277142   25697 request.go:632] Waited for 196.109144ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-475401-m03
	I0912 21:59:05.277204   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-475401-m03
	I0912 21:59:05.277211   25697 round_trippers.go:469] Request Headers:
	I0912 21:59:05.277220   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:59:05.277227   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:59:05.280732   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:59:05.476710   25697 request.go:632] Waited for 195.280166ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.203:8443/api/v1/nodes/ha-475401-m03
	I0912 21:59:05.476794   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m03
	I0912 21:59:05.476807   25697 round_trippers.go:469] Request Headers:
	I0912 21:59:05.476817   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:59:05.476822   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:59:05.480907   25697 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0912 21:59:05.481324   25697 pod_ready.go:93] pod "kube-controller-manager-ha-475401-m03" in "kube-system" namespace has status "Ready":"True"
	I0912 21:59:05.481339   25697 pod_ready.go:82] duration metric: took 400.382916ms for pod "kube-controller-manager-ha-475401-m03" in "kube-system" namespace to be "Ready" ...
	I0912 21:59:05.481350   25697 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4bk97" in "kube-system" namespace to be "Ready" ...
	I0912 21:59:05.676872   25697 request.go:632] Waited for 195.440769ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4bk97
	I0912 21:59:05.676939   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4bk97
	I0912 21:59:05.676944   25697 round_trippers.go:469] Request Headers:
	I0912 21:59:05.676952   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:59:05.676957   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:59:05.680652   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:59:05.876730   25697 request.go:632] Waited for 195.460613ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.203:8443/api/v1/nodes/ha-475401
	I0912 21:59:05.876786   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401
	I0912 21:59:05.876792   25697 round_trippers.go:469] Request Headers:
	I0912 21:59:05.876800   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:59:05.876805   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:59:05.881785   25697 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0912 21:59:05.882467   25697 pod_ready.go:93] pod "kube-proxy-4bk97" in "kube-system" namespace has status "Ready":"True"
	I0912 21:59:05.882485   25697 pod_ready.go:82] duration metric: took 401.124997ms for pod "kube-proxy-4bk97" in "kube-system" namespace to be "Ready" ...
	I0912 21:59:05.882494   25697 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5f8z5" in "kube-system" namespace to be "Ready" ...
	I0912 21:59:06.076701   25697 request.go:632] Waited for 194.127157ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5f8z5
	I0912 21:59:06.076754   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5f8z5
	I0912 21:59:06.076760   25697 round_trippers.go:469] Request Headers:
	I0912 21:59:06.076767   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:59:06.076773   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:59:06.080288   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:59:06.276558   25697 request.go:632] Waited for 195.363461ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.203:8443/api/v1/nodes/ha-475401-m03
	I0912 21:59:06.276613   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m03
	I0912 21:59:06.276619   25697 round_trippers.go:469] Request Headers:
	I0912 21:59:06.276626   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:59:06.276629   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:59:06.280083   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:59:06.280881   25697 pod_ready.go:93] pod "kube-proxy-5f8z5" in "kube-system" namespace has status "Ready":"True"
	I0912 21:59:06.280898   25697 pod_ready.go:82] duration metric: took 398.398398ms for pod "kube-proxy-5f8z5" in "kube-system" namespace to be "Ready" ...
	I0912 21:59:06.280911   25697 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-68h98" in "kube-system" namespace to be "Ready" ...
	I0912 21:59:06.476939   25697 request.go:632] Waited for 195.914135ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods/kube-proxy-68h98
	I0912 21:59:06.477007   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods/kube-proxy-68h98
	I0912 21:59:06.477054   25697 round_trippers.go:469] Request Headers:
	I0912 21:59:06.477075   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:59:06.477082   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:59:06.484776   25697 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0912 21:59:06.677078   25697 request.go:632] Waited for 191.25254ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:59:06.677159   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:59:06.677167   25697 round_trippers.go:469] Request Headers:
	I0912 21:59:06.677174   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:59:06.677181   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:59:06.680468   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:59:06.681141   25697 pod_ready.go:93] pod "kube-proxy-68h98" in "kube-system" namespace has status "Ready":"True"
	I0912 21:59:06.681159   25697 pod_ready.go:82] duration metric: took 400.242392ms for pod "kube-proxy-68h98" in "kube-system" namespace to be "Ready" ...
	I0912 21:59:06.681168   25697 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-475401" in "kube-system" namespace to be "Ready" ...
	I0912 21:59:06.876743   25697 request.go:632] Waited for 195.498455ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-475401
	I0912 21:59:06.876808   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-475401
	I0912 21:59:06.876815   25697 round_trippers.go:469] Request Headers:
	I0912 21:59:06.876826   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:59:06.876832   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:59:06.880459   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:59:07.076364   25697 request.go:632] Waited for 195.346788ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.203:8443/api/v1/nodes/ha-475401
	I0912 21:59:07.076454   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401
	I0912 21:59:07.076467   25697 round_trippers.go:469] Request Headers:
	I0912 21:59:07.076480   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:59:07.076493   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:59:07.080104   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:59:07.080763   25697 pod_ready.go:93] pod "kube-scheduler-ha-475401" in "kube-system" namespace has status "Ready":"True"
	I0912 21:59:07.080787   25697 pod_ready.go:82] duration metric: took 399.611316ms for pod "kube-scheduler-ha-475401" in "kube-system" namespace to be "Ready" ...
	I0912 21:59:07.080802   25697 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-475401-m02" in "kube-system" namespace to be "Ready" ...
	I0912 21:59:07.276802   25697 request.go:632] Waited for 195.91086ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-475401-m02
	I0912 21:59:07.276867   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-475401-m02
	I0912 21:59:07.276872   25697 round_trippers.go:469] Request Headers:
	I0912 21:59:07.276880   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:59:07.276884   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:59:07.280548   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:59:07.476526   25697 request.go:632] Waited for 195.363073ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:59:07.476584   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m02
	I0912 21:59:07.476591   25697 round_trippers.go:469] Request Headers:
	I0912 21:59:07.476600   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:59:07.476604   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:59:07.479767   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:59:07.480265   25697 pod_ready.go:93] pod "kube-scheduler-ha-475401-m02" in "kube-system" namespace has status "Ready":"True"
	I0912 21:59:07.480281   25697 pod_ready.go:82] duration metric: took 399.471583ms for pod "kube-scheduler-ha-475401-m02" in "kube-system" namespace to be "Ready" ...
	I0912 21:59:07.480291   25697 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-475401-m03" in "kube-system" namespace to be "Ready" ...
	I0912 21:59:07.676470   25697 request.go:632] Waited for 196.120749ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-475401-m03
	I0912 21:59:07.676538   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-475401-m03
	I0912 21:59:07.676544   25697 round_trippers.go:469] Request Headers:
	I0912 21:59:07.676551   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:59:07.676556   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:59:07.679917   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:59:07.877062   25697 request.go:632] Waited for 196.383558ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.203:8443/api/v1/nodes/ha-475401-m03
	I0912 21:59:07.877130   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes/ha-475401-m03
	I0912 21:59:07.877138   25697 round_trippers.go:469] Request Headers:
	I0912 21:59:07.877150   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:59:07.877159   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:59:07.880654   25697 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 21:59:07.881172   25697 pod_ready.go:93] pod "kube-scheduler-ha-475401-m03" in "kube-system" namespace has status "Ready":"True"
	I0912 21:59:07.881190   25697 pod_ready.go:82] duration metric: took 400.893675ms for pod "kube-scheduler-ha-475401-m03" in "kube-system" namespace to be "Ready" ...
	I0912 21:59:07.881202   25697 pod_ready.go:39] duration metric: took 5.20031508s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 21:59:07.881215   25697 api_server.go:52] waiting for apiserver process to appear ...
	I0912 21:59:07.881262   25697 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 21:59:07.900785   25697 api_server.go:72] duration metric: took 23.584524322s to wait for apiserver process to appear ...
	I0912 21:59:07.900817   25697 api_server.go:88] waiting for apiserver healthz status ...
	I0912 21:59:07.900840   25697 api_server.go:253] Checking apiserver healthz at https://192.168.39.203:8443/healthz ...
	I0912 21:59:07.907798   25697 api_server.go:279] https://192.168.39.203:8443/healthz returned 200:
	ok
	I0912 21:59:07.907875   25697 round_trippers.go:463] GET https://192.168.39.203:8443/version
	I0912 21:59:07.907884   25697 round_trippers.go:469] Request Headers:
	I0912 21:59:07.907896   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:59:07.907906   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:59:07.909010   25697 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0912 21:59:07.909071   25697 api_server.go:141] control plane version: v1.31.1
	I0912 21:59:07.909086   25697 api_server.go:131] duration metric: took 8.262894ms to wait for apiserver health ...
	I0912 21:59:07.909100   25697 system_pods.go:43] waiting for kube-system pods to appear ...
	I0912 21:59:08.076517   25697 request.go:632] Waited for 167.326131ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods
	I0912 21:59:08.076589   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods
	I0912 21:59:08.076606   25697 round_trippers.go:469] Request Headers:
	I0912 21:59:08.076618   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:59:08.076627   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:59:08.082348   25697 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0912 21:59:08.088554   25697 system_pods.go:59] 24 kube-system pods found
	I0912 21:59:08.088580   25697 system_pods.go:61] "coredns-7c65d6cfc9-pzsv8" [7acde6a5-dc08-4dda-89ef-07ed97df387e] Running
	I0912 21:59:08.088585   25697 system_pods.go:61] "coredns-7c65d6cfc9-xhdj7" [d964d6f0-d544-4cef-8151-08e5e1c76dce] Running
	I0912 21:59:08.088589   25697 system_pods.go:61] "etcd-ha-475401" [174b5dde-143c-4f15-abb4-2c8376d9c0aa] Running
	I0912 21:59:08.088592   25697 system_pods.go:61] "etcd-ha-475401-m02" [bac8cf55-1bf0-4696-9da2-3ca4c6bc9c54] Running
	I0912 21:59:08.088595   25697 system_pods.go:61] "etcd-ha-475401-m03" [8724e34b-d305-4597-bca2-c66fac3b4600] Running
	I0912 21:59:08.088598   25697 system_pods.go:61] "kindnet-bh5lg" [ee20dbb3-9e3e-4ad6-b3f2-1ec4523b46ca] Running
	I0912 21:59:08.088601   25697 system_pods.go:61] "kindnet-cbfm5" [e0f3daaf-250f-4614-bd8d-61e8fe544c1a] Running
	I0912 21:59:08.088605   25697 system_pods.go:61] "kindnet-k4q6l" [6a445756-2595-4d49-8aea-719cb0aa312c] Running
	I0912 21:59:08.088607   25697 system_pods.go:61] "kube-apiserver-ha-475401" [afb6df04-142d-4026-b4fb-2067bac9613d] Running
	I0912 21:59:08.088611   25697 system_pods.go:61] "kube-apiserver-ha-475401-m02" [ff70254a-357a-47d3-9733-3cded316a2b1] Running
	I0912 21:59:08.088613   25697 system_pods.go:61] "kube-apiserver-ha-475401-m03" [c5bb8141-1cf2-4c9d-9388-25ab86dcdb4f] Running
	I0912 21:59:08.088616   25697 system_pods.go:61] "kube-controller-manager-ha-475401" [bf286c1d-42de-4eb9-b235-30581692256b] Running
	I0912 21:59:08.088619   25697 system_pods.go:61] "kube-controller-manager-ha-475401-m02" [87d98823-b5aa-4c7e-835e-978465fec19d] Running
	I0912 21:59:08.088622   25697 system_pods.go:61] "kube-controller-manager-ha-475401-m03" [75509e84-31f0-4d4f-8fc9-17fa80060318] Running
	I0912 21:59:08.088625   25697 system_pods.go:61] "kube-proxy-4bk97" [a2af5486-4276-48a8-98ef-6fad7ae9976d] Running
	I0912 21:59:08.088628   25697 system_pods.go:61] "kube-proxy-5f8z5" [cbd76149-2de8-4f4b-9b54-b71cc0c60cab] Running
	I0912 21:59:08.088631   25697 system_pods.go:61] "kube-proxy-68h98" [f216ed62-cdc6-40e9-bb4d-e6962596eb3c] Running
	I0912 21:59:08.088636   25697 system_pods.go:61] "kube-scheduler-ha-475401" [3403b9e5-adb3-4028-aedd-1101d94a421c] Running
	I0912 21:59:08.088641   25697 system_pods.go:61] "kube-scheduler-ha-475401-m02" [fbe552c1-e8a7-4bb2-a1c9-c5d40f4ad77c] Running
	I0912 21:59:08.088644   25697 system_pods.go:61] "kube-scheduler-ha-475401-m03" [e9d051b7-cba8-4054-b17b-5e4fb66e2690] Running
	I0912 21:59:08.088647   25697 system_pods.go:61] "kube-vip-ha-475401" [775b4ded-905c-412e-9c92-5ce3ff148380] Running
	I0912 21:59:08.088652   25697 system_pods.go:61] "kube-vip-ha-475401-m02" [0f1626f2-f90c-4920-b726-b1d492c805d6] Running
	I0912 21:59:08.088655   25697 system_pods.go:61] "kube-vip-ha-475401-m03" [21ade4a0-8d41-4938-a0cf-19d917b591de] Running
	I0912 21:59:08.088660   25697 system_pods.go:61] "storage-provisioner" [7fc8738b-56e8-4024-afe7-b552c79dd3f2] Running
	I0912 21:59:08.088666   25697 system_pods.go:74] duration metric: took 179.557191ms to wait for pod list to return data ...
	I0912 21:59:08.088676   25697 default_sa.go:34] waiting for default service account to be created ...
	I0912 21:59:08.277093   25697 request.go:632] Waited for 188.347544ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.203:8443/api/v1/namespaces/default/serviceaccounts
	I0912 21:59:08.277147   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/namespaces/default/serviceaccounts
	I0912 21:59:08.277152   25697 round_trippers.go:469] Request Headers:
	I0912 21:59:08.277159   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:59:08.277164   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:59:08.281215   25697 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0912 21:59:08.281325   25697 default_sa.go:45] found service account: "default"
	I0912 21:59:08.281337   25697 default_sa.go:55] duration metric: took 192.654062ms for default service account to be created ...
	I0912 21:59:08.281345   25697 system_pods.go:116] waiting for k8s-apps to be running ...
	I0912 21:59:08.476798   25697 request.go:632] Waited for 195.373202ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods
	I0912 21:59:08.476849   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/namespaces/kube-system/pods
	I0912 21:59:08.476854   25697 round_trippers.go:469] Request Headers:
	I0912 21:59:08.476861   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:59:08.476865   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:59:08.486585   25697 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0912 21:59:08.493343   25697 system_pods.go:86] 24 kube-system pods found
	I0912 21:59:08.493375   25697 system_pods.go:89] "coredns-7c65d6cfc9-pzsv8" [7acde6a5-dc08-4dda-89ef-07ed97df387e] Running
	I0912 21:59:08.493381   25697 system_pods.go:89] "coredns-7c65d6cfc9-xhdj7" [d964d6f0-d544-4cef-8151-08e5e1c76dce] Running
	I0912 21:59:08.493385   25697 system_pods.go:89] "etcd-ha-475401" [174b5dde-143c-4f15-abb4-2c8376d9c0aa] Running
	I0912 21:59:08.493389   25697 system_pods.go:89] "etcd-ha-475401-m02" [bac8cf55-1bf0-4696-9da2-3ca4c6bc9c54] Running
	I0912 21:59:08.493392   25697 system_pods.go:89] "etcd-ha-475401-m03" [8724e34b-d305-4597-bca2-c66fac3b4600] Running
	I0912 21:59:08.493395   25697 system_pods.go:89] "kindnet-bh5lg" [ee20dbb3-9e3e-4ad6-b3f2-1ec4523b46ca] Running
	I0912 21:59:08.493399   25697 system_pods.go:89] "kindnet-cbfm5" [e0f3daaf-250f-4614-bd8d-61e8fe544c1a] Running
	I0912 21:59:08.493402   25697 system_pods.go:89] "kindnet-k4q6l" [6a445756-2595-4d49-8aea-719cb0aa312c] Running
	I0912 21:59:08.493405   25697 system_pods.go:89] "kube-apiserver-ha-475401" [afb6df04-142d-4026-b4fb-2067bac9613d] Running
	I0912 21:59:08.493409   25697 system_pods.go:89] "kube-apiserver-ha-475401-m02" [ff70254a-357a-47d3-9733-3cded316a2b1] Running
	I0912 21:59:08.493412   25697 system_pods.go:89] "kube-apiserver-ha-475401-m03" [c5bb8141-1cf2-4c9d-9388-25ab86dcdb4f] Running
	I0912 21:59:08.493416   25697 system_pods.go:89] "kube-controller-manager-ha-475401" [bf286c1d-42de-4eb9-b235-30581692256b] Running
	I0912 21:59:08.493420   25697 system_pods.go:89] "kube-controller-manager-ha-475401-m02" [87d98823-b5aa-4c7e-835e-978465fec19d] Running
	I0912 21:59:08.493423   25697 system_pods.go:89] "kube-controller-manager-ha-475401-m03" [75509e84-31f0-4d4f-8fc9-17fa80060318] Running
	I0912 21:59:08.493426   25697 system_pods.go:89] "kube-proxy-4bk97" [a2af5486-4276-48a8-98ef-6fad7ae9976d] Running
	I0912 21:59:08.493429   25697 system_pods.go:89] "kube-proxy-5f8z5" [cbd76149-2de8-4f4b-9b54-b71cc0c60cab] Running
	I0912 21:59:08.493435   25697 system_pods.go:89] "kube-proxy-68h98" [f216ed62-cdc6-40e9-bb4d-e6962596eb3c] Running
	I0912 21:59:08.493440   25697 system_pods.go:89] "kube-scheduler-ha-475401" [3403b9e5-adb3-4028-aedd-1101d94a421c] Running
	I0912 21:59:08.493443   25697 system_pods.go:89] "kube-scheduler-ha-475401-m02" [fbe552c1-e8a7-4bb2-a1c9-c5d40f4ad77c] Running
	I0912 21:59:08.493446   25697 system_pods.go:89] "kube-scheduler-ha-475401-m03" [e9d051b7-cba8-4054-b17b-5e4fb66e2690] Running
	I0912 21:59:08.493449   25697 system_pods.go:89] "kube-vip-ha-475401" [775b4ded-905c-412e-9c92-5ce3ff148380] Running
	I0912 21:59:08.493452   25697 system_pods.go:89] "kube-vip-ha-475401-m02" [0f1626f2-f90c-4920-b726-b1d492c805d6] Running
	I0912 21:59:08.493454   25697 system_pods.go:89] "kube-vip-ha-475401-m03" [21ade4a0-8d41-4938-a0cf-19d917b591de] Running
	I0912 21:59:08.493457   25697 system_pods.go:89] "storage-provisioner" [7fc8738b-56e8-4024-afe7-b552c79dd3f2] Running
	I0912 21:59:08.493464   25697 system_pods.go:126] duration metric: took 212.113521ms to wait for k8s-apps to be running ...
	I0912 21:59:08.493473   25697 system_svc.go:44] waiting for kubelet service to be running ....
	I0912 21:59:08.493523   25697 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 21:59:08.511997   25697 system_svc.go:56] duration metric: took 18.515662ms WaitForService to wait for kubelet
	I0912 21:59:08.512026   25697 kubeadm.go:582] duration metric: took 24.195769965s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 21:59:08.512052   25697 node_conditions.go:102] verifying NodePressure condition ...
	I0912 21:59:08.676468   25697 request.go:632] Waited for 164.35084ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.203:8443/api/v1/nodes
	I0912 21:59:08.676536   25697 round_trippers.go:463] GET https://192.168.39.203:8443/api/v1/nodes
	I0912 21:59:08.676557   25697 round_trippers.go:469] Request Headers:
	I0912 21:59:08.676572   25697 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 21:59:08.676579   25697 round_trippers.go:473]     Accept: application/json, */*
	I0912 21:59:08.680857   25697 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0912 21:59:08.682202   25697 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0912 21:59:08.682227   25697 node_conditions.go:123] node cpu capacity is 2
	I0912 21:59:08.682237   25697 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0912 21:59:08.682240   25697 node_conditions.go:123] node cpu capacity is 2
	I0912 21:59:08.682243   25697 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0912 21:59:08.682246   25697 node_conditions.go:123] node cpu capacity is 2
	I0912 21:59:08.682250   25697 node_conditions.go:105] duration metric: took 170.192806ms to run NodePressure ...
	I0912 21:59:08.682261   25697 start.go:241] waiting for startup goroutines ...
	I0912 21:59:08.682280   25697 start.go:255] writing updated cluster config ...
	I0912 21:59:08.682550   25697 ssh_runner.go:195] Run: rm -f paused
	I0912 21:59:08.733681   25697 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0912 21:59:08.736942   25697 out.go:177] * Done! kubectl is now configured to use "ha-475401" cluster and "default" namespace by default
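[editor's note] The trace above shows the standard minikube readiness loop: for each system-critical pod (coredns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler) it issues a GET for the pod and then for its node, and the `request.go:632 Waited ... due to client-side throttling` lines are client-go's default client-side rate limiter spacing those GETs out, not API Priority and Fairness. The sketch below is an illustrative approximation only, not minikube's actual pod_ready.go code; the kubeconfig path, namespace, and timeouts are assumptions chosen to mirror what the log reports.

```go
// Minimal sketch (assumptions noted above): poll kube-system pods until all
// report the Ready condition, roughly what the per-pod GETs in the trace do.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumed kubeconfig path for the example; minikube writes one per profile.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Poll until every kube-system pod is Ready, with a 6m cap like the
	// "waiting up to 6m0s" lines in the log above.
	err = wait.PollUntilContextTimeout(context.TODO(), 2*time.Second, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := client.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
			if err != nil {
				return false, nil // treat errors as transient and keep polling
			}
			for i := range pods.Items {
				if !podReady(&pods.Items[i]) {
					return false, nil
				}
			}
			return true, nil
		})
	fmt.Println("all kube-system pods ready:", err == nil)
}
```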
	
	
	==> CRI-O <==
	Sep 12 22:03:50 ha-475401 crio[656]: time="2024-09-12 22:03:50.525462447Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726178630525430279,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=62f50a0f-62eb-4599-8a38-3a951d5efcb2 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 22:03:50 ha-475401 crio[656]: time="2024-09-12 22:03:50.526179233Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9488202a-70e1-4e50-8dd3-d42fe99a5285 name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 22:03:50 ha-475401 crio[656]: time="2024-09-12 22:03:50.526311443Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9488202a-70e1-4e50-8dd3-d42fe99a5285 name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 22:03:50 ha-475401 crio[656]: time="2024-09-12 22:03:50.526821162Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:607e14e475ce353a0c9320c836a95978697f03e1195ee9311626f95f6748ce11,PodSandboxId:7fe4fd6a828e2ed0ea467efedd36329caff9bec0107156b6b5ad3e033d3d6ee2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726178353035924958,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-l2hdm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8ab651ae-e8a0-438a-8bf6-4462c8304466,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b36db608ba8cd77ee7893c00e7e8801981eb2c1fa6b48980fbc8a3dea7306e4,PodSandboxId:8b265e5bc94933908af2b3710bd8e4b4b8b5b8b26929977b5d1c91118fb80c39,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726178214407187415,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xhdj7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d964d6f0-d544-4cef-8151-08e5e1c76dce,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f56ac218b5509f77f667fc3bdb07a21ae743c376589c8833f500d1addfc99f73,PodSandboxId:2fdeb0043962218a23323f08bd2bce3402618bc908240f83e1f614c312ae6edd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726178214365699631,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-pzsv8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
7acde6a5-dc08-4dda-89ef-07ed97df387e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cb8597aada82577ac9a68667aa703860b73cd7a7d2608f2f1710afeea8755bc,PodSandboxId:66384e83c1a7ece3371a965ab3ba97a9715da38bb436ed7d556b4dfcb0e4c6fc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1726178213383885747,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fc8738b-56e8-4024-afe7-b552c79dd3f2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38d31aa5dc4105508066466c3ec1760275d6df1b5a41215ea8624bdecb7f44e8,PodSandboxId:ef4f45d37668b0d37bad9a63974b5000a180e5d1f5e3234d34691005d5d78c8e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17261782
01877218074,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cbfm5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f3daaf-250f-4614-bd8d-61e8fe544c1a,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0891cec467fda03cc10ec8bf4db216ce7cae379bd093917e008b90cc96d90c49,PodSandboxId:d58e93f3f447d46fb0688a7d4ee4eb52c19c0b36bde29b81c50d0a1c5e3d700b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726178201594663883,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4bk97,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2af5486-4276-48a8-98ef-6fad7ae9976d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9d65acd179a43f2673f87f9d146fe7e0cf6a8a26a4bf7c898a5ca3b30b2f939,PodSandboxId:b023c361d20d02f35081a9b9e5203352210f95fc28ab966cfc29bafeb1aaa513,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726178192961279069,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 352a7403576a810ca909a82e8b665d77,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df088d2d1a92a20915c4eb7c56ddd1b9b1567da26947b41d293391935823e69f,PodSandboxId:98ca9fd003ad441e2b5d9efc189c2704700ac511f3b30e63ae59bcbfb23c084c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726178190341555582,Labels:map[string]string{io.kubernetes.container.name: kub
e-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a77994c747e48492b9028f572619aa8,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4cfa11556cf34ac2b5bb874421c929c31a0f68b70515fa122f1c3acc67b601f4,PodSandboxId:aa3f11d134c2cbeca4f824ca6bc6a108e48bfaed54aa4e31af088ec691cb4038,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726178190304329774,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 980ac58ccfb719847553bfe344364a50,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5008665ceb8c09f53ef64d7621c9910a82d94cc7e8fb4c534ff1065d8b9dc1a9,PodSandboxId:e980e3980d971549e1c17972cb82f745cca7c01aad06c39efaf3dfb9b5ec0cd9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726178190273726647,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name:
etcd-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 456eb783a38fcb8ea8f7852ac4b9e481,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17a4293d12cac1604693dea12017381d2df6f0c1ced577d1d846d40e66520818,PodSandboxId:17b7717a92942308ddac497161435755ad7b877133e7375a315c4f572e019c47,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726178190295080607,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-475401,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc71727dab4c45bcae218296d690a83a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9488202a-70e1-4e50-8dd3-d42fe99a5285 name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 22:03:50 ha-475401 crio[656]: time="2024-09-12 22:03:50.563607389Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4a108981-7a5b-4ee4-bdef-e2c1da7137b5 name=/runtime.v1.RuntimeService/Version
	Sep 12 22:03:50 ha-475401 crio[656]: time="2024-09-12 22:03:50.563682223Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4a108981-7a5b-4ee4-bdef-e2c1da7137b5 name=/runtime.v1.RuntimeService/Version
	Sep 12 22:03:50 ha-475401 crio[656]: time="2024-09-12 22:03:50.564638749Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9cc405c1-e401-4843-9252-91020617f359 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 22:03:50 ha-475401 crio[656]: time="2024-09-12 22:03:50.565057000Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726178630565033948,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9cc405c1-e401-4843-9252-91020617f359 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 22:03:50 ha-475401 crio[656]: time="2024-09-12 22:03:50.565581598Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0e410955-e481-4e61-b8c8-9e939dc08cb4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 22:03:50 ha-475401 crio[656]: time="2024-09-12 22:03:50.565654594Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0e410955-e481-4e61-b8c8-9e939dc08cb4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 22:03:50 ha-475401 crio[656]: time="2024-09-12 22:03:50.565911971Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:607e14e475ce353a0c9320c836a95978697f03e1195ee9311626f95f6748ce11,PodSandboxId:7fe4fd6a828e2ed0ea467efedd36329caff9bec0107156b6b5ad3e033d3d6ee2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726178353035924958,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-l2hdm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8ab651ae-e8a0-438a-8bf6-4462c8304466,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b36db608ba8cd77ee7893c00e7e8801981eb2c1fa6b48980fbc8a3dea7306e4,PodSandboxId:8b265e5bc94933908af2b3710bd8e4b4b8b5b8b26929977b5d1c91118fb80c39,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726178214407187415,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xhdj7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d964d6f0-d544-4cef-8151-08e5e1c76dce,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f56ac218b5509f77f667fc3bdb07a21ae743c376589c8833f500d1addfc99f73,PodSandboxId:2fdeb0043962218a23323f08bd2bce3402618bc908240f83e1f614c312ae6edd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726178214365699631,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-pzsv8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
7acde6a5-dc08-4dda-89ef-07ed97df387e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cb8597aada82577ac9a68667aa703860b73cd7a7d2608f2f1710afeea8755bc,PodSandboxId:66384e83c1a7ece3371a965ab3ba97a9715da38bb436ed7d556b4dfcb0e4c6fc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1726178213383885747,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fc8738b-56e8-4024-afe7-b552c79dd3f2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38d31aa5dc4105508066466c3ec1760275d6df1b5a41215ea8624bdecb7f44e8,PodSandboxId:ef4f45d37668b0d37bad9a63974b5000a180e5d1f5e3234d34691005d5d78c8e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17261782
01877218074,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cbfm5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f3daaf-250f-4614-bd8d-61e8fe544c1a,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0891cec467fda03cc10ec8bf4db216ce7cae379bd093917e008b90cc96d90c49,PodSandboxId:d58e93f3f447d46fb0688a7d4ee4eb52c19c0b36bde29b81c50d0a1c5e3d700b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726178201594663883,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4bk97,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2af5486-4276-48a8-98ef-6fad7ae9976d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9d65acd179a43f2673f87f9d146fe7e0cf6a8a26a4bf7c898a5ca3b30b2f939,PodSandboxId:b023c361d20d02f35081a9b9e5203352210f95fc28ab966cfc29bafeb1aaa513,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726178192961279069,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 352a7403576a810ca909a82e8b665d77,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df088d2d1a92a20915c4eb7c56ddd1b9b1567da26947b41d293391935823e69f,PodSandboxId:98ca9fd003ad441e2b5d9efc189c2704700ac511f3b30e63ae59bcbfb23c084c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726178190341555582,Labels:map[string]string{io.kubernetes.container.name: kub
e-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a77994c747e48492b9028f572619aa8,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4cfa11556cf34ac2b5bb874421c929c31a0f68b70515fa122f1c3acc67b601f4,PodSandboxId:aa3f11d134c2cbeca4f824ca6bc6a108e48bfaed54aa4e31af088ec691cb4038,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726178190304329774,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 980ac58ccfb719847553bfe344364a50,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5008665ceb8c09f53ef64d7621c9910a82d94cc7e8fb4c534ff1065d8b9dc1a9,PodSandboxId:e980e3980d971549e1c17972cb82f745cca7c01aad06c39efaf3dfb9b5ec0cd9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726178190273726647,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name:
etcd-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 456eb783a38fcb8ea8f7852ac4b9e481,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17a4293d12cac1604693dea12017381d2df6f0c1ced577d1d846d40e66520818,PodSandboxId:17b7717a92942308ddac497161435755ad7b877133e7375a315c4f572e019c47,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726178190295080607,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-475401,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc71727dab4c45bcae218296d690a83a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0e410955-e481-4e61-b8c8-9e939dc08cb4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 22:03:50 ha-475401 crio[656]: time="2024-09-12 22:03:50.612885728Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f5b87c58-38cb-4fdb-b484-7778383bf251 name=/runtime.v1.RuntimeService/Version
	Sep 12 22:03:50 ha-475401 crio[656]: time="2024-09-12 22:03:50.612975181Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f5b87c58-38cb-4fdb-b484-7778383bf251 name=/runtime.v1.RuntimeService/Version
	Sep 12 22:03:50 ha-475401 crio[656]: time="2024-09-12 22:03:50.614945991Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b937612f-59ea-4506-80bb-167d2d97ebd7 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 22:03:50 ha-475401 crio[656]: time="2024-09-12 22:03:50.615602846Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726178630615575037,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b937612f-59ea-4506-80bb-167d2d97ebd7 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 22:03:50 ha-475401 crio[656]: time="2024-09-12 22:03:50.616188904Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=456776ce-6e42-42f2-b21c-23b63138c77c name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 22:03:50 ha-475401 crio[656]: time="2024-09-12 22:03:50.616261742Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=456776ce-6e42-42f2-b21c-23b63138c77c name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 22:03:50 ha-475401 crio[656]: time="2024-09-12 22:03:50.616511475Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:607e14e475ce353a0c9320c836a95978697f03e1195ee9311626f95f6748ce11,PodSandboxId:7fe4fd6a828e2ed0ea467efedd36329caff9bec0107156b6b5ad3e033d3d6ee2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726178353035924958,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-l2hdm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8ab651ae-e8a0-438a-8bf6-4462c8304466,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b36db608ba8cd77ee7893c00e7e8801981eb2c1fa6b48980fbc8a3dea7306e4,PodSandboxId:8b265e5bc94933908af2b3710bd8e4b4b8b5b8b26929977b5d1c91118fb80c39,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726178214407187415,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xhdj7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d964d6f0-d544-4cef-8151-08e5e1c76dce,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f56ac218b5509f77f667fc3bdb07a21ae743c376589c8833f500d1addfc99f73,PodSandboxId:2fdeb0043962218a23323f08bd2bce3402618bc908240f83e1f614c312ae6edd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726178214365699631,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-pzsv8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
7acde6a5-dc08-4dda-89ef-07ed97df387e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cb8597aada82577ac9a68667aa703860b73cd7a7d2608f2f1710afeea8755bc,PodSandboxId:66384e83c1a7ece3371a965ab3ba97a9715da38bb436ed7d556b4dfcb0e4c6fc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1726178213383885747,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fc8738b-56e8-4024-afe7-b552c79dd3f2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38d31aa5dc4105508066466c3ec1760275d6df1b5a41215ea8624bdecb7f44e8,PodSandboxId:ef4f45d37668b0d37bad9a63974b5000a180e5d1f5e3234d34691005d5d78c8e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17261782
01877218074,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cbfm5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f3daaf-250f-4614-bd8d-61e8fe544c1a,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0891cec467fda03cc10ec8bf4db216ce7cae379bd093917e008b90cc96d90c49,PodSandboxId:d58e93f3f447d46fb0688a7d4ee4eb52c19c0b36bde29b81c50d0a1c5e3d700b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726178201594663883,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4bk97,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2af5486-4276-48a8-98ef-6fad7ae9976d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9d65acd179a43f2673f87f9d146fe7e0cf6a8a26a4bf7c898a5ca3b30b2f939,PodSandboxId:b023c361d20d02f35081a9b9e5203352210f95fc28ab966cfc29bafeb1aaa513,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726178192961279069,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 352a7403576a810ca909a82e8b665d77,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df088d2d1a92a20915c4eb7c56ddd1b9b1567da26947b41d293391935823e69f,PodSandboxId:98ca9fd003ad441e2b5d9efc189c2704700ac511f3b30e63ae59bcbfb23c084c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726178190341555582,Labels:map[string]string{io.kubernetes.container.name: kub
e-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a77994c747e48492b9028f572619aa8,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4cfa11556cf34ac2b5bb874421c929c31a0f68b70515fa122f1c3acc67b601f4,PodSandboxId:aa3f11d134c2cbeca4f824ca6bc6a108e48bfaed54aa4e31af088ec691cb4038,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726178190304329774,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 980ac58ccfb719847553bfe344364a50,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5008665ceb8c09f53ef64d7621c9910a82d94cc7e8fb4c534ff1065d8b9dc1a9,PodSandboxId:e980e3980d971549e1c17972cb82f745cca7c01aad06c39efaf3dfb9b5ec0cd9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726178190273726647,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name:
etcd-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 456eb783a38fcb8ea8f7852ac4b9e481,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17a4293d12cac1604693dea12017381d2df6f0c1ced577d1d846d40e66520818,PodSandboxId:17b7717a92942308ddac497161435755ad7b877133e7375a315c4f572e019c47,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726178190295080607,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-475401,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc71727dab4c45bcae218296d690a83a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=456776ce-6e42-42f2-b21c-23b63138c77c name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 22:03:50 ha-475401 crio[656]: time="2024-09-12 22:03:50.656367035Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=86122f7c-91db-4a09-b0c1-0a3b2d43ce2b name=/runtime.v1.RuntimeService/Version
	Sep 12 22:03:50 ha-475401 crio[656]: time="2024-09-12 22:03:50.656445244Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=86122f7c-91db-4a09-b0c1-0a3b2d43ce2b name=/runtime.v1.RuntimeService/Version
	Sep 12 22:03:50 ha-475401 crio[656]: time="2024-09-12 22:03:50.657738026Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e50f1a89-f92a-43b7-93ab-ddd5183a29ef name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 22:03:50 ha-475401 crio[656]: time="2024-09-12 22:03:50.658406466Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726178630658377781,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e50f1a89-f92a-43b7-93ab-ddd5183a29ef name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 22:03:50 ha-475401 crio[656]: time="2024-09-12 22:03:50.658970359Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4029943c-90c9-4491-a56e-134402b1687f name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 22:03:50 ha-475401 crio[656]: time="2024-09-12 22:03:50.659018997Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4029943c-90c9-4491-a56e-134402b1687f name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 22:03:50 ha-475401 crio[656]: time="2024-09-12 22:03:50.659309593Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:607e14e475ce353a0c9320c836a95978697f03e1195ee9311626f95f6748ce11,PodSandboxId:7fe4fd6a828e2ed0ea467efedd36329caff9bec0107156b6b5ad3e033d3d6ee2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726178353035924958,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-l2hdm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8ab651ae-e8a0-438a-8bf6-4462c8304466,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b36db608ba8cd77ee7893c00e7e8801981eb2c1fa6b48980fbc8a3dea7306e4,PodSandboxId:8b265e5bc94933908af2b3710bd8e4b4b8b5b8b26929977b5d1c91118fb80c39,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726178214407187415,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xhdj7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d964d6f0-d544-4cef-8151-08e5e1c76dce,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f56ac218b5509f77f667fc3bdb07a21ae743c376589c8833f500d1addfc99f73,PodSandboxId:2fdeb0043962218a23323f08bd2bce3402618bc908240f83e1f614c312ae6edd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726178214365699631,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-pzsv8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
7acde6a5-dc08-4dda-89ef-07ed97df387e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cb8597aada82577ac9a68667aa703860b73cd7a7d2608f2f1710afeea8755bc,PodSandboxId:66384e83c1a7ece3371a965ab3ba97a9715da38bb436ed7d556b4dfcb0e4c6fc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1726178213383885747,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fc8738b-56e8-4024-afe7-b552c79dd3f2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38d31aa5dc4105508066466c3ec1760275d6df1b5a41215ea8624bdecb7f44e8,PodSandboxId:ef4f45d37668b0d37bad9a63974b5000a180e5d1f5e3234d34691005d5d78c8e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17261782
01877218074,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cbfm5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f3daaf-250f-4614-bd8d-61e8fe544c1a,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0891cec467fda03cc10ec8bf4db216ce7cae379bd093917e008b90cc96d90c49,PodSandboxId:d58e93f3f447d46fb0688a7d4ee4eb52c19c0b36bde29b81c50d0a1c5e3d700b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726178201594663883,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4bk97,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2af5486-4276-48a8-98ef-6fad7ae9976d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9d65acd179a43f2673f87f9d146fe7e0cf6a8a26a4bf7c898a5ca3b30b2f939,PodSandboxId:b023c361d20d02f35081a9b9e5203352210f95fc28ab966cfc29bafeb1aaa513,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726178192961279069,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 352a7403576a810ca909a82e8b665d77,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df088d2d1a92a20915c4eb7c56ddd1b9b1567da26947b41d293391935823e69f,PodSandboxId:98ca9fd003ad441e2b5d9efc189c2704700ac511f3b30e63ae59bcbfb23c084c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726178190341555582,Labels:map[string]string{io.kubernetes.container.name: kub
e-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a77994c747e48492b9028f572619aa8,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4cfa11556cf34ac2b5bb874421c929c31a0f68b70515fa122f1c3acc67b601f4,PodSandboxId:aa3f11d134c2cbeca4f824ca6bc6a108e48bfaed54aa4e31af088ec691cb4038,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726178190304329774,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 980ac58ccfb719847553bfe344364a50,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5008665ceb8c09f53ef64d7621c9910a82d94cc7e8fb4c534ff1065d8b9dc1a9,PodSandboxId:e980e3980d971549e1c17972cb82f745cca7c01aad06c39efaf3dfb9b5ec0cd9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726178190273726647,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name:
etcd-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 456eb783a38fcb8ea8f7852ac4b9e481,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17a4293d12cac1604693dea12017381d2df6f0c1ced577d1d846d40e66520818,PodSandboxId:17b7717a92942308ddac497161435755ad7b877133e7375a315c4f572e019c47,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726178190295080607,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-475401,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc71727dab4c45bcae218296d690a83a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4029943c-90c9-4491-a56e-134402b1687f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	607e14e475ce3       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   0                   7fe4fd6a828e2       busybox-7dff88458-l2hdm
	9b36db608ba8c       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   8b265e5bc9493       coredns-7c65d6cfc9-xhdj7
	f56ac218b5509       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   2fdeb00439622       coredns-7c65d6cfc9-pzsv8
	7cb8597aada82       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   66384e83c1a7e       storage-provisioner
	38d31aa5dc410       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      7 minutes ago       Running             kindnet-cni               0                   ef4f45d37668b       kindnet-cbfm5
	0891cec467fda       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      7 minutes ago       Running             kube-proxy                0                   d58e93f3f447d       kube-proxy-4bk97
	e9d65acd179a4       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     7 minutes ago       Running             kube-vip                  0                   b023c361d20d0       kube-vip-ha-475401
	df088d2d1a92a       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      7 minutes ago       Running             kube-apiserver            0                   98ca9fd003ad4       kube-apiserver-ha-475401
	4cfa11556cf34       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      7 minutes ago       Running             kube-controller-manager   0                   aa3f11d134c2c       kube-controller-manager-ha-475401
	17a4293d12cac       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      7 minutes ago       Running             kube-scheduler            0                   17b7717a92942       kube-scheduler-ha-475401
	5008665ceb8c0       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      7 minutes ago       Running             etcd                      0                   e980e3980d971       etcd-ha-475401
	
	
	==> coredns [9b36db608ba8cd77ee7893c00e7e8801981eb2c1fa6b48980fbc8a3dea7306e4] <==
	[INFO] 10.244.1.2:38411 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001653266s
	[INFO] 10.244.3.2:56375 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004343685s
	[INFO] 10.244.3.2:54377 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000172651s
	[INFO] 10.244.3.2:43180 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000159789s
	[INFO] 10.244.0.4:37709 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00025493s
	[INFO] 10.244.0.4:58355 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001670657s
	[INFO] 10.244.0.4:38422 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000110468s
	[INFO] 10.244.1.2:46631 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000172109s
	[INFO] 10.244.1.2:34300 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000148188s
	[INFO] 10.244.1.2:48603 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001490904s
	[INFO] 10.244.1.2:53797 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000095174s
	[INFO] 10.244.3.2:58169 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000290075s
	[INFO] 10.244.3.2:32925 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000114361s
	[INFO] 10.244.0.4:36730 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135132s
	[INFO] 10.244.0.4:34478 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000076546s
	[INFO] 10.244.1.2:55703 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000157241s
	[INFO] 10.244.1.2:60121 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000228732s
	[INFO] 10.244.1.2:38242 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000131949s
	[INFO] 10.244.3.2:38185 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132157s
	[INFO] 10.244.3.2:36830 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000264113s
	[INFO] 10.244.3.2:49645 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000196302s
	[INFO] 10.244.0.4:60935 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000119291s
	[INFO] 10.244.1.2:60943 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000082071s
	[INFO] 10.244.1.2:49207 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00009839s
	[INFO] 10.244.1.2:41020 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000060198s
	
	
	==> coredns [f56ac218b5509f77f667fc3bdb07a21ae743c376589c8833f500d1addfc99f73] <==
	[INFO] 10.244.1.2:46592 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000089614s
	[INFO] 10.244.3.2:46869 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000163193s
	[INFO] 10.244.3.2:43702 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000341814s
	[INFO] 10.244.3.2:48838 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.007572196s
	[INFO] 10.244.3.2:58405 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000145303s
	[INFO] 10.244.3.2:57228 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000229422s
	[INFO] 10.244.0.4:42574 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00013812s
	[INFO] 10.244.0.4:39901 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001988121s
	[INFO] 10.244.0.4:50914 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00026063s
	[INFO] 10.244.0.4:38018 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000084673s
	[INFO] 10.244.0.4:49421 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000097844s
	[INFO] 10.244.1.2:35174 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000112144s
	[INFO] 10.244.1.2:45641 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001742655s
	[INFO] 10.244.1.2:42943 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000126184s
	[INFO] 10.244.1.2:48539 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000090774s
	[INFO] 10.244.3.2:42645 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115681s
	[INFO] 10.244.3.2:42854 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000129882s
	[INFO] 10.244.0.4:47863 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000135193s
	[INFO] 10.244.0.4:54893 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000107279s
	[INFO] 10.244.1.2:50095 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000200409s
	[INFO] 10.244.3.2:36127 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000178104s
	[INFO] 10.244.0.4:56439 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000119423s
	[INFO] 10.244.0.4:57332 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000122479s
	[INFO] 10.244.0.4:54257 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000113812s
	[INFO] 10.244.1.2:47781 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000122756s
	
	
	==> describe nodes <==
	Name:               ha-475401
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-475401
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f6bc674a17941874d4e5b792b09c1791d30622b8
	                    minikube.k8s.io/name=ha-475401
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_12T21_56_37_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 12 Sep 2024 21:56:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-475401
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 12 Sep 2024 22:03:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 12 Sep 2024 21:59:40 +0000   Thu, 12 Sep 2024 21:56:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 12 Sep 2024 21:59:40 +0000   Thu, 12 Sep 2024 21:56:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 12 Sep 2024 21:59:40 +0000   Thu, 12 Sep 2024 21:56:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 12 Sep 2024 21:59:40 +0000   Thu, 12 Sep 2024 21:56:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.203
	  Hostname:    ha-475401
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a21f28b923154b09a761fb2715e95e75
	  System UUID:                a21f28b9-2315-4b09-a761-fb2715e95e75
	  Boot ID:                    719d19bb-1949-4b62-be49-e032ba422c36
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-l2hdm              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m41s
	  kube-system                 coredns-7c65d6cfc9-pzsv8             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m9s
	  kube-system                 coredns-7c65d6cfc9-xhdj7             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m9s
	  kube-system                 etcd-ha-475401                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m16s
	  kube-system                 kindnet-cbfm5                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m9s
	  kube-system                 kube-apiserver-ha-475401             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m14s
	  kube-system                 kube-controller-manager-ha-475401    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m14s
	  kube-system                 kube-proxy-4bk97                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m9s
	  kube-system                 kube-scheduler-ha-475401             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m14s
	  kube-system                 kube-vip-ha-475401                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m16s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m8s                   kube-proxy       
	  Normal  NodeHasSufficientPID     7m21s (x3 over 7m21s)  kubelet          Node ha-475401 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 7m21s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m21s (x4 over 7m21s)  kubelet          Node ha-475401 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m21s (x3 over 7m21s)  kubelet          Node ha-475401 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 7m14s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m14s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m14s                  kubelet          Node ha-475401 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m14s                  kubelet          Node ha-475401 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m14s                  kubelet          Node ha-475401 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m10s                  node-controller  Node ha-475401 event: Registered Node ha-475401 in Controller
	  Normal  NodeReady                6m58s                  kubelet          Node ha-475401 status is now: NodeReady
	  Normal  RegisteredNode           6m16s                  node-controller  Node ha-475401 event: Registered Node ha-475401 in Controller
	  Normal  RegisteredNode           5m1s                   node-controller  Node ha-475401 event: Registered Node ha-475401 in Controller
	
	
	Name:               ha-475401-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-475401-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f6bc674a17941874d4e5b792b09c1791d30622b8
	                    minikube.k8s.io/name=ha-475401
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_12T21_57_29_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 12 Sep 2024 21:57:26 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-475401-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 12 Sep 2024 22:00:20 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Thu, 12 Sep 2024 21:59:29 +0000   Thu, 12 Sep 2024 22:01:00 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Thu, 12 Sep 2024 21:59:29 +0000   Thu, 12 Sep 2024 22:01:00 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Thu, 12 Sep 2024 21:59:29 +0000   Thu, 12 Sep 2024 22:01:00 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Thu, 12 Sep 2024 21:59:29 +0000   Thu, 12 Sep 2024 22:01:00 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.222
	  Hostname:    ha-475401-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 5e177a4c02d5494a80aacc759f5d8434
	  System UUID:                5e177a4c-02d5-494a-80aa-cc759f5d8434
	  Boot ID:                    f35a4238-f901-4ec4-9e96-2614c319a75c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-t7gjx                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m41s
	  kube-system                 etcd-ha-475401-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m22s
	  kube-system                 kindnet-k4q6l                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m24s
	  kube-system                 kube-apiserver-ha-475401-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m23s
	  kube-system                 kube-controller-manager-ha-475401-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m18s
	  kube-system                 kube-proxy-68h98                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m24s
	  kube-system                 kube-scheduler-ha-475401-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m22s
	  kube-system                 kube-vip-ha-475401-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m20s                  kube-proxy       
	  Normal  CIDRAssignmentFailed     6m24s                  cidrAllocator    Node ha-475401-m02 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientMemory  6m24s (x8 over 6m24s)  kubelet          Node ha-475401-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m24s (x8 over 6m24s)  kubelet          Node ha-475401-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m24s (x7 over 6m24s)  kubelet          Node ha-475401-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m20s                  node-controller  Node ha-475401-m02 event: Registered Node ha-475401-m02 in Controller
	  Normal  RegisteredNode           6m16s                  node-controller  Node ha-475401-m02 event: Registered Node ha-475401-m02 in Controller
	  Normal  RegisteredNode           5m1s                   node-controller  Node ha-475401-m02 event: Registered Node ha-475401-m02 in Controller
	  Normal  NodeNotReady             2m50s                  node-controller  Node ha-475401-m02 status is now: NodeNotReady
	
	
	Name:               ha-475401-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-475401-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f6bc674a17941874d4e5b792b09c1791d30622b8
	                    minikube.k8s.io/name=ha-475401
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_12T21_58_44_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 12 Sep 2024 21:58:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-475401-m03
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 12 Sep 2024 22:03:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 12 Sep 2024 21:59:42 +0000   Thu, 12 Sep 2024 21:58:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 12 Sep 2024 21:59:42 +0000   Thu, 12 Sep 2024 21:58:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 12 Sep 2024 21:59:42 +0000   Thu, 12 Sep 2024 21:58:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 12 Sep 2024 21:59:42 +0000   Thu, 12 Sep 2024 21:59:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.113
	  Hostname:    ha-475401-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 28cd0b17595342b5a867ee3ae4e5e5f6
	  System UUID:                28cd0b17-5953-42b5-a867-ee3ae4e5e5f6
	  Boot ID:                    91d84a4f-cdff-4c08-9b34-e4ce726e8b2c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-gb2hg                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m42s
	  kube-system                 etcd-ha-475401-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m9s
	  kube-system                 kindnet-bh5lg                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m11s
	  kube-system                 kube-apiserver-ha-475401-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m9s
	  kube-system                 kube-controller-manager-ha-475401-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m9s
	  kube-system                 kube-proxy-5f8z5                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m11s
	  kube-system                 kube-scheduler-ha-475401-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m4s
	  kube-system                 kube-vip-ha-475401-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m5s                   kube-proxy       
	  Normal  CIDRAssignmentFailed     5m11s                  cidrAllocator    Node ha-475401-m03 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientMemory  5m11s (x8 over 5m11s)  kubelet          Node ha-475401-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m11s (x8 over 5m11s)  kubelet          Node ha-475401-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m11s (x7 over 5m11s)  kubelet          Node ha-475401-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m11s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m7s                   node-controller  Node ha-475401-m03 event: Registered Node ha-475401-m03 in Controller
	  Normal  RegisteredNode           5m6s                   node-controller  Node ha-475401-m03 event: Registered Node ha-475401-m03 in Controller
	  Normal  RegisteredNode           5m2s                   node-controller  Node ha-475401-m03 event: Registered Node ha-475401-m03 in Controller
	
	
	Name:               ha-475401-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-475401-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f6bc674a17941874d4e5b792b09c1791d30622b8
	                    minikube.k8s.io/name=ha-475401
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_12T21_59_45_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 12 Sep 2024 21:59:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-475401-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 12 Sep 2024 22:03:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 12 Sep 2024 22:00:15 +0000   Thu, 12 Sep 2024 21:59:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 12 Sep 2024 22:00:15 +0000   Thu, 12 Sep 2024 21:59:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 12 Sep 2024 22:00:15 +0000   Thu, 12 Sep 2024 21:59:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 12 Sep 2024 22:00:15 +0000   Thu, 12 Sep 2024 22:00:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.76
	  Hostname:    ha-475401-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9864edb6a0d14b6abd1a66cf5ac88479
	  System UUID:                9864edb6-a0d1-4b6a-bd1a-66cf5ac88479
	  Boot ID:                    75fc7899-e81c-48a9-bb6d-88d5b2ac6d2d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.4.0/24
	PodCIDRs:                     10.244.4.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-2bvcz       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m6s
	  kube-system                 kube-proxy-bmv9m    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m                   kube-proxy       
	  Normal  CIDRAssignmentFailed     4m6s                 cidrAllocator    Node ha-475401-m04 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientMemory  4m6s (x2 over 4m6s)  kubelet          Node ha-475401-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m6s (x2 over 4m6s)  kubelet          Node ha-475401-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m6s (x2 over 4m6s)  kubelet          Node ha-475401-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m6s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m2s                 node-controller  Node ha-475401-m04 event: Registered Node ha-475401-m04 in Controller
	  Normal  RegisteredNode           4m2s                 node-controller  Node ha-475401-m04 event: Registered Node ha-475401-m04 in Controller
	  Normal  RegisteredNode           4m1s                 node-controller  Node ha-475401-m04 event: Registered Node ha-475401-m04 in Controller
	  Normal  NodeReady                3m46s                kubelet          Node ha-475401-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Sep12 21:55] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051358] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038808] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Sep12 21:56] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.929148] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +4.546825] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.020585] systemd-fstab-generator[579]: Ignoring "noauto" option for root device
	[  +0.056709] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063471] systemd-fstab-generator[591]: Ignoring "noauto" option for root device
	[  +0.182960] systemd-fstab-generator[605]: Ignoring "noauto" option for root device
	[  +0.109592] systemd-fstab-generator[617]: Ignoring "noauto" option for root device
	[  +0.292147] systemd-fstab-generator[647]: Ignoring "noauto" option for root device
	[  +3.769780] systemd-fstab-generator[744]: Ignoring "noauto" option for root device
	[  +5.095538] systemd-fstab-generator[881]: Ignoring "noauto" option for root device
	[  +0.058539] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.038747] systemd-fstab-generator[1299]: Ignoring "noauto" option for root device
	[  +0.092804] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.235155] kauditd_printk_skb: 21 callbacks suppressed
	[ +11.799100] kauditd_printk_skb: 38 callbacks suppressed
	[Sep12 21:57] kauditd_printk_skb: 28 callbacks suppressed
	
	
	==> etcd [5008665ceb8c09f53ef64d7621c9910a82d94cc7e8fb4c534ff1065d8b9dc1a9] <==
	{"level":"warn","ts":"2024-09-12T22:03:50.587240Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"28dd8e6bbca035f5","from":"28dd8e6bbca035f5","remote-peer-id":"9e92fe3b0574f1dd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-12T22:03:50.591576Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"28dd8e6bbca035f5","from":"28dd8e6bbca035f5","remote-peer-id":"9e92fe3b0574f1dd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-12T22:03:50.641681Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"28dd8e6bbca035f5","from":"28dd8e6bbca035f5","remote-peer-id":"9e92fe3b0574f1dd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-12T22:03:50.742434Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"28dd8e6bbca035f5","from":"28dd8e6bbca035f5","remote-peer-id":"9e92fe3b0574f1dd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-12T22:03:50.743254Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"28dd8e6bbca035f5","from":"28dd8e6bbca035f5","remote-peer-id":"9e92fe3b0574f1dd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-12T22:03:50.942008Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"28dd8e6bbca035f5","from":"28dd8e6bbca035f5","remote-peer-id":"9e92fe3b0574f1dd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-12T22:03:50.945459Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"28dd8e6bbca035f5","from":"28dd8e6bbca035f5","remote-peer-id":"9e92fe3b0574f1dd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-12T22:03:50.953732Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"28dd8e6bbca035f5","from":"28dd8e6bbca035f5","remote-peer-id":"9e92fe3b0574f1dd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-12T22:03:50.960243Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"28dd8e6bbca035f5","from":"28dd8e6bbca035f5","remote-peer-id":"9e92fe3b0574f1dd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-12T22:03:50.963775Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"28dd8e6bbca035f5","from":"28dd8e6bbca035f5","remote-peer-id":"9e92fe3b0574f1dd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-12T22:03:50.967676Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"28dd8e6bbca035f5","from":"28dd8e6bbca035f5","remote-peer-id":"9e92fe3b0574f1dd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-12T22:03:50.978816Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"28dd8e6bbca035f5","from":"28dd8e6bbca035f5","remote-peer-id":"9e92fe3b0574f1dd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-12T22:03:50.985152Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"28dd8e6bbca035f5","from":"28dd8e6bbca035f5","remote-peer-id":"9e92fe3b0574f1dd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-12T22:03:50.990758Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"28dd8e6bbca035f5","from":"28dd8e6bbca035f5","remote-peer-id":"9e92fe3b0574f1dd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-12T22:03:50.995070Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"28dd8e6bbca035f5","from":"28dd8e6bbca035f5","remote-peer-id":"9e92fe3b0574f1dd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-12T22:03:50.999021Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"28dd8e6bbca035f5","from":"28dd8e6bbca035f5","remote-peer-id":"9e92fe3b0574f1dd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-12T22:03:51.005224Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"28dd8e6bbca035f5","from":"28dd8e6bbca035f5","remote-peer-id":"9e92fe3b0574f1dd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-12T22:03:51.010636Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"28dd8e6bbca035f5","from":"28dd8e6bbca035f5","remote-peer-id":"9e92fe3b0574f1dd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-12T22:03:51.015736Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"28dd8e6bbca035f5","from":"28dd8e6bbca035f5","remote-peer-id":"9e92fe3b0574f1dd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-12T22:03:51.018948Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"28dd8e6bbca035f5","from":"28dd8e6bbca035f5","remote-peer-id":"9e92fe3b0574f1dd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-12T22:03:51.021848Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"28dd8e6bbca035f5","from":"28dd8e6bbca035f5","remote-peer-id":"9e92fe3b0574f1dd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-12T22:03:51.024922Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"28dd8e6bbca035f5","from":"28dd8e6bbca035f5","remote-peer-id":"9e92fe3b0574f1dd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-12T22:03:51.030685Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"28dd8e6bbca035f5","from":"28dd8e6bbca035f5","remote-peer-id":"9e92fe3b0574f1dd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-12T22:03:51.036898Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"28dd8e6bbca035f5","from":"28dd8e6bbca035f5","remote-peer-id":"9e92fe3b0574f1dd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-12T22:03:51.041881Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"28dd8e6bbca035f5","from":"28dd8e6bbca035f5","remote-peer-id":"9e92fe3b0574f1dd","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 22:03:51 up 7 min,  0 users,  load average: 0.20, 0.29, 0.15
	Linux ha-475401 5.10.207 #1 SMP Thu Sep 12 19:03:33 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [38d31aa5dc4105508066466c3ec1760275d6df1b5a41215ea8624bdecb7f44e8] <==
	I0912 22:03:12.854830       1 main.go:322] Node ha-475401-m04 has CIDR [10.244.4.0/24] 
	I0912 22:03:22.854521       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0912 22:03:22.854561       1 main.go:299] handling current node
	I0912 22:03:22.854575       1 main.go:295] Handling node with IPs: map[192.168.39.222:{}]
	I0912 22:03:22.854580       1 main.go:322] Node ha-475401-m02 has CIDR [10.244.1.0/24] 
	I0912 22:03:22.854745       1 main.go:295] Handling node with IPs: map[192.168.39.113:{}]
	I0912 22:03:22.854764       1 main.go:322] Node ha-475401-m03 has CIDR [10.244.3.0/24] 
	I0912 22:03:22.854818       1 main.go:295] Handling node with IPs: map[192.168.39.76:{}]
	I0912 22:03:22.854833       1 main.go:322] Node ha-475401-m04 has CIDR [10.244.4.0/24] 
	I0912 22:03:32.853741       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0912 22:03:32.853832       1 main.go:299] handling current node
	I0912 22:03:32.853847       1 main.go:295] Handling node with IPs: map[192.168.39.222:{}]
	I0912 22:03:32.853876       1 main.go:322] Node ha-475401-m02 has CIDR [10.244.1.0/24] 
	I0912 22:03:32.854167       1 main.go:295] Handling node with IPs: map[192.168.39.113:{}]
	I0912 22:03:32.854188       1 main.go:322] Node ha-475401-m03 has CIDR [10.244.3.0/24] 
	I0912 22:03:32.854293       1 main.go:295] Handling node with IPs: map[192.168.39.76:{}]
	I0912 22:03:32.854346       1 main.go:322] Node ha-475401-m04 has CIDR [10.244.4.0/24] 
	I0912 22:03:42.853977       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0912 22:03:42.854022       1 main.go:299] handling current node
	I0912 22:03:42.854036       1 main.go:295] Handling node with IPs: map[192.168.39.222:{}]
	I0912 22:03:42.854041       1 main.go:322] Node ha-475401-m02 has CIDR [10.244.1.0/24] 
	I0912 22:03:42.854211       1 main.go:295] Handling node with IPs: map[192.168.39.113:{}]
	I0912 22:03:42.854233       1 main.go:322] Node ha-475401-m03 has CIDR [10.244.3.0/24] 
	I0912 22:03:42.854297       1 main.go:295] Handling node with IPs: map[192.168.39.76:{}]
	I0912 22:03:42.854327       1 main.go:322] Node ha-475401-m04 has CIDR [10.244.4.0/24] 
	
	
	==> kube-apiserver [df088d2d1a92a20915c4eb7c56ddd1b9b1567da26947b41d293391935823e69f] <==
	W0912 21:56:35.357461       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.203]
	I0912 21:56:35.359440       1 controller.go:615] quota admission added evaluator for: endpoints
	I0912 21:56:35.365339       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0912 21:56:35.381525       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0912 21:56:36.555399       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0912 21:56:36.573443       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0912 21:56:36.587621       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0912 21:56:40.903282       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0912 21:56:41.130589       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0912 21:59:14.641227       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45930: use of closed network connection
	E0912 21:59:14.824837       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45956: use of closed network connection
	E0912 21:59:15.017466       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45974: use of closed network connection
	E0912 21:59:15.217984       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46000: use of closed network connection
	E0912 21:59:15.419617       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46006: use of closed network connection
	E0912 21:59:15.613852       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46022: use of closed network connection
	E0912 21:59:15.788040       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46036: use of closed network connection
	E0912 21:59:15.970968       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46056: use of closed network connection
	E0912 21:59:16.162364       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46086: use of closed network connection
	E0912 21:59:16.481705       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46106: use of closed network connection
	E0912 21:59:16.664271       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46118: use of closed network connection
	E0912 21:59:16.857842       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46144: use of closed network connection
	E0912 21:59:17.034650       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46156: use of closed network connection
	E0912 21:59:17.211549       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46184: use of closed network connection
	E0912 21:59:17.374238       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46212: use of closed network connection
	W0912 22:00:45.362151       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.113 192.168.39.203]
	
	
	==> kube-controller-manager [4cfa11556cf34ac2b5bb874421c929c31a0f68b70515fa122f1c3acc67b601f4] <==
	I0912 21:59:45.408505       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-475401-m04" podCIDRs=["10.244.4.0/24"]
	I0912 21:59:45.408564       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-475401-m04"
	I0912 21:59:45.408593       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-475401-m04"
	I0912 21:59:45.433781       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-475401-m04"
	I0912 21:59:45.667788       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-475401-m04"
	I0912 21:59:46.032045       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-475401-m04"
	I0912 21:59:49.356349       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-475401-m04"
	I0912 21:59:49.631388       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-475401-m04"
	I0912 21:59:49.671688       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-475401-m04"
	I0912 21:59:50.824020       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-475401-m04"
	I0912 21:59:50.824451       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-475401-m04"
	I0912 21:59:50.963958       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-475401-m04"
	I0912 21:59:55.640919       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-475401-m04"
	I0912 22:00:05.461083       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-475401-m04"
	I0912 22:00:05.461773       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-475401-m04"
	I0912 22:00:05.479787       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-475401-m04"
	I0912 22:00:05.838733       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-475401-m04"
	I0912 22:00:15.943687       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-475401-m04"
	I0912 22:01:00.863660       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-475401-m04"
	I0912 22:01:00.864991       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-475401-m02"
	I0912 22:01:00.886598       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-475401-m02"
	I0912 22:01:00.923247       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="12.618002ms"
	I0912 22:01:00.923639       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="95.014µs"
	I0912 22:01:04.362480       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-475401-m02"
	I0912 22:01:06.158719       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-475401-m02"
	
	
	==> kube-proxy [0891cec467fda03cc10ec8bf4db216ce7cae379bd093917e008b90cc96d90c49] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0912 21:56:41.912206       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0912 21:56:41.930592       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.203"]
	E0912 21:56:41.930824       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0912 21:56:41.968340       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0912 21:56:41.968379       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0912 21:56:41.968403       1 server_linux.go:169] "Using iptables Proxier"
	I0912 21:56:41.971058       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0912 21:56:41.971979       1 server.go:483] "Version info" version="v1.31.1"
	I0912 21:56:41.972047       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0912 21:56:41.974515       1 config.go:199] "Starting service config controller"
	I0912 21:56:41.975031       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0912 21:56:41.975346       1 config.go:105] "Starting endpoint slice config controller"
	I0912 21:56:41.975390       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0912 21:56:41.976593       1 config.go:328] "Starting node config controller"
	I0912 21:56:41.976636       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0912 21:56:42.075847       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0912 21:56:42.076026       1 shared_informer.go:320] Caches are synced for service config
	I0912 21:56:42.077390       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [17a4293d12cac1604693dea12017381d2df6f0c1ced577d1d846d40e66520818] <==
	W0912 21:56:34.795279       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0912 21:56:34.795388       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0912 21:56:37.025887       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0912 21:58:40.723691       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-bh5lg\": pod kindnet-bh5lg is already assigned to node \"ha-475401-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-bh5lg" node="ha-475401-m03"
	E0912 21:58:40.723871       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod ee20dbb3-9e3e-4ad6-b3f2-1ec4523b46ca(kube-system/kindnet-bh5lg) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-bh5lg"
	E0912 21:58:40.723922       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-bh5lg\": pod kindnet-bh5lg is already assigned to node \"ha-475401-m03\"" pod="kube-system/kindnet-bh5lg"
	I0912 21:58:40.723960       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-bh5lg" node="ha-475401-m03"
	E0912 21:59:09.626808       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-gb2hg\": pod busybox-7dff88458-gb2hg is already assigned to node \"ha-475401-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-gb2hg" node="ha-475401-m02"
	E0912 21:59:09.626992       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-gb2hg\": pod busybox-7dff88458-gb2hg is already assigned to node \"ha-475401-m03\"" pod="default/busybox-7dff88458-gb2hg"
	E0912 21:59:09.679559       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-l2hdm\": pod busybox-7dff88458-l2hdm is already assigned to node \"ha-475401\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-l2hdm" node="ha-475401"
	E0912 21:59:09.679624       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 8ab651ae-e8a0-438a-8bf6-4462c8304466(default/busybox-7dff88458-l2hdm) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-l2hdm"
	E0912 21:59:09.679642       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-l2hdm\": pod busybox-7dff88458-l2hdm is already assigned to node \"ha-475401\"" pod="default/busybox-7dff88458-l2hdm"
	I0912 21:59:09.679663       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-l2hdm" node="ha-475401"
	E0912 21:59:09.680271       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-t7gjx\": pod busybox-7dff88458-t7gjx is already assigned to node \"ha-475401-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-t7gjx" node="ha-475401-m02"
	E0912 21:59:09.680327       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 8634b0f8-3ad9-4f13-bc5d-4c6c05db092f(default/busybox-7dff88458-t7gjx) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-t7gjx"
	E0912 21:59:09.680345       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-t7gjx\": pod busybox-7dff88458-t7gjx is already assigned to node \"ha-475401-m02\"" pod="default/busybox-7dff88458-t7gjx"
	I0912 21:59:09.680365       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-t7gjx" node="ha-475401-m02"
	E0912 21:59:45.487339       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-fvw4x\": pod kube-proxy-fvw4x is already assigned to node \"ha-475401-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-fvw4x" node="ha-475401-m04"
	E0912 21:59:45.491176       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 21f2175a-f898-4059-ae91-9df7019f8cdb(kube-system/kube-proxy-fvw4x) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-fvw4x"
	E0912 21:59:45.492064       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-fvw4x\": pod kube-proxy-fvw4x is already assigned to node \"ha-475401-m04\"" pod="kube-system/kube-proxy-fvw4x"
	E0912 21:59:45.490969       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-2bvcz\": pod kindnet-2bvcz is already assigned to node \"ha-475401-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-2bvcz" node="ha-475401-m04"
	E0912 21:59:45.493554       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod d40bd7a6-62a0-4e2d-b6eb-2ec57e8eea0f(kube-system/kindnet-2bvcz) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-2bvcz"
	E0912 21:59:45.493577       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-2bvcz\": pod kindnet-2bvcz is already assigned to node \"ha-475401-m04\"" pod="kube-system/kindnet-2bvcz"
	I0912 21:59:45.493620       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-2bvcz" node="ha-475401-m04"
	I0912 21:59:45.493727       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-fvw4x" node="ha-475401-m04"
	
	
	==> kubelet <==
	Sep 12 22:02:36 ha-475401 kubelet[1305]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 12 22:02:36 ha-475401 kubelet[1305]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 12 22:02:36 ha-475401 kubelet[1305]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 12 22:02:36 ha-475401 kubelet[1305]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 12 22:02:36 ha-475401 kubelet[1305]: E0912 22:02:36.624871    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726178556624300735,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 22:02:36 ha-475401 kubelet[1305]: E0912 22:02:36.625027    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726178556624300735,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 22:02:46 ha-475401 kubelet[1305]: E0912 22:02:46.627517    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726178566627012645,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 22:02:46 ha-475401 kubelet[1305]: E0912 22:02:46.627565    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726178566627012645,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 22:02:56 ha-475401 kubelet[1305]: E0912 22:02:56.629443    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726178576629021204,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 22:02:56 ha-475401 kubelet[1305]: E0912 22:02:56.629467    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726178576629021204,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 22:03:06 ha-475401 kubelet[1305]: E0912 22:03:06.630842    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726178586630595768,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 22:03:06 ha-475401 kubelet[1305]: E0912 22:03:06.630886    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726178586630595768,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 22:03:16 ha-475401 kubelet[1305]: E0912 22:03:16.632246    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726178596631973566,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 22:03:16 ha-475401 kubelet[1305]: E0912 22:03:16.632310    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726178596631973566,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 22:03:26 ha-475401 kubelet[1305]: E0912 22:03:26.635430    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726178606635063636,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 22:03:26 ha-475401 kubelet[1305]: E0912 22:03:26.635552    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726178606635063636,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 22:03:36 ha-475401 kubelet[1305]: E0912 22:03:36.499462    1305 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 12 22:03:36 ha-475401 kubelet[1305]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 12 22:03:36 ha-475401 kubelet[1305]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 12 22:03:36 ha-475401 kubelet[1305]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 12 22:03:36 ha-475401 kubelet[1305]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 12 22:03:36 ha-475401 kubelet[1305]: E0912 22:03:36.637284    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726178616636667590,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 22:03:36 ha-475401 kubelet[1305]: E0912 22:03:36.637311    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726178616636667590,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 22:03:46 ha-475401 kubelet[1305]: E0912 22:03:46.640632    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726178626638987082,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 22:03:46 ha-475401 kubelet[1305]: E0912 22:03:46.640674    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726178626638987082,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-475401 -n ha-475401
helpers_test.go:261: (dbg) Run:  kubectl --context ha-475401 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (61.18s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (371.89s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-475401 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-475401 -v=7 --alsologtostderr
E0912 22:05:05.704309   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/functional-657409/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:05:33.408916   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/functional-657409/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-475401 -v=7 --alsologtostderr: exit status 82 (2m1.814855923s)

                                                
                                                
-- stdout --
	* Stopping node "ha-475401-m04"  ...
	* Stopping node "ha-475401-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0912 22:03:52.491057   31504 out.go:345] Setting OutFile to fd 1 ...
	I0912 22:03:52.491220   31504 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 22:03:52.491232   31504 out.go:358] Setting ErrFile to fd 2...
	I0912 22:03:52.491240   31504 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 22:03:52.491446   31504 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-5891/.minikube/bin
	I0912 22:03:52.491737   31504 out.go:352] Setting JSON to false
	I0912 22:03:52.491850   31504 mustload.go:65] Loading cluster: ha-475401
	I0912 22:03:52.492245   31504 config.go:182] Loaded profile config "ha-475401": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 22:03:52.492356   31504 profile.go:143] Saving config to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/config.json ...
	I0912 22:03:52.492573   31504 mustload.go:65] Loading cluster: ha-475401
	I0912 22:03:52.492755   31504 config.go:182] Loaded profile config "ha-475401": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 22:03:52.492801   31504 stop.go:39] StopHost: ha-475401-m04
	I0912 22:03:52.493230   31504 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:03:52.493285   31504 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:03:52.508944   31504 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39437
	I0912 22:03:52.509524   31504 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:03:52.510144   31504 main.go:141] libmachine: Using API Version  1
	I0912 22:03:52.510177   31504 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:03:52.510544   31504 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:03:52.512762   31504 out.go:177] * Stopping node "ha-475401-m04"  ...
	I0912 22:03:52.514005   31504 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0912 22:03:52.514043   31504 main.go:141] libmachine: (ha-475401-m04) Calling .DriverName
	I0912 22:03:52.514306   31504 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0912 22:03:52.514327   31504 main.go:141] libmachine: (ha-475401-m04) Calling .GetSSHHostname
	I0912 22:03:52.517274   31504 main.go:141] libmachine: (ha-475401-m04) DBG | domain ha-475401-m04 has defined MAC address 52:54:00:cd:b0:d3 in network mk-ha-475401
	I0912 22:03:52.517727   31504 main.go:141] libmachine: (ha-475401-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:b0:d3", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:59:32 +0000 UTC Type:0 Mac:52:54:00:cd:b0:d3 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-475401-m04 Clientid:01:52:54:00:cd:b0:d3}
	I0912 22:03:52.517755   31504 main.go:141] libmachine: (ha-475401-m04) DBG | domain ha-475401-m04 has defined IP address 192.168.39.76 and MAC address 52:54:00:cd:b0:d3 in network mk-ha-475401
	I0912 22:03:52.517912   31504 main.go:141] libmachine: (ha-475401-m04) Calling .GetSSHPort
	I0912 22:03:52.518164   31504 main.go:141] libmachine: (ha-475401-m04) Calling .GetSSHKeyPath
	I0912 22:03:52.518343   31504 main.go:141] libmachine: (ha-475401-m04) Calling .GetSSHUsername
	I0912 22:03:52.518474   31504 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401-m04/id_rsa Username:docker}
	I0912 22:03:52.600109   31504 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0912 22:03:52.653567   31504 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0912 22:03:52.706754   31504 main.go:141] libmachine: Stopping "ha-475401-m04"...
	I0912 22:03:52.706781   31504 main.go:141] libmachine: (ha-475401-m04) Calling .GetState
	I0912 22:03:52.708626   31504 main.go:141] libmachine: (ha-475401-m04) Calling .Stop
	I0912 22:03:52.712078   31504 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 0/120
	I0912 22:03:53.830894   31504 main.go:141] libmachine: (ha-475401-m04) Calling .GetState
	I0912 22:03:53.832327   31504 main.go:141] libmachine: Machine "ha-475401-m04" was stopped.
	I0912 22:03:53.832352   31504 stop.go:75] duration metric: took 1.31834673s to stop
	I0912 22:03:53.832376   31504 stop.go:39] StopHost: ha-475401-m03
	I0912 22:03:53.832724   31504 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:03:53.832767   31504 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:03:53.848016   31504 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42123
	I0912 22:03:53.848592   31504 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:03:53.849069   31504 main.go:141] libmachine: Using API Version  1
	I0912 22:03:53.849089   31504 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:03:53.849411   31504 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:03:53.853559   31504 out.go:177] * Stopping node "ha-475401-m03"  ...
	I0912 22:03:53.854984   31504 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0912 22:03:53.855023   31504 main.go:141] libmachine: (ha-475401-m03) Calling .DriverName
	I0912 22:03:53.855358   31504 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0912 22:03:53.855393   31504 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHHostname
	I0912 22:03:53.859039   31504 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 22:03:53.859472   31504 main.go:141] libmachine: (ha-475401-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:aa:da", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:58:06 +0000 UTC Type:0 Mac:52:54:00:21:aa:da Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:ha-475401-m03 Clientid:01:52:54:00:21:aa:da}
	I0912 22:03:53.859504   31504 main.go:141] libmachine: (ha-475401-m03) DBG | domain ha-475401-m03 has defined IP address 192.168.39.113 and MAC address 52:54:00:21:aa:da in network mk-ha-475401
	I0912 22:03:53.859692   31504 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHPort
	I0912 22:03:53.859893   31504 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHKeyPath
	I0912 22:03:53.860046   31504 main.go:141] libmachine: (ha-475401-m03) Calling .GetSSHUsername
	I0912 22:03:53.860207   31504 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401-m03/id_rsa Username:docker}
	I0912 22:03:53.947622   31504 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0912 22:03:54.004930   31504 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0912 22:03:54.059208   31504 main.go:141] libmachine: Stopping "ha-475401-m03"...
	I0912 22:03:54.059237   31504 main.go:141] libmachine: (ha-475401-m03) Calling .GetState
	I0912 22:03:54.060917   31504 main.go:141] libmachine: (ha-475401-m03) Calling .Stop
	I0912 22:03:54.064557   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 0/120
	I0912 22:03:55.066264   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 1/120
	I0912 22:03:56.067748   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 2/120
	I0912 22:03:57.069194   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 3/120
	I0912 22:03:58.070547   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 4/120
	I0912 22:03:59.072256   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 5/120
	I0912 22:04:00.073786   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 6/120
	I0912 22:04:01.076144   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 7/120
	I0912 22:04:02.077458   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 8/120
	I0912 22:04:03.079464   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 9/120
	I0912 22:04:04.081574   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 10/120
	I0912 22:04:05.083403   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 11/120
	I0912 22:04:06.085281   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 12/120
	I0912 22:04:07.086808   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 13/120
	I0912 22:04:08.088588   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 14/120
	I0912 22:04:09.090503   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 15/120
	I0912 22:04:10.091938   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 16/120
	I0912 22:04:11.093355   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 17/120
	I0912 22:04:12.095033   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 18/120
	I0912 22:04:13.096635   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 19/120
	I0912 22:04:14.098928   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 20/120
	I0912 22:04:15.100417   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 21/120
	I0912 22:04:16.102145   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 22/120
	I0912 22:04:17.103541   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 23/120
	I0912 22:04:18.104907   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 24/120
	I0912 22:04:19.106755   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 25/120
	I0912 22:04:20.108121   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 26/120
	I0912 22:04:21.109656   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 27/120
	I0912 22:04:22.111243   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 28/120
	I0912 22:04:23.112828   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 29/120
	I0912 22:04:24.114843   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 30/120
	I0912 22:04:25.116288   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 31/120
	I0912 22:04:26.117845   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 32/120
	I0912 22:04:27.119296   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 33/120
	I0912 22:04:28.120903   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 34/120
	I0912 22:04:29.122848   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 35/120
	I0912 22:04:30.124173   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 36/120
	I0912 22:04:31.125645   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 37/120
	I0912 22:04:32.127012   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 38/120
	I0912 22:04:33.128972   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 39/120
	I0912 22:04:34.131055   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 40/120
	I0912 22:04:35.132522   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 41/120
	I0912 22:04:36.134115   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 42/120
	I0912 22:04:37.135614   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 43/120
	I0912 22:04:38.136995   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 44/120
	I0912 22:04:39.138778   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 45/120
	I0912 22:04:40.140149   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 46/120
	I0912 22:04:41.141931   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 47/120
	I0912 22:04:42.144121   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 48/120
	I0912 22:04:43.145394   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 49/120
	I0912 22:04:44.146871   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 50/120
	I0912 22:04:45.148242   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 51/120
	I0912 22:04:46.149564   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 52/120
	I0912 22:04:47.151095   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 53/120
	I0912 22:04:48.152314   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 54/120
	I0912 22:04:49.154072   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 55/120
	I0912 22:04:50.155369   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 56/120
	I0912 22:04:51.156616   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 57/120
	I0912 22:04:52.158051   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 58/120
	I0912 22:04:53.159644   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 59/120
	I0912 22:04:54.161789   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 60/120
	I0912 22:04:55.164300   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 61/120
	I0912 22:04:56.165787   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 62/120
	I0912 22:04:57.167209   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 63/120
	I0912 22:04:58.168539   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 64/120
	I0912 22:04:59.169988   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 65/120
	I0912 22:05:00.171974   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 66/120
	I0912 22:05:01.173331   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 67/120
	I0912 22:05:02.174748   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 68/120
	I0912 22:05:03.176119   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 69/120
	I0912 22:05:04.177967   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 70/120
	I0912 22:05:05.179421   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 71/120
	I0912 22:05:06.181185   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 72/120
	I0912 22:05:07.182670   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 73/120
	I0912 22:05:08.183916   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 74/120
	I0912 22:05:09.185839   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 75/120
	I0912 22:05:10.187298   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 76/120
	I0912 22:05:11.188699   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 77/120
	I0912 22:05:12.190207   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 78/120
	I0912 22:05:13.191643   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 79/120
	I0912 22:05:14.193597   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 80/120
	I0912 22:05:15.195268   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 81/120
	I0912 22:05:16.196588   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 82/120
	I0912 22:05:17.197922   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 83/120
	I0912 22:05:18.199193   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 84/120
	I0912 22:05:19.201160   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 85/120
	I0912 22:05:20.203174   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 86/120
	I0912 22:05:21.204618   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 87/120
	I0912 22:05:22.206019   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 88/120
	I0912 22:05:23.207592   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 89/120
	I0912 22:05:24.210125   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 90/120
	I0912 22:05:25.212208   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 91/120
	I0912 22:05:26.213766   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 92/120
	I0912 22:05:27.216050   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 93/120
	I0912 22:05:28.217641   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 94/120
	I0912 22:05:29.219925   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 95/120
	I0912 22:05:30.221377   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 96/120
	I0912 22:05:31.222728   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 97/120
	I0912 22:05:32.224024   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 98/120
	I0912 22:05:33.225324   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 99/120
	I0912 22:05:34.227113   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 100/120
	I0912 22:05:35.228917   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 101/120
	I0912 22:05:36.230473   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 102/120
	I0912 22:05:37.231824   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 103/120
	I0912 22:05:38.233147   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 104/120
	I0912 22:05:39.234869   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 105/120
	I0912 22:05:40.236440   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 106/120
	I0912 22:05:41.238347   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 107/120
	I0912 22:05:42.239823   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 108/120
	I0912 22:05:43.241392   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 109/120
	I0912 22:05:44.243932   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 110/120
	I0912 22:05:45.245244   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 111/120
	I0912 22:05:46.246724   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 112/120
	I0912 22:05:47.248096   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 113/120
	I0912 22:05:48.249459   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 114/120
	I0912 22:05:49.250956   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 115/120
	I0912 22:05:50.252396   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 116/120
	I0912 22:05:51.253747   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 117/120
	I0912 22:05:52.255126   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 118/120
	I0912 22:05:53.256580   31504 main.go:141] libmachine: (ha-475401-m03) Waiting for machine to stop 119/120
	I0912 22:05:54.257446   31504 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0912 22:05:54.257511   31504 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0912 22:05:54.259747   31504 out.go:201] 
	W0912 22:05:54.261072   31504 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0912 22:05:54.261085   31504 out.go:270] * 
	* 
	W0912 22:05:54.263577   31504 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0912 22:05:54.264779   31504 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-475401 -v=7 --alsologtostderr" : exit status 82
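The stop failed because the "ha-475401-m03" VM never reported a stopped state: the driver polled it 120 times at roughly one-second intervals (22:03:54 through 22:05:53 above) and then gave up with GUEST_STOP_TIMEOUT, which minikube surfaces as exit status 82. The test recovers below by re-running "start --wait=true"; outside the harness, a rough manual fallback on a libvirt/KVM host would look like the sketch below. This is only a diagnostic sketch: the domain and profile names are taken from the log above, and the qemu:///system URI is assumed from the cluster config.

	# Check whether the domain is really still running (names taken from the log above).
	virsh --connect qemu:///system list --all
	virsh --connect qemu:///system domstate ha-475401-m03

	# If it is still "running", force it off, then retry the graceful stop.
	virsh --connect qemu:///system destroy ha-475401-m03
	out/minikube-linux-amd64 stop -p ha-475401 -v=7 --alsologtostderr

	# Collect logs for a bug report, as the GUEST_STOP_TIMEOUT message suggests.
	out/minikube-linux-amd64 -p ha-475401 logs --file=logs.txt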
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-475401 --wait=true -v=7 --alsologtostderr
E0912 22:07:07.200502   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-475401 --wait=true -v=7 --alsologtostderr: (4m7.327571283s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-475401
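After the restart completed in 4m07s, the cluster can also be spot-checked by hand before reading the post-mortem that follows; this is just a quick manual sketch using the same profile and context names as the test:

	out/minikube-linux-amd64 -p ha-475401 status
	kubectl --context ha-475401 get nodes -o wide
	kubectl --context ha-475401 get po -A --field-selector=status.phase!=Running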
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-475401 -n ha-475401
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-475401 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-475401 logs -n 25: (1.978090141s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-475401 cp ha-475401-m03:/home/docker/cp-test.txt                              | ha-475401 | jenkins | v1.34.0 | 12 Sep 24 22:00 UTC | 12 Sep 24 22:00 UTC |
	|         | ha-475401-m02:/home/docker/cp-test_ha-475401-m03_ha-475401-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-475401 ssh -n                                                                 | ha-475401 | jenkins | v1.34.0 | 12 Sep 24 22:00 UTC | 12 Sep 24 22:00 UTC |
	|         | ha-475401-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-475401 ssh -n ha-475401-m02 sudo cat                                          | ha-475401 | jenkins | v1.34.0 | 12 Sep 24 22:00 UTC | 12 Sep 24 22:00 UTC |
	|         | /home/docker/cp-test_ha-475401-m03_ha-475401-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-475401 cp ha-475401-m03:/home/docker/cp-test.txt                              | ha-475401 | jenkins | v1.34.0 | 12 Sep 24 22:00 UTC | 12 Sep 24 22:00 UTC |
	|         | ha-475401-m04:/home/docker/cp-test_ha-475401-m03_ha-475401-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-475401 ssh -n                                                                 | ha-475401 | jenkins | v1.34.0 | 12 Sep 24 22:00 UTC | 12 Sep 24 22:00 UTC |
	|         | ha-475401-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-475401 ssh -n ha-475401-m04 sudo cat                                          | ha-475401 | jenkins | v1.34.0 | 12 Sep 24 22:00 UTC | 12 Sep 24 22:00 UTC |
	|         | /home/docker/cp-test_ha-475401-m03_ha-475401-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-475401 cp testdata/cp-test.txt                                                | ha-475401 | jenkins | v1.34.0 | 12 Sep 24 22:00 UTC | 12 Sep 24 22:00 UTC |
	|         | ha-475401-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-475401 ssh -n                                                                 | ha-475401 | jenkins | v1.34.0 | 12 Sep 24 22:00 UTC | 12 Sep 24 22:00 UTC |
	|         | ha-475401-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-475401 cp ha-475401-m04:/home/docker/cp-test.txt                              | ha-475401 | jenkins | v1.34.0 | 12 Sep 24 22:00 UTC | 12 Sep 24 22:00 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1750943762/001/cp-test_ha-475401-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-475401 ssh -n                                                                 | ha-475401 | jenkins | v1.34.0 | 12 Sep 24 22:00 UTC | 12 Sep 24 22:00 UTC |
	|         | ha-475401-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-475401 cp ha-475401-m04:/home/docker/cp-test.txt                              | ha-475401 | jenkins | v1.34.0 | 12 Sep 24 22:00 UTC | 12 Sep 24 22:00 UTC |
	|         | ha-475401:/home/docker/cp-test_ha-475401-m04_ha-475401.txt                       |           |         |         |                     |                     |
	| ssh     | ha-475401 ssh -n                                                                 | ha-475401 | jenkins | v1.34.0 | 12 Sep 24 22:00 UTC | 12 Sep 24 22:00 UTC |
	|         | ha-475401-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-475401 ssh -n ha-475401 sudo cat                                              | ha-475401 | jenkins | v1.34.0 | 12 Sep 24 22:00 UTC | 12 Sep 24 22:00 UTC |
	|         | /home/docker/cp-test_ha-475401-m04_ha-475401.txt                                 |           |         |         |                     |                     |
	| cp      | ha-475401 cp ha-475401-m04:/home/docker/cp-test.txt                              | ha-475401 | jenkins | v1.34.0 | 12 Sep 24 22:00 UTC | 12 Sep 24 22:00 UTC |
	|         | ha-475401-m02:/home/docker/cp-test_ha-475401-m04_ha-475401-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-475401 ssh -n                                                                 | ha-475401 | jenkins | v1.34.0 | 12 Sep 24 22:00 UTC | 12 Sep 24 22:00 UTC |
	|         | ha-475401-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-475401 ssh -n ha-475401-m02 sudo cat                                          | ha-475401 | jenkins | v1.34.0 | 12 Sep 24 22:00 UTC | 12 Sep 24 22:00 UTC |
	|         | /home/docker/cp-test_ha-475401-m04_ha-475401-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-475401 cp ha-475401-m04:/home/docker/cp-test.txt                              | ha-475401 | jenkins | v1.34.0 | 12 Sep 24 22:00 UTC | 12 Sep 24 22:00 UTC |
	|         | ha-475401-m03:/home/docker/cp-test_ha-475401-m04_ha-475401-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-475401 ssh -n                                                                 | ha-475401 | jenkins | v1.34.0 | 12 Sep 24 22:00 UTC | 12 Sep 24 22:00 UTC |
	|         | ha-475401-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-475401 ssh -n ha-475401-m03 sudo cat                                          | ha-475401 | jenkins | v1.34.0 | 12 Sep 24 22:00 UTC | 12 Sep 24 22:00 UTC |
	|         | /home/docker/cp-test_ha-475401-m04_ha-475401-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-475401 node stop m02 -v=7                                                     | ha-475401 | jenkins | v1.34.0 | 12 Sep 24 22:00 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-475401 node start m02 -v=7                                                    | ha-475401 | jenkins | v1.34.0 | 12 Sep 24 22:02 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-475401 -v=7                                                           | ha-475401 | jenkins | v1.34.0 | 12 Sep 24 22:03 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-475401 -v=7                                                                | ha-475401 | jenkins | v1.34.0 | 12 Sep 24 22:03 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-475401 --wait=true -v=7                                                    | ha-475401 | jenkins | v1.34.0 | 12 Sep 24 22:05 UTC | 12 Sep 24 22:10 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-475401                                                                | ha-475401 | jenkins | v1.34.0 | 12 Sep 24 22:10 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/12 22:05:54
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0912 22:05:54.308256   31965 out.go:345] Setting OutFile to fd 1 ...
	I0912 22:05:54.308402   31965 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 22:05:54.308415   31965 out.go:358] Setting ErrFile to fd 2...
	I0912 22:05:54.308422   31965 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 22:05:54.308856   31965 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-5891/.minikube/bin
	I0912 22:05:54.309456   31965 out.go:352] Setting JSON to false
	I0912 22:05:54.310456   31965 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2896,"bootTime":1726175858,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0912 22:05:54.310519   31965 start.go:139] virtualization: kvm guest
	I0912 22:05:54.312895   31965 out.go:177] * [ha-475401] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0912 22:05:54.314120   31965 out.go:177]   - MINIKUBE_LOCATION=19616
	I0912 22:05:54.314144   31965 notify.go:220] Checking for updates...
	I0912 22:05:54.316741   31965 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 22:05:54.318263   31965 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19616-5891/kubeconfig
	I0912 22:05:54.319814   31965 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19616-5891/.minikube
	I0912 22:05:54.321183   31965 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0912 22:05:54.322460   31965 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 22:05:54.324240   31965 config.go:182] Loaded profile config "ha-475401": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 22:05:54.324330   31965 driver.go:394] Setting default libvirt URI to qemu:///system
	I0912 22:05:54.324718   31965 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:05:54.324776   31965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:05:54.340147   31965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44867
	I0912 22:05:54.340668   31965 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:05:54.341186   31965 main.go:141] libmachine: Using API Version  1
	I0912 22:05:54.341205   31965 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:05:54.341559   31965 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:05:54.341798   31965 main.go:141] libmachine: (ha-475401) Calling .DriverName
	I0912 22:05:54.379340   31965 out.go:177] * Using the kvm2 driver based on existing profile
	I0912 22:05:54.380592   31965 start.go:297] selected driver: kvm2
	I0912 22:05:54.380614   31965 start.go:901] validating driver "kvm2" against &{Name:ha-475401 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.1 ClusterName:ha-475401 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.203 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.222 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.113 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.76 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false ef
k:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9
p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 22:05:54.380762   31965 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 22:05:54.381236   31965 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 22:05:54.381320   31965 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19616-5891/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0912 22:05:54.396424   31965 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0912 22:05:54.397109   31965 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 22:05:54.397199   31965 cni.go:84] Creating CNI manager for ""
	I0912 22:05:54.397214   31965 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0912 22:05:54.397304   31965 start.go:340] cluster config:
	{Name:ha-475401 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-475401 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.203 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.222 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.113 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.76 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-til
ler:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPo
rt:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 22:05:54.397444   31965 iso.go:125] acquiring lock: {Name:mk3ec3c4afd4210b7425f6425f55e7f581d9a5a9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 22:05:54.400291   31965 out.go:177] * Starting "ha-475401" primary control-plane node in "ha-475401" cluster
	I0912 22:05:54.401651   31965 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0912 22:05:54.401689   31965 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0912 22:05:54.401698   31965 cache.go:56] Caching tarball of preloaded images
	I0912 22:05:54.401762   31965 preload.go:172] Found /home/jenkins/minikube-integration/19616-5891/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0912 22:05:54.401773   31965 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0912 22:05:54.401892   31965 profile.go:143] Saving config to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/config.json ...
	I0912 22:05:54.402082   31965 start.go:360] acquireMachinesLock for ha-475401: {Name:mkbb0a9e58b1349e86a63b6069c42d4248d92c3b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 22:05:54.402123   31965 start.go:364] duration metric: took 23.908µs to acquireMachinesLock for "ha-475401"
	I0912 22:05:54.402136   31965 start.go:96] Skipping create...Using existing machine configuration
	I0912 22:05:54.402142   31965 fix.go:54] fixHost starting: 
	I0912 22:05:54.402408   31965 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:05:54.402435   31965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:05:54.416855   31965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44115
	I0912 22:05:54.417279   31965 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:05:54.417760   31965 main.go:141] libmachine: Using API Version  1
	I0912 22:05:54.417796   31965 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:05:54.418125   31965 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:05:54.418293   31965 main.go:141] libmachine: (ha-475401) Calling .DriverName
	I0912 22:05:54.418467   31965 main.go:141] libmachine: (ha-475401) Calling .GetState
	I0912 22:05:54.420388   31965 fix.go:112] recreateIfNeeded on ha-475401: state=Running err=<nil>
	W0912 22:05:54.420414   31965 fix.go:138] unexpected machine state, will restart: <nil>
	I0912 22:05:54.422325   31965 out.go:177] * Updating the running kvm2 "ha-475401" VM ...
	I0912 22:05:54.423590   31965 machine.go:93] provisionDockerMachine start ...
	I0912 22:05:54.423612   31965 main.go:141] libmachine: (ha-475401) Calling .DriverName
	I0912 22:05:54.423841   31965 main.go:141] libmachine: (ha-475401) Calling .GetSSHHostname
	I0912 22:05:54.426690   31965 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 22:05:54.427140   31965 main.go:141] libmachine: (ha-475401) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:0e:dd", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:09 +0000 UTC Type:0 Mac:52:54:00:b0:0e:dd Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-475401 Clientid:01:52:54:00:b0:0e:dd}
	I0912 22:05:54.427174   31965 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined IP address 192.168.39.203 and MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 22:05:54.427293   31965 main.go:141] libmachine: (ha-475401) Calling .GetSSHPort
	I0912 22:05:54.427533   31965 main.go:141] libmachine: (ha-475401) Calling .GetSSHKeyPath
	I0912 22:05:54.427702   31965 main.go:141] libmachine: (ha-475401) Calling .GetSSHKeyPath
	I0912 22:05:54.427881   31965 main.go:141] libmachine: (ha-475401) Calling .GetSSHUsername
	I0912 22:05:54.428114   31965 main.go:141] libmachine: Using SSH client type: native
	I0912 22:05:54.428317   31965 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I0912 22:05:54.428327   31965 main.go:141] libmachine: About to run SSH command:
	hostname
	I0912 22:05:54.551333   31965 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-475401
	
	I0912 22:05:54.551374   31965 main.go:141] libmachine: (ha-475401) Calling .GetMachineName
	I0912 22:05:54.551693   31965 buildroot.go:166] provisioning hostname "ha-475401"
	I0912 22:05:54.551715   31965 main.go:141] libmachine: (ha-475401) Calling .GetMachineName
	I0912 22:05:54.551979   31965 main.go:141] libmachine: (ha-475401) Calling .GetSSHHostname
	I0912 22:05:54.555806   31965 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 22:05:54.556355   31965 main.go:141] libmachine: (ha-475401) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:0e:dd", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:09 +0000 UTC Type:0 Mac:52:54:00:b0:0e:dd Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-475401 Clientid:01:52:54:00:b0:0e:dd}
	I0912 22:05:54.556383   31965 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined IP address 192.168.39.203 and MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 22:05:54.556598   31965 main.go:141] libmachine: (ha-475401) Calling .GetSSHPort
	I0912 22:05:54.556825   31965 main.go:141] libmachine: (ha-475401) Calling .GetSSHKeyPath
	I0912 22:05:54.556995   31965 main.go:141] libmachine: (ha-475401) Calling .GetSSHKeyPath
	I0912 22:05:54.557167   31965 main.go:141] libmachine: (ha-475401) Calling .GetSSHUsername
	I0912 22:05:54.557355   31965 main.go:141] libmachine: Using SSH client type: native
	I0912 22:05:54.557515   31965 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I0912 22:05:54.557528   31965 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-475401 && echo "ha-475401" | sudo tee /etc/hostname
	I0912 22:05:54.685807   31965 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-475401
	
	I0912 22:05:54.685833   31965 main.go:141] libmachine: (ha-475401) Calling .GetSSHHostname
	I0912 22:05:54.688862   31965 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 22:05:54.689230   31965 main.go:141] libmachine: (ha-475401) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:0e:dd", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:09 +0000 UTC Type:0 Mac:52:54:00:b0:0e:dd Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-475401 Clientid:01:52:54:00:b0:0e:dd}
	I0912 22:05:54.689272   31965 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined IP address 192.168.39.203 and MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 22:05:54.689458   31965 main.go:141] libmachine: (ha-475401) Calling .GetSSHPort
	I0912 22:05:54.689659   31965 main.go:141] libmachine: (ha-475401) Calling .GetSSHKeyPath
	I0912 22:05:54.689821   31965 main.go:141] libmachine: (ha-475401) Calling .GetSSHKeyPath
	I0912 22:05:54.689956   31965 main.go:141] libmachine: (ha-475401) Calling .GetSSHUsername
	I0912 22:05:54.690172   31965 main.go:141] libmachine: Using SSH client type: native
	I0912 22:05:54.690320   31965 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I0912 22:05:54.690337   31965 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-475401' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-475401/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-475401' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0912 22:05:54.806548   31965 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0912 22:05:54.806581   31965 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19616-5891/.minikube CaCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19616-5891/.minikube}
	I0912 22:05:54.806614   31965 buildroot.go:174] setting up certificates
	I0912 22:05:54.806628   31965 provision.go:84] configureAuth start
	I0912 22:05:54.806642   31965 main.go:141] libmachine: (ha-475401) Calling .GetMachineName
	I0912 22:05:54.806925   31965 main.go:141] libmachine: (ha-475401) Calling .GetIP
	I0912 22:05:54.809452   31965 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 22:05:54.809877   31965 main.go:141] libmachine: (ha-475401) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:0e:dd", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:09 +0000 UTC Type:0 Mac:52:54:00:b0:0e:dd Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-475401 Clientid:01:52:54:00:b0:0e:dd}
	I0912 22:05:54.809917   31965 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined IP address 192.168.39.203 and MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 22:05:54.810060   31965 main.go:141] libmachine: (ha-475401) Calling .GetSSHHostname
	I0912 22:05:54.812538   31965 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 22:05:54.812946   31965 main.go:141] libmachine: (ha-475401) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:0e:dd", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:09 +0000 UTC Type:0 Mac:52:54:00:b0:0e:dd Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-475401 Clientid:01:52:54:00:b0:0e:dd}
	I0912 22:05:54.812972   31965 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined IP address 192.168.39.203 and MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 22:05:54.813110   31965 provision.go:143] copyHostCerts
	I0912 22:05:54.813152   31965 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem
	I0912 22:05:54.813184   31965 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem, removing ...
	I0912 22:05:54.813195   31965 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem
	I0912 22:05:54.813259   31965 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem (1082 bytes)
	I0912 22:05:54.813335   31965 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem
	I0912 22:05:54.813354   31965 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem, removing ...
	I0912 22:05:54.813359   31965 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem
	I0912 22:05:54.813383   31965 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem (1123 bytes)
	I0912 22:05:54.813422   31965 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem
	I0912 22:05:54.813438   31965 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem, removing ...
	I0912 22:05:54.813444   31965 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem
	I0912 22:05:54.813463   31965 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem (1679 bytes)
	I0912 22:05:54.813507   31965 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem org=jenkins.ha-475401 san=[127.0.0.1 192.168.39.203 ha-475401 localhost minikube]
	I0912 22:05:54.918391   31965 provision.go:177] copyRemoteCerts
	I0912 22:05:54.918443   31965 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0912 22:05:54.918464   31965 main.go:141] libmachine: (ha-475401) Calling .GetSSHHostname
	I0912 22:05:54.921345   31965 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 22:05:54.921776   31965 main.go:141] libmachine: (ha-475401) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:0e:dd", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:09 +0000 UTC Type:0 Mac:52:54:00:b0:0e:dd Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-475401 Clientid:01:52:54:00:b0:0e:dd}
	I0912 22:05:54.921807   31965 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined IP address 192.168.39.203 and MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 22:05:54.921990   31965 main.go:141] libmachine: (ha-475401) Calling .GetSSHPort
	I0912 22:05:54.922164   31965 main.go:141] libmachine: (ha-475401) Calling .GetSSHKeyPath
	I0912 22:05:54.922386   31965 main.go:141] libmachine: (ha-475401) Calling .GetSSHUsername
	I0912 22:05:54.922559   31965 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401/id_rsa Username:docker}
	I0912 22:05:55.011813   31965 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0912 22:05:55.011876   31965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0912 22:05:55.037020   31965 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0912 22:05:55.037089   31965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0912 22:05:55.063153   31965 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0912 22:05:55.063233   31965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0912 22:05:55.090797   31965 provision.go:87] duration metric: took 284.151321ms to configureAuth
	I0912 22:05:55.090827   31965 buildroot.go:189] setting minikube options for container-runtime
	I0912 22:05:55.091088   31965 config.go:182] Loaded profile config "ha-475401": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 22:05:55.091170   31965 main.go:141] libmachine: (ha-475401) Calling .GetSSHHostname
	I0912 22:05:55.093647   31965 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 22:05:55.094052   31965 main.go:141] libmachine: (ha-475401) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:0e:dd", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:09 +0000 UTC Type:0 Mac:52:54:00:b0:0e:dd Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-475401 Clientid:01:52:54:00:b0:0e:dd}
	I0912 22:05:55.094083   31965 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined IP address 192.168.39.203 and MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 22:05:55.094307   31965 main.go:141] libmachine: (ha-475401) Calling .GetSSHPort
	I0912 22:05:55.094503   31965 main.go:141] libmachine: (ha-475401) Calling .GetSSHKeyPath
	I0912 22:05:55.094690   31965 main.go:141] libmachine: (ha-475401) Calling .GetSSHKeyPath
	I0912 22:05:55.094855   31965 main.go:141] libmachine: (ha-475401) Calling .GetSSHUsername
	I0912 22:05:55.095009   31965 main.go:141] libmachine: Using SSH client type: native
	I0912 22:05:55.095239   31965 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I0912 22:05:55.095256   31965 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0912 22:07:26.037707   31965 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0912 22:07:26.037733   31965 machine.go:96] duration metric: took 1m31.61412699s to provisionDockerMachine
	I0912 22:07:26.037743   31965 start.go:293] postStartSetup for "ha-475401" (driver="kvm2")
	I0912 22:07:26.037754   31965 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0912 22:07:26.037767   31965 main.go:141] libmachine: (ha-475401) Calling .DriverName
	I0912 22:07:26.038127   31965 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0912 22:07:26.038151   31965 main.go:141] libmachine: (ha-475401) Calling .GetSSHHostname
	I0912 22:07:26.041250   31965 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 22:07:26.041769   31965 main.go:141] libmachine: (ha-475401) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:0e:dd", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:09 +0000 UTC Type:0 Mac:52:54:00:b0:0e:dd Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-475401 Clientid:01:52:54:00:b0:0e:dd}
	I0912 22:07:26.041804   31965 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined IP address 192.168.39.203 and MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 22:07:26.041979   31965 main.go:141] libmachine: (ha-475401) Calling .GetSSHPort
	I0912 22:07:26.042192   31965 main.go:141] libmachine: (ha-475401) Calling .GetSSHKeyPath
	I0912 22:07:26.042428   31965 main.go:141] libmachine: (ha-475401) Calling .GetSSHUsername
	I0912 22:07:26.042601   31965 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401/id_rsa Username:docker}
	I0912 22:07:26.130491   31965 ssh_runner.go:195] Run: cat /etc/os-release
	I0912 22:07:26.134534   31965 info.go:137] Remote host: Buildroot 2023.02.9
	I0912 22:07:26.134555   31965 filesync.go:126] Scanning /home/jenkins/minikube-integration/19616-5891/.minikube/addons for local assets ...
	I0912 22:07:26.134636   31965 filesync.go:126] Scanning /home/jenkins/minikube-integration/19616-5891/.minikube/files for local assets ...
	I0912 22:07:26.134728   31965 filesync.go:149] local asset: /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem -> 130832.pem in /etc/ssl/certs
	I0912 22:07:26.134739   31965 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem -> /etc/ssl/certs/130832.pem
	I0912 22:07:26.134858   31965 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0912 22:07:26.144354   31965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem --> /etc/ssl/certs/130832.pem (1708 bytes)
	I0912 22:07:26.166753   31965 start.go:296] duration metric: took 128.997755ms for postStartSetup
	I0912 22:07:26.166795   31965 main.go:141] libmachine: (ha-475401) Calling .DriverName
	I0912 22:07:26.167111   31965 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0912 22:07:26.167141   31965 main.go:141] libmachine: (ha-475401) Calling .GetSSHHostname
	I0912 22:07:26.169926   31965 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 22:07:26.170340   31965 main.go:141] libmachine: (ha-475401) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:0e:dd", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:09 +0000 UTC Type:0 Mac:52:54:00:b0:0e:dd Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-475401 Clientid:01:52:54:00:b0:0e:dd}
	I0912 22:07:26.170369   31965 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined IP address 192.168.39.203 and MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 22:07:26.170515   31965 main.go:141] libmachine: (ha-475401) Calling .GetSSHPort
	I0912 22:07:26.170720   31965 main.go:141] libmachine: (ha-475401) Calling .GetSSHKeyPath
	I0912 22:07:26.170883   31965 main.go:141] libmachine: (ha-475401) Calling .GetSSHUsername
	I0912 22:07:26.171029   31965 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401/id_rsa Username:docker}
	W0912 22:07:26.257016   31965 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0912 22:07:26.257042   31965 fix.go:56] duration metric: took 1m31.854898899s for fixHost
	I0912 22:07:26.257067   31965 main.go:141] libmachine: (ha-475401) Calling .GetSSHHostname
	I0912 22:07:26.259659   31965 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 22:07:26.260072   31965 main.go:141] libmachine: (ha-475401) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:0e:dd", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:09 +0000 UTC Type:0 Mac:52:54:00:b0:0e:dd Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-475401 Clientid:01:52:54:00:b0:0e:dd}
	I0912 22:07:26.260098   31965 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined IP address 192.168.39.203 and MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 22:07:26.260257   31965 main.go:141] libmachine: (ha-475401) Calling .GetSSHPort
	I0912 22:07:26.260447   31965 main.go:141] libmachine: (ha-475401) Calling .GetSSHKeyPath
	I0912 22:07:26.260691   31965 main.go:141] libmachine: (ha-475401) Calling .GetSSHKeyPath
	I0912 22:07:26.260870   31965 main.go:141] libmachine: (ha-475401) Calling .GetSSHUsername
	I0912 22:07:26.261020   31965 main.go:141] libmachine: Using SSH client type: native
	I0912 22:07:26.261241   31965 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I0912 22:07:26.261258   31965 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0912 22:07:26.374318   31965 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726178846.333397263
	
	I0912 22:07:26.374349   31965 fix.go:216] guest clock: 1726178846.333397263
	I0912 22:07:26.374360   31965 fix.go:229] Guest: 2024-09-12 22:07:26.333397263 +0000 UTC Remote: 2024-09-12 22:07:26.257051086 +0000 UTC m=+91.983184381 (delta=76.346177ms)
	I0912 22:07:26.374388   31965 fix.go:200] guest clock delta is within tolerance: 76.346177ms
	I0912 22:07:26.374405   31965 start.go:83] releasing machines lock for "ha-475401", held for 1m31.972271979s
	I0912 22:07:26.374432   31965 main.go:141] libmachine: (ha-475401) Calling .DriverName
	I0912 22:07:26.374692   31965 main.go:141] libmachine: (ha-475401) Calling .GetIP
	I0912 22:07:26.377314   31965 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 22:07:26.377693   31965 main.go:141] libmachine: (ha-475401) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:0e:dd", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:09 +0000 UTC Type:0 Mac:52:54:00:b0:0e:dd Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-475401 Clientid:01:52:54:00:b0:0e:dd}
	I0912 22:07:26.377722   31965 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined IP address 192.168.39.203 and MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 22:07:26.377834   31965 main.go:141] libmachine: (ha-475401) Calling .DriverName
	I0912 22:07:26.378357   31965 main.go:141] libmachine: (ha-475401) Calling .DriverName
	I0912 22:07:26.378569   31965 main.go:141] libmachine: (ha-475401) Calling .DriverName
	I0912 22:07:26.378699   31965 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0912 22:07:26.378736   31965 main.go:141] libmachine: (ha-475401) Calling .GetSSHHostname
	I0912 22:07:26.378865   31965 ssh_runner.go:195] Run: cat /version.json
	I0912 22:07:26.378901   31965 main.go:141] libmachine: (ha-475401) Calling .GetSSHHostname
	I0912 22:07:26.381589   31965 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 22:07:26.381654   31965 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 22:07:26.382033   31965 main.go:141] libmachine: (ha-475401) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:0e:dd", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:09 +0000 UTC Type:0 Mac:52:54:00:b0:0e:dd Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-475401 Clientid:01:52:54:00:b0:0e:dd}
	I0912 22:07:26.382060   31965 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined IP address 192.168.39.203 and MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 22:07:26.382089   31965 main.go:141] libmachine: (ha-475401) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:0e:dd", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:09 +0000 UTC Type:0 Mac:52:54:00:b0:0e:dd Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-475401 Clientid:01:52:54:00:b0:0e:dd}
	I0912 22:07:26.382103   31965 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined IP address 192.168.39.203 and MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 22:07:26.382154   31965 main.go:141] libmachine: (ha-475401) Calling .GetSSHPort
	I0912 22:07:26.382336   31965 main.go:141] libmachine: (ha-475401) Calling .GetSSHKeyPath
	I0912 22:07:26.382390   31965 main.go:141] libmachine: (ha-475401) Calling .GetSSHPort
	I0912 22:07:26.382475   31965 main.go:141] libmachine: (ha-475401) Calling .GetSSHUsername
	I0912 22:07:26.382546   31965 main.go:141] libmachine: (ha-475401) Calling .GetSSHKeyPath
	I0912 22:07:26.382623   31965 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401/id_rsa Username:docker}
	I0912 22:07:26.382675   31965 main.go:141] libmachine: (ha-475401) Calling .GetSSHUsername
	I0912 22:07:26.382854   31965 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401/id_rsa Username:docker}
	I0912 22:07:26.495119   31965 ssh_runner.go:195] Run: systemctl --version
	I0912 22:07:26.501061   31965 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0912 22:07:26.664433   31965 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0912 22:07:26.669949   31965 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0912 22:07:26.670015   31965 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0912 22:07:26.679526   31965 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0912 22:07:26.679555   31965 start.go:495] detecting cgroup driver to use...
	I0912 22:07:26.679622   31965 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0912 22:07:26.698971   31965 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0912 22:07:26.717299   31965 docker.go:217] disabling cri-docker service (if available) ...
	I0912 22:07:26.717369   31965 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0912 22:07:26.732219   31965 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0912 22:07:26.746688   31965 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0912 22:07:26.919990   31965 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0912 22:07:27.084592   31965 docker.go:233] disabling docker service ...
	I0912 22:07:27.084658   31965 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0912 22:07:27.102083   31965 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0912 22:07:27.115726   31965 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0912 22:07:27.262053   31965 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0912 22:07:27.406862   31965 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0912 22:07:27.420194   31965 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0912 22:07:27.438223   31965 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0912 22:07:27.438289   31965 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 22:07:27.449221   31965 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0912 22:07:27.449305   31965 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 22:07:27.459434   31965 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 22:07:27.469427   31965 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 22:07:27.479525   31965 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0912 22:07:27.490732   31965 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 22:07:27.501287   31965 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 22:07:27.513138   31965 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 22:07:27.523454   31965 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0912 22:07:27.533717   31965 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0912 22:07:27.543750   31965 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 22:07:27.697172   31965 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0912 22:07:27.923563   31965 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0912 22:07:27.923650   31965 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0912 22:07:27.929298   31965 start.go:563] Will wait 60s for crictl version
	I0912 22:07:27.929380   31965 ssh_runner.go:195] Run: which crictl
	I0912 22:07:27.933026   31965 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0912 22:07:27.974820   31965 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0912 22:07:27.974894   31965 ssh_runner.go:195] Run: crio --version
	I0912 22:07:28.004358   31965 ssh_runner.go:195] Run: crio --version
	I0912 22:07:28.036426   31965 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0912 22:07:28.038031   31965 main.go:141] libmachine: (ha-475401) Calling .GetIP
	I0912 22:07:28.041462   31965 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 22:07:28.042024   31965 main.go:141] libmachine: (ha-475401) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:0e:dd", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:09 +0000 UTC Type:0 Mac:52:54:00:b0:0e:dd Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-475401 Clientid:01:52:54:00:b0:0e:dd}
	I0912 22:07:28.042055   31965 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined IP address 192.168.39.203 and MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 22:07:28.042354   31965 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0912 22:07:28.047267   31965 kubeadm.go:883] updating cluster {Name:ha-475401 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:ha-475401 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.203 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.222 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.113 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.76 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fre
shpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0912 22:07:28.047416   31965 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0912 22:07:28.047459   31965 ssh_runner.go:195] Run: sudo crictl images --output json
	I0912 22:07:28.096154   31965 crio.go:514] all images are preloaded for cri-o runtime.
	I0912 22:07:28.096177   31965 crio.go:433] Images already preloaded, skipping extraction
	I0912 22:07:28.096221   31965 ssh_runner.go:195] Run: sudo crictl images --output json
	I0912 22:07:28.134173   31965 crio.go:514] all images are preloaded for cri-o runtime.
	I0912 22:07:28.134194   31965 cache_images.go:84] Images are preloaded, skipping loading
	I0912 22:07:28.134203   31965 kubeadm.go:934] updating node { 192.168.39.203 8443 v1.31.1 crio true true} ...
	I0912 22:07:28.134314   31965 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-475401 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.203
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-475401 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0912 22:07:28.134383   31965 ssh_runner.go:195] Run: crio config
	I0912 22:07:28.181792   31965 cni.go:84] Creating CNI manager for ""
	I0912 22:07:28.181819   31965 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0912 22:07:28.181830   31965 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0912 22:07:28.181858   31965 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.203 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-475401 NodeName:ha-475401 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.203"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.203 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0912 22:07:28.182005   31965 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.203
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-475401"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.203
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.203"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0912 22:07:28.182035   31965 kube-vip.go:115] generating kube-vip config ...
	I0912 22:07:28.182075   31965 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0912 22:07:28.193639   31965 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0912 22:07:28.193786   31965 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0912 22:07:28.193852   31965 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0912 22:07:28.203866   31965 binaries.go:44] Found k8s binaries, skipping transfer
	I0912 22:07:28.203941   31965 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0912 22:07:28.214283   31965 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0912 22:07:28.231262   31965 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0912 22:07:28.248224   31965 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0912 22:07:28.265836   31965 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0912 22:07:28.282487   31965 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0912 22:07:28.287274   31965 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 22:07:28.436508   31965 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0912 22:07:28.451645   31965 certs.go:68] Setting up /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401 for IP: 192.168.39.203
	I0912 22:07:28.451675   31965 certs.go:194] generating shared ca certs ...
	I0912 22:07:28.451696   31965 certs.go:226] acquiring lock for ca certs: {Name:mk5e19f2c2757edc3fcb6b43a39efee0885b349c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 22:07:28.451860   31965 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.key
	I0912 22:07:28.451901   31965 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.key
	I0912 22:07:28.451909   31965 certs.go:256] generating profile certs ...
	I0912 22:07:28.451991   31965 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/client.key
	I0912 22:07:28.452018   31965 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.key.9737f01f
	I0912 22:07:28.452039   31965 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.crt.9737f01f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.203 192.168.39.222 192.168.39.113 192.168.39.254]
	I0912 22:07:28.583568   31965 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.crt.9737f01f ...
	I0912 22:07:28.583606   31965 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.crt.9737f01f: {Name:mkfee23c0cb253b22ce00c619242c3decf75e6d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 22:07:28.583870   31965 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.key.9737f01f ...
	I0912 22:07:28.583895   31965 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.key.9737f01f: {Name:mka446876fe030cddcd2d9f5b61575e77d3b6f05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 22:07:28.584005   31965 certs.go:381] copying /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.crt.9737f01f -> /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.crt
	I0912 22:07:28.584196   31965 certs.go:385] copying /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.key.9737f01f -> /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.key
	I0912 22:07:28.584452   31965 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/proxy-client.key
	I0912 22:07:28.584473   31965 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0912 22:07:28.584489   31965 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0912 22:07:28.584509   31965 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0912 22:07:28.584524   31965 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0912 22:07:28.584542   31965 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0912 22:07:28.584560   31965 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0912 22:07:28.584594   31965 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0912 22:07:28.584614   31965 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0912 22:07:28.584678   31965 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083.pem (1338 bytes)
	W0912 22:07:28.584721   31965 certs.go:480] ignoring /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083_empty.pem, impossibly tiny 0 bytes
	I0912 22:07:28.584735   31965 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem (1679 bytes)
	I0912 22:07:28.584765   31965 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem (1082 bytes)
	I0912 22:07:28.584798   31965 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem (1123 bytes)
	I0912 22:07:28.584832   31965 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem (1679 bytes)
	I0912 22:07:28.584886   31965 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem (1708 bytes)
	I0912 22:07:28.584926   31965 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0912 22:07:28.584973   31965 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083.pem -> /usr/share/ca-certificates/13083.pem
	I0912 22:07:28.584991   31965 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem -> /usr/share/ca-certificates/130832.pem
	I0912 22:07:28.585563   31965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0912 22:07:28.611042   31965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0912 22:07:28.635208   31965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0912 22:07:28.659588   31965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0912 22:07:28.683242   31965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0912 22:07:28.706777   31965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0912 22:07:28.729689   31965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0912 22:07:28.751938   31965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0912 22:07:28.775841   31965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0912 22:07:28.799649   31965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083.pem --> /usr/share/ca-certificates/13083.pem (1338 bytes)
	I0912 22:07:28.822651   31965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem --> /usr/share/ca-certificates/130832.pem (1708 bytes)
	I0912 22:07:28.851268   31965 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0912 22:07:28.933439   31965 ssh_runner.go:195] Run: openssl version
	I0912 22:07:28.962187   31965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130832.pem && ln -fs /usr/share/ca-certificates/130832.pem /etc/ssl/certs/130832.pem"
	I0912 22:07:28.975319   31965 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130832.pem
	I0912 22:07:28.998140   31965 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 12 21:47 /usr/share/ca-certificates/130832.pem
	I0912 22:07:28.998208   31965 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130832.pem
	I0912 22:07:29.020666   31965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130832.pem /etc/ssl/certs/3ec20f2e.0"
	I0912 22:07:29.049442   31965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0912 22:07:29.073325   31965 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0912 22:07:29.116004   31965 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 12 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0912 22:07:29.116064   31965 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0912 22:07:29.187294   31965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0912 22:07:29.245295   31965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13083.pem && ln -fs /usr/share/ca-certificates/13083.pem /etc/ssl/certs/13083.pem"
	I0912 22:07:29.337834   31965 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13083.pem
	I0912 22:07:29.350119   31965 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 12 21:47 /usr/share/ca-certificates/13083.pem
	I0912 22:07:29.350177   31965 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13083.pem
	I0912 22:07:29.423318   31965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13083.pem /etc/ssl/certs/51391683.0"
	I0912 22:07:29.452139   31965 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0912 22:07:29.472664   31965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0912 22:07:29.496224   31965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0912 22:07:29.524629   31965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0912 22:07:29.548007   31965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0912 22:07:29.563185   31965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0912 22:07:29.613805   31965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0912 22:07:29.647961   31965 kubeadm.go:392] StartCluster: {Name:ha-475401 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-475401 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.203 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.222 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.113 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.76 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountG
ID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 22:07:29.648066   31965 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0912 22:07:29.648129   31965 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0912 22:07:29.829809   31965 cri.go:89] found id: "1fb5957e8f938cf51ff0e9ac1f2e0af610583e907bc7937da1bb19c7af3ef6c6"
	I0912 22:07:29.829839   31965 cri.go:89] found id: "1d31b278af3adedc4eaca27db99510c99bdd7dcc10da7656a3b85767b493ae3a"
	I0912 22:07:29.829846   31965 cri.go:89] found id: "21b27af5812da51165304d6948b93ce25cffa267f34847a15febc75cb59f84b5"
	I0912 22:07:29.829851   31965 cri.go:89] found id: "e550a104b2f9042382f9e65726926c623fb8e868e373108175fc495c9dd64c8f"
	I0912 22:07:29.829855   31965 cri.go:89] found id: "b433fe13a2ac8127e75624cac8d8e0fcbfbca2ad39df047d1a05ed9ce6172dea"
	I0912 22:07:29.829860   31965 cri.go:89] found id: "9fbb04fa01cedb3e1e9ca48c8a9b7758dc67279fea5288ee919c6e0e30a20caa"
	I0912 22:07:29.829864   31965 cri.go:89] found id: "9b36db608ba8cd77ee7893c00e7e8801981eb2c1fa6b48980fbc8a3dea7306e4"
	I0912 22:07:29.829869   31965 cri.go:89] found id: "f56ac218b5509f77f667fc3bdb07a21ae743c376589c8833f500d1addfc99f73"
	I0912 22:07:29.829873   31965 cri.go:89] found id: "38d31aa5dc4105508066466c3ec1760275d6df1b5a41215ea8624bdecb7f44e8"
	I0912 22:07:29.829882   31965 cri.go:89] found id: "0891cec467fda03cc10ec8bf4db216ce7cae379bd093917e008b90cc96d90c49"
	I0912 22:07:29.829886   31965 cri.go:89] found id: "4cfa11556cf34ac2b5bb874421c929c31a0f68b70515fa122f1c3acc67b601f4"
	I0912 22:07:29.829891   31965 cri.go:89] found id: "17a4293d12cac1604693dea12017381d2df6f0c1ced577d1d846d40e66520818"
	I0912 22:07:29.829898   31965 cri.go:89] found id: "5008665ceb8c09f53ef64d7621c9910a82d94cc7e8fb4c534ff1065d8b9dc1a9"
	I0912 22:07:29.829902   31965 cri.go:89] found id: ""
	I0912 22:07:29.829953   31965 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Sep 12 22:10:02 ha-475401 crio[3513]: time="2024-09-12 22:10:02.371519807Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:812fab18c031f5fd8bfff0e990196ca5989d44088cb0dc5fd93fd55d96ff4c10,PodSandboxId:64ef09d970faafb0fb8bd1bcc9fb7ca7302e38f081079367950b4ea916860374,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726178937501742199,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fc8738b-56e8-4024-afe7-b552c79dd3f2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08d058679eafb2dbca1bc2dfb3dfe0fe416163dba6d00f6ec942f2a53bc02ae2,PodSandboxId:76c52cdf935b79bc4bf745b515ef78123f172f23b295560e637a619384c7f433,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726178891498498084,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a77994c747e48492b9028f572619aa8,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3756c86b696c4e8fd3e7463b7270af1f104f371066ce814e4ff7c11fa40d2931,PodSandboxId:c0d16f3576d89f2f7e2e22ac28226075073d90c1e1b35117d163b8eab313a6cd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726178890496072105,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 980ac58ccfb719847553bfe344364a50,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc3ce74e5d17725d1fe954be15215e92128befc599aa560249ef5604ad1e1e6d,PodSandboxId:64ef09d970faafb0fb8bd1bcc9fb7ca7302e38f081079367950b4ea916860374,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726178887495846357,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fc8738b-56e8-4024-afe7-b552c79dd3f2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95577788efd07f326614fb95b9b7ec85d31ce5ca57f5e6bed5a7620d809b53ac,PodSandboxId:3e1c4cf8137507387adc44436c321d1a886ee56c42008ad1118c5bce2c7269a6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726178882764623751,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-l2hdm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8ab651ae-e8a0-438a-8bf6-4462c8304466,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:693190f0090a91e2f1c8840523479c5ced8b6eb074af4c4251f6911304dbb2f2,PodSandboxId:4f98b6471e3d1e699ae242d853647300a4e4965bc4e74fcd3cbf108c5bc62b2e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726178864507083808,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f4f5605b5feab014ea95bd7273dc6e8,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ef34e41bb3ddb710bf398433b9169ba5f99e663f39a763a0e3afc0073f3f7c8,PodSandboxId:b4dbe4dcc4ddd72d8a798e51f1840b5b52cc4267a4a06dab9633aa48dd0f34db,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726178850012772254,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4bk97,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2af5486-4276-48a8-98ef-6fad7ae9976d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:28ed212daea64133855a7ab08f6d9fe403a58159f6a366a28ce1892a91bb17fc,PodSandboxId:e203b47f2bd01c8567213f5887a3345a9d4119656c21c922bd77571238b067fd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726178849651349978,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cbfm5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f3daaf-250f-4614-bd8d-61e8fe544c1a,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1b73b70
e8ff1b2d7f764c620ab2fee3d9de8b480a11b91bebfaca8b3b54b9c6,PodSandboxId:a2330c1240fe2de56fdec028a88591810ff0d16796a2c481def0dfafda641c66,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726178849744320251,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-pzsv8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7acde6a5-dc08-4dda-89ef-07ed97df387e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bd2f2d4b23f5227aba2f8d0b375b6980f4e8d9699dc8e0a15167b8caee35a90,PodSandboxId:559d32bfb49241aaa1d53ef26bacdf7fb8a88309a2a77189b7574e4386e80d4a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726178849515534447,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc71727dab4c45bcae218296d690a83a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21aea3da36602ff092d755b6057bc2857297c1c0a798e3e6ab1803c6d0a5eaa6,PodSandboxId:1b8277469e46c93b88795c5a6db967f6f4905d117c68ad427ef23be9455495b8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726178849531177531,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 456eb783a38fcb8ea8f7852ac4b9e481,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.
kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fb5957e8f938cf51ff0e9ac1f2e0af610583e907bc7937da1bb19c7af3ef6c6,PodSandboxId:76d52315f9785b5837eb372811a72cbe1d516b88bcfb5535af70373a67da5259,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726178849541645615,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xhdj7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d964d6f0-d544-4cef-8151-08e5e1c76dce,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"proto
col\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d31b278af3adedc4eaca27db99510c99bdd7dcc10da7656a3b85767b493ae3a,PodSandboxId:c0d16f3576d89f2f7e2e22ac28226075073d90c1e1b35117d163b8eab313a6cd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726178849360322111,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 980ac58ccfb71984
7553bfe344364a50,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21b27af5812da51165304d6948b93ce25cffa267f34847a15febc75cb59f84b5,PodSandboxId:76c52cdf935b79bc4bf745b515ef78123f172f23b295560e637a619384c7f433,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726178849284416025,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a77994c747e48492b9028f572619aa8,},Ann
otations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:607e14e475ce353a0c9320c836a95978697f03e1195ee9311626f95f6748ce11,PodSandboxId:7fe4fd6a828e2ed0ea467efedd36329caff9bec0107156b6b5ad3e033d3d6ee2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726178353036014485,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-l2hdm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8ab651ae-e8a0-438a-8bf6-4462c8304466,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b36db608ba8cd77ee7893c00e7e8801981eb2c1fa6b48980fbc8a3dea7306e4,PodSandboxId:8b265e5bc94933908af2b3710bd8e4b4b8b5b8b26929977b5d1c91118fb80c39,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726178214407294575,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xhdj7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d964d6f0-d544-4cef-8151-08e5e1c76dce,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f56ac218b5509f77f667fc3bdb07a21ae743c376589c8833f500d1addfc99f73,PodSandboxId:2fdeb0043962218a23323f08bd2bce3402618bc908240f83e1f614c312ae6edd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726178214365773691,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-pzsv8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7acde6a5-dc08-4dda-89ef-07ed97df387e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38d31aa5dc4105508066466c3ec1760275d6df1b5a41215ea8624bdecb7f44e8,PodSandboxId:ef4f45d37668b0d37bad9a63974b5000a180e5d1f5e3234d34691005d5d78c8e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726178201877273546,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cbfm5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f3daaf-250f-4614-bd8d-61e8fe544c1a,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0891cec467fda03cc10ec8bf4db216ce7cae379bd093917e008b90cc96d90c49,PodSandboxId:d58e93f3f447d46fb0688a7d4ee4eb52c19c0b36bde29b81c50d0a1c5e3d700b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726178201594672960,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4bk97,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2af5486-4276-48a8-98ef-6fad7ae9976d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5008665ceb8c09f53ef64d7621c9910a82d94cc7e8fb4c534ff1065d8b9dc1a9,PodSandboxId:e980e3980d971549e1c17972cb82f745cca7c01aad06c39efaf3dfb9b5ec0cd9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792
cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726178190273844319,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 456eb783a38fcb8ea8f7852ac4b9e481,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17a4293d12cac1604693dea12017381d2df6f0c1ced577d1d846d40e66520818,PodSandboxId:17b7717a92942308ddac497161435755ad7b877133e7375a315c4f572e019c47,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedA
t:1726178190295546985,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc71727dab4c45bcae218296d690a83a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c9ef60de-1d75-4abd-862d-91625e56a906 name=/runtime.v1.RuntimeService/ListContainers
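The wall of ListContainers output above (and the further copies below) is ordinary CRI polling: each request carries an empty ContainerFilter, so CRI-O answers with every container on ha-475401, running or exited, together with its pod, image, restart count and creation time. The following is a minimal sketch, not part of the minikube test suite, of the same query issued directly against the node's runtime; it assumes direct access to the node and the default CRI-O socket path /var/run/crio/crio.sock (neither is taken from the captured log).

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimev1 "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// CRI-O serves the CRI API on a unix socket; the path below is the
	// CRI-O default and is an assumption, not read from the log above.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	rt := runtimev1.NewRuntimeServiceClient(conn)

	// Same RPC as "/runtime.v1.RuntimeService/ListContainers" in the log:
	// an empty filter makes CRI-O return every container, running or exited.
	resp, err := rt.ListContainers(ctx, &runtimev1.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%-30s %-25s attempt=%d state=%v\n",
			c.Labels["io.kubernetes.pod.name"],
			c.Metadata.GetName(), c.Metadata.GetAttempt(), c.State)
	}
}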
	Sep 12 22:10:02 ha-475401 crio[3513]: time="2024-09-12 22:10:02.420670063Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=627c1738-4412-48b1-9061-42c88a3f67dd name=/runtime.v1.RuntimeService/Version
	Sep 12 22:10:02 ha-475401 crio[3513]: time="2024-09-12 22:10:02.420756539Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=627c1738-4412-48b1-9061-42c88a3f67dd name=/runtime.v1.RuntimeService/Version
	Sep 12 22:10:02 ha-475401 crio[3513]: time="2024-09-12 22:10:02.422167724Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7ac69627-30d8-4839-882e-b3ae60b84382 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 22:10:02 ha-475401 crio[3513]: time="2024-09-12 22:10:02.422595798Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726179002422571338,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7ac69627-30d8-4839-882e-b3ae60b84382 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 22:10:02 ha-475401 crio[3513]: time="2024-09-12 22:10:02.423250730Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=04217437-4762-4abd-8123-94f826f33a8e name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 22:10:02 ha-475401 crio[3513]: time="2024-09-12 22:10:02.423319139Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=04217437-4762-4abd-8123-94f826f33a8e name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 22:10:02 ha-475401 crio[3513]: time="2024-09-12 22:10:02.423695992Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:812fab18c031f5fd8bfff0e990196ca5989d44088cb0dc5fd93fd55d96ff4c10,PodSandboxId:64ef09d970faafb0fb8bd1bcc9fb7ca7302e38f081079367950b4ea916860374,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726178937501742199,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fc8738b-56e8-4024-afe7-b552c79dd3f2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08d058679eafb2dbca1bc2dfb3dfe0fe416163dba6d00f6ec942f2a53bc02ae2,PodSandboxId:76c52cdf935b79bc4bf745b515ef78123f172f23b295560e637a619384c7f433,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726178891498498084,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a77994c747e48492b9028f572619aa8,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3756c86b696c4e8fd3e7463b7270af1f104f371066ce814e4ff7c11fa40d2931,PodSandboxId:c0d16f3576d89f2f7e2e22ac28226075073d90c1e1b35117d163b8eab313a6cd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726178890496072105,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 980ac58ccfb719847553bfe344364a50,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc3ce74e5d17725d1fe954be15215e92128befc599aa560249ef5604ad1e1e6d,PodSandboxId:64ef09d970faafb0fb8bd1bcc9fb7ca7302e38f081079367950b4ea916860374,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726178887495846357,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fc8738b-56e8-4024-afe7-b552c79dd3f2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95577788efd07f326614fb95b9b7ec85d31ce5ca57f5e6bed5a7620d809b53ac,PodSandboxId:3e1c4cf8137507387adc44436c321d1a886ee56c42008ad1118c5bce2c7269a6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726178882764623751,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-l2hdm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8ab651ae-e8a0-438a-8bf6-4462c8304466,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:693190f0090a91e2f1c8840523479c5ced8b6eb074af4c4251f6911304dbb2f2,PodSandboxId:4f98b6471e3d1e699ae242d853647300a4e4965bc4e74fcd3cbf108c5bc62b2e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726178864507083808,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f4f5605b5feab014ea95bd7273dc6e8,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ef34e41bb3ddb710bf398433b9169ba5f99e663f39a763a0e3afc0073f3f7c8,PodSandboxId:b4dbe4dcc4ddd72d8a798e51f1840b5b52cc4267a4a06dab9633aa48dd0f34db,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726178850012772254,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4bk97,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2af5486-4276-48a8-98ef-6fad7ae9976d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:28ed212daea64133855a7ab08f6d9fe403a58159f6a366a28ce1892a91bb17fc,PodSandboxId:e203b47f2bd01c8567213f5887a3345a9d4119656c21c922bd77571238b067fd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726178849651349978,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cbfm5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f3daaf-250f-4614-bd8d-61e8fe544c1a,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1b73b70
e8ff1b2d7f764c620ab2fee3d9de8b480a11b91bebfaca8b3b54b9c6,PodSandboxId:a2330c1240fe2de56fdec028a88591810ff0d16796a2c481def0dfafda641c66,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726178849744320251,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-pzsv8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7acde6a5-dc08-4dda-89ef-07ed97df387e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bd2f2d4b23f5227aba2f8d0b375b6980f4e8d9699dc8e0a15167b8caee35a90,PodSandboxId:559d32bfb49241aaa1d53ef26bacdf7fb8a88309a2a77189b7574e4386e80d4a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726178849515534447,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc71727dab4c45bcae218296d690a83a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21aea3da36602ff092d755b6057bc2857297c1c0a798e3e6ab1803c6d0a5eaa6,PodSandboxId:1b8277469e46c93b88795c5a6db967f6f4905d117c68ad427ef23be9455495b8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726178849531177531,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 456eb783a38fcb8ea8f7852ac4b9e481,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.
kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fb5957e8f938cf51ff0e9ac1f2e0af610583e907bc7937da1bb19c7af3ef6c6,PodSandboxId:76d52315f9785b5837eb372811a72cbe1d516b88bcfb5535af70373a67da5259,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726178849541645615,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xhdj7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d964d6f0-d544-4cef-8151-08e5e1c76dce,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"proto
col\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d31b278af3adedc4eaca27db99510c99bdd7dcc10da7656a3b85767b493ae3a,PodSandboxId:c0d16f3576d89f2f7e2e22ac28226075073d90c1e1b35117d163b8eab313a6cd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726178849360322111,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 980ac58ccfb71984
7553bfe344364a50,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21b27af5812da51165304d6948b93ce25cffa267f34847a15febc75cb59f84b5,PodSandboxId:76c52cdf935b79bc4bf745b515ef78123f172f23b295560e637a619384c7f433,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726178849284416025,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a77994c747e48492b9028f572619aa8,},Ann
otations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:607e14e475ce353a0c9320c836a95978697f03e1195ee9311626f95f6748ce11,PodSandboxId:7fe4fd6a828e2ed0ea467efedd36329caff9bec0107156b6b5ad3e033d3d6ee2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726178353036014485,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-l2hdm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8ab651ae-e8a0-438a-8bf6-4462c8304466,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b36db608ba8cd77ee7893c00e7e8801981eb2c1fa6b48980fbc8a3dea7306e4,PodSandboxId:8b265e5bc94933908af2b3710bd8e4b4b8b5b8b26929977b5d1c91118fb80c39,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726178214407294575,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xhdj7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d964d6f0-d544-4cef-8151-08e5e1c76dce,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f56ac218b5509f77f667fc3bdb07a21ae743c376589c8833f500d1addfc99f73,PodSandboxId:2fdeb0043962218a23323f08bd2bce3402618bc908240f83e1f614c312ae6edd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726178214365773691,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-pzsv8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7acde6a5-dc08-4dda-89ef-07ed97df387e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38d31aa5dc4105508066466c3ec1760275d6df1b5a41215ea8624bdecb7f44e8,PodSandboxId:ef4f45d37668b0d37bad9a63974b5000a180e5d1f5e3234d34691005d5d78c8e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726178201877273546,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cbfm5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f3daaf-250f-4614-bd8d-61e8fe544c1a,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0891cec467fda03cc10ec8bf4db216ce7cae379bd093917e008b90cc96d90c49,PodSandboxId:d58e93f3f447d46fb0688a7d4ee4eb52c19c0b36bde29b81c50d0a1c5e3d700b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726178201594672960,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4bk97,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2af5486-4276-48a8-98ef-6fad7ae9976d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5008665ceb8c09f53ef64d7621c9910a82d94cc7e8fb4c534ff1065d8b9dc1a9,PodSandboxId:e980e3980d971549e1c17972cb82f745cca7c01aad06c39efaf3dfb9b5ec0cd9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792
cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726178190273844319,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 456eb783a38fcb8ea8f7852ac4b9e481,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17a4293d12cac1604693dea12017381d2df6f0c1ced577d1d846d40e66520818,PodSandboxId:17b7717a92942308ddac497161435755ad7b877133e7375a315c4f572e019c47,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedA
t:1726178190295546985,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc71727dab4c45bcae218296d690a83a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=04217437-4762-4abd-8123-94f826f33a8e name=/runtime.v1.RuntimeService/ListContainers
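Each polling cycle in this log also issues a Version and an ImageFsInfo request before listing containers; the responses report cri-o 1.29.1 and usage of the overlay-images filesystem. A companion sketch for those two RPCs follows, under the same assumptions as the listing sketch above (default CRI-O socket path, direct access to the node).

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimev1 "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Same assumed CRI-O socket as in the previous sketch.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// "/runtime.v1.RuntimeService/Version": the log above shows cri-o 1.29.1.
	rt := runtimev1.NewRuntimeServiceClient(conn)
	ver, err := rt.Version(ctx, &runtimev1.VersionRequest{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s %s (CRI %s)\n", ver.RuntimeName, ver.RuntimeVersion, ver.RuntimeApiVersion)

	// "/runtime.v1.ImageService/ImageFsInfo": usage of the image filesystem
	// (the overlay-images mountpoint in the responses above).
	img := runtimev1.NewImageServiceClient(conn)
	fs, err := img.ImageFsInfo(ctx, &runtimev1.ImageFsInfoRequest{})
	if err != nil {
		panic(err)
	}
	for _, f := range fs.ImageFilesystems {
		fmt.Printf("%s used=%d bytes inodes=%d\n",
			f.FsId.GetMountpoint(), f.UsedBytes.GetValue(), f.InodesUsed.GetValue())
	}
}

If a CLI is preferred, crictl exposes the same queries as crictl version, crictl imagefsinfo and crictl ps -a.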
	Sep 12 22:10:02 ha-475401 crio[3513]: time="2024-09-12 22:10:02.464221908Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d8d60950-02ab-4d6b-8c78-3157d2ee7af5 name=/runtime.v1.RuntimeService/Version
	Sep 12 22:10:02 ha-475401 crio[3513]: time="2024-09-12 22:10:02.464340778Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d8d60950-02ab-4d6b-8c78-3157d2ee7af5 name=/runtime.v1.RuntimeService/Version
	Sep 12 22:10:02 ha-475401 crio[3513]: time="2024-09-12 22:10:02.465503825Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cfde5064-69d2-4ed6-92be-69b763a41787 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 22:10:02 ha-475401 crio[3513]: time="2024-09-12 22:10:02.466609741Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726179002466579520,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cfde5064-69d2-4ed6-92be-69b763a41787 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 22:10:02 ha-475401 crio[3513]: time="2024-09-12 22:10:02.467580037Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=efcb9625-a3e2-44e2-9965-b8af79bb0920 name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 22:10:02 ha-475401 crio[3513]: time="2024-09-12 22:10:02.467642665Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=efcb9625-a3e2-44e2-9965-b8af79bb0920 name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 22:10:02 ha-475401 crio[3513]: time="2024-09-12 22:10:02.471891150Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:812fab18c031f5fd8bfff0e990196ca5989d44088cb0dc5fd93fd55d96ff4c10,PodSandboxId:64ef09d970faafb0fb8bd1bcc9fb7ca7302e38f081079367950b4ea916860374,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726178937501742199,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fc8738b-56e8-4024-afe7-b552c79dd3f2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08d058679eafb2dbca1bc2dfb3dfe0fe416163dba6d00f6ec942f2a53bc02ae2,PodSandboxId:76c52cdf935b79bc4bf745b515ef78123f172f23b295560e637a619384c7f433,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726178891498498084,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a77994c747e48492b9028f572619aa8,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3756c86b696c4e8fd3e7463b7270af1f104f371066ce814e4ff7c11fa40d2931,PodSandboxId:c0d16f3576d89f2f7e2e22ac28226075073d90c1e1b35117d163b8eab313a6cd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726178890496072105,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 980ac58ccfb719847553bfe344364a50,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc3ce74e5d17725d1fe954be15215e92128befc599aa560249ef5604ad1e1e6d,PodSandboxId:64ef09d970faafb0fb8bd1bcc9fb7ca7302e38f081079367950b4ea916860374,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726178887495846357,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fc8738b-56e8-4024-afe7-b552c79dd3f2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95577788efd07f326614fb95b9b7ec85d31ce5ca57f5e6bed5a7620d809b53ac,PodSandboxId:3e1c4cf8137507387adc44436c321d1a886ee56c42008ad1118c5bce2c7269a6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726178882764623751,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-l2hdm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8ab651ae-e8a0-438a-8bf6-4462c8304466,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:693190f0090a91e2f1c8840523479c5ced8b6eb074af4c4251f6911304dbb2f2,PodSandboxId:4f98b6471e3d1e699ae242d853647300a4e4965bc4e74fcd3cbf108c5bc62b2e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726178864507083808,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f4f5605b5feab014ea95bd7273dc6e8,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ef34e41bb3ddb710bf398433b9169ba5f99e663f39a763a0e3afc0073f3f7c8,PodSandboxId:b4dbe4dcc4ddd72d8a798e51f1840b5b52cc4267a4a06dab9633aa48dd0f34db,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726178850012772254,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4bk97,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2af5486-4276-48a8-98ef-6fad7ae9976d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:28ed212daea64133855a7ab08f6d9fe403a58159f6a366a28ce1892a91bb17fc,PodSandboxId:e203b47f2bd01c8567213f5887a3345a9d4119656c21c922bd77571238b067fd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726178849651349978,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cbfm5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f3daaf-250f-4614-bd8d-61e8fe544c1a,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1b73b70
e8ff1b2d7f764c620ab2fee3d9de8b480a11b91bebfaca8b3b54b9c6,PodSandboxId:a2330c1240fe2de56fdec028a88591810ff0d16796a2c481def0dfafda641c66,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726178849744320251,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-pzsv8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7acde6a5-dc08-4dda-89ef-07ed97df387e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bd2f2d4b23f5227aba2f8d0b375b6980f4e8d9699dc8e0a15167b8caee35a90,PodSandboxId:559d32bfb49241aaa1d53ef26bacdf7fb8a88309a2a77189b7574e4386e80d4a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726178849515534447,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc71727dab4c45bcae218296d690a83a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21aea3da36602ff092d755b6057bc2857297c1c0a798e3e6ab1803c6d0a5eaa6,PodSandboxId:1b8277469e46c93b88795c5a6db967f6f4905d117c68ad427ef23be9455495b8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726178849531177531,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 456eb783a38fcb8ea8f7852ac4b9e481,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.
kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fb5957e8f938cf51ff0e9ac1f2e0af610583e907bc7937da1bb19c7af3ef6c6,PodSandboxId:76d52315f9785b5837eb372811a72cbe1d516b88bcfb5535af70373a67da5259,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726178849541645615,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xhdj7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d964d6f0-d544-4cef-8151-08e5e1c76dce,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"proto
col\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d31b278af3adedc4eaca27db99510c99bdd7dcc10da7656a3b85767b493ae3a,PodSandboxId:c0d16f3576d89f2f7e2e22ac28226075073d90c1e1b35117d163b8eab313a6cd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726178849360322111,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 980ac58ccfb71984
7553bfe344364a50,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21b27af5812da51165304d6948b93ce25cffa267f34847a15febc75cb59f84b5,PodSandboxId:76c52cdf935b79bc4bf745b515ef78123f172f23b295560e637a619384c7f433,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726178849284416025,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a77994c747e48492b9028f572619aa8,},Ann
otations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:607e14e475ce353a0c9320c836a95978697f03e1195ee9311626f95f6748ce11,PodSandboxId:7fe4fd6a828e2ed0ea467efedd36329caff9bec0107156b6b5ad3e033d3d6ee2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726178353036014485,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-l2hdm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8ab651ae-e8a0-438a-8bf6-4462c8304466,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b36db608ba8cd77ee7893c00e7e8801981eb2c1fa6b48980fbc8a3dea7306e4,PodSandboxId:8b265e5bc94933908af2b3710bd8e4b4b8b5b8b26929977b5d1c91118fb80c39,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726178214407294575,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xhdj7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d964d6f0-d544-4cef-8151-08e5e1c76dce,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f56ac218b5509f77f667fc3bdb07a21ae743c376589c8833f500d1addfc99f73,PodSandboxId:2fdeb0043962218a23323f08bd2bce3402618bc908240f83e1f614c312ae6edd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726178214365773691,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-pzsv8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7acde6a5-dc08-4dda-89ef-07ed97df387e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38d31aa5dc4105508066466c3ec1760275d6df1b5a41215ea8624bdecb7f44e8,PodSandboxId:ef4f45d37668b0d37bad9a63974b5000a180e5d1f5e3234d34691005d5d78c8e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726178201877273546,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cbfm5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f3daaf-250f-4614-bd8d-61e8fe544c1a,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0891cec467fda03cc10ec8bf4db216ce7cae379bd093917e008b90cc96d90c49,PodSandboxId:d58e93f3f447d46fb0688a7d4ee4eb52c19c0b36bde29b81c50d0a1c5e3d700b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726178201594672960,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4bk97,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2af5486-4276-48a8-98ef-6fad7ae9976d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5008665ceb8c09f53ef64d7621c9910a82d94cc7e8fb4c534ff1065d8b9dc1a9,PodSandboxId:e980e3980d971549e1c17972cb82f745cca7c01aad06c39efaf3dfb9b5ec0cd9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792
cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726178190273844319,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 456eb783a38fcb8ea8f7852ac4b9e481,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17a4293d12cac1604693dea12017381d2df6f0c1ced577d1d846d40e66520818,PodSandboxId:17b7717a92942308ddac497161435755ad7b877133e7375a315c4f572e019c47,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedA
t:1726178190295546985,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc71727dab4c45bcae218296d690a83a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=efcb9625-a3e2-44e2-9965-b8af79bb0920 name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 22:10:02 ha-475401 crio[3513]: time="2024-09-12 22:10:02.484335927Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0b4dae23-225d-4bd8-99d1-5b5c51f17dea name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 12 22:10:02 ha-475401 crio[3513]: time="2024-09-12 22:10:02.484582352Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:3e1c4cf8137507387adc44436c321d1a886ee56c42008ad1118c5bce2c7269a6,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-l2hdm,Uid:8ab651ae-e8a0-438a-8bf6-4462c8304466,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726178882637604766,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-l2hdm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8ab651ae-e8a0-438a-8bf6-4462c8304466,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-12T21:59:09.652945962Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4f98b6471e3d1e699ae242d853647300a4e4965bc4e74fcd3cbf108c5bc62b2e,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-475401,Uid:8f4f5605b5feab014ea95bd7273dc6e8,Namespace:kube-system,Attempt:0,},State:SANDBOX_RE
ADY,CreatedAt:1726178864411962431,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f4f5605b5feab014ea95bd7273dc6e8,},Annotations:map[string]string{kubernetes.io/config.hash: 8f4f5605b5feab014ea95bd7273dc6e8,kubernetes.io/config.seen: 2024-09-12T22:07:28.243556914Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a2330c1240fe2de56fdec028a88591810ff0d16796a2c481def0dfafda641c66,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-pzsv8,Uid:7acde6a5-dc08-4dda-89ef-07ed97df387e,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726178849004710236,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-pzsv8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7acde6a5-dc08-4dda-89ef-07ed97df387e,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09
-12T21:56:52.959466832Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:76d52315f9785b5837eb372811a72cbe1d516b88bcfb5535af70373a67da5259,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-xhdj7,Uid:d964d6f0-d544-4cef-8151-08e5e1c76dce,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726178849001156880,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-xhdj7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d964d6f0-d544-4cef-8151-08e5e1c76dce,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-12T21:56:52.965572808Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b4dbe4dcc4ddd72d8a798e51f1840b5b52cc4267a4a06dab9633aa48dd0f34db,Metadata:&PodSandboxMetadata{Name:kube-proxy-4bk97,Uid:a2af5486-4276-48a8-98ef-6fad7ae9976d,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726178848985645365,Labels:map[string]string{co
ntroller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-4bk97,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2af5486-4276-48a8-98ef-6fad7ae9976d,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-12T21:56:41.169316322Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:559d32bfb49241aaa1d53ef26bacdf7fb8a88309a2a77189b7574e4386e80d4a,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-475401,Uid:dc71727dab4c45bcae218296d690a83a,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726178848959348348,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc71727dab4c45bcae218296d690a83a,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: dc71727dab4c45bcae218296d690a83a,kubernetes.io/config
.seen: 2024-09-12T21:56:36.456630592Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c0d16f3576d89f2f7e2e22ac28226075073d90c1e1b35117d163b8eab313a6cd,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-475401,Uid:980ac58ccfb719847553bfe344364a50,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726178848952124878,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 980ac58ccfb719847553bfe344364a50,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 980ac58ccfb719847553bfe344364a50,kubernetes.io/config.seen: 2024-09-12T21:56:36.456637908Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:64ef09d970faafb0fb8bd1bcc9fb7ca7302e38f081079367950b4ea916860374,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:7fc8738b-56e8-4024-afe7-b552c79dd3f2,Namespace:kube-
system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726178848937937177,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fc8738b-56e8-4024-afe7-b552c79dd3f2,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hos
tPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-12T21:56:52.968730435Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1b8277469e46c93b88795c5a6db967f6f4905d117c68ad427ef23be9455495b8,Metadata:&PodSandboxMetadata{Name:etcd-ha-475401,Uid:456eb783a38fcb8ea8f7852ac4b9e481,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726178848933011807,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 456eb783a38fcb8ea8f7852ac4b9e481,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.203:2379,kubernetes.io/config.hash: 456eb783a38fcb8ea8f7852ac4b9e481,kubernetes.io/config.seen: 2024-09-12T21:56:36.456635522Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e203b47f2bd01c8567213f5887a3345a9d4119656c21c922bd77571238b06
7fd,Metadata:&PodSandboxMetadata{Name:kindnet-cbfm5,Uid:e0f3daaf-250f-4614-bd8d-61e8fe544c1a,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726178848915259604,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-cbfm5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f3daaf-250f-4614-bd8d-61e8fe544c1a,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-12T21:56:41.193359736Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:76c52cdf935b79bc4bf745b515ef78123f172f23b295560e637a619384c7f433,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-475401,Uid:6a77994c747e48492b9028f572619aa8,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726178848905487532,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-475401,io.kube
rnetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a77994c747e48492b9028f572619aa8,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.203:8443,kubernetes.io/config.hash: 6a77994c747e48492b9028f572619aa8,kubernetes.io/config.seen: 2024-09-12T21:56:36.456636946Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=0b4dae23-225d-4bd8-99d1-5b5c51f17dea name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 12 22:10:02 ha-475401 crio[3513]: time="2024-09-12 22:10:02.485468004Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=046a53d3-b9e4-41ec-87b4-6159b959f31c name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 22:10:02 ha-475401 crio[3513]: time="2024-09-12 22:10:02.485543160Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=046a53d3-b9e4-41ec-87b4-6159b959f31c name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 22:10:02 ha-475401 crio[3513]: time="2024-09-12 22:10:02.485771004Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:812fab18c031f5fd8bfff0e990196ca5989d44088cb0dc5fd93fd55d96ff4c10,PodSandboxId:64ef09d970faafb0fb8bd1bcc9fb7ca7302e38f081079367950b4ea916860374,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726178937501742199,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fc8738b-56e8-4024-afe7-b552c79dd3f2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08d058679eafb2dbca1bc2dfb3dfe0fe416163dba6d00f6ec942f2a53bc02ae2,PodSandboxId:76c52cdf935b79bc4bf745b515ef78123f172f23b295560e637a619384c7f433,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726178891498498084,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a77994c747e48492b9028f572619aa8,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3756c86b696c4e8fd3e7463b7270af1f104f371066ce814e4ff7c11fa40d2931,PodSandboxId:c0d16f3576d89f2f7e2e22ac28226075073d90c1e1b35117d163b8eab313a6cd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726178890496072105,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 980ac58ccfb719847553bfe344364a50,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95577788efd07f326614fb95b9b7ec85d31ce5ca57f5e6bed5a7620d809b53ac,PodSandboxId:3e1c4cf8137507387adc44436c321d1a886ee56c42008ad1118c5bce2c7269a6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726178882764623751,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-l2hdm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8ab651ae-e8a0-438a-8bf6-4462c8304466,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:693190f0090a91e2f1c8840523479c5ced8b6eb074af4c4251f6911304dbb2f2,PodSandboxId:4f98b6471e3d1e699ae242d853647300a4e4965bc4e74fcd3cbf108c5bc62b2e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726178864507083808,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f4f5605b5feab014ea95bd7273dc6e8,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ef34e41bb3ddb710bf398433b9169ba5f99e663f39a763a0e3afc0073f3f7c8,PodSandboxId:b4dbe4dcc4ddd72d8a798e51f1840b5b52cc4267a4a06dab9633aa48dd0f34db,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726178850012772254,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4bk97,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2af5486-4276-48a8-98ef-6fad7ae9976d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termin
ationGracePeriod: 30,},},&Container{Id:28ed212daea64133855a7ab08f6d9fe403a58159f6a366a28ce1892a91bb17fc,PodSandboxId:e203b47f2bd01c8567213f5887a3345a9d4119656c21c922bd77571238b067fd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726178849651349978,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cbfm5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f3daaf-250f-4614-bd8d-61e8fe544c1a,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:d1b73b70e8ff1b2d7f764c620ab2fee3d9de8b480a11b91bebfaca8b3b54b9c6,PodSandboxId:a2330c1240fe2de56fdec028a88591810ff0d16796a2c481def0dfafda641c66,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726178849744320251,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-pzsv8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7acde6a5-dc08-4dda-89ef-07ed97df387e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bd2f2d4b23f5227aba2f8d0b375b6980f4e8d9699dc8e0a15167b8caee35a90,PodSandboxId:559d32bfb49241aaa1d53ef26bacdf7fb8a88309a2a77189b7574e4386e80d4a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726178849515534447,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc71727dab4c45bcae218296d690a83a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21aea3da36602ff092d755b6057bc2857297c1c0a798e3e6ab1803c6d0a5eaa6,PodSandboxId:1b8277469e46c93b88795c5a6db967f6f4905d117c68ad427ef23be9455495b8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726178849531177531,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 456eb783a38fcb8ea8f7852ac4b9e481,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fb5957e8f938cf51ff0e9ac1f2e0af610583e907bc7937da1bb19c7af3ef6c6,PodSandboxId:76d52315f9785b5837eb372811a72cbe1d516b88bcfb5535af70373a67da5259,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726178849541645615,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xhdj7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d964d6f0-d544-4cef-8151-08e5e1c76dce,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort
\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=046a53d3-b9e4-41ec-87b4-6159b959f31c name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 22:10:02 ha-475401 crio[3513]: time="2024-09-12 22:10:02.604041996Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=4c73510f-c2c6-427b-ae2a-b7d5764f96a2 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 12 22:10:02 ha-475401 crio[3513]: time="2024-09-12 22:10:02.604440133Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:3e1c4cf8137507387adc44436c321d1a886ee56c42008ad1118c5bce2c7269a6,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-l2hdm,Uid:8ab651ae-e8a0-438a-8bf6-4462c8304466,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726178882637604766,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-l2hdm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8ab651ae-e8a0-438a-8bf6-4462c8304466,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-12T21:59:09.652945962Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4f98b6471e3d1e699ae242d853647300a4e4965bc4e74fcd3cbf108c5bc62b2e,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-475401,Uid:8f4f5605b5feab014ea95bd7273dc6e8,Namespace:kube-system,Attempt:0,},State:SANDBOX_RE
ADY,CreatedAt:1726178864411962431,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f4f5605b5feab014ea95bd7273dc6e8,},Annotations:map[string]string{kubernetes.io/config.hash: 8f4f5605b5feab014ea95bd7273dc6e8,kubernetes.io/config.seen: 2024-09-12T22:07:28.243556914Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a2330c1240fe2de56fdec028a88591810ff0d16796a2c481def0dfafda641c66,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-pzsv8,Uid:7acde6a5-dc08-4dda-89ef-07ed97df387e,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726178849004710236,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-pzsv8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7acde6a5-dc08-4dda-89ef-07ed97df387e,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09
-12T21:56:52.959466832Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:76d52315f9785b5837eb372811a72cbe1d516b88bcfb5535af70373a67da5259,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-xhdj7,Uid:d964d6f0-d544-4cef-8151-08e5e1c76dce,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726178849001156880,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-xhdj7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d964d6f0-d544-4cef-8151-08e5e1c76dce,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-12T21:56:52.965572808Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b4dbe4dcc4ddd72d8a798e51f1840b5b52cc4267a4a06dab9633aa48dd0f34db,Metadata:&PodSandboxMetadata{Name:kube-proxy-4bk97,Uid:a2af5486-4276-48a8-98ef-6fad7ae9976d,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726178848985645365,Labels:map[string]string{co
ntroller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-4bk97,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2af5486-4276-48a8-98ef-6fad7ae9976d,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-12T21:56:41.169316322Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:559d32bfb49241aaa1d53ef26bacdf7fb8a88309a2a77189b7574e4386e80d4a,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-475401,Uid:dc71727dab4c45bcae218296d690a83a,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726178848959348348,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc71727dab4c45bcae218296d690a83a,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: dc71727dab4c45bcae218296d690a83a,kubernetes.io/config
.seen: 2024-09-12T21:56:36.456630592Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c0d16f3576d89f2f7e2e22ac28226075073d90c1e1b35117d163b8eab313a6cd,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-475401,Uid:980ac58ccfb719847553bfe344364a50,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726178848952124878,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 980ac58ccfb719847553bfe344364a50,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 980ac58ccfb719847553bfe344364a50,kubernetes.io/config.seen: 2024-09-12T21:56:36.456637908Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:64ef09d970faafb0fb8bd1bcc9fb7ca7302e38f081079367950b4ea916860374,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:7fc8738b-56e8-4024-afe7-b552c79dd3f2,Namespace:kube-
system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726178848937937177,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fc8738b-56e8-4024-afe7-b552c79dd3f2,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hos
tPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-12T21:56:52.968730435Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1b8277469e46c93b88795c5a6db967f6f4905d117c68ad427ef23be9455495b8,Metadata:&PodSandboxMetadata{Name:etcd-ha-475401,Uid:456eb783a38fcb8ea8f7852ac4b9e481,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726178848933011807,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 456eb783a38fcb8ea8f7852ac4b9e481,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.203:2379,kubernetes.io/config.hash: 456eb783a38fcb8ea8f7852ac4b9e481,kubernetes.io/config.seen: 2024-09-12T21:56:36.456635522Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e203b47f2bd01c8567213f5887a3345a9d4119656c21c922bd77571238b06
7fd,Metadata:&PodSandboxMetadata{Name:kindnet-cbfm5,Uid:e0f3daaf-250f-4614-bd8d-61e8fe544c1a,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726178848915259604,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-cbfm5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f3daaf-250f-4614-bd8d-61e8fe544c1a,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-12T21:56:41.193359736Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:76c52cdf935b79bc4bf745b515ef78123f172f23b295560e637a619384c7f433,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-475401,Uid:6a77994c747e48492b9028f572619aa8,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726178848905487532,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-475401,io.kube
rnetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a77994c747e48492b9028f572619aa8,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.203:8443,kubernetes.io/config.hash: 6a77994c747e48492b9028f572619aa8,kubernetes.io/config.seen: 2024-09-12T21:56:36.456636946Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:7fe4fd6a828e2ed0ea467efedd36329caff9bec0107156b6b5ad3e033d3d6ee2,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-l2hdm,Uid:8ab651ae-e8a0-438a-8bf6-4462c8304466,Namespace:default,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726178349973174937,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-l2hdm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8ab651ae-e8a0-438a-8bf6-4462c8304466,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-12T21:59:09.652945962Z,kubernetes.io/config.source
: api,},RuntimeHandler:,},&PodSandbox{Id:8b265e5bc94933908af2b3710bd8e4b4b8b5b8b26929977b5d1c91118fb80c39,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-xhdj7,Uid:d964d6f0-d544-4cef-8151-08e5e1c76dce,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726178214172601414,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-xhdj7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d964d6f0-d544-4cef-8151-08e5e1c76dce,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-12T21:56:52.965572808Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2fdeb0043962218a23323f08bd2bce3402618bc908240f83e1f614c312ae6edd,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-pzsv8,Uid:7acde6a5-dc08-4dda-89ef-07ed97df387e,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726178214165828617,Labels:map[string]string{io.kubernetes.container.name: POD,io.ku
bernetes.pod.name: coredns-7c65d6cfc9-pzsv8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7acde6a5-dc08-4dda-89ef-07ed97df387e,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-12T21:56:52.959466832Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ef4f45d37668b0d37bad9a63974b5000a180e5d1f5e3234d34691005d5d78c8e,Metadata:&PodSandboxMetadata{Name:kindnet-cbfm5,Uid:e0f3daaf-250f-4614-bd8d-61e8fe544c1a,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726178201506933282,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-cbfm5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f3daaf-250f-4614-bd8d-61e8fe544c1a,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-12T21:56:41.193359736Z,kubernetes.io/config.source: api,},Runt
imeHandler:,},&PodSandbox{Id:d58e93f3f447d46fb0688a7d4ee4eb52c19c0b36bde29b81c50d0a1c5e3d700b,Metadata:&PodSandboxMetadata{Name:kube-proxy-4bk97,Uid:a2af5486-4276-48a8-98ef-6fad7ae9976d,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726178201480986781,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-4bk97,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2af5486-4276-48a8-98ef-6fad7ae9976d,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-12T21:56:41.169316322Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e980e3980d971549e1c17972cb82f745cca7c01aad06c39efaf3dfb9b5ec0cd9,Metadata:&PodSandboxMetadata{Name:etcd-ha-475401,Uid:456eb783a38fcb8ea8f7852ac4b9e481,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726178190103684920,Labels:map[string]string{component: etcd,io.kubernetes.container.name:
POD,io.kubernetes.pod.name: etcd-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 456eb783a38fcb8ea8f7852ac4b9e481,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.203:2379,kubernetes.io/config.hash: 456eb783a38fcb8ea8f7852ac4b9e481,kubernetes.io/config.seen: 2024-09-12T21:56:29.620494346Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:17b7717a92942308ddac497161435755ad7b877133e7375a315c4f572e019c47,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-475401,Uid:dc71727dab4c45bcae218296d690a83a,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726178190085057134,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc71727dab4c45bcae218296d690a83a,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: dc71727d
ab4c45bcae218296d690a83a,kubernetes.io/config.seen: 2024-09-12T21:56:29.620491290Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=4c73510f-c2c6-427b-ae2a-b7d5764f96a2 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 12 22:10:02 ha-475401 crio[3513]: time="2024-09-12 22:10:02.605345840Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=99058fc7-37a7-4a32-bb94-c6c3b14592c1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 22:10:02 ha-475401 crio[3513]: time="2024-09-12 22:10:02.605407096Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=99058fc7-37a7-4a32-bb94-c6c3b14592c1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 22:10:02 ha-475401 crio[3513]: time="2024-09-12 22:10:02.606206576Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:812fab18c031f5fd8bfff0e990196ca5989d44088cb0dc5fd93fd55d96ff4c10,PodSandboxId:64ef09d970faafb0fb8bd1bcc9fb7ca7302e38f081079367950b4ea916860374,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726178937501742199,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fc8738b-56e8-4024-afe7-b552c79dd3f2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08d058679eafb2dbca1bc2dfb3dfe0fe416163dba6d00f6ec942f2a53bc02ae2,PodSandboxId:76c52cdf935b79bc4bf745b515ef78123f172f23b295560e637a619384c7f433,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726178891498498084,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a77994c747e48492b9028f572619aa8,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3756c86b696c4e8fd3e7463b7270af1f104f371066ce814e4ff7c11fa40d2931,PodSandboxId:c0d16f3576d89f2f7e2e22ac28226075073d90c1e1b35117d163b8eab313a6cd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726178890496072105,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 980ac58ccfb719847553bfe344364a50,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc3ce74e5d17725d1fe954be15215e92128befc599aa560249ef5604ad1e1e6d,PodSandboxId:64ef09d970faafb0fb8bd1bcc9fb7ca7302e38f081079367950b4ea916860374,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726178887495846357,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fc8738b-56e8-4024-afe7-b552c79dd3f2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95577788efd07f326614fb95b9b7ec85d31ce5ca57f5e6bed5a7620d809b53ac,PodSandboxId:3e1c4cf8137507387adc44436c321d1a886ee56c42008ad1118c5bce2c7269a6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726178882764623751,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-l2hdm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8ab651ae-e8a0-438a-8bf6-4462c8304466,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:693190f0090a91e2f1c8840523479c5ced8b6eb074af4c4251f6911304dbb2f2,PodSandboxId:4f98b6471e3d1e699ae242d853647300a4e4965bc4e74fcd3cbf108c5bc62b2e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726178864507083808,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f4f5605b5feab014ea95bd7273dc6e8,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ef34e41bb3ddb710bf398433b9169ba5f99e663f39a763a0e3afc0073f3f7c8,PodSandboxId:b4dbe4dcc4ddd72d8a798e51f1840b5b52cc4267a4a06dab9633aa48dd0f34db,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726178850012772254,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4bk97,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2af5486-4276-48a8-98ef-6fad7ae9976d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:28ed212daea64133855a7ab08f6d9fe403a58159f6a366a28ce1892a91bb17fc,PodSandboxId:e203b47f2bd01c8567213f5887a3345a9d4119656c21c922bd77571238b067fd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726178849651349978,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cbfm5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f3daaf-250f-4614-bd8d-61e8fe544c1a,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1b73b70
e8ff1b2d7f764c620ab2fee3d9de8b480a11b91bebfaca8b3b54b9c6,PodSandboxId:a2330c1240fe2de56fdec028a88591810ff0d16796a2c481def0dfafda641c66,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726178849744320251,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-pzsv8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7acde6a5-dc08-4dda-89ef-07ed97df387e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bd2f2d4b23f5227aba2f8d0b375b6980f4e8d9699dc8e0a15167b8caee35a90,PodSandboxId:559d32bfb49241aaa1d53ef26bacdf7fb8a88309a2a77189b7574e4386e80d4a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726178849515534447,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc71727dab4c45bcae218296d690a83a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21aea3da36602ff092d755b6057bc2857297c1c0a798e3e6ab1803c6d0a5eaa6,PodSandboxId:1b8277469e46c93b88795c5a6db967f6f4905d117c68ad427ef23be9455495b8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726178849531177531,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 456eb783a38fcb8ea8f7852ac4b9e481,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.
kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fb5957e8f938cf51ff0e9ac1f2e0af610583e907bc7937da1bb19c7af3ef6c6,PodSandboxId:76d52315f9785b5837eb372811a72cbe1d516b88bcfb5535af70373a67da5259,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726178849541645615,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xhdj7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d964d6f0-d544-4cef-8151-08e5e1c76dce,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"proto
col\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d31b278af3adedc4eaca27db99510c99bdd7dcc10da7656a3b85767b493ae3a,PodSandboxId:c0d16f3576d89f2f7e2e22ac28226075073d90c1e1b35117d163b8eab313a6cd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726178849360322111,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 980ac58ccfb71984
7553bfe344364a50,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21b27af5812da51165304d6948b93ce25cffa267f34847a15febc75cb59f84b5,PodSandboxId:76c52cdf935b79bc4bf745b515ef78123f172f23b295560e637a619384c7f433,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726178849284416025,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a77994c747e48492b9028f572619aa8,},Ann
otations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:607e14e475ce353a0c9320c836a95978697f03e1195ee9311626f95f6748ce11,PodSandboxId:7fe4fd6a828e2ed0ea467efedd36329caff9bec0107156b6b5ad3e033d3d6ee2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726178353036014485,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-l2hdm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8ab651ae-e8a0-438a-8bf6-4462c8304466,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b36db608ba8cd77ee7893c00e7e8801981eb2c1fa6b48980fbc8a3dea7306e4,PodSandboxId:8b265e5bc94933908af2b3710bd8e4b4b8b5b8b26929977b5d1c91118fb80c39,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726178214407294575,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xhdj7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d964d6f0-d544-4cef-8151-08e5e1c76dce,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f56ac218b5509f77f667fc3bdb07a21ae743c376589c8833f500d1addfc99f73,PodSandboxId:2fdeb0043962218a23323f08bd2bce3402618bc908240f83e1f614c312ae6edd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726178214365773691,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-pzsv8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7acde6a5-dc08-4dda-89ef-07ed97df387e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38d31aa5dc4105508066466c3ec1760275d6df1b5a41215ea8624bdecb7f44e8,PodSandboxId:ef4f45d37668b0d37bad9a63974b5000a180e5d1f5e3234d34691005d5d78c8e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726178201877273546,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cbfm5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f3daaf-250f-4614-bd8d-61e8fe544c1a,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0891cec467fda03cc10ec8bf4db216ce7cae379bd093917e008b90cc96d90c49,PodSandboxId:d58e93f3f447d46fb0688a7d4ee4eb52c19c0b36bde29b81c50d0a1c5e3d700b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726178201594672960,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4bk97,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2af5486-4276-48a8-98ef-6fad7ae9976d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5008665ceb8c09f53ef64d7621c9910a82d94cc7e8fb4c534ff1065d8b9dc1a9,PodSandboxId:e980e3980d971549e1c17972cb82f745cca7c01aad06c39efaf3dfb9b5ec0cd9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792
cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726178190273844319,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 456eb783a38fcb8ea8f7852ac4b9e481,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17a4293d12cac1604693dea12017381d2df6f0c1ced577d1d846d40e66520818,PodSandboxId:17b7717a92942308ddac497161435755ad7b877133e7375a315c4f572e019c47,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedA
t:1726178190295546985,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc71727dab4c45bcae218296d690a83a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=99058fc7-37a7-4a32-bb94-c6c3b14592c1 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	812fab18c031f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       4                   64ef09d970faa       storage-provisioner
	08d058679eafb       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      About a minute ago   Running             kube-apiserver            3                   76c52cdf935b7       kube-apiserver-ha-475401
	3756c86b696c4       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      About a minute ago   Running             kube-controller-manager   2                   c0d16f3576d89       kube-controller-manager-ha-475401
	bc3ce74e5d177       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Exited              storage-provisioner       3                   64ef09d970faa       storage-provisioner
	95577788efd07       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   3e1c4cf813750       busybox-7dff88458-l2hdm
	693190f0090a9       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      2 minutes ago        Running             kube-vip                  0                   4f98b6471e3d1       kube-vip-ha-475401
	3ef34e41bb3dd       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      2 minutes ago        Running             kube-proxy                1                   b4dbe4dcc4ddd       kube-proxy-4bk97
	d1b73b70e8ff1       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      2 minutes ago        Running             coredns                   1                   a2330c1240fe2       coredns-7c65d6cfc9-pzsv8
	28ed212daea64       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      2 minutes ago        Running             kindnet-cni               1                   e203b47f2bd01       kindnet-cbfm5
	1fb5957e8f938       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      2 minutes ago        Running             coredns                   1                   76d52315f9785       coredns-7c65d6cfc9-xhdj7
	21aea3da36602       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      2 minutes ago        Running             etcd                      1                   1b8277469e46c       etcd-ha-475401
	7bd2f2d4b23f5       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      2 minutes ago        Running             kube-scheduler            1                   559d32bfb4924       kube-scheduler-ha-475401
	1d31b278af3ad       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      2 minutes ago        Exited              kube-controller-manager   1                   c0d16f3576d89       kube-controller-manager-ha-475401
	21b27af5812da       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      2 minutes ago        Exited              kube-apiserver            2                   76c52cdf935b7       kube-apiserver-ha-475401
	607e14e475ce3       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   10 minutes ago       Exited              busybox                   0                   7fe4fd6a828e2       busybox-7dff88458-l2hdm
	9b36db608ba8c       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      13 minutes ago       Exited              coredns                   0                   8b265e5bc9493       coredns-7c65d6cfc9-xhdj7
	f56ac218b5509       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      13 minutes ago       Exited              coredns                   0                   2fdeb00439622       coredns-7c65d6cfc9-pzsv8
	38d31aa5dc410       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      13 minutes ago       Exited              kindnet-cni               0                   ef4f45d37668b       kindnet-cbfm5
	0891cec467fda       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      13 minutes ago       Exited              kube-proxy                0                   d58e93f3f447d       kube-proxy-4bk97
	17a4293d12cac       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      13 minutes ago       Exited              kube-scheduler            0                   17b7717a92942       kube-scheduler-ha-475401
	5008665ceb8c0       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      13 minutes ago       Exited              etcd                      0                   e980e3980d971       etcd-ha-475401
	
	
	==> coredns [1fb5957e8f938cf51ff0e9ac1f2e0af610583e907bc7937da1bb19c7af3ef6c6] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[843388788]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (12-Sep-2024 22:07:38.628) (total time: 10002ms):
	Trace[843388788]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (22:07:48.630)
	Trace[843388788]: [10.00200247s] [10.00200247s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:57008->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:57008->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [9b36db608ba8cd77ee7893c00e7e8801981eb2c1fa6b48980fbc8a3dea7306e4] <==
	[INFO] 10.244.0.4:58355 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001670657s
	[INFO] 10.244.0.4:38422 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000110468s
	[INFO] 10.244.1.2:46631 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000172109s
	[INFO] 10.244.1.2:34300 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000148188s
	[INFO] 10.244.1.2:48603 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001490904s
	[INFO] 10.244.1.2:53797 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000095174s
	[INFO] 10.244.3.2:58169 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000290075s
	[INFO] 10.244.3.2:32925 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000114361s
	[INFO] 10.244.0.4:36730 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135132s
	[INFO] 10.244.0.4:34478 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000076546s
	[INFO] 10.244.1.2:55703 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000157241s
	[INFO] 10.244.1.2:60121 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000228732s
	[INFO] 10.244.1.2:38242 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000131949s
	[INFO] 10.244.3.2:38185 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132157s
	[INFO] 10.244.3.2:36830 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000264113s
	[INFO] 10.244.3.2:49645 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000196302s
	[INFO] 10.244.0.4:60935 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000119291s
	[INFO] 10.244.1.2:60943 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000082071s
	[INFO] 10.244.1.2:49207 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00009839s
	[INFO] 10.244.1.2:41020 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000060198s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces)
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [d1b73b70e8ff1b2d7f764c620ab2fee3d9de8b480a11b91bebfaca8b3b54b9c6] <==
	Trace[450409556]: [10.000924858s] [10.000924858s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:39152->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[806161496]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (12-Sep-2024 22:07:41.266) (total time: 10913ms):
	Trace[806161496]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:39152->10.96.0.1:443: read: connection reset by peer 10913ms (22:07:52.180)
	Trace[806161496]: [10.913846394s] [10.913846394s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:39152->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:41700->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:41700->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [f56ac218b5509f77f667fc3bdb07a21ae743c376589c8833f500d1addfc99f73] <==
	[INFO] 10.244.3.2:57228 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000229422s
	[INFO] 10.244.0.4:42574 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00013812s
	[INFO] 10.244.0.4:39901 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001988121s
	[INFO] 10.244.0.4:50914 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00026063s
	[INFO] 10.244.0.4:38018 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000084673s
	[INFO] 10.244.0.4:49421 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000097844s
	[INFO] 10.244.1.2:35174 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000112144s
	[INFO] 10.244.1.2:45641 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001742655s
	[INFO] 10.244.1.2:42943 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000126184s
	[INFO] 10.244.1.2:48539 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000090774s
	[INFO] 10.244.3.2:42645 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115681s
	[INFO] 10.244.3.2:42854 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000129882s
	[INFO] 10.244.0.4:47863 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000135193s
	[INFO] 10.244.0.4:54893 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000107279s
	[INFO] 10.244.1.2:50095 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000200409s
	[INFO] 10.244.3.2:36127 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000178104s
	[INFO] 10.244.0.4:56439 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000119423s
	[INFO] 10.244.0.4:57332 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000122479s
	[INFO] 10.244.0.4:54257 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000113812s
	[INFO] 10.244.1.2:47781 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000122756s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services)
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-475401
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-475401
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f6bc674a17941874d4e5b792b09c1791d30622b8
	                    minikube.k8s.io/name=ha-475401
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_12T21_56_37_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 12 Sep 2024 21:56:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-475401
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 12 Sep 2024 22:09:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 12 Sep 2024 22:08:25 +0000   Thu, 12 Sep 2024 21:56:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 12 Sep 2024 22:08:25 +0000   Thu, 12 Sep 2024 21:56:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 12 Sep 2024 22:08:25 +0000   Thu, 12 Sep 2024 21:56:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 12 Sep 2024 22:08:25 +0000   Thu, 12 Sep 2024 21:56:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.203
	  Hostname:    ha-475401
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a21f28b923154b09a761fb2715e95e75
	  System UUID:                a21f28b9-2315-4b09-a761-fb2715e95e75
	  Boot ID:                    719d19bb-1949-4b62-be49-e032ba422c36
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-l2hdm              0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-7c65d6cfc9-pzsv8             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-7c65d6cfc9-xhdj7             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-ha-475401                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-cbfm5                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-475401             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-475401    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-4bk97                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-475401             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-475401                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 107s                   kube-proxy       
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  13m (x4 over 13m)      kubelet          Node ha-475401 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     13m (x3 over 13m)      kubelet          Node ha-475401 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    13m (x3 over 13m)      kubelet          Node ha-475401 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 13m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     13m                    kubelet          Node ha-475401 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    13m                    kubelet          Node ha-475401 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  13m                    kubelet          Node ha-475401 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 13m                    kubelet          Starting kubelet.
	  Normal   RegisteredNode           13m                    node-controller  Node ha-475401 event: Registered Node ha-475401 in Controller
	  Normal   NodeReady                13m                    kubelet          Node ha-475401 status is now: NodeReady
	  Normal   RegisteredNode           12m                    node-controller  Node ha-475401 event: Registered Node ha-475401 in Controller
	  Normal   RegisteredNode           11m                    node-controller  Node ha-475401 event: Registered Node ha-475401 in Controller
	  Warning  ContainerGCFailed        3m27s                  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotReady             2m52s (x3 over 3m41s)  kubelet          Node ha-475401 status is now: NodeNotReady
	  Normal   RegisteredNode           112s                   node-controller  Node ha-475401 event: Registered Node ha-475401 in Controller
	  Normal   RegisteredNode           107s                   node-controller  Node ha-475401 event: Registered Node ha-475401 in Controller
	  Normal   RegisteredNode           38s                    node-controller  Node ha-475401 event: Registered Node ha-475401 in Controller
	
	
	Name:               ha-475401-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-475401-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f6bc674a17941874d4e5b792b09c1791d30622b8
	                    minikube.k8s.io/name=ha-475401
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_12T21_57_29_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 12 Sep 2024 21:57:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-475401-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 12 Sep 2024 22:10:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 12 Sep 2024 22:08:58 +0000   Thu, 12 Sep 2024 22:08:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 12 Sep 2024 22:08:58 +0000   Thu, 12 Sep 2024 22:08:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 12 Sep 2024 22:08:58 +0000   Thu, 12 Sep 2024 22:08:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 12 Sep 2024 22:08:58 +0000   Thu, 12 Sep 2024 22:08:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.222
	  Hostname:    ha-475401-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 5e177a4c02d5494a80aacc759f5d8434
	  System UUID:                5e177a4c-02d5-494a-80aa-cc759f5d8434
	  Boot ID:                    dd9168b6-4831-47ab-97f7-c3a88c9853cd
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-t7gjx                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-475401-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-k4q6l                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-475401-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-475401-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-68h98                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-475401-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-475401-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 12m                    kube-proxy       
	  Normal  Starting                 102s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)      kubelet          Node ha-475401-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  CIDRAssignmentFailed     12m                    cidrAllocator    Node ha-475401-m02 status is now: CIDRAssignmentFailed
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)      kubelet          Node ha-475401-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)      kubelet          Node ha-475401-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12m                    node-controller  Node ha-475401-m02 event: Registered Node ha-475401-m02 in Controller
	  Normal  RegisteredNode           12m                    node-controller  Node ha-475401-m02 event: Registered Node ha-475401-m02 in Controller
	  Normal  RegisteredNode           11m                    node-controller  Node ha-475401-m02 event: Registered Node ha-475401-m02 in Controller
	  Normal  NodeNotReady             9m3s                   node-controller  Node ha-475401-m02 status is now: NodeNotReady
	  Normal  NodeHasNoDiskPressure    2m13s (x8 over 2m13s)  kubelet          Node ha-475401-m02 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 2m13s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m13s (x8 over 2m13s)  kubelet          Node ha-475401-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m13s (x7 over 2m13s)  kubelet          Node ha-475401-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m13s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           112s                   node-controller  Node ha-475401-m02 event: Registered Node ha-475401-m02 in Controller
	  Normal  RegisteredNode           107s                   node-controller  Node ha-475401-m02 event: Registered Node ha-475401-m02 in Controller
	  Normal  RegisteredNode           38s                    node-controller  Node ha-475401-m02 event: Registered Node ha-475401-m02 in Controller
	
	
	Name:               ha-475401-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-475401-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f6bc674a17941874d4e5b792b09c1791d30622b8
	                    minikube.k8s.io/name=ha-475401
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_12T21_58_44_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 12 Sep 2024 21:58:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-475401-m03
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 12 Sep 2024 22:09:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 12 Sep 2024 22:09:36 +0000   Thu, 12 Sep 2024 22:09:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 12 Sep 2024 22:09:36 +0000   Thu, 12 Sep 2024 22:09:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 12 Sep 2024 22:09:36 +0000   Thu, 12 Sep 2024 22:09:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 12 Sep 2024 22:09:36 +0000   Thu, 12 Sep 2024 22:09:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.113
	  Hostname:    ha-475401-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 28cd0b17595342b5a867ee3ae4e5e5f6
	  System UUID:                28cd0b17-5953-42b5-a867-ee3ae4e5e5f6
	  Boot ID:                    f384e287-1c16-4418-ac50-8c87f2ac0480
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-gb2hg                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-475401-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-bh5lg                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-475401-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-475401-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-5f8z5                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-475401-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-475401-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 41s                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   CIDRAssignmentFailed     11m                cidrAllocator    Node ha-475401-m03 status is now: CIDRAssignmentFailed
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ha-475401-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node ha-475401-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node ha-475401-m03 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           11m                node-controller  Node ha-475401-m03 event: Registered Node ha-475401-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-475401-m03 event: Registered Node ha-475401-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-475401-m03 event: Registered Node ha-475401-m03 in Controller
	  Normal   RegisteredNode           112s               node-controller  Node ha-475401-m03 event: Registered Node ha-475401-m03 in Controller
	  Normal   RegisteredNode           107s               node-controller  Node ha-475401-m03 event: Registered Node ha-475401-m03 in Controller
	  Normal   NodeNotReady             72s                node-controller  Node ha-475401-m03 status is now: NodeNotReady
	  Normal   Starting                 58s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  58s                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 58s                kubelet          Node ha-475401-m03 has been rebooted, boot id: f384e287-1c16-4418-ac50-8c87f2ac0480
	  Normal   NodeHasSufficientMemory  58s (x2 over 58s)  kubelet          Node ha-475401-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    58s (x2 over 58s)  kubelet          Node ha-475401-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     58s (x2 over 58s)  kubelet          Node ha-475401-m03 status is now: NodeHasSufficientPID
	  Normal   NodeReady                58s                kubelet          Node ha-475401-m03 status is now: NodeReady
	  Normal   RegisteredNode           38s                node-controller  Node ha-475401-m03 event: Registered Node ha-475401-m03 in Controller
	
	
	Name:               ha-475401-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-475401-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f6bc674a17941874d4e5b792b09c1791d30622b8
	                    minikube.k8s.io/name=ha-475401
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_12T21_59_45_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 12 Sep 2024 21:59:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-475401-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 12 Sep 2024 22:09:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 12 Sep 2024 22:09:55 +0000   Thu, 12 Sep 2024 22:09:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 12 Sep 2024 22:09:55 +0000   Thu, 12 Sep 2024 22:09:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 12 Sep 2024 22:09:55 +0000   Thu, 12 Sep 2024 22:09:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 12 Sep 2024 22:09:55 +0000   Thu, 12 Sep 2024 22:09:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.76
	  Hostname:    ha-475401-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9864edb6a0d14b6abd1a66cf5ac88479
	  System UUID:                9864edb6-a0d1-4b6a-bd1a-66cf5ac88479
	  Boot ID:                    c747d5f3-f470-48b0-981b-da0fd4da75a4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.4.0/24
	PodCIDRs:                     10.244.4.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-2bvcz       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-bmv9m    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 4s                 kube-proxy       
	  Normal   CIDRAssignmentFailed     10m                cidrAllocator    Node ha-475401-m04 status is now: CIDRAssignmentFailed
	  Normal   NodeHasSufficientMemory  10m (x2 over 10m)  kubelet          Node ha-475401-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x2 over 10m)  kubelet          Node ha-475401-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x2 over 10m)  kubelet          Node ha-475401-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10m                node-controller  Node ha-475401-m04 event: Registered Node ha-475401-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-475401-m04 event: Registered Node ha-475401-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-475401-m04 event: Registered Node ha-475401-m04 in Controller
	  Normal   NodeReady                9m58s              kubelet          Node ha-475401-m04 status is now: NodeReady
	  Normal   RegisteredNode           112s               node-controller  Node ha-475401-m04 event: Registered Node ha-475401-m04 in Controller
	  Normal   RegisteredNode           107s               node-controller  Node ha-475401-m04 event: Registered Node ha-475401-m04 in Controller
	  Normal   NodeNotReady             72s                node-controller  Node ha-475401-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           38s                node-controller  Node ha-475401-m04 event: Registered Node ha-475401-m04 in Controller
	  Normal   Starting                 8s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  8s                 kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 8s (x2 over 8s)    kubelet          Node ha-475401-m04 has been rebooted, boot id: c747d5f3-f470-48b0-981b-da0fd4da75a4
	  Normal   NodeHasSufficientMemory  8s (x3 over 8s)    kubelet          Node ha-475401-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8s (x3 over 8s)    kubelet          Node ha-475401-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8s (x3 over 8s)    kubelet          Node ha-475401-m04 status is now: NodeHasSufficientPID
	  Normal   NodeNotReady             8s                 kubelet          Node ha-475401-m04 status is now: NodeNotReady
	  Normal   NodeReady                8s                 kubelet          Node ha-475401-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.020585] systemd-fstab-generator[579]: Ignoring "noauto" option for root device
	[  +0.056709] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063471] systemd-fstab-generator[591]: Ignoring "noauto" option for root device
	[  +0.182960] systemd-fstab-generator[605]: Ignoring "noauto" option for root device
	[  +0.109592] systemd-fstab-generator[617]: Ignoring "noauto" option for root device
	[  +0.292147] systemd-fstab-generator[647]: Ignoring "noauto" option for root device
	[  +3.769780] systemd-fstab-generator[744]: Ignoring "noauto" option for root device
	[  +5.095538] systemd-fstab-generator[881]: Ignoring "noauto" option for root device
	[  +0.058539] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.038747] systemd-fstab-generator[1299]: Ignoring "noauto" option for root device
	[  +0.092804] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.235155] kauditd_printk_skb: 21 callbacks suppressed
	[ +11.799100] kauditd_printk_skb: 38 callbacks suppressed
	[Sep12 21:57] kauditd_printk_skb: 28 callbacks suppressed
	[Sep12 22:07] systemd-fstab-generator[3437]: Ignoring "noauto" option for root device
	[  +0.178924] systemd-fstab-generator[3449]: Ignoring "noauto" option for root device
	[  +0.180969] systemd-fstab-generator[3463]: Ignoring "noauto" option for root device
	[  +0.150971] systemd-fstab-generator[3475]: Ignoring "noauto" option for root device
	[  +0.276098] systemd-fstab-generator[3503]: Ignoring "noauto" option for root device
	[  +0.745905] systemd-fstab-generator[3601]: Ignoring "noauto" option for root device
	[ +13.797154] kauditd_printk_skb: 217 callbacks suppressed
	[ +10.069875] kauditd_printk_skb: 1 callbacks suppressed
	[Sep12 22:08] kauditd_printk_skb: 5 callbacks suppressed
	[  +7.463103] kauditd_printk_skb: 1 callbacks suppressed
	
	
	==> etcd [21aea3da36602ff092d755b6057bc2857297c1c0a798e3e6ab1803c6d0a5eaa6] <==
	{"level":"warn","ts":"2024-09-12T22:09:00.254724Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"28dd8e6bbca035f5","from":"28dd8e6bbca035f5","remote-peer-id":"344afae425714cc4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-12T22:09:00.354533Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"28dd8e6bbca035f5","from":"28dd8e6bbca035f5","remote-peer-id":"344afae425714cc4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-12T22:09:00.712713Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"344afae425714cc4","rtt":"0s","error":"dial tcp 192.168.39.113:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-09-12T22:09:00.723219Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"344afae425714cc4","rtt":"0s","error":"dial tcp 192.168.39.113:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-09-12T22:09:03.080992Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.113:2380/version","remote-member-id":"344afae425714cc4","error":"Get \"https://192.168.39.113:2380/version\": dial tcp 192.168.39.113:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-12T22:09:03.081056Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"344afae425714cc4","error":"Get \"https://192.168.39.113:2380/version\": dial tcp 192.168.39.113:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-12T22:09:05.715281Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"344afae425714cc4","rtt":"0s","error":"dial tcp 192.168.39.113:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-12T22:09:05.724460Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"344afae425714cc4","rtt":"0s","error":"dial tcp 192.168.39.113:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-12T22:09:07.083417Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.113:2380/version","remote-member-id":"344afae425714cc4","error":"Get \"https://192.168.39.113:2380/version\": dial tcp 192.168.39.113:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-12T22:09:07.083482Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"344afae425714cc4","error":"Get \"https://192.168.39.113:2380/version\": dial tcp 192.168.39.113:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-12T22:09:10.715735Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"344afae425714cc4","rtt":"0s","error":"dial tcp 192.168.39.113:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-12T22:09:10.725079Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"344afae425714cc4","rtt":"0s","error":"dial tcp 192.168.39.113:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-12T22:09:11.085294Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.113:2380/version","remote-member-id":"344afae425714cc4","error":"Get \"https://192.168.39.113:2380/version\": dial tcp 192.168.39.113:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-12T22:09:11.085357Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"344afae425714cc4","error":"Get \"https://192.168.39.113:2380/version\": dial tcp 192.168.39.113:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-12T22:09:15.087795Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.113:2380/version","remote-member-id":"344afae425714cc4","error":"Get \"https://192.168.39.113:2380/version\": dial tcp 192.168.39.113:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-12T22:09:15.087857Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"344afae425714cc4","error":"Get \"https://192.168.39.113:2380/version\": dial tcp 192.168.39.113:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-12T22:09:15.716290Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"344afae425714cc4","rtt":"0s","error":"dial tcp 192.168.39.113:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-12T22:09:15.725579Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"344afae425714cc4","rtt":"0s","error":"dial tcp 192.168.39.113:2380: connect: connection refused"}
	{"level":"info","ts":"2024-09-12T22:09:17.049545Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"344afae425714cc4"}
	{"level":"info","ts":"2024-09-12T22:09:17.049669Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"28dd8e6bbca035f5","remote-peer-id":"344afae425714cc4"}
	{"level":"info","ts":"2024-09-12T22:09:17.049741Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"28dd8e6bbca035f5","remote-peer-id":"344afae425714cc4"}
	{"level":"info","ts":"2024-09-12T22:09:17.056446Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"28dd8e6bbca035f5","to":"344afae425714cc4","stream-type":"stream Message"}
	{"level":"info","ts":"2024-09-12T22:09:17.056613Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"28dd8e6bbca035f5","remote-peer-id":"344afae425714cc4"}
	{"level":"info","ts":"2024-09-12T22:09:17.067734Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"28dd8e6bbca035f5","to":"344afae425714cc4","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-09-12T22:09:17.067843Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"28dd8e6bbca035f5","remote-peer-id":"344afae425714cc4"}
	
	
	==> etcd [5008665ceb8c09f53ef64d7621c9910a82d94cc7e8fb4c534ff1065d8b9dc1a9] <==
	{"level":"warn","ts":"2024-09-12T22:05:55.229958Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-12T22:05:47.761069Z","time spent":"7.468883413s","remote":"127.0.0.1:43656","response type":"/etcdserverpb.KV/Range","request count":0,"request size":57,"response count":0,"response size":0,"request content":"key:\"/registry/services/specs/\" range_end:\"/registry/services/specs0\" limit:500 "}
	2024/09/12 22:05:55 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-09-12T22:05:55.294506Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.203:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-12T22:05:55.294569Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.203:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-12T22:05:55.296297Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"28dd8e6bbca035f5","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-09-12T22:05:55.296555Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"344afae425714cc4"}
	{"level":"info","ts":"2024-09-12T22:05:55.296575Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"344afae425714cc4"}
	{"level":"info","ts":"2024-09-12T22:05:55.296616Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"344afae425714cc4"}
	{"level":"info","ts":"2024-09-12T22:05:55.296718Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"28dd8e6bbca035f5","remote-peer-id":"344afae425714cc4"}
	{"level":"info","ts":"2024-09-12T22:05:55.296756Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"28dd8e6bbca035f5","remote-peer-id":"344afae425714cc4"}
	{"level":"info","ts":"2024-09-12T22:05:55.296794Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"28dd8e6bbca035f5","remote-peer-id":"344afae425714cc4"}
	{"level":"info","ts":"2024-09-12T22:05:55.296805Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"344afae425714cc4"}
	{"level":"info","ts":"2024-09-12T22:05:55.296810Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"9e92fe3b0574f1dd"}
	{"level":"info","ts":"2024-09-12T22:05:55.296819Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"9e92fe3b0574f1dd"}
	{"level":"info","ts":"2024-09-12T22:05:55.296834Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"9e92fe3b0574f1dd"}
	{"level":"info","ts":"2024-09-12T22:05:55.296890Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"28dd8e6bbca035f5","remote-peer-id":"9e92fe3b0574f1dd"}
	{"level":"info","ts":"2024-09-12T22:05:55.296919Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"28dd8e6bbca035f5","remote-peer-id":"9e92fe3b0574f1dd"}
	{"level":"info","ts":"2024-09-12T22:05:55.296950Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"28dd8e6bbca035f5","remote-peer-id":"9e92fe3b0574f1dd"}
	{"level":"info","ts":"2024-09-12T22:05:55.296961Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"9e92fe3b0574f1dd"}
	{"level":"info","ts":"2024-09-12T22:05:55.300526Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.203:2380"}
	{"level":"warn","ts":"2024-09-12T22:05:55.300551Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"8.787888735s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: server stopped"}
	{"level":"info","ts":"2024-09-12T22:05:55.300678Z","caller":"traceutil/trace.go:171","msg":"trace[1138780334] range","detail":"{range_begin:; range_end:; }","duration":"8.788034337s","start":"2024-09-12T22:05:46.512636Z","end":"2024-09-12T22:05:55.300670Z","steps":["trace[1138780334] 'agreement among raft nodes before linearized reading'  (duration: 8.787886758s)"],"step_count":1}
	{"level":"info","ts":"2024-09-12T22:05:55.300635Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.203:2380"}
	{"level":"info","ts":"2024-09-12T22:05:55.300768Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-475401","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.203:2380"],"advertise-client-urls":["https://192.168.39.203:2379"]}
	{"level":"error","ts":"2024-09-12T22:05:55.300728Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[+]data_corruption ok\n[+]serializable_read ok\n[-]linearizable_read failed: etcdserver: server stopped\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2141\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2519\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2943\nnet/http.(*conn).serve\n\tnet/http/server.go:2014"}
	
	
	==> kernel <==
	 22:10:03 up 14 min,  0 users,  load average: 0.32, 0.54, 0.34
	Linux ha-475401 5.10.207 #1 SMP Thu Sep 12 19:03:33 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [28ed212daea64133855a7ab08f6d9fe403a58159f6a366a28ce1892a91bb17fc] <==
	I0912 22:09:30.861355       1 main.go:322] Node ha-475401-m03 has CIDR [10.244.3.0/24] 
	I0912 22:09:40.864655       1 main.go:295] Handling node with IPs: map[192.168.39.76:{}]
	I0912 22:09:40.864698       1 main.go:322] Node ha-475401-m04 has CIDR [10.244.4.0/24] 
	I0912 22:09:40.864910       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0912 22:09:40.864942       1 main.go:299] handling current node
	I0912 22:09:40.864954       1 main.go:295] Handling node with IPs: map[192.168.39.222:{}]
	I0912 22:09:40.864971       1 main.go:322] Node ha-475401-m02 has CIDR [10.244.1.0/24] 
	I0912 22:09:40.865155       1 main.go:295] Handling node with IPs: map[192.168.39.113:{}]
	I0912 22:09:40.865175       1 main.go:322] Node ha-475401-m03 has CIDR [10.244.3.0/24] 
	I0912 22:09:50.868934       1 main.go:295] Handling node with IPs: map[192.168.39.113:{}]
	I0912 22:09:50.869043       1 main.go:322] Node ha-475401-m03 has CIDR [10.244.3.0/24] 
	I0912 22:09:50.869266       1 main.go:295] Handling node with IPs: map[192.168.39.76:{}]
	I0912 22:09:50.869311       1 main.go:322] Node ha-475401-m04 has CIDR [10.244.4.0/24] 
	I0912 22:09:50.869393       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0912 22:09:50.869414       1 main.go:299] handling current node
	I0912 22:09:50.869436       1 main.go:295] Handling node with IPs: map[192.168.39.222:{}]
	I0912 22:09:50.869461       1 main.go:322] Node ha-475401-m02 has CIDR [10.244.1.0/24] 
	I0912 22:10:00.860603       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0912 22:10:00.860679       1 main.go:299] handling current node
	I0912 22:10:00.860706       1 main.go:295] Handling node with IPs: map[192.168.39.222:{}]
	I0912 22:10:00.860712       1 main.go:322] Node ha-475401-m02 has CIDR [10.244.1.0/24] 
	I0912 22:10:00.860965       1 main.go:295] Handling node with IPs: map[192.168.39.113:{}]
	I0912 22:10:00.860984       1 main.go:322] Node ha-475401-m03 has CIDR [10.244.3.0/24] 
	I0912 22:10:00.861088       1 main.go:295] Handling node with IPs: map[192.168.39.76:{}]
	I0912 22:10:00.861167       1 main.go:322] Node ha-475401-m04 has CIDR [10.244.4.0/24] 
	
	
	==> kindnet [38d31aa5dc4105508066466c3ec1760275d6df1b5a41215ea8624bdecb7f44e8] <==
	I0912 22:05:32.858202       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0912 22:05:32.858345       1 main.go:299] handling current node
	I0912 22:05:32.858376       1 main.go:295] Handling node with IPs: map[192.168.39.222:{}]
	I0912 22:05:32.858449       1 main.go:322] Node ha-475401-m02 has CIDR [10.244.1.0/24] 
	I0912 22:05:32.858648       1 main.go:295] Handling node with IPs: map[192.168.39.113:{}]
	I0912 22:05:32.862158       1 main.go:322] Node ha-475401-m03 has CIDR [10.244.3.0/24] 
	I0912 22:05:32.862305       1 main.go:295] Handling node with IPs: map[192.168.39.76:{}]
	I0912 22:05:32.862328       1 main.go:322] Node ha-475401-m04 has CIDR [10.244.4.0/24] 
	I0912 22:05:42.854289       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0912 22:05:42.854440       1 main.go:299] handling current node
	I0912 22:05:42.854469       1 main.go:295] Handling node with IPs: map[192.168.39.222:{}]
	I0912 22:05:42.854491       1 main.go:322] Node ha-475401-m02 has CIDR [10.244.1.0/24] 
	I0912 22:05:42.854639       1 main.go:295] Handling node with IPs: map[192.168.39.113:{}]
	I0912 22:05:42.854699       1 main.go:322] Node ha-475401-m03 has CIDR [10.244.3.0/24] 
	I0912 22:05:42.854833       1 main.go:295] Handling node with IPs: map[192.168.39.76:{}]
	I0912 22:05:42.854866       1 main.go:322] Node ha-475401-m04 has CIDR [10.244.4.0/24] 
	E0912 22:05:51.635725       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=1900&timeout=5m4s&timeoutSeconds=304&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	I0912 22:05:52.853522       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0912 22:05:52.853616       1 main.go:299] handling current node
	I0912 22:05:52.853631       1 main.go:295] Handling node with IPs: map[192.168.39.222:{}]
	I0912 22:05:52.853637       1 main.go:322] Node ha-475401-m02 has CIDR [10.244.1.0/24] 
	I0912 22:05:52.853768       1 main.go:295] Handling node with IPs: map[192.168.39.113:{}]
	I0912 22:05:52.853791       1 main.go:322] Node ha-475401-m03 has CIDR [10.244.3.0/24] 
	I0912 22:05:52.853848       1 main.go:295] Handling node with IPs: map[192.168.39.76:{}]
	I0912 22:05:52.853853       1 main.go:322] Node ha-475401-m04 has CIDR [10.244.4.0/24] 
	
	
	==> kube-apiserver [08d058679eafb2dbca1bc2dfb3dfe0fe416163dba6d00f6ec942f2a53bc02ae2] <==
	I0912 22:08:13.620015       1 crd_finalizer.go:269] Starting CRDFinalizer
	I0912 22:08:13.638601       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0912 22:08:13.638637       1 policy_source.go:224] refreshing policies
	I0912 22:08:13.652499       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0912 22:08:13.696839       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0912 22:08:13.697276       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0912 22:08:13.698031       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0912 22:08:13.698584       1 shared_informer.go:320] Caches are synced for configmaps
	I0912 22:08:13.698699       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0912 22:08:13.698717       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0912 22:08:13.699554       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0912 22:08:13.707867       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	W0912 22:08:13.714470       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.113 192.168.39.222]
	I0912 22:08:13.718262       1 controller.go:615] quota admission added evaluator for: endpoints
	I0912 22:08:13.721425       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0912 22:08:13.721684       1 aggregator.go:171] initial CRD sync complete...
	I0912 22:08:13.721749       1 autoregister_controller.go:144] Starting autoregister controller
	I0912 22:08:13.721832       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0912 22:08:13.721995       1 cache.go:39] Caches are synced for autoregister controller
	I0912 22:08:13.728592       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0912 22:08:13.728775       1 shared_informer.go:320] Caches are synced for node_authorizer
	E0912 22:08:13.731701       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0912 22:08:14.603377       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0912 22:08:14.949235       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.113 192.168.39.203 192.168.39.222]
	W0912 22:08:24.950668       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.203 192.168.39.222]
	
	
	==> kube-apiserver [21b27af5812da51165304d6948b93ce25cffa267f34847a15febc75cb59f84b5] <==
	I0912 22:07:29.808996       1 options.go:228] external host was not specified, using 192.168.39.203
	I0912 22:07:29.818782       1 server.go:142] Version: v1.31.1
	I0912 22:07:29.818823       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0912 22:07:31.155835       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0912 22:07:31.160280       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0912 22:07:31.172513       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0912 22:07:31.172553       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0912 22:07:31.173353       1 instance.go:232] Using reconciler: lease
	W0912 22:07:51.144688       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0912 22:07:51.144778       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0912 22:07:51.175072       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [1d31b278af3adedc4eaca27db99510c99bdd7dcc10da7656a3b85767b493ae3a] <==
	I0912 22:07:30.998628       1 serving.go:386] Generated self-signed cert in-memory
	I0912 22:07:31.529943       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I0912 22:07:31.530032       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0912 22:07:31.541736       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0912 22:07:31.541984       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0912 22:07:31.542002       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0912 22:07:31.542025       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0912 22:07:52.181911       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.203:8443/healthz\": dial tcp 192.168.39.203:8443: connect: connection refused"
	
	
	==> kube-controller-manager [3756c86b696c4e8fd3e7463b7270af1f104f371066ce814e4ff7c11fa40d2931] <==
	I0912 22:08:43.541030       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="636.526µs"
	I0912 22:08:51.740776       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-475401-m03"
	I0912 22:08:51.741523       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-475401-m04"
	I0912 22:08:51.768237       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-475401-m04"
	I0912 22:08:51.776453       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-475401-m03"
	I0912 22:08:51.845289       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="13.655448ms"
	I0912 22:08:51.845392       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="54.755µs"
	I0912 22:08:51.894625       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-475401-m03"
	I0912 22:08:56.993264       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-475401-m03"
	I0912 22:08:58.843453       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-475401-m02"
	I0912 22:09:01.962806       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-475401-m04"
	I0912 22:09:05.709437       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-475401-m03"
	I0912 22:09:05.726771       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-475401-m03"
	I0912 22:09:06.736617       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="64.548µs"
	I0912 22:09:06.879668       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-475401-m03"
	I0912 22:09:07.073138       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-475401-m04"
	I0912 22:09:25.360020       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-475401-m04"
	I0912 22:09:25.450997       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-475401-m04"
	I0912 22:09:29.215702       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="13.909521ms"
	I0912 22:09:29.218340       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="117.452µs"
	I0912 22:09:36.102238       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-475401-m03"
	I0912 22:09:55.448072       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-475401-m04"
	I0912 22:09:55.448584       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-475401-m04"
	I0912 22:09:55.464524       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-475401-m04"
	I0912 22:09:56.902965       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-475401-m04"
	
	
	==> kube-proxy [0891cec467fda03cc10ec8bf4db216ce7cae379bd093917e008b90cc96d90c49] <==
	E0912 22:04:37.013792       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-475401&resourceVersion=1816\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0912 22:04:40.083594       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1885": dial tcp 192.168.39.254:8443: connect: no route to host
	E0912 22:04:40.083672       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1885\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0912 22:04:40.083815       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-475401&resourceVersion=1816": dial tcp 192.168.39.254:8443: connect: no route to host
	E0912 22:04:40.083921       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-475401&resourceVersion=1816\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0912 22:04:43.156420       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1870": dial tcp 192.168.39.254:8443: connect: no route to host
	E0912 22:04:43.156502       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1870\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0912 22:04:46.228446       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1885": dial tcp 192.168.39.254:8443: connect: no route to host
	E0912 22:04:46.228675       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1885\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0912 22:04:46.227584       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-475401&resourceVersion=1816": dial tcp 192.168.39.254:8443: connect: no route to host
	E0912 22:04:46.228810       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-475401&resourceVersion=1816\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0912 22:04:49.301362       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1870": dial tcp 192.168.39.254:8443: connect: no route to host
	E0912 22:04:49.301596       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1870\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0912 22:04:58.519291       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-475401&resourceVersion=1816": dial tcp 192.168.39.254:8443: connect: no route to host
	E0912 22:04:58.519446       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-475401&resourceVersion=1816\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0912 22:04:58.519612       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1885": dial tcp 192.168.39.254:8443: connect: no route to host
	E0912 22:04:58.519667       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1885\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0912 22:05:01.589439       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1870": dial tcp 192.168.39.254:8443: connect: no route to host
	E0912 22:05:01.589513       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1870\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0912 22:05:20.020625       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1885": dial tcp 192.168.39.254:8443: connect: no route to host
	E0912 22:05:20.020699       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1885\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0912 22:05:23.091906       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-475401&resourceVersion=1816": dial tcp 192.168.39.254:8443: connect: no route to host
	E0912 22:05:23.091990       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-475401&resourceVersion=1816\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0912 22:05:29.235948       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1870": dial tcp 192.168.39.254:8443: connect: no route to host
	E0912 22:05:29.236398       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1870\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	
	
	==> kube-proxy [3ef34e41bb3ddb710bf398433b9169ba5f99e663f39a763a0e3afc0073f3f7c8] <==
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0912 22:07:32.115746       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-475401\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0912 22:07:35.187699       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-475401\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0912 22:07:38.259559       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-475401\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0912 22:07:44.406508       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-475401\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0912 22:07:53.619715       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-475401\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0912 22:08:15.127151       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-475401\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0912 22:08:15.127284       1 server.go:646] "Can't determine this node's IP, assuming loopback; if this is incorrect, please set the --bind-address flag"
	E0912 22:08:15.127367       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0912 22:08:15.203246       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0912 22:08:15.203314       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0912 22:08:15.203348       1 server_linux.go:169] "Using iptables Proxier"
	I0912 22:08:15.213062       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0912 22:08:15.214172       1 server.go:483] "Version info" version="v1.31.1"
	I0912 22:08:15.214207       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0912 22:08:15.217247       1 config.go:199] "Starting service config controller"
	I0912 22:08:15.217353       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0912 22:08:15.217455       1 config.go:105] "Starting endpoint slice config controller"
	I0912 22:08:15.217472       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0912 22:08:15.221019       1 config.go:328] "Starting node config controller"
	I0912 22:08:15.222444       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0912 22:08:15.318446       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0912 22:08:15.318456       1 shared_informer.go:320] Caches are synced for service config
	I0912 22:08:15.324732       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [17a4293d12cac1604693dea12017381d2df6f0c1ced577d1d846d40e66520818] <==
	E0912 21:59:45.491176       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 21f2175a-f898-4059-ae91-9df7019f8cdb(kube-system/kube-proxy-fvw4x) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-fvw4x"
	E0912 21:59:45.492064       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-fvw4x\": pod kube-proxy-fvw4x is already assigned to node \"ha-475401-m04\"" pod="kube-system/kube-proxy-fvw4x"
	E0912 21:59:45.490969       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-2bvcz\": pod kindnet-2bvcz is already assigned to node \"ha-475401-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-2bvcz" node="ha-475401-m04"
	E0912 21:59:45.493554       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod d40bd7a6-62a0-4e2d-b6eb-2ec57e8eea0f(kube-system/kindnet-2bvcz) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-2bvcz"
	E0912 21:59:45.493577       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-2bvcz\": pod kindnet-2bvcz is already assigned to node \"ha-475401-m04\"" pod="kube-system/kindnet-2bvcz"
	I0912 21:59:45.493620       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-2bvcz" node="ha-475401-m04"
	I0912 21:59:45.493727       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-fvw4x" node="ha-475401-m04"
	E0912 22:05:32.870502       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces)" logger="UnhandledError"
	E0912 22:05:32.870603       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)" logger="UnhandledError"
	E0912 22:05:39.944299       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)" logger="UnhandledError"
	E0912 22:05:42.283628       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)" logger="UnhandledError"
	E0912 22:05:43.008046       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes)" logger="UnhandledError"
	E0912 22:05:43.186989       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)" logger="UnhandledError"
	E0912 22:05:43.500608       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services)" logger="UnhandledError"
	E0912 22:05:44.018356       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError"
	E0912 22:05:46.250331       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)" logger="UnhandledError"
	E0912 22:05:46.988757       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)" logger="UnhandledError"
	E0912 22:05:49.945850       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)" logger="UnhandledError"
	E0912 22:05:50.100150       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)" logger="UnhandledError"
	E0912 22:05:51.189304       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)" logger="UnhandledError"
	W0912 22:05:52.183460       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0912 22:05:52.183632       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0912 22:05:54.193231       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)" logger="UnhandledError"
	E0912 22:05:54.775785       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods)" logger="UnhandledError"
	E0912 22:05:55.208908       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [7bd2f2d4b23f5227aba2f8d0b375b6980f4e8d9699dc8e0a15167b8caee35a90] <==
	W0912 22:08:07.776261       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.203:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.203:8443: connect: connection refused
	E0912 22:08:07.776361       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.168.39.203:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.39.203:8443: connect: connection refused" logger="UnhandledError"
	W0912 22:08:08.458312       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.203:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.203:8443: connect: connection refused
	E0912 22:08:08.458372       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.39.203:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.203:8443: connect: connection refused" logger="UnhandledError"
	W0912 22:08:08.513542       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.203:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.203:8443: connect: connection refused
	E0912 22:08:08.513625       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.168.39.203:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.39.203:8443: connect: connection refused" logger="UnhandledError"
	W0912 22:08:08.560736       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.203:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.203:8443: connect: connection refused
	E0912 22:08:08.560852       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.39.203:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.39.203:8443: connect: connection refused" logger="UnhandledError"
	W0912 22:08:09.137533       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.203:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.203:8443: connect: connection refused
	E0912 22:08:09.137604       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.168.39.203:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.39.203:8443: connect: connection refused" logger="UnhandledError"
	W0912 22:08:09.137612       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.203:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.203:8443: connect: connection refused
	E0912 22:08:09.137644       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.168.39.203:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.39.203:8443: connect: connection refused" logger="UnhandledError"
	W0912 22:08:09.266726       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.203:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.203:8443: connect: connection refused
	E0912 22:08:09.266793       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.39.203:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.39.203:8443: connect: connection refused" logger="UnhandledError"
	W0912 22:08:09.552325       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.203:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.203:8443: connect: connection refused
	E0912 22:08:09.552369       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.168.39.203:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.39.203:8443: connect: connection refused" logger="UnhandledError"
	W0912 22:08:11.167472       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.203:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.203:8443: connect: connection refused
	E0912 22:08:11.167602       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.168.39.203:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.203:8443: connect: connection refused" logger="UnhandledError"
	W0912 22:08:13.620823       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0912 22:08:13.620920       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0912 22:08:13.621073       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0912 22:08:13.621905       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0912 22:08:13.632452       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0912 22:08:13.633205       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0912 22:08:38.787684       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 12 22:08:46 ha-475401 kubelet[1305]: E0912 22:08:46.701393    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726178926700878462,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 22:08:46 ha-475401 kubelet[1305]: E0912 22:08:46.701503    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726178926700878462,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 22:08:56 ha-475401 kubelet[1305]: E0912 22:08:56.703456    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726178936702652943,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 22:08:56 ha-475401 kubelet[1305]: E0912 22:08:56.704076    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726178936702652943,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 22:08:57 ha-475401 kubelet[1305]: I0912 22:08:57.482887    1305 scope.go:117] "RemoveContainer" containerID="bc3ce74e5d17725d1fe954be15215e92128befc599aa560249ef5604ad1e1e6d"
	Sep 12 22:09:06 ha-475401 kubelet[1305]: E0912 22:09:06.706285    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726178946705813268,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 22:09:06 ha-475401 kubelet[1305]: E0912 22:09:06.706547    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726178946705813268,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 22:09:08 ha-475401 kubelet[1305]: I0912 22:09:08.483469    1305 kubelet.go:1895] "Trying to delete pod" pod="kube-system/kube-vip-ha-475401" podUID="775b4ded-905c-412e-9c92-5ce3ff148380"
	Sep 12 22:09:08 ha-475401 kubelet[1305]: I0912 22:09:08.502860    1305 kubelet.go:1900] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-475401"
	Sep 12 22:09:09 ha-475401 kubelet[1305]: I0912 22:09:09.321266    1305 kubelet.go:1895] "Trying to delete pod" pod="kube-system/kube-vip-ha-475401" podUID="775b4ded-905c-412e-9c92-5ce3ff148380"
	Sep 12 22:09:16 ha-475401 kubelet[1305]: E0912 22:09:16.709804    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726178956708685500,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 22:09:16 ha-475401 kubelet[1305]: E0912 22:09:16.709843    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726178956708685500,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 22:09:26 ha-475401 kubelet[1305]: E0912 22:09:26.710973    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726178966710672045,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 22:09:26 ha-475401 kubelet[1305]: E0912 22:09:26.711025    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726178966710672045,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 22:09:36 ha-475401 kubelet[1305]: E0912 22:09:36.501599    1305 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 12 22:09:36 ha-475401 kubelet[1305]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 12 22:09:36 ha-475401 kubelet[1305]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 12 22:09:36 ha-475401 kubelet[1305]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 12 22:09:36 ha-475401 kubelet[1305]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 12 22:09:36 ha-475401 kubelet[1305]: E0912 22:09:36.712395    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726178976712061639,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 22:09:36 ha-475401 kubelet[1305]: E0912 22:09:36.712417    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726178976712061639,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 22:09:46 ha-475401 kubelet[1305]: E0912 22:09:46.716792    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726178986714338307,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 22:09:46 ha-475401 kubelet[1305]: E0912 22:09:46.718348    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726178986714338307,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 22:09:56 ha-475401 kubelet[1305]: E0912 22:09:56.721594    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726178996720901545,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 22:09:56 ha-475401 kubelet[1305]: E0912 22:09:56.722033    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726178996720901545,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0912 22:10:01.994876   33340 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19616-5891/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
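The "bufio.Scanner: token too long" error in the stderr above is the generic failure Go's bufio.Scanner reports when a single line exceeds its buffer, which defaults to 64 KiB; lastStart.txt evidently contains a longer line. As a minimal sketch (an illustration of the bufio API, not the logs.go code that produced the error), reading such a file only requires giving the scanner a larger limit via Scanner.Buffer:

package main

import (
	"bufio"
	"fmt"
	"os"
)

func main() {
	f, err := os.Open("/home/jenkins/minikube-integration/19616-5891/.minikube/logs/lastStart.txt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// bufio.Scanner refuses lines longer than 64 KiB by default; raise the
	// per-line limit to 1 MiB so long lines no longer fail with "token too long".
	sc.Buffer(make([]byte, 0, 64*1024), 1024*1024)
	for sc.Scan() {
		fmt.Println(sc.Text())
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "scan error:", err)
	}
}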
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-475401 -n ha-475401
helpers_test.go:261: (dbg) Run:  kubectl --context ha-475401 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (371.89s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (141.62s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-475401 stop -v=7 --alsologtostderr
E0912 22:12:07.199704   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-475401 stop -v=7 --alsologtostderr: exit status 82 (2m0.466850772s)

                                                
                                                
-- stdout --
	* Stopping node "ha-475401-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0912 22:10:21.421327   33747 out.go:345] Setting OutFile to fd 1 ...
	I0912 22:10:21.421562   33747 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 22:10:21.421570   33747 out.go:358] Setting ErrFile to fd 2...
	I0912 22:10:21.421574   33747 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 22:10:21.421814   33747 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-5891/.minikube/bin
	I0912 22:10:21.422031   33747 out.go:352] Setting JSON to false
	I0912 22:10:21.422099   33747 mustload.go:65] Loading cluster: ha-475401
	I0912 22:10:21.422435   33747 config.go:182] Loaded profile config "ha-475401": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 22:10:21.422511   33747 profile.go:143] Saving config to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/config.json ...
	I0912 22:10:21.422687   33747 mustload.go:65] Loading cluster: ha-475401
	I0912 22:10:21.422810   33747 config.go:182] Loaded profile config "ha-475401": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 22:10:21.422839   33747 stop.go:39] StopHost: ha-475401-m04
	I0912 22:10:21.423212   33747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:10:21.423255   33747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:10:21.438020   33747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36417
	I0912 22:10:21.438501   33747 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:10:21.439057   33747 main.go:141] libmachine: Using API Version  1
	I0912 22:10:21.439078   33747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:10:21.439478   33747 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:10:21.442047   33747 out.go:177] * Stopping node "ha-475401-m04"  ...
	I0912 22:10:21.443176   33747 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0912 22:10:21.443212   33747 main.go:141] libmachine: (ha-475401-m04) Calling .DriverName
	I0912 22:10:21.443466   33747 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0912 22:10:21.443502   33747 main.go:141] libmachine: (ha-475401-m04) Calling .GetSSHHostname
	I0912 22:10:21.446421   33747 main.go:141] libmachine: (ha-475401-m04) DBG | domain ha-475401-m04 has defined MAC address 52:54:00:cd:b0:d3 in network mk-ha-475401
	I0912 22:10:21.446893   33747 main.go:141] libmachine: (ha-475401-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:b0:d3", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 23:09:50 +0000 UTC Type:0 Mac:52:54:00:cd:b0:d3 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-475401-m04 Clientid:01:52:54:00:cd:b0:d3}
	I0912 22:10:21.446927   33747 main.go:141] libmachine: (ha-475401-m04) DBG | domain ha-475401-m04 has defined IP address 192.168.39.76 and MAC address 52:54:00:cd:b0:d3 in network mk-ha-475401
	I0912 22:10:21.447085   33747 main.go:141] libmachine: (ha-475401-m04) Calling .GetSSHPort
	I0912 22:10:21.447291   33747 main.go:141] libmachine: (ha-475401-m04) Calling .GetSSHKeyPath
	I0912 22:10:21.447461   33747 main.go:141] libmachine: (ha-475401-m04) Calling .GetSSHUsername
	I0912 22:10:21.447613   33747 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401-m04/id_rsa Username:docker}
	I0912 22:10:21.535607   33747 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0912 22:10:21.588272   33747 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0912 22:10:21.642427   33747 main.go:141] libmachine: Stopping "ha-475401-m04"...
	I0912 22:10:21.642454   33747 main.go:141] libmachine: (ha-475401-m04) Calling .GetState
	I0912 22:10:21.644236   33747 main.go:141] libmachine: (ha-475401-m04) Calling .Stop
	I0912 22:10:21.647908   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 0/120
	I0912 22:10:22.649359   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 1/120
	I0912 22:10:23.650732   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 2/120
	I0912 22:10:24.652138   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 3/120
	I0912 22:10:25.653662   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 4/120
	I0912 22:10:26.655730   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 5/120
	I0912 22:10:27.657297   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 6/120
	I0912 22:10:28.659106   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 7/120
	I0912 22:10:29.661052   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 8/120
	I0912 22:10:30.662454   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 9/120
	I0912 22:10:31.664779   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 10/120
	I0912 22:10:32.666801   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 11/120
	I0912 22:10:33.668207   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 12/120
	I0912 22:10:34.669675   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 13/120
	I0912 22:10:35.670892   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 14/120
	I0912 22:10:36.672730   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 15/120
	I0912 22:10:37.674245   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 16/120
	I0912 22:10:38.675792   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 17/120
	I0912 22:10:39.677066   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 18/120
	I0912 22:10:40.678506   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 19/120
	I0912 22:10:41.680736   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 20/120
	I0912 22:10:42.682460   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 21/120
	I0912 22:10:43.684269   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 22/120
	I0912 22:10:44.686106   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 23/120
	I0912 22:10:45.688018   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 24/120
	I0912 22:10:46.690067   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 25/120
	I0912 22:10:47.691424   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 26/120
	I0912 22:10:48.692817   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 27/120
	I0912 22:10:49.694085   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 28/120
	I0912 22:10:50.695747   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 29/120
	I0912 22:10:51.697977   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 30/120
	I0912 22:10:52.699294   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 31/120
	I0912 22:10:53.700665   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 32/120
	I0912 22:10:54.701947   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 33/120
	I0912 22:10:55.704160   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 34/120
	I0912 22:10:56.705877   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 35/120
	I0912 22:10:57.707326   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 36/120
	I0912 22:10:58.708399   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 37/120
	I0912 22:10:59.709842   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 38/120
	I0912 22:11:00.711041   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 39/120
	I0912 22:11:01.713293   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 40/120
	I0912 22:11:02.714670   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 41/120
	I0912 22:11:03.715999   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 42/120
	I0912 22:11:04.717327   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 43/120
	I0912 22:11:05.718544   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 44/120
	I0912 22:11:06.720665   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 45/120
	I0912 22:11:07.722200   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 46/120
	I0912 22:11:08.723500   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 47/120
	I0912 22:11:09.725539   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 48/120
	I0912 22:11:10.727757   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 49/120
	I0912 22:11:11.729820   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 50/120
	I0912 22:11:12.731459   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 51/120
	I0912 22:11:13.732805   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 52/120
	I0912 22:11:14.734327   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 53/120
	I0912 22:11:15.736335   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 54/120
	I0912 22:11:16.738039   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 55/120
	I0912 22:11:17.739548   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 56/120
	I0912 22:11:18.740651   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 57/120
	I0912 22:11:19.742001   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 58/120
	I0912 22:11:20.743357   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 59/120
	I0912 22:11:21.744444   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 60/120
	I0912 22:11:22.745668   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 61/120
	I0912 22:11:23.746822   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 62/120
	I0912 22:11:24.748100   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 63/120
	I0912 22:11:25.749681   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 64/120
	I0912 22:11:26.751442   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 65/120
	I0912 22:11:27.753776   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 66/120
	I0912 22:11:28.755209   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 67/120
	I0912 22:11:29.756566   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 68/120
	I0912 22:11:30.757851   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 69/120
	I0912 22:11:31.759979   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 70/120
	I0912 22:11:32.761253   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 71/120
	I0912 22:11:33.762830   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 72/120
	I0912 22:11:34.764677   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 73/120
	I0912 22:11:35.766003   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 74/120
	I0912 22:11:36.767866   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 75/120
	I0912 22:11:37.769185   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 76/120
	I0912 22:11:38.770781   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 77/120
	I0912 22:11:39.772284   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 78/120
	I0912 22:11:40.773965   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 79/120
	I0912 22:11:41.776143   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 80/120
	I0912 22:11:42.777675   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 81/120
	I0912 22:11:43.779819   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 82/120
	I0912 22:11:44.781398   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 83/120
	I0912 22:11:45.782788   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 84/120
	I0912 22:11:46.784878   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 85/120
	I0912 22:11:47.786347   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 86/120
	I0912 22:11:48.787565   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 87/120
	I0912 22:11:49.789004   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 88/120
	I0912 22:11:50.790364   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 89/120
	I0912 22:11:51.792722   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 90/120
	I0912 22:11:52.794047   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 91/120
	I0912 22:11:53.795588   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 92/120
	I0912 22:11:54.797100   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 93/120
	I0912 22:11:55.799000   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 94/120
	I0912 22:11:56.801091   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 95/120
	I0912 22:11:57.802300   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 96/120
	I0912 22:11:58.804551   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 97/120
	I0912 22:11:59.806231   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 98/120
	I0912 22:12:00.808041   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 99/120
	I0912 22:12:01.810185   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 100/120
	I0912 22:12:02.812157   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 101/120
	I0912 22:12:03.813532   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 102/120
	I0912 22:12:04.814867   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 103/120
	I0912 22:12:05.816200   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 104/120
	I0912 22:12:06.817950   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 105/120
	I0912 22:12:07.820127   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 106/120
	I0912 22:12:08.821361   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 107/120
	I0912 22:12:09.822749   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 108/120
	I0912 22:12:10.824005   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 109/120
	I0912 22:12:11.825971   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 110/120
	I0912 22:12:12.827302   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 111/120
	I0912 22:12:13.828831   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 112/120
	I0912 22:12:14.830495   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 113/120
	I0912 22:12:15.831903   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 114/120
	I0912 22:12:16.833257   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 115/120
	I0912 22:12:17.834725   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 116/120
	I0912 22:12:18.836283   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 117/120
	I0912 22:12:19.837632   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 118/120
	I0912 22:12:20.839026   33747 main.go:141] libmachine: (ha-475401-m04) Waiting for machine to stop 119/120
	I0912 22:12:21.840168   33747 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0912 22:12:21.840215   33747 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0912 22:12:21.842038   33747 out.go:201] 
	W0912 22:12:21.843029   33747 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0912 22:12:21.843044   33747 out.go:270] * 
	* 
	W0912 22:12:21.845110   33747 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0912 22:12:21.846138   33747 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-475401 stop -v=7 --alsologtostderr": exit status 82
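Exit status 82 corresponds to the GUEST_STOP_TIMEOUT in the stderr above: the stop command polled the VM state once per second for 120 attempts ("Waiting for machine to stop 0/120" through "119/120") and gave up while the machine still reported "Running". A minimal sketch of that kind of bounded polling loop follows; it is an illustration of the behaviour visible in the log, not minikube's actual code, and Driver / stuckVM are stand-ins for the libmachine driver:

package main

import (
	"errors"
	"fmt"
	"time"
)

// Driver stands in for the libmachine driver interface; it is not the real API.
type Driver interface {
	Stop() error
	GetState() (string, error)
}

// stopWithTimeout issues a stop request, then polls the machine state once per
// second, giving up after maxAttempts polls (120 in the log above).
func stopWithTimeout(d Driver, maxAttempts int) error {
	if err := d.Stop(); err != nil {
		return err
	}
	for i := 0; i < maxAttempts; i++ {
		state, err := d.GetState()
		if err != nil {
			return err
		}
		if state == "Stopped" {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxAttempts)
		time.Sleep(time.Second)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

// stuckVM never leaves "Running", reproducing the timeout case seen here.
type stuckVM struct{}

func (stuckVM) Stop() error               { return nil }
func (stuckVM) GetState() (string, error) { return "Running", nil }

func main() {
	// With 120 attempts this takes about two minutes, matching the
	// 2m0.466850772s runtime of the failed stop above.
	if err := stopWithTimeout(stuckVM{}, 120); err != nil {
		fmt.Println("stop err:", err)
	}
}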
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-475401 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-475401 status -v=7 --alsologtostderr: exit status 3 (18.851019365s)

                                                
                                                
-- stdout --
	ha-475401
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-475401-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-475401-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0912 22:12:21.889755   34171 out.go:345] Setting OutFile to fd 1 ...
	I0912 22:12:21.890037   34171 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 22:12:21.890049   34171 out.go:358] Setting ErrFile to fd 2...
	I0912 22:12:21.890053   34171 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 22:12:21.890219   34171 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-5891/.minikube/bin
	I0912 22:12:21.890381   34171 out.go:352] Setting JSON to false
	I0912 22:12:21.890407   34171 mustload.go:65] Loading cluster: ha-475401
	I0912 22:12:21.890535   34171 notify.go:220] Checking for updates...
	I0912 22:12:21.890901   34171 config.go:182] Loaded profile config "ha-475401": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 22:12:21.890920   34171 status.go:255] checking status of ha-475401 ...
	I0912 22:12:21.891422   34171 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:12:21.891480   34171 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:12:21.916030   34171 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43983
	I0912 22:12:21.916538   34171 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:12:21.917161   34171 main.go:141] libmachine: Using API Version  1
	I0912 22:12:21.917186   34171 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:12:21.917458   34171 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:12:21.917672   34171 main.go:141] libmachine: (ha-475401) Calling .GetState
	I0912 22:12:21.919326   34171 status.go:330] ha-475401 host status = "Running" (err=<nil>)
	I0912 22:12:21.919345   34171 host.go:66] Checking if "ha-475401" exists ...
	I0912 22:12:21.919691   34171 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:12:21.919724   34171 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:12:21.934705   34171 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33661
	I0912 22:12:21.935129   34171 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:12:21.935549   34171 main.go:141] libmachine: Using API Version  1
	I0912 22:12:21.935570   34171 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:12:21.935859   34171 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:12:21.936022   34171 main.go:141] libmachine: (ha-475401) Calling .GetIP
	I0912 22:12:21.938491   34171 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 22:12:21.938954   34171 main.go:141] libmachine: (ha-475401) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:0e:dd", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:09 +0000 UTC Type:0 Mac:52:54:00:b0:0e:dd Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-475401 Clientid:01:52:54:00:b0:0e:dd}
	I0912 22:12:21.938980   34171 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined IP address 192.168.39.203 and MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 22:12:21.939159   34171 host.go:66] Checking if "ha-475401" exists ...
	I0912 22:12:21.939555   34171 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:12:21.939604   34171 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:12:21.954710   34171 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43869
	I0912 22:12:21.955149   34171 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:12:21.955845   34171 main.go:141] libmachine: Using API Version  1
	I0912 22:12:21.955872   34171 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:12:21.956226   34171 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:12:21.956522   34171 main.go:141] libmachine: (ha-475401) Calling .DriverName
	I0912 22:12:21.956757   34171 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0912 22:12:21.956782   34171 main.go:141] libmachine: (ha-475401) Calling .GetSSHHostname
	I0912 22:12:21.960383   34171 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 22:12:21.960890   34171 main.go:141] libmachine: (ha-475401) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:0e:dd", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:09 +0000 UTC Type:0 Mac:52:54:00:b0:0e:dd Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-475401 Clientid:01:52:54:00:b0:0e:dd}
	I0912 22:12:21.960930   34171 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined IP address 192.168.39.203 and MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 22:12:21.961097   34171 main.go:141] libmachine: (ha-475401) Calling .GetSSHPort
	I0912 22:12:21.961299   34171 main.go:141] libmachine: (ha-475401) Calling .GetSSHKeyPath
	I0912 22:12:21.961557   34171 main.go:141] libmachine: (ha-475401) Calling .GetSSHUsername
	I0912 22:12:21.961766   34171 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401/id_rsa Username:docker}
	I0912 22:12:22.050540   34171 ssh_runner.go:195] Run: systemctl --version
	I0912 22:12:22.057260   34171 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 22:12:22.072856   34171 kubeconfig.go:125] found "ha-475401" server: "https://192.168.39.254:8443"
	I0912 22:12:22.072889   34171 api_server.go:166] Checking apiserver status ...
	I0912 22:12:22.072918   34171 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 22:12:22.088816   34171 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/4837/cgroup
	W0912 22:12:22.098160   34171 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/4837/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0912 22:12:22.098203   34171 ssh_runner.go:195] Run: ls
	I0912 22:12:22.102276   34171 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0912 22:12:22.108531   34171 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0912 22:12:22.108557   34171 status.go:422] ha-475401 apiserver status = Running (err=<nil>)
	I0912 22:12:22.108568   34171 status.go:257] ha-475401 status: &{Name:ha-475401 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0912 22:12:22.108602   34171 status.go:255] checking status of ha-475401-m02 ...
	I0912 22:12:22.108899   34171 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:12:22.108940   34171 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:12:22.123529   34171 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38123
	I0912 22:12:22.124034   34171 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:12:22.124554   34171 main.go:141] libmachine: Using API Version  1
	I0912 22:12:22.124588   34171 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:12:22.124868   34171 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:12:22.125072   34171 main.go:141] libmachine: (ha-475401-m02) Calling .GetState
	I0912 22:12:22.126641   34171 status.go:330] ha-475401-m02 host status = "Running" (err=<nil>)
	I0912 22:12:22.126656   34171 host.go:66] Checking if "ha-475401-m02" exists ...
	I0912 22:12:22.126945   34171 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:12:22.126985   34171 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:12:22.142228   34171 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39041
	I0912 22:12:22.142608   34171 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:12:22.143060   34171 main.go:141] libmachine: Using API Version  1
	I0912 22:12:22.143085   34171 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:12:22.143421   34171 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:12:22.143615   34171 main.go:141] libmachine: (ha-475401-m02) Calling .GetIP
	I0912 22:12:22.146600   34171 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 22:12:22.146979   34171 main.go:141] libmachine: (ha-475401-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:31:3a", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 23:07:40 +0000 UTC Type:0 Mac:52:54:00:ad:31:3a Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-475401-m02 Clientid:01:52:54:00:ad:31:3a}
	I0912 22:12:22.147007   34171 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 22:12:22.147122   34171 host.go:66] Checking if "ha-475401-m02" exists ...
	I0912 22:12:22.147515   34171 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:12:22.147554   34171 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:12:22.162648   34171 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37919
	I0912 22:12:22.163061   34171 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:12:22.163485   34171 main.go:141] libmachine: Using API Version  1
	I0912 22:12:22.163507   34171 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:12:22.163874   34171 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:12:22.164070   34171 main.go:141] libmachine: (ha-475401-m02) Calling .DriverName
	I0912 22:12:22.164278   34171 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0912 22:12:22.164301   34171 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHHostname
	I0912 22:12:22.167200   34171 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 22:12:22.167622   34171 main.go:141] libmachine: (ha-475401-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:31:3a", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 23:07:40 +0000 UTC Type:0 Mac:52:54:00:ad:31:3a Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-475401-m02 Clientid:01:52:54:00:ad:31:3a}
	I0912 22:12:22.167648   34171 main.go:141] libmachine: (ha-475401-m02) DBG | domain ha-475401-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:ad:31:3a in network mk-ha-475401
	I0912 22:12:22.167806   34171 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHPort
	I0912 22:12:22.167993   34171 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHKeyPath
	I0912 22:12:22.168133   34171 main.go:141] libmachine: (ha-475401-m02) Calling .GetSSHUsername
	I0912 22:12:22.168269   34171 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401-m02/id_rsa Username:docker}
	I0912 22:12:22.254293   34171 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 22:12:22.272922   34171 kubeconfig.go:125] found "ha-475401" server: "https://192.168.39.254:8443"
	I0912 22:12:22.272952   34171 api_server.go:166] Checking apiserver status ...
	I0912 22:12:22.272991   34171 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 22:12:22.287390   34171 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1506/cgroup
	W0912 22:12:22.297527   34171 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1506/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0912 22:12:22.297586   34171 ssh_runner.go:195] Run: ls
	I0912 22:12:22.301903   34171 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0912 22:12:22.306291   34171 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0912 22:12:22.306312   34171 status.go:422] ha-475401-m02 apiserver status = Running (err=<nil>)
	I0912 22:12:22.306320   34171 status.go:257] ha-475401-m02 status: &{Name:ha-475401-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0912 22:12:22.306335   34171 status.go:255] checking status of ha-475401-m04 ...
	I0912 22:12:22.306677   34171 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:12:22.306710   34171 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:12:22.322720   34171 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33901
	I0912 22:12:22.323206   34171 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:12:22.323759   34171 main.go:141] libmachine: Using API Version  1
	I0912 22:12:22.323780   34171 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:12:22.324138   34171 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:12:22.324350   34171 main.go:141] libmachine: (ha-475401-m04) Calling .GetState
	I0912 22:12:22.326074   34171 status.go:330] ha-475401-m04 host status = "Running" (err=<nil>)
	I0912 22:12:22.326095   34171 host.go:66] Checking if "ha-475401-m04" exists ...
	I0912 22:12:22.326501   34171 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:12:22.326565   34171 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:12:22.341103   34171 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38385
	I0912 22:12:22.341559   34171 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:12:22.342070   34171 main.go:141] libmachine: Using API Version  1
	I0912 22:12:22.342097   34171 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:12:22.342357   34171 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:12:22.342535   34171 main.go:141] libmachine: (ha-475401-m04) Calling .GetIP
	I0912 22:12:22.345445   34171 main.go:141] libmachine: (ha-475401-m04) DBG | domain ha-475401-m04 has defined MAC address 52:54:00:cd:b0:d3 in network mk-ha-475401
	I0912 22:12:22.345892   34171 main.go:141] libmachine: (ha-475401-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:b0:d3", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 23:09:50 +0000 UTC Type:0 Mac:52:54:00:cd:b0:d3 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-475401-m04 Clientid:01:52:54:00:cd:b0:d3}
	I0912 22:12:22.345917   34171 main.go:141] libmachine: (ha-475401-m04) DBG | domain ha-475401-m04 has defined IP address 192.168.39.76 and MAC address 52:54:00:cd:b0:d3 in network mk-ha-475401
	I0912 22:12:22.346047   34171 host.go:66] Checking if "ha-475401-m04" exists ...
	I0912 22:12:22.346408   34171 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:12:22.346450   34171 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:12:22.361440   34171 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46633
	I0912 22:12:22.361860   34171 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:12:22.362329   34171 main.go:141] libmachine: Using API Version  1
	I0912 22:12:22.362349   34171 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:12:22.362655   34171 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:12:22.362850   34171 main.go:141] libmachine: (ha-475401-m04) Calling .DriverName
	I0912 22:12:22.363048   34171 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0912 22:12:22.363070   34171 main.go:141] libmachine: (ha-475401-m04) Calling .GetSSHHostname
	I0912 22:12:22.365639   34171 main.go:141] libmachine: (ha-475401-m04) DBG | domain ha-475401-m04 has defined MAC address 52:54:00:cd:b0:d3 in network mk-ha-475401
	I0912 22:12:22.366106   34171 main.go:141] libmachine: (ha-475401-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:b0:d3", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 23:09:50 +0000 UTC Type:0 Mac:52:54:00:cd:b0:d3 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-475401-m04 Clientid:01:52:54:00:cd:b0:d3}
	I0912 22:12:22.366134   34171 main.go:141] libmachine: (ha-475401-m04) DBG | domain ha-475401-m04 has defined IP address 192.168.39.76 and MAC address 52:54:00:cd:b0:d3 in network mk-ha-475401
	I0912 22:12:22.366285   34171 main.go:141] libmachine: (ha-475401-m04) Calling .GetSSHPort
	I0912 22:12:22.366446   34171 main.go:141] libmachine: (ha-475401-m04) Calling .GetSSHKeyPath
	I0912 22:12:22.366607   34171 main.go:141] libmachine: (ha-475401-m04) Calling .GetSSHUsername
	I0912 22:12:22.366744   34171 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401-m04/id_rsa Username:docker}
	W0912 22:12:40.697857   34171 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.76:22: connect: no route to host
	W0912 22:12:40.697943   34171 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.76:22: connect: no route to host
	E0912 22:12:40.697957   34171 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.76:22: connect: no route to host
	I0912 22:12:40.697966   34171 status.go:257] ha-475401-m04 status: &{Name:ha-475401-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0912 22:12:40.697983   34171 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.76:22: connect: no route to host

                                                
                                                
** /stderr **
ha_test.go:540: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-475401 status -v=7 --alsologtostderr" : exit status 3
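The "host: Error / kubelet: Nonexistent" status for ha-475401-m04 comes from the status command failing to open an SSH session to the node ("dial tcp 192.168.39.76:22: connect: no route to host" in the stderr above), so no kubelet state can be collected. A minimal sketch of that kind of reachability probe, assuming one only wants to check TCP port 22 before attempting SSH (an illustration, not minikube's status code):

package main

import (
	"fmt"
	"net"
	"time"
)

// sshReachable reports whether a TCP connection to host:22 can be opened
// within the timeout; dial failures surface here exactly as in the status
// stderr above ("connect: no route to host").
func sshReachable(host string, timeout time.Duration) error {
	conn, err := net.DialTimeout("tcp", net.JoinHostPort(host, "22"), timeout)
	if err != nil {
		return err
	}
	return conn.Close()
}

func main() {
	if err := sshReachable("192.168.39.76", 10*time.Second); err != nil {
		fmt.Println("node unreachable:", err)
		return
	}
	fmt.Println("port 22 reachable")
}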
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-475401 -n ha-475401
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-475401 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-475401 logs -n 25: (1.67516997s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-475401 ssh -n ha-475401-m02 sudo cat                                          | ha-475401 | jenkins | v1.34.0 | 12 Sep 24 22:00 UTC | 12 Sep 24 22:00 UTC |
	|         | /home/docker/cp-test_ha-475401-m03_ha-475401-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-475401 cp ha-475401-m03:/home/docker/cp-test.txt                              | ha-475401 | jenkins | v1.34.0 | 12 Sep 24 22:00 UTC | 12 Sep 24 22:00 UTC |
	|         | ha-475401-m04:/home/docker/cp-test_ha-475401-m03_ha-475401-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-475401 ssh -n                                                                 | ha-475401 | jenkins | v1.34.0 | 12 Sep 24 22:00 UTC | 12 Sep 24 22:00 UTC |
	|         | ha-475401-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-475401 ssh -n ha-475401-m04 sudo cat                                          | ha-475401 | jenkins | v1.34.0 | 12 Sep 24 22:00 UTC | 12 Sep 24 22:00 UTC |
	|         | /home/docker/cp-test_ha-475401-m03_ha-475401-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-475401 cp testdata/cp-test.txt                                                | ha-475401 | jenkins | v1.34.0 | 12 Sep 24 22:00 UTC | 12 Sep 24 22:00 UTC |
	|         | ha-475401-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-475401 ssh -n                                                                 | ha-475401 | jenkins | v1.34.0 | 12 Sep 24 22:00 UTC | 12 Sep 24 22:00 UTC |
	|         | ha-475401-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-475401 cp ha-475401-m04:/home/docker/cp-test.txt                              | ha-475401 | jenkins | v1.34.0 | 12 Sep 24 22:00 UTC | 12 Sep 24 22:00 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1750943762/001/cp-test_ha-475401-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-475401 ssh -n                                                                 | ha-475401 | jenkins | v1.34.0 | 12 Sep 24 22:00 UTC | 12 Sep 24 22:00 UTC |
	|         | ha-475401-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-475401 cp ha-475401-m04:/home/docker/cp-test.txt                              | ha-475401 | jenkins | v1.34.0 | 12 Sep 24 22:00 UTC | 12 Sep 24 22:00 UTC |
	|         | ha-475401:/home/docker/cp-test_ha-475401-m04_ha-475401.txt                       |           |         |         |                     |                     |
	| ssh     | ha-475401 ssh -n                                                                 | ha-475401 | jenkins | v1.34.0 | 12 Sep 24 22:00 UTC | 12 Sep 24 22:00 UTC |
	|         | ha-475401-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-475401 ssh -n ha-475401 sudo cat                                              | ha-475401 | jenkins | v1.34.0 | 12 Sep 24 22:00 UTC | 12 Sep 24 22:00 UTC |
	|         | /home/docker/cp-test_ha-475401-m04_ha-475401.txt                                 |           |         |         |                     |                     |
	| cp      | ha-475401 cp ha-475401-m04:/home/docker/cp-test.txt                              | ha-475401 | jenkins | v1.34.0 | 12 Sep 24 22:00 UTC | 12 Sep 24 22:00 UTC |
	|         | ha-475401-m02:/home/docker/cp-test_ha-475401-m04_ha-475401-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-475401 ssh -n                                                                 | ha-475401 | jenkins | v1.34.0 | 12 Sep 24 22:00 UTC | 12 Sep 24 22:00 UTC |
	|         | ha-475401-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-475401 ssh -n ha-475401-m02 sudo cat                                          | ha-475401 | jenkins | v1.34.0 | 12 Sep 24 22:00 UTC | 12 Sep 24 22:00 UTC |
	|         | /home/docker/cp-test_ha-475401-m04_ha-475401-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-475401 cp ha-475401-m04:/home/docker/cp-test.txt                              | ha-475401 | jenkins | v1.34.0 | 12 Sep 24 22:00 UTC | 12 Sep 24 22:00 UTC |
	|         | ha-475401-m03:/home/docker/cp-test_ha-475401-m04_ha-475401-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-475401 ssh -n                                                                 | ha-475401 | jenkins | v1.34.0 | 12 Sep 24 22:00 UTC | 12 Sep 24 22:00 UTC |
	|         | ha-475401-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-475401 ssh -n ha-475401-m03 sudo cat                                          | ha-475401 | jenkins | v1.34.0 | 12 Sep 24 22:00 UTC | 12 Sep 24 22:00 UTC |
	|         | /home/docker/cp-test_ha-475401-m04_ha-475401-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-475401 node stop m02 -v=7                                                     | ha-475401 | jenkins | v1.34.0 | 12 Sep 24 22:00 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-475401 node start m02 -v=7                                                    | ha-475401 | jenkins | v1.34.0 | 12 Sep 24 22:02 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-475401 -v=7                                                           | ha-475401 | jenkins | v1.34.0 | 12 Sep 24 22:03 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-475401 -v=7                                                                | ha-475401 | jenkins | v1.34.0 | 12 Sep 24 22:03 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-475401 --wait=true -v=7                                                    | ha-475401 | jenkins | v1.34.0 | 12 Sep 24 22:05 UTC | 12 Sep 24 22:10 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-475401                                                                | ha-475401 | jenkins | v1.34.0 | 12 Sep 24 22:10 UTC |                     |
	| node    | ha-475401 node delete m03 -v=7                                                   | ha-475401 | jenkins | v1.34.0 | 12 Sep 24 22:10 UTC | 12 Sep 24 22:10 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-475401 stop -v=7                                                              | ha-475401 | jenkins | v1.34.0 | 12 Sep 24 22:10 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/12 22:05:54
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0912 22:05:54.308256   31965 out.go:345] Setting OutFile to fd 1 ...
	I0912 22:05:54.308402   31965 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 22:05:54.308415   31965 out.go:358] Setting ErrFile to fd 2...
	I0912 22:05:54.308422   31965 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 22:05:54.308856   31965 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-5891/.minikube/bin
	I0912 22:05:54.309456   31965 out.go:352] Setting JSON to false
	I0912 22:05:54.310456   31965 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2896,"bootTime":1726175858,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0912 22:05:54.310519   31965 start.go:139] virtualization: kvm guest
	I0912 22:05:54.312895   31965 out.go:177] * [ha-475401] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0912 22:05:54.314120   31965 out.go:177]   - MINIKUBE_LOCATION=19616
	I0912 22:05:54.314144   31965 notify.go:220] Checking for updates...
	I0912 22:05:54.316741   31965 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 22:05:54.318263   31965 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19616-5891/kubeconfig
	I0912 22:05:54.319814   31965 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19616-5891/.minikube
	I0912 22:05:54.321183   31965 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0912 22:05:54.322460   31965 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 22:05:54.324240   31965 config.go:182] Loaded profile config "ha-475401": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 22:05:54.324330   31965 driver.go:394] Setting default libvirt URI to qemu:///system
	I0912 22:05:54.324718   31965 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:05:54.324776   31965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:05:54.340147   31965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44867
	I0912 22:05:54.340668   31965 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:05:54.341186   31965 main.go:141] libmachine: Using API Version  1
	I0912 22:05:54.341205   31965 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:05:54.341559   31965 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:05:54.341798   31965 main.go:141] libmachine: (ha-475401) Calling .DriverName
	I0912 22:05:54.379340   31965 out.go:177] * Using the kvm2 driver based on existing profile
	I0912 22:05:54.380592   31965 start.go:297] selected driver: kvm2
	I0912 22:05:54.380614   31965 start.go:901] validating driver "kvm2" against &{Name:ha-475401 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.1 ClusterName:ha-475401 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.203 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.222 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.113 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.76 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false ef
k:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9
p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 22:05:54.380762   31965 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 22:05:54.381236   31965 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 22:05:54.381320   31965 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19616-5891/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0912 22:05:54.396424   31965 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0912 22:05:54.397109   31965 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 22:05:54.397199   31965 cni.go:84] Creating CNI manager for ""
	I0912 22:05:54.397214   31965 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0912 22:05:54.397304   31965 start.go:340] cluster config:
	{Name:ha-475401 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-475401 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.203 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.222 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.113 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.76 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-til
ler:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPo
rt:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 22:05:54.397444   31965 iso.go:125] acquiring lock: {Name:mk3ec3c4afd4210b7425f6425f55e7f581d9a5a9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 22:05:54.400291   31965 out.go:177] * Starting "ha-475401" primary control-plane node in "ha-475401" cluster
	I0912 22:05:54.401651   31965 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0912 22:05:54.401689   31965 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0912 22:05:54.401698   31965 cache.go:56] Caching tarball of preloaded images
	I0912 22:05:54.401762   31965 preload.go:172] Found /home/jenkins/minikube-integration/19616-5891/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0912 22:05:54.401773   31965 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0912 22:05:54.401892   31965 profile.go:143] Saving config to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/config.json ...
	I0912 22:05:54.402082   31965 start.go:360] acquireMachinesLock for ha-475401: {Name:mkbb0a9e58b1349e86a63b6069c42d4248d92c3b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 22:05:54.402123   31965 start.go:364] duration metric: took 23.908µs to acquireMachinesLock for "ha-475401"
	I0912 22:05:54.402136   31965 start.go:96] Skipping create...Using existing machine configuration
	I0912 22:05:54.402142   31965 fix.go:54] fixHost starting: 
	I0912 22:05:54.402408   31965 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:05:54.402435   31965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:05:54.416855   31965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44115
	I0912 22:05:54.417279   31965 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:05:54.417760   31965 main.go:141] libmachine: Using API Version  1
	I0912 22:05:54.417796   31965 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:05:54.418125   31965 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:05:54.418293   31965 main.go:141] libmachine: (ha-475401) Calling .DriverName
	I0912 22:05:54.418467   31965 main.go:141] libmachine: (ha-475401) Calling .GetState
	I0912 22:05:54.420388   31965 fix.go:112] recreateIfNeeded on ha-475401: state=Running err=<nil>
	W0912 22:05:54.420414   31965 fix.go:138] unexpected machine state, will restart: <nil>
	I0912 22:05:54.422325   31965 out.go:177] * Updating the running kvm2 "ha-475401" VM ...
	I0912 22:05:54.423590   31965 machine.go:93] provisionDockerMachine start ...
	I0912 22:05:54.423612   31965 main.go:141] libmachine: (ha-475401) Calling .DriverName
	I0912 22:05:54.423841   31965 main.go:141] libmachine: (ha-475401) Calling .GetSSHHostname
	I0912 22:05:54.426690   31965 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 22:05:54.427140   31965 main.go:141] libmachine: (ha-475401) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:0e:dd", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:09 +0000 UTC Type:0 Mac:52:54:00:b0:0e:dd Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-475401 Clientid:01:52:54:00:b0:0e:dd}
	I0912 22:05:54.427174   31965 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined IP address 192.168.39.203 and MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 22:05:54.427293   31965 main.go:141] libmachine: (ha-475401) Calling .GetSSHPort
	I0912 22:05:54.427533   31965 main.go:141] libmachine: (ha-475401) Calling .GetSSHKeyPath
	I0912 22:05:54.427702   31965 main.go:141] libmachine: (ha-475401) Calling .GetSSHKeyPath
	I0912 22:05:54.427881   31965 main.go:141] libmachine: (ha-475401) Calling .GetSSHUsername
	I0912 22:05:54.428114   31965 main.go:141] libmachine: Using SSH client type: native
	I0912 22:05:54.428317   31965 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I0912 22:05:54.428327   31965 main.go:141] libmachine: About to run SSH command:
	hostname
	I0912 22:05:54.551333   31965 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-475401
	
	I0912 22:05:54.551374   31965 main.go:141] libmachine: (ha-475401) Calling .GetMachineName
	I0912 22:05:54.551693   31965 buildroot.go:166] provisioning hostname "ha-475401"
	I0912 22:05:54.551715   31965 main.go:141] libmachine: (ha-475401) Calling .GetMachineName
	I0912 22:05:54.551979   31965 main.go:141] libmachine: (ha-475401) Calling .GetSSHHostname
	I0912 22:05:54.555806   31965 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 22:05:54.556355   31965 main.go:141] libmachine: (ha-475401) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:0e:dd", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:09 +0000 UTC Type:0 Mac:52:54:00:b0:0e:dd Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-475401 Clientid:01:52:54:00:b0:0e:dd}
	I0912 22:05:54.556383   31965 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined IP address 192.168.39.203 and MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 22:05:54.556598   31965 main.go:141] libmachine: (ha-475401) Calling .GetSSHPort
	I0912 22:05:54.556825   31965 main.go:141] libmachine: (ha-475401) Calling .GetSSHKeyPath
	I0912 22:05:54.556995   31965 main.go:141] libmachine: (ha-475401) Calling .GetSSHKeyPath
	I0912 22:05:54.557167   31965 main.go:141] libmachine: (ha-475401) Calling .GetSSHUsername
	I0912 22:05:54.557355   31965 main.go:141] libmachine: Using SSH client type: native
	I0912 22:05:54.557515   31965 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I0912 22:05:54.557528   31965 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-475401 && echo "ha-475401" | sudo tee /etc/hostname
	I0912 22:05:54.685807   31965 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-475401
	
	I0912 22:05:54.685833   31965 main.go:141] libmachine: (ha-475401) Calling .GetSSHHostname
	I0912 22:05:54.688862   31965 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 22:05:54.689230   31965 main.go:141] libmachine: (ha-475401) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:0e:dd", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:09 +0000 UTC Type:0 Mac:52:54:00:b0:0e:dd Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-475401 Clientid:01:52:54:00:b0:0e:dd}
	I0912 22:05:54.689272   31965 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined IP address 192.168.39.203 and MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 22:05:54.689458   31965 main.go:141] libmachine: (ha-475401) Calling .GetSSHPort
	I0912 22:05:54.689659   31965 main.go:141] libmachine: (ha-475401) Calling .GetSSHKeyPath
	I0912 22:05:54.689821   31965 main.go:141] libmachine: (ha-475401) Calling .GetSSHKeyPath
	I0912 22:05:54.689956   31965 main.go:141] libmachine: (ha-475401) Calling .GetSSHUsername
	I0912 22:05:54.690172   31965 main.go:141] libmachine: Using SSH client type: native
	I0912 22:05:54.690320   31965 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I0912 22:05:54.690337   31965 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-475401' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-475401/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-475401' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0912 22:05:54.806548   31965 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0912 22:05:54.806581   31965 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19616-5891/.minikube CaCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19616-5891/.minikube}
	I0912 22:05:54.806614   31965 buildroot.go:174] setting up certificates
	I0912 22:05:54.806628   31965 provision.go:84] configureAuth start
	I0912 22:05:54.806642   31965 main.go:141] libmachine: (ha-475401) Calling .GetMachineName
	I0912 22:05:54.806925   31965 main.go:141] libmachine: (ha-475401) Calling .GetIP
	I0912 22:05:54.809452   31965 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 22:05:54.809877   31965 main.go:141] libmachine: (ha-475401) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:0e:dd", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:09 +0000 UTC Type:0 Mac:52:54:00:b0:0e:dd Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-475401 Clientid:01:52:54:00:b0:0e:dd}
	I0912 22:05:54.809917   31965 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined IP address 192.168.39.203 and MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 22:05:54.810060   31965 main.go:141] libmachine: (ha-475401) Calling .GetSSHHostname
	I0912 22:05:54.812538   31965 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 22:05:54.812946   31965 main.go:141] libmachine: (ha-475401) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:0e:dd", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:09 +0000 UTC Type:0 Mac:52:54:00:b0:0e:dd Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-475401 Clientid:01:52:54:00:b0:0e:dd}
	I0912 22:05:54.812972   31965 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined IP address 192.168.39.203 and MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 22:05:54.813110   31965 provision.go:143] copyHostCerts
	I0912 22:05:54.813152   31965 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem
	I0912 22:05:54.813184   31965 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem, removing ...
	I0912 22:05:54.813195   31965 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem
	I0912 22:05:54.813259   31965 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem (1082 bytes)
	I0912 22:05:54.813335   31965 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem
	I0912 22:05:54.813354   31965 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem, removing ...
	I0912 22:05:54.813359   31965 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem
	I0912 22:05:54.813383   31965 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem (1123 bytes)
	I0912 22:05:54.813422   31965 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem
	I0912 22:05:54.813438   31965 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem, removing ...
	I0912 22:05:54.813444   31965 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem
	I0912 22:05:54.813463   31965 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem (1679 bytes)
	I0912 22:05:54.813507   31965 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem org=jenkins.ha-475401 san=[127.0.0.1 192.168.39.203 ha-475401 localhost minikube]
	I0912 22:05:54.918391   31965 provision.go:177] copyRemoteCerts
	I0912 22:05:54.918443   31965 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0912 22:05:54.918464   31965 main.go:141] libmachine: (ha-475401) Calling .GetSSHHostname
	I0912 22:05:54.921345   31965 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 22:05:54.921776   31965 main.go:141] libmachine: (ha-475401) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:0e:dd", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:09 +0000 UTC Type:0 Mac:52:54:00:b0:0e:dd Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-475401 Clientid:01:52:54:00:b0:0e:dd}
	I0912 22:05:54.921807   31965 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined IP address 192.168.39.203 and MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 22:05:54.921990   31965 main.go:141] libmachine: (ha-475401) Calling .GetSSHPort
	I0912 22:05:54.922164   31965 main.go:141] libmachine: (ha-475401) Calling .GetSSHKeyPath
	I0912 22:05:54.922386   31965 main.go:141] libmachine: (ha-475401) Calling .GetSSHUsername
	I0912 22:05:54.922559   31965 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401/id_rsa Username:docker}
	I0912 22:05:55.011813   31965 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0912 22:05:55.011876   31965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0912 22:05:55.037020   31965 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0912 22:05:55.037089   31965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0912 22:05:55.063153   31965 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0912 22:05:55.063233   31965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0912 22:05:55.090797   31965 provision.go:87] duration metric: took 284.151321ms to configureAuth
	I0912 22:05:55.090827   31965 buildroot.go:189] setting minikube options for container-runtime
	I0912 22:05:55.091088   31965 config.go:182] Loaded profile config "ha-475401": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 22:05:55.091170   31965 main.go:141] libmachine: (ha-475401) Calling .GetSSHHostname
	I0912 22:05:55.093647   31965 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 22:05:55.094052   31965 main.go:141] libmachine: (ha-475401) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:0e:dd", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:09 +0000 UTC Type:0 Mac:52:54:00:b0:0e:dd Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-475401 Clientid:01:52:54:00:b0:0e:dd}
	I0912 22:05:55.094083   31965 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined IP address 192.168.39.203 and MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 22:05:55.094307   31965 main.go:141] libmachine: (ha-475401) Calling .GetSSHPort
	I0912 22:05:55.094503   31965 main.go:141] libmachine: (ha-475401) Calling .GetSSHKeyPath
	I0912 22:05:55.094690   31965 main.go:141] libmachine: (ha-475401) Calling .GetSSHKeyPath
	I0912 22:05:55.094855   31965 main.go:141] libmachine: (ha-475401) Calling .GetSSHUsername
	I0912 22:05:55.095009   31965 main.go:141] libmachine: Using SSH client type: native
	I0912 22:05:55.095239   31965 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I0912 22:05:55.095256   31965 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0912 22:07:26.037707   31965 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0912 22:07:26.037733   31965 machine.go:96] duration metric: took 1m31.61412699s to provisionDockerMachine
	I0912 22:07:26.037743   31965 start.go:293] postStartSetup for "ha-475401" (driver="kvm2")
	I0912 22:07:26.037754   31965 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0912 22:07:26.037767   31965 main.go:141] libmachine: (ha-475401) Calling .DriverName
	I0912 22:07:26.038127   31965 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0912 22:07:26.038151   31965 main.go:141] libmachine: (ha-475401) Calling .GetSSHHostname
	I0912 22:07:26.041250   31965 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 22:07:26.041769   31965 main.go:141] libmachine: (ha-475401) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:0e:dd", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:09 +0000 UTC Type:0 Mac:52:54:00:b0:0e:dd Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-475401 Clientid:01:52:54:00:b0:0e:dd}
	I0912 22:07:26.041804   31965 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined IP address 192.168.39.203 and MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 22:07:26.041979   31965 main.go:141] libmachine: (ha-475401) Calling .GetSSHPort
	I0912 22:07:26.042192   31965 main.go:141] libmachine: (ha-475401) Calling .GetSSHKeyPath
	I0912 22:07:26.042428   31965 main.go:141] libmachine: (ha-475401) Calling .GetSSHUsername
	I0912 22:07:26.042601   31965 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401/id_rsa Username:docker}
	I0912 22:07:26.130491   31965 ssh_runner.go:195] Run: cat /etc/os-release
	I0912 22:07:26.134534   31965 info.go:137] Remote host: Buildroot 2023.02.9
	I0912 22:07:26.134555   31965 filesync.go:126] Scanning /home/jenkins/minikube-integration/19616-5891/.minikube/addons for local assets ...
	I0912 22:07:26.134636   31965 filesync.go:126] Scanning /home/jenkins/minikube-integration/19616-5891/.minikube/files for local assets ...
	I0912 22:07:26.134728   31965 filesync.go:149] local asset: /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem -> 130832.pem in /etc/ssl/certs
	I0912 22:07:26.134739   31965 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem -> /etc/ssl/certs/130832.pem
	I0912 22:07:26.134858   31965 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0912 22:07:26.144354   31965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem --> /etc/ssl/certs/130832.pem (1708 bytes)
	I0912 22:07:26.166753   31965 start.go:296] duration metric: took 128.997755ms for postStartSetup
	I0912 22:07:26.166795   31965 main.go:141] libmachine: (ha-475401) Calling .DriverName
	I0912 22:07:26.167111   31965 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0912 22:07:26.167141   31965 main.go:141] libmachine: (ha-475401) Calling .GetSSHHostname
	I0912 22:07:26.169926   31965 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 22:07:26.170340   31965 main.go:141] libmachine: (ha-475401) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:0e:dd", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:09 +0000 UTC Type:0 Mac:52:54:00:b0:0e:dd Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-475401 Clientid:01:52:54:00:b0:0e:dd}
	I0912 22:07:26.170369   31965 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined IP address 192.168.39.203 and MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 22:07:26.170515   31965 main.go:141] libmachine: (ha-475401) Calling .GetSSHPort
	I0912 22:07:26.170720   31965 main.go:141] libmachine: (ha-475401) Calling .GetSSHKeyPath
	I0912 22:07:26.170883   31965 main.go:141] libmachine: (ha-475401) Calling .GetSSHUsername
	I0912 22:07:26.171029   31965 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401/id_rsa Username:docker}
	W0912 22:07:26.257016   31965 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0912 22:07:26.257042   31965 fix.go:56] duration metric: took 1m31.854898899s for fixHost
	I0912 22:07:26.257067   31965 main.go:141] libmachine: (ha-475401) Calling .GetSSHHostname
	I0912 22:07:26.259659   31965 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 22:07:26.260072   31965 main.go:141] libmachine: (ha-475401) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:0e:dd", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:09 +0000 UTC Type:0 Mac:52:54:00:b0:0e:dd Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-475401 Clientid:01:52:54:00:b0:0e:dd}
	I0912 22:07:26.260098   31965 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined IP address 192.168.39.203 and MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 22:07:26.260257   31965 main.go:141] libmachine: (ha-475401) Calling .GetSSHPort
	I0912 22:07:26.260447   31965 main.go:141] libmachine: (ha-475401) Calling .GetSSHKeyPath
	I0912 22:07:26.260691   31965 main.go:141] libmachine: (ha-475401) Calling .GetSSHKeyPath
	I0912 22:07:26.260870   31965 main.go:141] libmachine: (ha-475401) Calling .GetSSHUsername
	I0912 22:07:26.261020   31965 main.go:141] libmachine: Using SSH client type: native
	I0912 22:07:26.261241   31965 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I0912 22:07:26.261258   31965 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0912 22:07:26.374318   31965 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726178846.333397263
	
	I0912 22:07:26.374349   31965 fix.go:216] guest clock: 1726178846.333397263
	I0912 22:07:26.374360   31965 fix.go:229] Guest: 2024-09-12 22:07:26.333397263 +0000 UTC Remote: 2024-09-12 22:07:26.257051086 +0000 UTC m=+91.983184381 (delta=76.346177ms)
	I0912 22:07:26.374388   31965 fix.go:200] guest clock delta is within tolerance: 76.346177ms
	I0912 22:07:26.374405   31965 start.go:83] releasing machines lock for "ha-475401", held for 1m31.972271979s
	I0912 22:07:26.374432   31965 main.go:141] libmachine: (ha-475401) Calling .DriverName
	I0912 22:07:26.374692   31965 main.go:141] libmachine: (ha-475401) Calling .GetIP
	I0912 22:07:26.377314   31965 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 22:07:26.377693   31965 main.go:141] libmachine: (ha-475401) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:0e:dd", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:09 +0000 UTC Type:0 Mac:52:54:00:b0:0e:dd Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-475401 Clientid:01:52:54:00:b0:0e:dd}
	I0912 22:07:26.377722   31965 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined IP address 192.168.39.203 and MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 22:07:26.377834   31965 main.go:141] libmachine: (ha-475401) Calling .DriverName
	I0912 22:07:26.378357   31965 main.go:141] libmachine: (ha-475401) Calling .DriverName
	I0912 22:07:26.378569   31965 main.go:141] libmachine: (ha-475401) Calling .DriverName
	I0912 22:07:26.378699   31965 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0912 22:07:26.378736   31965 main.go:141] libmachine: (ha-475401) Calling .GetSSHHostname
	I0912 22:07:26.378865   31965 ssh_runner.go:195] Run: cat /version.json
	I0912 22:07:26.378901   31965 main.go:141] libmachine: (ha-475401) Calling .GetSSHHostname
	I0912 22:07:26.381589   31965 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 22:07:26.381654   31965 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 22:07:26.382033   31965 main.go:141] libmachine: (ha-475401) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:0e:dd", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:09 +0000 UTC Type:0 Mac:52:54:00:b0:0e:dd Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-475401 Clientid:01:52:54:00:b0:0e:dd}
	I0912 22:07:26.382060   31965 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined IP address 192.168.39.203 and MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 22:07:26.382089   31965 main.go:141] libmachine: (ha-475401) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:0e:dd", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:09 +0000 UTC Type:0 Mac:52:54:00:b0:0e:dd Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-475401 Clientid:01:52:54:00:b0:0e:dd}
	I0912 22:07:26.382103   31965 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined IP address 192.168.39.203 and MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 22:07:26.382154   31965 main.go:141] libmachine: (ha-475401) Calling .GetSSHPort
	I0912 22:07:26.382336   31965 main.go:141] libmachine: (ha-475401) Calling .GetSSHKeyPath
	I0912 22:07:26.382390   31965 main.go:141] libmachine: (ha-475401) Calling .GetSSHPort
	I0912 22:07:26.382475   31965 main.go:141] libmachine: (ha-475401) Calling .GetSSHUsername
	I0912 22:07:26.382546   31965 main.go:141] libmachine: (ha-475401) Calling .GetSSHKeyPath
	I0912 22:07:26.382623   31965 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401/id_rsa Username:docker}
	I0912 22:07:26.382675   31965 main.go:141] libmachine: (ha-475401) Calling .GetSSHUsername
	I0912 22:07:26.382854   31965 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/ha-475401/id_rsa Username:docker}
	I0912 22:07:26.495119   31965 ssh_runner.go:195] Run: systemctl --version
	I0912 22:07:26.501061   31965 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0912 22:07:26.664433   31965 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0912 22:07:26.669949   31965 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0912 22:07:26.670015   31965 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0912 22:07:26.679526   31965 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0912 22:07:26.679555   31965 start.go:495] detecting cgroup driver to use...
	I0912 22:07:26.679622   31965 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0912 22:07:26.698971   31965 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0912 22:07:26.717299   31965 docker.go:217] disabling cri-docker service (if available) ...
	I0912 22:07:26.717369   31965 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0912 22:07:26.732219   31965 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0912 22:07:26.746688   31965 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0912 22:07:26.919990   31965 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0912 22:07:27.084592   31965 docker.go:233] disabling docker service ...
	I0912 22:07:27.084658   31965 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0912 22:07:27.102083   31965 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0912 22:07:27.115726   31965 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0912 22:07:27.262053   31965 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0912 22:07:27.406862   31965 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0912 22:07:27.420194   31965 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0912 22:07:27.438223   31965 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0912 22:07:27.438289   31965 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 22:07:27.449221   31965 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0912 22:07:27.449305   31965 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 22:07:27.459434   31965 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 22:07:27.469427   31965 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 22:07:27.479525   31965 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0912 22:07:27.490732   31965 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 22:07:27.501287   31965 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 22:07:27.513138   31965 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 22:07:27.523454   31965 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0912 22:07:27.533717   31965 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0912 22:07:27.543750   31965 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 22:07:27.697172   31965 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0912 22:07:27.923563   31965 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0912 22:07:27.923650   31965 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0912 22:07:27.929298   31965 start.go:563] Will wait 60s for crictl version
	I0912 22:07:27.929380   31965 ssh_runner.go:195] Run: which crictl
	I0912 22:07:27.933026   31965 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0912 22:07:27.974820   31965 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0912 22:07:27.974894   31965 ssh_runner.go:195] Run: crio --version
	I0912 22:07:28.004358   31965 ssh_runner.go:195] Run: crio --version
	I0912 22:07:28.036426   31965 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0912 22:07:28.038031   31965 main.go:141] libmachine: (ha-475401) Calling .GetIP
	I0912 22:07:28.041462   31965 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 22:07:28.042024   31965 main.go:141] libmachine: (ha-475401) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:0e:dd", ip: ""} in network mk-ha-475401: {Iface:virbr1 ExpiryTime:2024-09-12 22:56:09 +0000 UTC Type:0 Mac:52:54:00:b0:0e:dd Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-475401 Clientid:01:52:54:00:b0:0e:dd}
	I0912 22:07:28.042055   31965 main.go:141] libmachine: (ha-475401) DBG | domain ha-475401 has defined IP address 192.168.39.203 and MAC address 52:54:00:b0:0e:dd in network mk-ha-475401
	I0912 22:07:28.042354   31965 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0912 22:07:28.047267   31965 kubeadm.go:883] updating cluster {Name:ha-475401 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:ha-475401 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.203 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.222 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.113 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.76 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fre
shpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0912 22:07:28.047416   31965 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0912 22:07:28.047459   31965 ssh_runner.go:195] Run: sudo crictl images --output json
	I0912 22:07:28.096154   31965 crio.go:514] all images are preloaded for cri-o runtime.
	I0912 22:07:28.096177   31965 crio.go:433] Images already preloaded, skipping extraction
	I0912 22:07:28.096221   31965 ssh_runner.go:195] Run: sudo crictl images --output json
	I0912 22:07:28.134173   31965 crio.go:514] all images are preloaded for cri-o runtime.
	I0912 22:07:28.134194   31965 cache_images.go:84] Images are preloaded, skipping loading
	I0912 22:07:28.134203   31965 kubeadm.go:934] updating node { 192.168.39.203 8443 v1.31.1 crio true true} ...
	I0912 22:07:28.134314   31965 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-475401 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.203
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-475401 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
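The [Unit]/[Service] fragment above is the kubelet drop-in that minikube writes to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few steps further down. The empty ExecStart= line is standard systemd practice: it clears the command inherited from the base unit so the ExecStart that follows fully replaces it. A minimal sketch of how one could inspect the merged unit on the node (illustrative only, not output from this run):

	# illustrative sketch, not from this run: show the base unit together with its drop-ins
	systemctl cat kubelet | grep -B1 -A4 '10-kubeadm.conf'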
	I0912 22:07:28.134383   31965 ssh_runner.go:195] Run: crio config
	I0912 22:07:28.181792   31965 cni.go:84] Creating CNI manager for ""
	I0912 22:07:28.181819   31965 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0912 22:07:28.181830   31965 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0912 22:07:28.181858   31965 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.203 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-475401 NodeName:ha-475401 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.203"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.203 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0912 22:07:28.182005   31965 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.203
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-475401"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.203
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.203"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0912 22:07:28.182035   31965 kube-vip.go:115] generating kube-vip config ...
	I0912 22:07:28.182075   31965 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0912 22:07:28.193639   31965 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0912 22:07:28.193786   31965 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
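This generated static-pod manifest is what ends up at /etc/kubernetes/manifests/kube-vip.yaml (the 1441-byte scp a few lines below); lb_enable/lb_port turn on control-plane load balancing, which is backed by IPVS, hence the ip_vs* modprobe just before the config was generated. A minimal sketch of how one might double-check the result on the node (illustrative only, not output from this run):

	# illustrative sketch, not from this run: confirm the advertised VIP in the written manifest
	grep -A1 'name: address' /etc/kubernetes/manifests/kube-vip.yaml
	#     - name: address
	#       value: 192.168.39.254
	# and confirm the IPVS modules kube-vip's load balancing depends on are loaded
	lsmod | grep -E '^ip_vs|^nf_conntrack'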
	I0912 22:07:28.193852   31965 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0912 22:07:28.203866   31965 binaries.go:44] Found k8s binaries, skipping transfer
	I0912 22:07:28.203941   31965 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0912 22:07:28.214283   31965 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0912 22:07:28.231262   31965 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0912 22:07:28.248224   31965 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0912 22:07:28.265836   31965 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0912 22:07:28.282487   31965 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0912 22:07:28.287274   31965 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 22:07:28.436508   31965 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0912 22:07:28.451645   31965 certs.go:68] Setting up /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401 for IP: 192.168.39.203
	I0912 22:07:28.451675   31965 certs.go:194] generating shared ca certs ...
	I0912 22:07:28.451696   31965 certs.go:226] acquiring lock for ca certs: {Name:mk5e19f2c2757edc3fcb6b43a39efee0885b349c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 22:07:28.451860   31965 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.key
	I0912 22:07:28.451901   31965 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.key
	I0912 22:07:28.451909   31965 certs.go:256] generating profile certs ...
	I0912 22:07:28.451991   31965 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/client.key
	I0912 22:07:28.452018   31965 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.key.9737f01f
	I0912 22:07:28.452039   31965 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.crt.9737f01f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.203 192.168.39.222 192.168.39.113 192.168.39.254]
	I0912 22:07:28.583568   31965 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.crt.9737f01f ...
	I0912 22:07:28.583606   31965 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.crt.9737f01f: {Name:mkfee23c0cb253b22ce00c619242c3decf75e6d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 22:07:28.583870   31965 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.key.9737f01f ...
	I0912 22:07:28.583895   31965 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.key.9737f01f: {Name:mka446876fe030cddcd2d9f5b61575e77d3b6f05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 22:07:28.584005   31965 certs.go:381] copying /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.crt.9737f01f -> /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.crt
	I0912 22:07:28.584196   31965 certs.go:385] copying /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.key.9737f01f -> /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.key
	I0912 22:07:28.584452   31965 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/proxy-client.key
	I0912 22:07:28.584473   31965 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0912 22:07:28.584489   31965 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0912 22:07:28.584509   31965 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0912 22:07:28.584524   31965 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0912 22:07:28.584542   31965 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0912 22:07:28.584560   31965 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0912 22:07:28.584594   31965 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0912 22:07:28.584614   31965 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0912 22:07:28.584678   31965 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083.pem (1338 bytes)
	W0912 22:07:28.584721   31965 certs.go:480] ignoring /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083_empty.pem, impossibly tiny 0 bytes
	I0912 22:07:28.584735   31965 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem (1679 bytes)
	I0912 22:07:28.584765   31965 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem (1082 bytes)
	I0912 22:07:28.584798   31965 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem (1123 bytes)
	I0912 22:07:28.584832   31965 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem (1679 bytes)
	I0912 22:07:28.584886   31965 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem (1708 bytes)
	I0912 22:07:28.584926   31965 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0912 22:07:28.584973   31965 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083.pem -> /usr/share/ca-certificates/13083.pem
	I0912 22:07:28.584991   31965 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem -> /usr/share/ca-certificates/130832.pem
	I0912 22:07:28.585563   31965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0912 22:07:28.611042   31965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0912 22:07:28.635208   31965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0912 22:07:28.659588   31965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0912 22:07:28.683242   31965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0912 22:07:28.706777   31965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0912 22:07:28.729689   31965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0912 22:07:28.751938   31965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/ha-475401/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0912 22:07:28.775841   31965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0912 22:07:28.799649   31965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083.pem --> /usr/share/ca-certificates/13083.pem (1338 bytes)
	I0912 22:07:28.822651   31965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem --> /usr/share/ca-certificates/130832.pem (1708 bytes)
	I0912 22:07:28.851268   31965 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0912 22:07:28.933439   31965 ssh_runner.go:195] Run: openssl version
	I0912 22:07:28.962187   31965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130832.pem && ln -fs /usr/share/ca-certificates/130832.pem /etc/ssl/certs/130832.pem"
	I0912 22:07:28.975319   31965 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130832.pem
	I0912 22:07:28.998140   31965 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 12 21:47 /usr/share/ca-certificates/130832.pem
	I0912 22:07:28.998208   31965 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130832.pem
	I0912 22:07:29.020666   31965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130832.pem /etc/ssl/certs/3ec20f2e.0"
	I0912 22:07:29.049442   31965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0912 22:07:29.073325   31965 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0912 22:07:29.116004   31965 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 12 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0912 22:07:29.116064   31965 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0912 22:07:29.187294   31965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0912 22:07:29.245295   31965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13083.pem && ln -fs /usr/share/ca-certificates/13083.pem /etc/ssl/certs/13083.pem"
	I0912 22:07:29.337834   31965 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13083.pem
	I0912 22:07:29.350119   31965 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 12 21:47 /usr/share/ca-certificates/13083.pem
	I0912 22:07:29.350177   31965 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13083.pem
	I0912 22:07:29.423318   31965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13083.pem /etc/ssl/certs/51391683.0"
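The three-step pattern above (test/link the PEM, `openssl x509 -hash`, then link /etc/ssl/certs/<hash>.0) follows OpenSSL's hashed-directory convention: the eight-hex-digit link name is the certificate's subject hash, which is what lets anything using the default CApath resolve the CA. A minimal sketch reusing the paths from this run (illustrative only, not output from this run):

	# illustrative sketch, not from this run: the link name is the subject hash of the cert
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# -> b5213941
	ls -l /etc/ssl/certs/b5213941.0
	# -> /etc/ssl/certs/b5213941.0 -> /etc/ssl/certs/minikubeCA.pem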
	I0912 22:07:29.452139   31965 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0912 22:07:29.472664   31965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0912 22:07:29.496224   31965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0912 22:07:29.524629   31965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0912 22:07:29.548007   31965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0912 22:07:29.563185   31965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0912 22:07:29.613805   31965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
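Each `-checkend 86400` above asks openssl whether the certificate will still be valid 86400 seconds (24 hours) from now: exit status 0 if it will, non-zero if it expires within that window, so a failing check here would surface a soon-to-expire cert before the cluster is started. A minimal sketch of the same check made explicit (illustrative only, not output from this run):

	# illustrative sketch, not from this run: the 24h expiry guard performed above
	if openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400; then
	    echo "certificate valid for at least another 24h"
	else
	    echo "certificate expires (or has already expired) within 24h"
	fi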
	I0912 22:07:29.647961   31965 kubeadm.go:392] StartCluster: {Name:ha-475401 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-475401 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.203 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.222 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.113 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.76 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountG
ID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 22:07:29.648066   31965 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0912 22:07:29.648129   31965 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0912 22:07:29.829809   31965 cri.go:89] found id: "1fb5957e8f938cf51ff0e9ac1f2e0af610583e907bc7937da1bb19c7af3ef6c6"
	I0912 22:07:29.829839   31965 cri.go:89] found id: "1d31b278af3adedc4eaca27db99510c99bdd7dcc10da7656a3b85767b493ae3a"
	I0912 22:07:29.829846   31965 cri.go:89] found id: "21b27af5812da51165304d6948b93ce25cffa267f34847a15febc75cb59f84b5"
	I0912 22:07:29.829851   31965 cri.go:89] found id: "e550a104b2f9042382f9e65726926c623fb8e868e373108175fc495c9dd64c8f"
	I0912 22:07:29.829855   31965 cri.go:89] found id: "b433fe13a2ac8127e75624cac8d8e0fcbfbca2ad39df047d1a05ed9ce6172dea"
	I0912 22:07:29.829860   31965 cri.go:89] found id: "9fbb04fa01cedb3e1e9ca48c8a9b7758dc67279fea5288ee919c6e0e30a20caa"
	I0912 22:07:29.829864   31965 cri.go:89] found id: "9b36db608ba8cd77ee7893c00e7e8801981eb2c1fa6b48980fbc8a3dea7306e4"
	I0912 22:07:29.829869   31965 cri.go:89] found id: "f56ac218b5509f77f667fc3bdb07a21ae743c376589c8833f500d1addfc99f73"
	I0912 22:07:29.829873   31965 cri.go:89] found id: "38d31aa5dc4105508066466c3ec1760275d6df1b5a41215ea8624bdecb7f44e8"
	I0912 22:07:29.829882   31965 cri.go:89] found id: "0891cec467fda03cc10ec8bf4db216ce7cae379bd093917e008b90cc96d90c49"
	I0912 22:07:29.829886   31965 cri.go:89] found id: "4cfa11556cf34ac2b5bb874421c929c31a0f68b70515fa122f1c3acc67b601f4"
	I0912 22:07:29.829891   31965 cri.go:89] found id: "17a4293d12cac1604693dea12017381d2df6f0c1ced577d1d846d40e66520818"
	I0912 22:07:29.829898   31965 cri.go:89] found id: "5008665ceb8c09f53ef64d7621c9910a82d94cc7e8fb4c534ff1065d8b9dc1a9"
	I0912 22:07:29.829902   31965 cri.go:89] found id: ""
	I0912 22:07:29.829953   31965 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Sep 12 22:12:41 ha-475401 crio[3513]: time="2024-09-12 22:12:41.347652034Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=01390872-37f4-43c2-aef3-f99b7efc1745 name=/runtime.v1.RuntimeService/Version
	Sep 12 22:12:41 ha-475401 crio[3513]: time="2024-09-12 22:12:41.348515517Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6cd9b0c4-e208-41e0-a705-88dfb1ac9a59 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 22:12:41 ha-475401 crio[3513]: time="2024-09-12 22:12:41.348930160Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726179161348905404,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6cd9b0c4-e208-41e0-a705-88dfb1ac9a59 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 22:12:41 ha-475401 crio[3513]: time="2024-09-12 22:12:41.349575253Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dcc7c418-d1c7-4d98-bd6d-35c6d512037f name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 22:12:41 ha-475401 crio[3513]: time="2024-09-12 22:12:41.349637958Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dcc7c418-d1c7-4d98-bd6d-35c6d512037f name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 22:12:41 ha-475401 crio[3513]: time="2024-09-12 22:12:41.350047701Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:812fab18c031f5fd8bfff0e990196ca5989d44088cb0dc5fd93fd55d96ff4c10,PodSandboxId:64ef09d970faafb0fb8bd1bcc9fb7ca7302e38f081079367950b4ea916860374,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726178937501742199,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fc8738b-56e8-4024-afe7-b552c79dd3f2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08d058679eafb2dbca1bc2dfb3dfe0fe416163dba6d00f6ec942f2a53bc02ae2,PodSandboxId:76c52cdf935b79bc4bf745b515ef78123f172f23b295560e637a619384c7f433,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726178891498498084,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a77994c747e48492b9028f572619aa8,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3756c86b696c4e8fd3e7463b7270af1f104f371066ce814e4ff7c11fa40d2931,PodSandboxId:c0d16f3576d89f2f7e2e22ac28226075073d90c1e1b35117d163b8eab313a6cd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726178890496072105,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 980ac58ccfb719847553bfe344364a50,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc3ce74e5d17725d1fe954be15215e92128befc599aa560249ef5604ad1e1e6d,PodSandboxId:64ef09d970faafb0fb8bd1bcc9fb7ca7302e38f081079367950b4ea916860374,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726178887495846357,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fc8738b-56e8-4024-afe7-b552c79dd3f2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95577788efd07f326614fb95b9b7ec85d31ce5ca57f5e6bed5a7620d809b53ac,PodSandboxId:3e1c4cf8137507387adc44436c321d1a886ee56c42008ad1118c5bce2c7269a6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726178882764623751,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-l2hdm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8ab651ae-e8a0-438a-8bf6-4462c8304466,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:693190f0090a91e2f1c8840523479c5ced8b6eb074af4c4251f6911304dbb2f2,PodSandboxId:4f98b6471e3d1e699ae242d853647300a4e4965bc4e74fcd3cbf108c5bc62b2e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726178864507083808,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f4f5605b5feab014ea95bd7273dc6e8,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ef34e41bb3ddb710bf398433b9169ba5f99e663f39a763a0e3afc0073f3f7c8,PodSandboxId:b4dbe4dcc4ddd72d8a798e51f1840b5b52cc4267a4a06dab9633aa48dd0f34db,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726178850012772254,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4bk97,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2af5486-4276-48a8-98ef-6fad7ae9976d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:28ed212daea64133855a7ab08f6d9fe403a58159f6a366a28ce1892a91bb17fc,PodSandboxId:e203b47f2bd01c8567213f5887a3345a9d4119656c21c922bd77571238b067fd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726178849651349978,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cbfm5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f3daaf-250f-4614-bd8d-61e8fe544c1a,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1b73b70
e8ff1b2d7f764c620ab2fee3d9de8b480a11b91bebfaca8b3b54b9c6,PodSandboxId:a2330c1240fe2de56fdec028a88591810ff0d16796a2c481def0dfafda641c66,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726178849744320251,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-pzsv8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7acde6a5-dc08-4dda-89ef-07ed97df387e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bd2f2d4b23f5227aba2f8d0b375b6980f4e8d9699dc8e0a15167b8caee35a90,PodSandboxId:559d32bfb49241aaa1d53ef26bacdf7fb8a88309a2a77189b7574e4386e80d4a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726178849515534447,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc71727dab4c45bcae218296d690a83a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21aea3da36602ff092d755b6057bc2857297c1c0a798e3e6ab1803c6d0a5eaa6,PodSandboxId:1b8277469e46c93b88795c5a6db967f6f4905d117c68ad427ef23be9455495b8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726178849531177531,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 456eb783a38fcb8ea8f7852ac4b9e481,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.
kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fb5957e8f938cf51ff0e9ac1f2e0af610583e907bc7937da1bb19c7af3ef6c6,PodSandboxId:76d52315f9785b5837eb372811a72cbe1d516b88bcfb5535af70373a67da5259,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726178849541645615,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xhdj7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d964d6f0-d544-4cef-8151-08e5e1c76dce,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"proto
col\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d31b278af3adedc4eaca27db99510c99bdd7dcc10da7656a3b85767b493ae3a,PodSandboxId:c0d16f3576d89f2f7e2e22ac28226075073d90c1e1b35117d163b8eab313a6cd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726178849360322111,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 980ac58ccfb71984
7553bfe344364a50,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21b27af5812da51165304d6948b93ce25cffa267f34847a15febc75cb59f84b5,PodSandboxId:76c52cdf935b79bc4bf745b515ef78123f172f23b295560e637a619384c7f433,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726178849284416025,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a77994c747e48492b9028f572619aa8,},Ann
otations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:607e14e475ce353a0c9320c836a95978697f03e1195ee9311626f95f6748ce11,PodSandboxId:7fe4fd6a828e2ed0ea467efedd36329caff9bec0107156b6b5ad3e033d3d6ee2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726178353036014485,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-l2hdm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8ab651ae-e8a0-438a-8bf6-4462c8304466,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b36db608ba8cd77ee7893c00e7e8801981eb2c1fa6b48980fbc8a3dea7306e4,PodSandboxId:8b265e5bc94933908af2b3710bd8e4b4b8b5b8b26929977b5d1c91118fb80c39,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726178214407294575,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xhdj7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d964d6f0-d544-4cef-8151-08e5e1c76dce,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f56ac218b5509f77f667fc3bdb07a21ae743c376589c8833f500d1addfc99f73,PodSandboxId:2fdeb0043962218a23323f08bd2bce3402618bc908240f83e1f614c312ae6edd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726178214365773691,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-pzsv8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7acde6a5-dc08-4dda-89ef-07ed97df387e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38d31aa5dc4105508066466c3ec1760275d6df1b5a41215ea8624bdecb7f44e8,PodSandboxId:ef4f45d37668b0d37bad9a63974b5000a180e5d1f5e3234d34691005d5d78c8e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726178201877273546,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cbfm5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f3daaf-250f-4614-bd8d-61e8fe544c1a,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0891cec467fda03cc10ec8bf4db216ce7cae379bd093917e008b90cc96d90c49,PodSandboxId:d58e93f3f447d46fb0688a7d4ee4eb52c19c0b36bde29b81c50d0a1c5e3d700b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726178201594672960,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4bk97,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2af5486-4276-48a8-98ef-6fad7ae9976d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5008665ceb8c09f53ef64d7621c9910a82d94cc7e8fb4c534ff1065d8b9dc1a9,PodSandboxId:e980e3980d971549e1c17972cb82f745cca7c01aad06c39efaf3dfb9b5ec0cd9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792
cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726178190273844319,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 456eb783a38fcb8ea8f7852ac4b9e481,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17a4293d12cac1604693dea12017381d2df6f0c1ced577d1d846d40e66520818,PodSandboxId:17b7717a92942308ddac497161435755ad7b877133e7375a315c4f572e019c47,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedA
t:1726178190295546985,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc71727dab4c45bcae218296d690a83a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dcc7c418-d1c7-4d98-bd6d-35c6d512037f name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 22:12:41 ha-475401 crio[3513]: time="2024-09-12 22:12:41.393995486Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=3682d05b-3c19-43ce-b433-efb1669fa60f name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 12 22:12:41 ha-475401 crio[3513]: time="2024-09-12 22:12:41.394586997Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:3e1c4cf8137507387adc44436c321d1a886ee56c42008ad1118c5bce2c7269a6,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-l2hdm,Uid:8ab651ae-e8a0-438a-8bf6-4462c8304466,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726178882637604766,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-l2hdm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8ab651ae-e8a0-438a-8bf6-4462c8304466,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-12T21:59:09.652945962Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4f98b6471e3d1e699ae242d853647300a4e4965bc4e74fcd3cbf108c5bc62b2e,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-475401,Uid:8f4f5605b5feab014ea95bd7273dc6e8,Namespace:kube-system,Attempt:0,},State:SANDBOX_RE
ADY,CreatedAt:1726178864411962431,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f4f5605b5feab014ea95bd7273dc6e8,},Annotations:map[string]string{kubernetes.io/config.hash: 8f4f5605b5feab014ea95bd7273dc6e8,kubernetes.io/config.seen: 2024-09-12T22:07:28.243556914Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a2330c1240fe2de56fdec028a88591810ff0d16796a2c481def0dfafda641c66,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-pzsv8,Uid:7acde6a5-dc08-4dda-89ef-07ed97df387e,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726178849004710236,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-pzsv8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7acde6a5-dc08-4dda-89ef-07ed97df387e,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09
-12T21:56:52.959466832Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:76d52315f9785b5837eb372811a72cbe1d516b88bcfb5535af70373a67da5259,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-xhdj7,Uid:d964d6f0-d544-4cef-8151-08e5e1c76dce,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726178849001156880,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-xhdj7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d964d6f0-d544-4cef-8151-08e5e1c76dce,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-12T21:56:52.965572808Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b4dbe4dcc4ddd72d8a798e51f1840b5b52cc4267a4a06dab9633aa48dd0f34db,Metadata:&PodSandboxMetadata{Name:kube-proxy-4bk97,Uid:a2af5486-4276-48a8-98ef-6fad7ae9976d,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726178848985645365,Labels:map[string]string{co
ntroller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-4bk97,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2af5486-4276-48a8-98ef-6fad7ae9976d,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-12T21:56:41.169316322Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:559d32bfb49241aaa1d53ef26bacdf7fb8a88309a2a77189b7574e4386e80d4a,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-475401,Uid:dc71727dab4c45bcae218296d690a83a,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726178848959348348,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc71727dab4c45bcae218296d690a83a,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: dc71727dab4c45bcae218296d690a83a,kubernetes.io/config
.seen: 2024-09-12T21:56:36.456630592Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c0d16f3576d89f2f7e2e22ac28226075073d90c1e1b35117d163b8eab313a6cd,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-475401,Uid:980ac58ccfb719847553bfe344364a50,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726178848952124878,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 980ac58ccfb719847553bfe344364a50,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 980ac58ccfb719847553bfe344364a50,kubernetes.io/config.seen: 2024-09-12T21:56:36.456637908Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:64ef09d970faafb0fb8bd1bcc9fb7ca7302e38f081079367950b4ea916860374,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:7fc8738b-56e8-4024-afe7-b552c79dd3f2,Namespace:kube-
system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726178848937937177,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fc8738b-56e8-4024-afe7-b552c79dd3f2,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hos
tPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-12T21:56:52.968730435Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1b8277469e46c93b88795c5a6db967f6f4905d117c68ad427ef23be9455495b8,Metadata:&PodSandboxMetadata{Name:etcd-ha-475401,Uid:456eb783a38fcb8ea8f7852ac4b9e481,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726178848933011807,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 456eb783a38fcb8ea8f7852ac4b9e481,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.203:2379,kubernetes.io/config.hash: 456eb783a38fcb8ea8f7852ac4b9e481,kubernetes.io/config.seen: 2024-09-12T21:56:36.456635522Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e203b47f2bd01c8567213f5887a3345a9d4119656c21c922bd77571238b06
7fd,Metadata:&PodSandboxMetadata{Name:kindnet-cbfm5,Uid:e0f3daaf-250f-4614-bd8d-61e8fe544c1a,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726178848915259604,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-cbfm5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f3daaf-250f-4614-bd8d-61e8fe544c1a,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-12T21:56:41.193359736Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:76c52cdf935b79bc4bf745b515ef78123f172f23b295560e637a619384c7f433,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-475401,Uid:6a77994c747e48492b9028f572619aa8,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726178848905487532,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-475401,io.kube
rnetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a77994c747e48492b9028f572619aa8,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.203:8443,kubernetes.io/config.hash: 6a77994c747e48492b9028f572619aa8,kubernetes.io/config.seen: 2024-09-12T21:56:36.456636946Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:7fe4fd6a828e2ed0ea467efedd36329caff9bec0107156b6b5ad3e033d3d6ee2,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-l2hdm,Uid:8ab651ae-e8a0-438a-8bf6-4462c8304466,Namespace:default,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726178349973174937,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-l2hdm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8ab651ae-e8a0-438a-8bf6-4462c8304466,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-12T21:59:09.652945962Z,kubernetes.io/config.source
: api,},RuntimeHandler:,},&PodSandbox{Id:8b265e5bc94933908af2b3710bd8e4b4b8b5b8b26929977b5d1c91118fb80c39,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-xhdj7,Uid:d964d6f0-d544-4cef-8151-08e5e1c76dce,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726178214172601414,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-xhdj7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d964d6f0-d544-4cef-8151-08e5e1c76dce,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-12T21:56:52.965572808Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2fdeb0043962218a23323f08bd2bce3402618bc908240f83e1f614c312ae6edd,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-pzsv8,Uid:7acde6a5-dc08-4dda-89ef-07ed97df387e,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726178214165828617,Labels:map[string]string{io.kubernetes.container.name: POD,io.ku
bernetes.pod.name: coredns-7c65d6cfc9-pzsv8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7acde6a5-dc08-4dda-89ef-07ed97df387e,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-12T21:56:52.959466832Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ef4f45d37668b0d37bad9a63974b5000a180e5d1f5e3234d34691005d5d78c8e,Metadata:&PodSandboxMetadata{Name:kindnet-cbfm5,Uid:e0f3daaf-250f-4614-bd8d-61e8fe544c1a,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726178201506933282,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-cbfm5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f3daaf-250f-4614-bd8d-61e8fe544c1a,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-12T21:56:41.193359736Z,kubernetes.io/config.source: api,},Runt
imeHandler:,},&PodSandbox{Id:d58e93f3f447d46fb0688a7d4ee4eb52c19c0b36bde29b81c50d0a1c5e3d700b,Metadata:&PodSandboxMetadata{Name:kube-proxy-4bk97,Uid:a2af5486-4276-48a8-98ef-6fad7ae9976d,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726178201480986781,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-4bk97,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2af5486-4276-48a8-98ef-6fad7ae9976d,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-12T21:56:41.169316322Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e980e3980d971549e1c17972cb82f745cca7c01aad06c39efaf3dfb9b5ec0cd9,Metadata:&PodSandboxMetadata{Name:etcd-ha-475401,Uid:456eb783a38fcb8ea8f7852ac4b9e481,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726178190103684920,Labels:map[string]string{component: etcd,io.kubernetes.container.name:
POD,io.kubernetes.pod.name: etcd-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 456eb783a38fcb8ea8f7852ac4b9e481,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.203:2379,kubernetes.io/config.hash: 456eb783a38fcb8ea8f7852ac4b9e481,kubernetes.io/config.seen: 2024-09-12T21:56:29.620494346Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:17b7717a92942308ddac497161435755ad7b877133e7375a315c4f572e019c47,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-475401,Uid:dc71727dab4c45bcae218296d690a83a,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726178190085057134,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc71727dab4c45bcae218296d690a83a,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: dc71727d
ab4c45bcae218296d690a83a,kubernetes.io/config.seen: 2024-09-12T21:56:29.620491290Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=3682d05b-3c19-43ce-b433-efb1669fa60f name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 12 22:12:41 ha-475401 crio[3513]: time="2024-09-12 22:12:41.395772813Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2867ccc7-9b33-4cbc-9f79-0296399f4e95 name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 22:12:41 ha-475401 crio[3513]: time="2024-09-12 22:12:41.395872428Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2867ccc7-9b33-4cbc-9f79-0296399f4e95 name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 22:12:41 ha-475401 crio[3513]: time="2024-09-12 22:12:41.396678438Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:812fab18c031f5fd8bfff0e990196ca5989d44088cb0dc5fd93fd55d96ff4c10,PodSandboxId:64ef09d970faafb0fb8bd1bcc9fb7ca7302e38f081079367950b4ea916860374,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726178937501742199,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fc8738b-56e8-4024-afe7-b552c79dd3f2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08d058679eafb2dbca1bc2dfb3dfe0fe416163dba6d00f6ec942f2a53bc02ae2,PodSandboxId:76c52cdf935b79bc4bf745b515ef78123f172f23b295560e637a619384c7f433,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726178891498498084,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a77994c747e48492b9028f572619aa8,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3756c86b696c4e8fd3e7463b7270af1f104f371066ce814e4ff7c11fa40d2931,PodSandboxId:c0d16f3576d89f2f7e2e22ac28226075073d90c1e1b35117d163b8eab313a6cd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726178890496072105,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 980ac58ccfb719847553bfe344364a50,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc3ce74e5d17725d1fe954be15215e92128befc599aa560249ef5604ad1e1e6d,PodSandboxId:64ef09d970faafb0fb8bd1bcc9fb7ca7302e38f081079367950b4ea916860374,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726178887495846357,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fc8738b-56e8-4024-afe7-b552c79dd3f2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95577788efd07f326614fb95b9b7ec85d31ce5ca57f5e6bed5a7620d809b53ac,PodSandboxId:3e1c4cf8137507387adc44436c321d1a886ee56c42008ad1118c5bce2c7269a6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726178882764623751,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-l2hdm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8ab651ae-e8a0-438a-8bf6-4462c8304466,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:693190f0090a91e2f1c8840523479c5ced8b6eb074af4c4251f6911304dbb2f2,PodSandboxId:4f98b6471e3d1e699ae242d853647300a4e4965bc4e74fcd3cbf108c5bc62b2e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726178864507083808,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f4f5605b5feab014ea95bd7273dc6e8,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ef34e41bb3ddb710bf398433b9169ba5f99e663f39a763a0e3afc0073f3f7c8,PodSandboxId:b4dbe4dcc4ddd72d8a798e51f1840b5b52cc4267a4a06dab9633aa48dd0f34db,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726178850012772254,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4bk97,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2af5486-4276-48a8-98ef-6fad7ae9976d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:28ed212daea64133855a7ab08f6d9fe403a58159f6a366a28ce1892a91bb17fc,PodSandboxId:e203b47f2bd01c8567213f5887a3345a9d4119656c21c922bd77571238b067fd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726178849651349978,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cbfm5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f3daaf-250f-4614-bd8d-61e8fe544c1a,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1b73b70
e8ff1b2d7f764c620ab2fee3d9de8b480a11b91bebfaca8b3b54b9c6,PodSandboxId:a2330c1240fe2de56fdec028a88591810ff0d16796a2c481def0dfafda641c66,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726178849744320251,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-pzsv8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7acde6a5-dc08-4dda-89ef-07ed97df387e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bd2f2d4b23f5227aba2f8d0b375b6980f4e8d9699dc8e0a15167b8caee35a90,PodSandboxId:559d32bfb49241aaa1d53ef26bacdf7fb8a88309a2a77189b7574e4386e80d4a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726178849515534447,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc71727dab4c45bcae218296d690a83a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21aea3da36602ff092d755b6057bc2857297c1c0a798e3e6ab1803c6d0a5eaa6,PodSandboxId:1b8277469e46c93b88795c5a6db967f6f4905d117c68ad427ef23be9455495b8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726178849531177531,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 456eb783a38fcb8ea8f7852ac4b9e481,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.
kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fb5957e8f938cf51ff0e9ac1f2e0af610583e907bc7937da1bb19c7af3ef6c6,PodSandboxId:76d52315f9785b5837eb372811a72cbe1d516b88bcfb5535af70373a67da5259,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726178849541645615,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xhdj7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d964d6f0-d544-4cef-8151-08e5e1c76dce,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"proto
col\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d31b278af3adedc4eaca27db99510c99bdd7dcc10da7656a3b85767b493ae3a,PodSandboxId:c0d16f3576d89f2f7e2e22ac28226075073d90c1e1b35117d163b8eab313a6cd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726178849360322111,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 980ac58ccfb71984
7553bfe344364a50,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21b27af5812da51165304d6948b93ce25cffa267f34847a15febc75cb59f84b5,PodSandboxId:76c52cdf935b79bc4bf745b515ef78123f172f23b295560e637a619384c7f433,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726178849284416025,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a77994c747e48492b9028f572619aa8,},Ann
otations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:607e14e475ce353a0c9320c836a95978697f03e1195ee9311626f95f6748ce11,PodSandboxId:7fe4fd6a828e2ed0ea467efedd36329caff9bec0107156b6b5ad3e033d3d6ee2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726178353036014485,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-l2hdm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8ab651ae-e8a0-438a-8bf6-4462c8304466,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b36db608ba8cd77ee7893c00e7e8801981eb2c1fa6b48980fbc8a3dea7306e4,PodSandboxId:8b265e5bc94933908af2b3710bd8e4b4b8b5b8b26929977b5d1c91118fb80c39,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726178214407294575,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xhdj7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d964d6f0-d544-4cef-8151-08e5e1c76dce,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f56ac218b5509f77f667fc3bdb07a21ae743c376589c8833f500d1addfc99f73,PodSandboxId:2fdeb0043962218a23323f08bd2bce3402618bc908240f83e1f614c312ae6edd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726178214365773691,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-pzsv8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7acde6a5-dc08-4dda-89ef-07ed97df387e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38d31aa5dc4105508066466c3ec1760275d6df1b5a41215ea8624bdecb7f44e8,PodSandboxId:ef4f45d37668b0d37bad9a63974b5000a180e5d1f5e3234d34691005d5d78c8e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726178201877273546,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cbfm5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f3daaf-250f-4614-bd8d-61e8fe544c1a,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0891cec467fda03cc10ec8bf4db216ce7cae379bd093917e008b90cc96d90c49,PodSandboxId:d58e93f3f447d46fb0688a7d4ee4eb52c19c0b36bde29b81c50d0a1c5e3d700b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726178201594672960,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4bk97,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2af5486-4276-48a8-98ef-6fad7ae9976d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5008665ceb8c09f53ef64d7621c9910a82d94cc7e8fb4c534ff1065d8b9dc1a9,PodSandboxId:e980e3980d971549e1c17972cb82f745cca7c01aad06c39efaf3dfb9b5ec0cd9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792
cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726178190273844319,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 456eb783a38fcb8ea8f7852ac4b9e481,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17a4293d12cac1604693dea12017381d2df6f0c1ced577d1d846d40e66520818,PodSandboxId:17b7717a92942308ddac497161435755ad7b877133e7375a315c4f572e019c47,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedA
t:1726178190295546985,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc71727dab4c45bcae218296d690a83a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2867ccc7-9b33-4cbc-9f79-0296399f4e95 name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 22:12:41 ha-475401 crio[3513]: time="2024-09-12 22:12:41.405519162Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ff5d292b-8419-4ca4-9be1-a5bb049b3348 name=/runtime.v1.RuntimeService/Version
	Sep 12 22:12:41 ha-475401 crio[3513]: time="2024-09-12 22:12:41.405590253Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ff5d292b-8419-4ca4-9be1-a5bb049b3348 name=/runtime.v1.RuntimeService/Version
	Sep 12 22:12:41 ha-475401 crio[3513]: time="2024-09-12 22:12:41.406933352Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9288dcf7-d3db-4bbb-a4f6-7a42c5f2c32e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 22:12:41 ha-475401 crio[3513]: time="2024-09-12 22:12:41.407508799Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726179161407482358,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9288dcf7-d3db-4bbb-a4f6-7a42c5f2c32e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 22:12:41 ha-475401 crio[3513]: time="2024-09-12 22:12:41.408347578Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a38fb609-7fd8-4086-98da-44f177a9d7a0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 22:12:41 ha-475401 crio[3513]: time="2024-09-12 22:12:41.408440784Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a38fb609-7fd8-4086-98da-44f177a9d7a0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 22:12:41 ha-475401 crio[3513]: time="2024-09-12 22:12:41.409012933Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:812fab18c031f5fd8bfff0e990196ca5989d44088cb0dc5fd93fd55d96ff4c10,PodSandboxId:64ef09d970faafb0fb8bd1bcc9fb7ca7302e38f081079367950b4ea916860374,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726178937501742199,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fc8738b-56e8-4024-afe7-b552c79dd3f2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08d058679eafb2dbca1bc2dfb3dfe0fe416163dba6d00f6ec942f2a53bc02ae2,PodSandboxId:76c52cdf935b79bc4bf745b515ef78123f172f23b295560e637a619384c7f433,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726178891498498084,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a77994c747e48492b9028f572619aa8,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3756c86b696c4e8fd3e7463b7270af1f104f371066ce814e4ff7c11fa40d2931,PodSandboxId:c0d16f3576d89f2f7e2e22ac28226075073d90c1e1b35117d163b8eab313a6cd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726178890496072105,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 980ac58ccfb719847553bfe344364a50,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc3ce74e5d17725d1fe954be15215e92128befc599aa560249ef5604ad1e1e6d,PodSandboxId:64ef09d970faafb0fb8bd1bcc9fb7ca7302e38f081079367950b4ea916860374,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726178887495846357,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fc8738b-56e8-4024-afe7-b552c79dd3f2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95577788efd07f326614fb95b9b7ec85d31ce5ca57f5e6bed5a7620d809b53ac,PodSandboxId:3e1c4cf8137507387adc44436c321d1a886ee56c42008ad1118c5bce2c7269a6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726178882764623751,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-l2hdm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8ab651ae-e8a0-438a-8bf6-4462c8304466,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:693190f0090a91e2f1c8840523479c5ced8b6eb074af4c4251f6911304dbb2f2,PodSandboxId:4f98b6471e3d1e699ae242d853647300a4e4965bc4e74fcd3cbf108c5bc62b2e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726178864507083808,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f4f5605b5feab014ea95bd7273dc6e8,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ef34e41bb3ddb710bf398433b9169ba5f99e663f39a763a0e3afc0073f3f7c8,PodSandboxId:b4dbe4dcc4ddd72d8a798e51f1840b5b52cc4267a4a06dab9633aa48dd0f34db,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726178850012772254,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4bk97,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2af5486-4276-48a8-98ef-6fad7ae9976d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:28ed212daea64133855a7ab08f6d9fe403a58159f6a366a28ce1892a91bb17fc,PodSandboxId:e203b47f2bd01c8567213f5887a3345a9d4119656c21c922bd77571238b067fd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726178849651349978,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cbfm5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f3daaf-250f-4614-bd8d-61e8fe544c1a,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1b73b70
e8ff1b2d7f764c620ab2fee3d9de8b480a11b91bebfaca8b3b54b9c6,PodSandboxId:a2330c1240fe2de56fdec028a88591810ff0d16796a2c481def0dfafda641c66,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726178849744320251,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-pzsv8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7acde6a5-dc08-4dda-89ef-07ed97df387e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bd2f2d4b23f5227aba2f8d0b375b6980f4e8d9699dc8e0a15167b8caee35a90,PodSandboxId:559d32bfb49241aaa1d53ef26bacdf7fb8a88309a2a77189b7574e4386e80d4a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726178849515534447,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc71727dab4c45bcae218296d690a83a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21aea3da36602ff092d755b6057bc2857297c1c0a798e3e6ab1803c6d0a5eaa6,PodSandboxId:1b8277469e46c93b88795c5a6db967f6f4905d117c68ad427ef23be9455495b8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726178849531177531,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 456eb783a38fcb8ea8f7852ac4b9e481,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.
kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fb5957e8f938cf51ff0e9ac1f2e0af610583e907bc7937da1bb19c7af3ef6c6,PodSandboxId:76d52315f9785b5837eb372811a72cbe1d516b88bcfb5535af70373a67da5259,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726178849541645615,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xhdj7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d964d6f0-d544-4cef-8151-08e5e1c76dce,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"proto
col\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d31b278af3adedc4eaca27db99510c99bdd7dcc10da7656a3b85767b493ae3a,PodSandboxId:c0d16f3576d89f2f7e2e22ac28226075073d90c1e1b35117d163b8eab313a6cd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726178849360322111,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 980ac58ccfb71984
7553bfe344364a50,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21b27af5812da51165304d6948b93ce25cffa267f34847a15febc75cb59f84b5,PodSandboxId:76c52cdf935b79bc4bf745b515ef78123f172f23b295560e637a619384c7f433,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726178849284416025,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a77994c747e48492b9028f572619aa8,},Ann
otations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:607e14e475ce353a0c9320c836a95978697f03e1195ee9311626f95f6748ce11,PodSandboxId:7fe4fd6a828e2ed0ea467efedd36329caff9bec0107156b6b5ad3e033d3d6ee2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726178353036014485,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-l2hdm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8ab651ae-e8a0-438a-8bf6-4462c8304466,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b36db608ba8cd77ee7893c00e7e8801981eb2c1fa6b48980fbc8a3dea7306e4,PodSandboxId:8b265e5bc94933908af2b3710bd8e4b4b8b5b8b26929977b5d1c91118fb80c39,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726178214407294575,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xhdj7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d964d6f0-d544-4cef-8151-08e5e1c76dce,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f56ac218b5509f77f667fc3bdb07a21ae743c376589c8833f500d1addfc99f73,PodSandboxId:2fdeb0043962218a23323f08bd2bce3402618bc908240f83e1f614c312ae6edd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726178214365773691,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-pzsv8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7acde6a5-dc08-4dda-89ef-07ed97df387e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38d31aa5dc4105508066466c3ec1760275d6df1b5a41215ea8624bdecb7f44e8,PodSandboxId:ef4f45d37668b0d37bad9a63974b5000a180e5d1f5e3234d34691005d5d78c8e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726178201877273546,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cbfm5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f3daaf-250f-4614-bd8d-61e8fe544c1a,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0891cec467fda03cc10ec8bf4db216ce7cae379bd093917e008b90cc96d90c49,PodSandboxId:d58e93f3f447d46fb0688a7d4ee4eb52c19c0b36bde29b81c50d0a1c5e3d700b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726178201594672960,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4bk97,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2af5486-4276-48a8-98ef-6fad7ae9976d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5008665ceb8c09f53ef64d7621c9910a82d94cc7e8fb4c534ff1065d8b9dc1a9,PodSandboxId:e980e3980d971549e1c17972cb82f745cca7c01aad06c39efaf3dfb9b5ec0cd9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792
cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726178190273844319,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 456eb783a38fcb8ea8f7852ac4b9e481,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17a4293d12cac1604693dea12017381d2df6f0c1ced577d1d846d40e66520818,PodSandboxId:17b7717a92942308ddac497161435755ad7b877133e7375a315c4f572e019c47,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedA
t:1726178190295546985,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc71727dab4c45bcae218296d690a83a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a38fb609-7fd8-4086-98da-44f177a9d7a0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 22:12:41 ha-475401 crio[3513]: time="2024-09-12 22:12:41.457493098Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5f6dc0b2-6506-40f9-b66b-b32320cc8912 name=/runtime.v1.RuntimeService/Version
	Sep 12 22:12:41 ha-475401 crio[3513]: time="2024-09-12 22:12:41.457598537Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5f6dc0b2-6506-40f9-b66b-b32320cc8912 name=/runtime.v1.RuntimeService/Version
	Sep 12 22:12:41 ha-475401 crio[3513]: time="2024-09-12 22:12:41.459300920Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7a8309c2-d1bb-4f08-8f33-f7a7f6df5fe0 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 22:12:41 ha-475401 crio[3513]: time="2024-09-12 22:12:41.459959630Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726179161459926036,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7a8309c2-d1bb-4f08-8f33-f7a7f6df5fe0 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 22:12:41 ha-475401 crio[3513]: time="2024-09-12 22:12:41.460834073Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=46c10b24-e60d-4552-9912-1914116111e7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 22:12:41 ha-475401 crio[3513]: time="2024-09-12 22:12:41.460922220Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=46c10b24-e60d-4552-9912-1914116111e7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 22:12:41 ha-475401 crio[3513]: time="2024-09-12 22:12:41.461544743Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:812fab18c031f5fd8bfff0e990196ca5989d44088cb0dc5fd93fd55d96ff4c10,PodSandboxId:64ef09d970faafb0fb8bd1bcc9fb7ca7302e38f081079367950b4ea916860374,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726178937501742199,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fc8738b-56e8-4024-afe7-b552c79dd3f2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08d058679eafb2dbca1bc2dfb3dfe0fe416163dba6d00f6ec942f2a53bc02ae2,PodSandboxId:76c52cdf935b79bc4bf745b515ef78123f172f23b295560e637a619384c7f433,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726178891498498084,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a77994c747e48492b9028f572619aa8,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3756c86b696c4e8fd3e7463b7270af1f104f371066ce814e4ff7c11fa40d2931,PodSandboxId:c0d16f3576d89f2f7e2e22ac28226075073d90c1e1b35117d163b8eab313a6cd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726178890496072105,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 980ac58ccfb719847553bfe344364a50,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc3ce74e5d17725d1fe954be15215e92128befc599aa560249ef5604ad1e1e6d,PodSandboxId:64ef09d970faafb0fb8bd1bcc9fb7ca7302e38f081079367950b4ea916860374,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726178887495846357,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fc8738b-56e8-4024-afe7-b552c79dd3f2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95577788efd07f326614fb95b9b7ec85d31ce5ca57f5e6bed5a7620d809b53ac,PodSandboxId:3e1c4cf8137507387adc44436c321d1a886ee56c42008ad1118c5bce2c7269a6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726178882764623751,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-l2hdm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8ab651ae-e8a0-438a-8bf6-4462c8304466,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:693190f0090a91e2f1c8840523479c5ced8b6eb074af4c4251f6911304dbb2f2,PodSandboxId:4f98b6471e3d1e699ae242d853647300a4e4965bc4e74fcd3cbf108c5bc62b2e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726178864507083808,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f4f5605b5feab014ea95bd7273dc6e8,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ef34e41bb3ddb710bf398433b9169ba5f99e663f39a763a0e3afc0073f3f7c8,PodSandboxId:b4dbe4dcc4ddd72d8a798e51f1840b5b52cc4267a4a06dab9633aa48dd0f34db,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726178850012772254,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4bk97,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2af5486-4276-48a8-98ef-6fad7ae9976d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:28ed212daea64133855a7ab08f6d9fe403a58159f6a366a28ce1892a91bb17fc,PodSandboxId:e203b47f2bd01c8567213f5887a3345a9d4119656c21c922bd77571238b067fd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726178849651349978,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cbfm5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f3daaf-250f-4614-bd8d-61e8fe544c1a,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1b73b70
e8ff1b2d7f764c620ab2fee3d9de8b480a11b91bebfaca8b3b54b9c6,PodSandboxId:a2330c1240fe2de56fdec028a88591810ff0d16796a2c481def0dfafda641c66,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726178849744320251,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-pzsv8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7acde6a5-dc08-4dda-89ef-07ed97df387e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bd2f2d4b23f5227aba2f8d0b375b6980f4e8d9699dc8e0a15167b8caee35a90,PodSandboxId:559d32bfb49241aaa1d53ef26bacdf7fb8a88309a2a77189b7574e4386e80d4a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726178849515534447,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc71727dab4c45bcae218296d690a83a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21aea3da36602ff092d755b6057bc2857297c1c0a798e3e6ab1803c6d0a5eaa6,PodSandboxId:1b8277469e46c93b88795c5a6db967f6f4905d117c68ad427ef23be9455495b8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726178849531177531,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 456eb783a38fcb8ea8f7852ac4b9e481,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.
kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fb5957e8f938cf51ff0e9ac1f2e0af610583e907bc7937da1bb19c7af3ef6c6,PodSandboxId:76d52315f9785b5837eb372811a72cbe1d516b88bcfb5535af70373a67da5259,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726178849541645615,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xhdj7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d964d6f0-d544-4cef-8151-08e5e1c76dce,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"proto
col\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d31b278af3adedc4eaca27db99510c99bdd7dcc10da7656a3b85767b493ae3a,PodSandboxId:c0d16f3576d89f2f7e2e22ac28226075073d90c1e1b35117d163b8eab313a6cd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726178849360322111,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 980ac58ccfb71984
7553bfe344364a50,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21b27af5812da51165304d6948b93ce25cffa267f34847a15febc75cb59f84b5,PodSandboxId:76c52cdf935b79bc4bf745b515ef78123f172f23b295560e637a619384c7f433,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726178849284416025,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a77994c747e48492b9028f572619aa8,},Ann
otations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:607e14e475ce353a0c9320c836a95978697f03e1195ee9311626f95f6748ce11,PodSandboxId:7fe4fd6a828e2ed0ea467efedd36329caff9bec0107156b6b5ad3e033d3d6ee2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726178353036014485,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-l2hdm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8ab651ae-e8a0-438a-8bf6-4462c8304466,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b36db608ba8cd77ee7893c00e7e8801981eb2c1fa6b48980fbc8a3dea7306e4,PodSandboxId:8b265e5bc94933908af2b3710bd8e4b4b8b5b8b26929977b5d1c91118fb80c39,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726178214407294575,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xhdj7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d964d6f0-d544-4cef-8151-08e5e1c76dce,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f56ac218b5509f77f667fc3bdb07a21ae743c376589c8833f500d1addfc99f73,PodSandboxId:2fdeb0043962218a23323f08bd2bce3402618bc908240f83e1f614c312ae6edd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726178214365773691,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-pzsv8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7acde6a5-dc08-4dda-89ef-07ed97df387e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38d31aa5dc4105508066466c3ec1760275d6df1b5a41215ea8624bdecb7f44e8,PodSandboxId:ef4f45d37668b0d37bad9a63974b5000a180e5d1f5e3234d34691005d5d78c8e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726178201877273546,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cbfm5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f3daaf-250f-4614-bd8d-61e8fe544c1a,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0891cec467fda03cc10ec8bf4db216ce7cae379bd093917e008b90cc96d90c49,PodSandboxId:d58e93f3f447d46fb0688a7d4ee4eb52c19c0b36bde29b81c50d0a1c5e3d700b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726178201594672960,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4bk97,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2af5486-4276-48a8-98ef-6fad7ae9976d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5008665ceb8c09f53ef64d7621c9910a82d94cc7e8fb4c534ff1065d8b9dc1a9,PodSandboxId:e980e3980d971549e1c17972cb82f745cca7c01aad06c39efaf3dfb9b5ec0cd9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792
cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726178190273844319,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 456eb783a38fcb8ea8f7852ac4b9e481,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17a4293d12cac1604693dea12017381d2df6f0c1ced577d1d846d40e66520818,PodSandboxId:17b7717a92942308ddac497161435755ad7b877133e7375a315c4f572e019c47,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedA
t:1726178190295546985,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-475401,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc71727dab4c45bcae218296d690a83a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=46c10b24-e60d-4552-9912-1914116111e7 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	812fab18c031f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       4                   64ef09d970faa       storage-provisioner
	08d058679eafb       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      4 minutes ago       Running             kube-apiserver            3                   76c52cdf935b7       kube-apiserver-ha-475401
	3756c86b696c4       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      4 minutes ago       Running             kube-controller-manager   2                   c0d16f3576d89       kube-controller-manager-ha-475401
	bc3ce74e5d177       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Exited              storage-provisioner       3                   64ef09d970faa       storage-provisioner
	95577788efd07       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      4 minutes ago       Running             busybox                   1                   3e1c4cf813750       busybox-7dff88458-l2hdm
	693190f0090a9       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      4 minutes ago       Running             kube-vip                  0                   4f98b6471e3d1       kube-vip-ha-475401
	3ef34e41bb3dd       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      5 minutes ago       Running             kube-proxy                1                   b4dbe4dcc4ddd       kube-proxy-4bk97
	d1b73b70e8ff1       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   1                   a2330c1240fe2       coredns-7c65d6cfc9-pzsv8
	28ed212daea64       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      5 minutes ago       Running             kindnet-cni               1                   e203b47f2bd01       kindnet-cbfm5
	1fb5957e8f938       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   1                   76d52315f9785       coredns-7c65d6cfc9-xhdj7
	21aea3da36602       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      5 minutes ago       Running             etcd                      1                   1b8277469e46c       etcd-ha-475401
	7bd2f2d4b23f5       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      5 minutes ago       Running             kube-scheduler            1                   559d32bfb4924       kube-scheduler-ha-475401
	1d31b278af3ad       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      5 minutes ago       Exited              kube-controller-manager   1                   c0d16f3576d89       kube-controller-manager-ha-475401
	21b27af5812da       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      5 minutes ago       Exited              kube-apiserver            2                   76c52cdf935b7       kube-apiserver-ha-475401
	607e14e475ce3       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   13 minutes ago      Exited              busybox                   0                   7fe4fd6a828e2       busybox-7dff88458-l2hdm
	9b36db608ba8c       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      15 minutes ago      Exited              coredns                   0                   8b265e5bc9493       coredns-7c65d6cfc9-xhdj7
	f56ac218b5509       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      15 minutes ago      Exited              coredns                   0                   2fdeb00439622       coredns-7c65d6cfc9-pzsv8
	38d31aa5dc410       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      15 minutes ago      Exited              kindnet-cni               0                   ef4f45d37668b       kindnet-cbfm5
	0891cec467fda       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      15 minutes ago      Exited              kube-proxy                0                   d58e93f3f447d       kube-proxy-4bk97
	17a4293d12cac       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      16 minutes ago      Exited              kube-scheduler            0                   17b7717a92942       kube-scheduler-ha-475401
	5008665ceb8c0       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      16 minutes ago      Exited              etcd                      0                   e980e3980d971       etcd-ha-475401
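	
	The table above is the CRI-level view of containers on the primary node. A roughly equivalent listing can be pulled by hand; this is a hedged sketch rather than a command from the recorded run, and it assumes the minikube profile is named ha-475401 (inferred from the node name):
	
	# List all CRI containers (running and exited) on the ha-475401 node.
	minikube -p ha-475401 ssh -n ha-475401 -- sudo crictl ps -a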
	
	
	==> coredns [1fb5957e8f938cf51ff0e9ac1f2e0af610583e907bc7937da1bb19c7af3ef6c6] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[843388788]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (12-Sep-2024 22:07:38.628) (total time: 10002ms):
	Trace[843388788]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (22:07:48.630)
	Trace[843388788]: [10.00200247s] [10.00200247s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:57008->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:57008->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
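	
	The repeated "no route to host" / "connection refused" errors against https://10.96.0.1:443 above mean the kubernetes Service VIP was unreachable while the apiserver was restarting, so CoreDNS could not relist Services, EndpointSlices, or Namespaces. A minimal spot-check once the apiserver is back, a hedged example that assumes the kubectl context carries the profile name ha-475401:
	
	# Confirm the apiserver answers and the default "kubernetes" Service has endpoints.
	kubectl --context ha-475401 get --raw /readyz
	kubectl --context ha-475401 -n default get endpoints kubernetes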
	
	
	==> coredns [9b36db608ba8cd77ee7893c00e7e8801981eb2c1fa6b48980fbc8a3dea7306e4] <==
	[INFO] 10.244.0.4:58355 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001670657s
	[INFO] 10.244.0.4:38422 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000110468s
	[INFO] 10.244.1.2:46631 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000172109s
	[INFO] 10.244.1.2:34300 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000148188s
	[INFO] 10.244.1.2:48603 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001490904s
	[INFO] 10.244.1.2:53797 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000095174s
	[INFO] 10.244.3.2:58169 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000290075s
	[INFO] 10.244.3.2:32925 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000114361s
	[INFO] 10.244.0.4:36730 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135132s
	[INFO] 10.244.0.4:34478 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000076546s
	[INFO] 10.244.1.2:55703 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000157241s
	[INFO] 10.244.1.2:60121 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000228732s
	[INFO] 10.244.1.2:38242 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000131949s
	[INFO] 10.244.3.2:38185 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132157s
	[INFO] 10.244.3.2:36830 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000264113s
	[INFO] 10.244.3.2:49645 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000196302s
	[INFO] 10.244.0.4:60935 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000119291s
	[INFO] 10.244.1.2:60943 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000082071s
	[INFO] 10.244.1.2:49207 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00009839s
	[INFO] 10.244.1.2:41020 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000060198s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces)
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
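	
	The "server has asked for the client to provide credentials" errors just before the SIGTERM indicate this CoreDNS pod's service-account token was no longer accepted once the control plane came back, after which the pod was shut down and replaced. A quick follow-up check, again a hedged example assuming the ha-475401 context and the standard k8s-app=kube-dns label:
	
	# Verify the replacement CoreDNS pods are Running and skim their recent logs.
	kubectl --context ha-475401 -n kube-system get pods -l k8s-app=kube-dns -o wide
	kubectl --context ha-475401 -n kube-system logs -l k8s-app=kube-dns --tail=20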
	
	
	==> coredns [d1b73b70e8ff1b2d7f764c620ab2fee3d9de8b480a11b91bebfaca8b3b54b9c6] <==
	Trace[450409556]: [10.000924858s] [10.000924858s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:39152->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[806161496]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (12-Sep-2024 22:07:41.266) (total time: 10913ms):
	Trace[806161496]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:39152->10.96.0.1:443: read: connection reset by peer 10913ms (22:07:52.180)
	Trace[806161496]: [10.913846394s] [10.913846394s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:39152->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:41700->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:41700->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [f56ac218b5509f77f667fc3bdb07a21ae743c376589c8833f500d1addfc99f73] <==
	[INFO] 10.244.3.2:57228 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000229422s
	[INFO] 10.244.0.4:42574 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00013812s
	[INFO] 10.244.0.4:39901 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001988121s
	[INFO] 10.244.0.4:50914 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00026063s
	[INFO] 10.244.0.4:38018 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000084673s
	[INFO] 10.244.0.4:49421 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000097844s
	[INFO] 10.244.1.2:35174 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000112144s
	[INFO] 10.244.1.2:45641 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001742655s
	[INFO] 10.244.1.2:42943 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000126184s
	[INFO] 10.244.1.2:48539 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000090774s
	[INFO] 10.244.3.2:42645 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115681s
	[INFO] 10.244.3.2:42854 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000129882s
	[INFO] 10.244.0.4:47863 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000135193s
	[INFO] 10.244.0.4:54893 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000107279s
	[INFO] 10.244.1.2:50095 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000200409s
	[INFO] 10.244.3.2:36127 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000178104s
	[INFO] 10.244.0.4:56439 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000119423s
	[INFO] 10.244.0.4:57332 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000122479s
	[INFO] 10.244.0.4:54257 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000113812s
	[INFO] 10.244.1.2:47781 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000122756s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services)
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-475401
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-475401
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f6bc674a17941874d4e5b792b09c1791d30622b8
	                    minikube.k8s.io/name=ha-475401
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_12T21_56_37_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 12 Sep 2024 21:56:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-475401
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 12 Sep 2024 22:12:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 12 Sep 2024 22:11:19 +0000   Thu, 12 Sep 2024 22:11:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 12 Sep 2024 22:11:19 +0000   Thu, 12 Sep 2024 22:11:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 12 Sep 2024 22:11:19 +0000   Thu, 12 Sep 2024 22:11:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 12 Sep 2024 22:11:19 +0000   Thu, 12 Sep 2024 22:11:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.203
	  Hostname:    ha-475401
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a21f28b923154b09a761fb2715e95e75
	  System UUID:                a21f28b9-2315-4b09-a761-fb2715e95e75
	  Boot ID:                    719d19bb-1949-4b62-be49-e032ba422c36
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-l2hdm              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-7c65d6cfc9-pzsv8             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-7c65d6cfc9-xhdj7             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-ha-475401                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kindnet-cbfm5                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	  kube-system                 kube-apiserver-ha-475401             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-ha-475401    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-4bk97                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-ha-475401             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-vip-ha-475401                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m33s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 15m                    kube-proxy       
	  Normal   Starting                 4m26s                  kube-proxy       
	  Normal   Starting                 16m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     16m (x3 over 16m)      kubelet          Node ha-475401 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    16m (x3 over 16m)      kubelet          Node ha-475401 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  16m (x4 over 16m)      kubelet          Node ha-475401 status is now: NodeHasSufficientMemory
	  Normal   Starting                 16m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           16m                    node-controller  Node ha-475401 event: Registered Node ha-475401 in Controller
	  Normal   RegisteredNode           15m                    node-controller  Node ha-475401 event: Registered Node ha-475401 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-475401 event: Registered Node ha-475401 in Controller
	  Warning  ContainerGCFailed        6m5s                   kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotReady             5m30s (x3 over 6m19s)  kubelet          Node ha-475401 status is now: NodeNotReady
	  Normal   RegisteredNode           4m30s                  node-controller  Node ha-475401 event: Registered Node ha-475401 in Controller
	  Normal   RegisteredNode           4m25s                  node-controller  Node ha-475401 event: Registered Node ha-475401 in Controller
	  Normal   RegisteredNode           3m16s                  node-controller  Node ha-475401 event: Registered Node ha-475401 in Controller
	  Normal   NodeNotReady             99s                    node-controller  Node ha-475401 status is now: NodeNotReady
	  Normal   NodeHasSufficientPID     82s (x2 over 16m)      kubelet          Node ha-475401 status is now: NodeHasSufficientPID
	  Normal   NodeReady                82s (x2 over 15m)      kubelet          Node ha-475401 status is now: NodeReady
	  Normal   NodeHasNoDiskPressure    82s (x2 over 16m)      kubelet          Node ha-475401 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  82s (x2 over 16m)      kubelet          Node ha-475401 status is now: NodeHasSufficientMemory
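	
	The ContainerGCFailed event (crio.sock missing), followed by NodeNotReady and the later NodeReady, is consistent with the container runtime on ha-475401 being restarted during the test. If a node stayed stuck in that state, the runtime and kubelet could be inspected directly over SSH; a sketch assuming the ha-475401 profile, not a command from this run:
	
	# Check that crio and the kubelet are active, then look at recent crio logs.
	minikube -p ha-475401 ssh -n ha-475401 -- sudo systemctl is-active crio kubelet
	minikube -p ha-475401 ssh -n ha-475401 -- sudo journalctl -u crio --no-pager -n 50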
	
	
	Name:               ha-475401-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-475401-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f6bc674a17941874d4e5b792b09c1791d30622b8
	                    minikube.k8s.io/name=ha-475401
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_12T21_57_29_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 12 Sep 2024 21:57:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-475401-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 12 Sep 2024 22:12:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 12 Sep 2024 22:08:58 +0000   Thu, 12 Sep 2024 22:08:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 12 Sep 2024 22:08:58 +0000   Thu, 12 Sep 2024 22:08:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 12 Sep 2024 22:08:58 +0000   Thu, 12 Sep 2024 22:08:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 12 Sep 2024 22:08:58 +0000   Thu, 12 Sep 2024 22:08:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.222
	  Hostname:    ha-475401-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 5e177a4c02d5494a80aacc759f5d8434
	  System UUID:                5e177a4c-02d5-494a-80aa-cc759f5d8434
	  Boot ID:                    dd9168b6-4831-47ab-97f7-c3a88c9853cd
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-t7gjx                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-475401-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-k4q6l                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-475401-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-475401-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-68h98                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-475401-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-475401-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 15m                    kube-proxy       
	  Normal  Starting                 4m21s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)      kubelet          Node ha-475401-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  15m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  CIDRAssignmentFailed     15m                    cidrAllocator    Node ha-475401-m02 status is now: CIDRAssignmentFailed
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)      kubelet          Node ha-475401-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)      kubelet          Node ha-475401-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15m                    node-controller  Node ha-475401-m02 event: Registered Node ha-475401-m02 in Controller
	  Normal  RegisteredNode           15m                    node-controller  Node ha-475401-m02 event: Registered Node ha-475401-m02 in Controller
	  Normal  RegisteredNode           13m                    node-controller  Node ha-475401-m02 event: Registered Node ha-475401-m02 in Controller
	  Normal  NodeNotReady             11m                    node-controller  Node ha-475401-m02 status is now: NodeNotReady
	  Normal  NodeHasNoDiskPressure    4m51s (x8 over 4m51s)  kubelet          Node ha-475401-m02 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 4m51s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m51s (x8 over 4m51s)  kubelet          Node ha-475401-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     4m51s (x7 over 4m51s)  kubelet          Node ha-475401-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m51s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m30s                  node-controller  Node ha-475401-m02 event: Registered Node ha-475401-m02 in Controller
	  Normal  RegisteredNode           4m25s                  node-controller  Node ha-475401-m02 event: Registered Node ha-475401-m02 in Controller
	  Normal  RegisteredNode           3m16s                  node-controller  Node ha-475401-m02 event: Registered Node ha-475401-m02 in Controller
	
	
	Name:               ha-475401-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-475401-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f6bc674a17941874d4e5b792b09c1791d30622b8
	                    minikube.k8s.io/name=ha-475401
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_12T21_59_45_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 12 Sep 2024 21:59:45 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-475401-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 12 Sep 2024 22:10:15 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Thu, 12 Sep 2024 22:09:55 +0000   Thu, 12 Sep 2024 22:10:56 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Thu, 12 Sep 2024 22:09:55 +0000   Thu, 12 Sep 2024 22:10:56 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Thu, 12 Sep 2024 22:09:55 +0000   Thu, 12 Sep 2024 22:10:56 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Thu, 12 Sep 2024 22:09:55 +0000   Thu, 12 Sep 2024 22:10:56 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.76
	  Hostname:    ha-475401-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9864edb6a0d14b6abd1a66cf5ac88479
	  System UUID:                9864edb6-a0d1-4b6a-bd1a-66cf5ac88479
	  Boot ID:                    c747d5f3-f470-48b0-981b-da0fd4da75a4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.4.0/24
	PodCIDRs:                     10.244.4.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-s6sjm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m37s
	  kube-system                 kindnet-2bvcz              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-bmv9m           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m42s                  kube-proxy       
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  12m (x2 over 12m)      kubelet          Node ha-475401-m04 status is now: NodeHasSufficientMemory
	  Normal   CIDRAssignmentFailed     12m                    cidrAllocator    Node ha-475401-m04 status is now: CIDRAssignmentFailed
	  Normal   NodeHasNoDiskPressure    12m (x2 over 12m)      kubelet          Node ha-475401-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x2 over 12m)      kubelet          Node ha-475401-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           12m                    node-controller  Node ha-475401-m04 event: Registered Node ha-475401-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-475401-m04 event: Registered Node ha-475401-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-475401-m04 event: Registered Node ha-475401-m04 in Controller
	  Normal   NodeReady                12m                    kubelet          Node ha-475401-m04 status is now: NodeReady
	  Normal   RegisteredNode           4m30s                  node-controller  Node ha-475401-m04 event: Registered Node ha-475401-m04 in Controller
	  Normal   RegisteredNode           4m25s                  node-controller  Node ha-475401-m04 event: Registered Node ha-475401-m04 in Controller
	  Normal   NodeNotReady             3m50s                  node-controller  Node ha-475401-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           3m16s                  node-controller  Node ha-475401-m04 event: Registered Node ha-475401-m04 in Controller
	  Normal   Starting                 2m46s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  2m46s                  kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 2m46s (x2 over 2m46s)  kubelet          Node ha-475401-m04 has been rebooted, boot id: c747d5f3-f470-48b0-981b-da0fd4da75a4
	  Normal   NodeHasSufficientMemory  2m46s (x3 over 2m46s)  kubelet          Node ha-475401-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m46s (x3 over 2m46s)  kubelet          Node ha-475401-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m46s (x3 over 2m46s)  kubelet          Node ha-475401-m04 status is now: NodeHasSufficientPID
	  Normal   NodeNotReady             2m46s                  kubelet          Node ha-475401-m04 status is now: NodeNotReady
	  Normal   NodeReady                2m46s                  kubelet          Node ha-475401-m04 status is now: NodeReady
	  Normal   NodeNotReady             105s                   node-controller  Node ha-475401-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.020585] systemd-fstab-generator[579]: Ignoring "noauto" option for root device
	[  +0.056709] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063471] systemd-fstab-generator[591]: Ignoring "noauto" option for root device
	[  +0.182960] systemd-fstab-generator[605]: Ignoring "noauto" option for root device
	[  +0.109592] systemd-fstab-generator[617]: Ignoring "noauto" option for root device
	[  +0.292147] systemd-fstab-generator[647]: Ignoring "noauto" option for root device
	[  +3.769780] systemd-fstab-generator[744]: Ignoring "noauto" option for root device
	[  +5.095538] systemd-fstab-generator[881]: Ignoring "noauto" option for root device
	[  +0.058539] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.038747] systemd-fstab-generator[1299]: Ignoring "noauto" option for root device
	[  +0.092804] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.235155] kauditd_printk_skb: 21 callbacks suppressed
	[ +11.799100] kauditd_printk_skb: 38 callbacks suppressed
	[Sep12 21:57] kauditd_printk_skb: 28 callbacks suppressed
	[Sep12 22:07] systemd-fstab-generator[3437]: Ignoring "noauto" option for root device
	[  +0.178924] systemd-fstab-generator[3449]: Ignoring "noauto" option for root device
	[  +0.180969] systemd-fstab-generator[3463]: Ignoring "noauto" option for root device
	[  +0.150971] systemd-fstab-generator[3475]: Ignoring "noauto" option for root device
	[  +0.276098] systemd-fstab-generator[3503]: Ignoring "noauto" option for root device
	[  +0.745905] systemd-fstab-generator[3601]: Ignoring "noauto" option for root device
	[ +13.797154] kauditd_printk_skb: 217 callbacks suppressed
	[ +10.069875] kauditd_printk_skb: 1 callbacks suppressed
	[Sep12 22:08] kauditd_printk_skb: 5 callbacks suppressed
	[  +7.463103] kauditd_printk_skb: 1 callbacks suppressed
	
	
	==> etcd [21aea3da36602ff092d755b6057bc2857297c1c0a798e3e6ab1803c6d0a5eaa6] <==
	{"level":"info","ts":"2024-09-12T22:09:17.049545Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"344afae425714cc4"}
	{"level":"info","ts":"2024-09-12T22:09:17.049669Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"28dd8e6bbca035f5","remote-peer-id":"344afae425714cc4"}
	{"level":"info","ts":"2024-09-12T22:09:17.049741Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"28dd8e6bbca035f5","remote-peer-id":"344afae425714cc4"}
	{"level":"info","ts":"2024-09-12T22:09:17.056446Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"28dd8e6bbca035f5","to":"344afae425714cc4","stream-type":"stream Message"}
	{"level":"info","ts":"2024-09-12T22:09:17.056613Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"28dd8e6bbca035f5","remote-peer-id":"344afae425714cc4"}
	{"level":"info","ts":"2024-09-12T22:09:17.067734Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"28dd8e6bbca035f5","to":"344afae425714cc4","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-09-12T22:09:17.067843Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"28dd8e6bbca035f5","remote-peer-id":"344afae425714cc4"}
	{"level":"info","ts":"2024-09-12T22:10:08.318491Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"28dd8e6bbca035f5 switched to configuration voters=(2944666324747433461 11426474734040445405)"}
	{"level":"info","ts":"2024-09-12T22:10:08.320806Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"3b4a61fb6ca7242f","local-member-id":"28dd8e6bbca035f5","removed-remote-peer-id":"344afae425714cc4","removed-remote-peer-urls":["https://192.168.39.113:2380"]}
	{"level":"info","ts":"2024-09-12T22:10:08.320954Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"344afae425714cc4"}
	{"level":"warn","ts":"2024-09-12T22:10:08.321021Z","caller":"etcdserver/server.go:987","msg":"rejected Raft message from removed member","local-member-id":"28dd8e6bbca035f5","removed-member-id":"344afae425714cc4"}
	{"level":"warn","ts":"2024-09-12T22:10:08.321195Z","caller":"rafthttp/peer.go:180","msg":"failed to process Raft message","error":"cannot process message from removed member"}
	{"level":"warn","ts":"2024-09-12T22:10:08.321931Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"344afae425714cc4"}
	{"level":"info","ts":"2024-09-12T22:10:08.322401Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"344afae425714cc4"}
	{"level":"warn","ts":"2024-09-12T22:10:08.322889Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"344afae425714cc4"}
	{"level":"info","ts":"2024-09-12T22:10:08.322946Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"344afae425714cc4"}
	{"level":"info","ts":"2024-09-12T22:10:08.323079Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"28dd8e6bbca035f5","remote-peer-id":"344afae425714cc4"}
	{"level":"warn","ts":"2024-09-12T22:10:08.323427Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"28dd8e6bbca035f5","remote-peer-id":"344afae425714cc4","error":"context canceled"}
	{"level":"warn","ts":"2024-09-12T22:10:08.323502Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"344afae425714cc4","error":"failed to read 344afae425714cc4 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-09-12T22:10:08.323556Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"28dd8e6bbca035f5","remote-peer-id":"344afae425714cc4"}
	{"level":"warn","ts":"2024-09-12T22:10:08.323812Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"28dd8e6bbca035f5","remote-peer-id":"344afae425714cc4","error":"context canceled"}
	{"level":"info","ts":"2024-09-12T22:10:08.323897Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"28dd8e6bbca035f5","remote-peer-id":"344afae425714cc4"}
	{"level":"info","ts":"2024-09-12T22:10:08.324022Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"344afae425714cc4"}
	{"level":"info","ts":"2024-09-12T22:10:08.324078Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"28dd8e6bbca035f5","removed-remote-peer-id":"344afae425714cc4"}
	{"level":"warn","ts":"2024-09-12T22:10:08.340457Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"28dd8e6bbca035f5","remote-peer-id-stream-handler":"28dd8e6bbca035f5","remote-peer-id-from":"344afae425714cc4"}
	
	
	==> etcd [5008665ceb8c09f53ef64d7621c9910a82d94cc7e8fb4c534ff1065d8b9dc1a9] <==
	{"level":"warn","ts":"2024-09-12T22:05:55.229958Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-12T22:05:47.761069Z","time spent":"7.468883413s","remote":"127.0.0.1:43656","response type":"/etcdserverpb.KV/Range","request count":0,"request size":57,"response count":0,"response size":0,"request content":"key:\"/registry/services/specs/\" range_end:\"/registry/services/specs0\" limit:500 "}
	2024/09/12 22:05:55 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-09-12T22:05:55.294506Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.203:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-12T22:05:55.294569Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.203:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-12T22:05:55.296297Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"28dd8e6bbca035f5","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-09-12T22:05:55.296555Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"344afae425714cc4"}
	{"level":"info","ts":"2024-09-12T22:05:55.296575Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"344afae425714cc4"}
	{"level":"info","ts":"2024-09-12T22:05:55.296616Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"344afae425714cc4"}
	{"level":"info","ts":"2024-09-12T22:05:55.296718Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"28dd8e6bbca035f5","remote-peer-id":"344afae425714cc4"}
	{"level":"info","ts":"2024-09-12T22:05:55.296756Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"28dd8e6bbca035f5","remote-peer-id":"344afae425714cc4"}
	{"level":"info","ts":"2024-09-12T22:05:55.296794Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"28dd8e6bbca035f5","remote-peer-id":"344afae425714cc4"}
	{"level":"info","ts":"2024-09-12T22:05:55.296805Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"344afae425714cc4"}
	{"level":"info","ts":"2024-09-12T22:05:55.296810Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"9e92fe3b0574f1dd"}
	{"level":"info","ts":"2024-09-12T22:05:55.296819Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"9e92fe3b0574f1dd"}
	{"level":"info","ts":"2024-09-12T22:05:55.296834Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"9e92fe3b0574f1dd"}
	{"level":"info","ts":"2024-09-12T22:05:55.296890Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"28dd8e6bbca035f5","remote-peer-id":"9e92fe3b0574f1dd"}
	{"level":"info","ts":"2024-09-12T22:05:55.296919Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"28dd8e6bbca035f5","remote-peer-id":"9e92fe3b0574f1dd"}
	{"level":"info","ts":"2024-09-12T22:05:55.296950Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"28dd8e6bbca035f5","remote-peer-id":"9e92fe3b0574f1dd"}
	{"level":"info","ts":"2024-09-12T22:05:55.296961Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"9e92fe3b0574f1dd"}
	{"level":"info","ts":"2024-09-12T22:05:55.300526Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.203:2380"}
	{"level":"warn","ts":"2024-09-12T22:05:55.300551Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"8.787888735s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: server stopped"}
	{"level":"info","ts":"2024-09-12T22:05:55.300678Z","caller":"traceutil/trace.go:171","msg":"trace[1138780334] range","detail":"{range_begin:; range_end:; }","duration":"8.788034337s","start":"2024-09-12T22:05:46.512636Z","end":"2024-09-12T22:05:55.300670Z","steps":["trace[1138780334] 'agreement among raft nodes before linearized reading'  (duration: 8.787886758s)"],"step_count":1}
	{"level":"info","ts":"2024-09-12T22:05:55.300635Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.203:2380"}
	{"level":"info","ts":"2024-09-12T22:05:55.300768Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-475401","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.203:2380"],"advertise-client-urls":["https://192.168.39.203:2379"]}
	{"level":"error","ts":"2024-09-12T22:05:55.300728Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[+]data_corruption ok\n[+]serializable_read ok\n[-]linearizable_read failed: etcdserver: server stopped\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2141\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2519\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2943\nnet/http.(*conn).serve\n\tnet/http/server.go:2014"}
	
	
	==> kernel <==
	 22:12:42 up 16 min,  0 users,  load average: 0.14, 0.37, 0.30
	Linux ha-475401 5.10.207 #1 SMP Thu Sep 12 19:03:33 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [28ed212daea64133855a7ab08f6d9fe403a58159f6a366a28ce1892a91bb17fc] <==
	I0912 22:12:00.867985       1 main.go:322] Node ha-475401-m02 has CIDR [10.244.1.0/24] 
	I0912 22:12:10.868755       1 main.go:295] Handling node with IPs: map[192.168.39.222:{}]
	I0912 22:12:10.868935       1 main.go:322] Node ha-475401-m02 has CIDR [10.244.1.0/24] 
	I0912 22:12:10.869220       1 main.go:295] Handling node with IPs: map[192.168.39.76:{}]
	I0912 22:12:10.869273       1 main.go:322] Node ha-475401-m04 has CIDR [10.244.4.0/24] 
	I0912 22:12:10.869410       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0912 22:12:10.869445       1 main.go:299] handling current node
	I0912 22:12:20.867964       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0912 22:12:20.868197       1 main.go:299] handling current node
	I0912 22:12:20.868289       1 main.go:295] Handling node with IPs: map[192.168.39.222:{}]
	I0912 22:12:20.869211       1 main.go:322] Node ha-475401-m02 has CIDR [10.244.1.0/24] 
	I0912 22:12:20.869417       1 main.go:295] Handling node with IPs: map[192.168.39.76:{}]
	I0912 22:12:20.869440       1 main.go:322] Node ha-475401-m04 has CIDR [10.244.4.0/24] 
	I0912 22:12:30.861306       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0912 22:12:30.861509       1 main.go:299] handling current node
	I0912 22:12:30.861562       1 main.go:295] Handling node with IPs: map[192.168.39.222:{}]
	I0912 22:12:30.861581       1 main.go:322] Node ha-475401-m02 has CIDR [10.244.1.0/24] 
	I0912 22:12:30.861759       1 main.go:295] Handling node with IPs: map[192.168.39.76:{}]
	I0912 22:12:30.861780       1 main.go:322] Node ha-475401-m04 has CIDR [10.244.4.0/24] 
	I0912 22:12:40.870271       1 main.go:295] Handling node with IPs: map[192.168.39.222:{}]
	I0912 22:12:40.870380       1 main.go:322] Node ha-475401-m02 has CIDR [10.244.1.0/24] 
	I0912 22:12:40.870599       1 main.go:295] Handling node with IPs: map[192.168.39.76:{}]
	I0912 22:12:40.870606       1 main.go:322] Node ha-475401-m04 has CIDR [10.244.4.0/24] 
	I0912 22:12:40.870675       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0912 22:12:40.870681       1 main.go:299] handling current node
	
	
	==> kindnet [38d31aa5dc4105508066466c3ec1760275d6df1b5a41215ea8624bdecb7f44e8] <==
	I0912 22:05:32.858202       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0912 22:05:32.858345       1 main.go:299] handling current node
	I0912 22:05:32.858376       1 main.go:295] Handling node with IPs: map[192.168.39.222:{}]
	I0912 22:05:32.858449       1 main.go:322] Node ha-475401-m02 has CIDR [10.244.1.0/24] 
	I0912 22:05:32.858648       1 main.go:295] Handling node with IPs: map[192.168.39.113:{}]
	I0912 22:05:32.862158       1 main.go:322] Node ha-475401-m03 has CIDR [10.244.3.0/24] 
	I0912 22:05:32.862305       1 main.go:295] Handling node with IPs: map[192.168.39.76:{}]
	I0912 22:05:32.862328       1 main.go:322] Node ha-475401-m04 has CIDR [10.244.4.0/24] 
	I0912 22:05:42.854289       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0912 22:05:42.854440       1 main.go:299] handling current node
	I0912 22:05:42.854469       1 main.go:295] Handling node with IPs: map[192.168.39.222:{}]
	I0912 22:05:42.854491       1 main.go:322] Node ha-475401-m02 has CIDR [10.244.1.0/24] 
	I0912 22:05:42.854639       1 main.go:295] Handling node with IPs: map[192.168.39.113:{}]
	I0912 22:05:42.854699       1 main.go:322] Node ha-475401-m03 has CIDR [10.244.3.0/24] 
	I0912 22:05:42.854833       1 main.go:295] Handling node with IPs: map[192.168.39.76:{}]
	I0912 22:05:42.854866       1 main.go:322] Node ha-475401-m04 has CIDR [10.244.4.0/24] 
	E0912 22:05:51.635725       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=1900&timeout=5m4s&timeoutSeconds=304&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	I0912 22:05:52.853522       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0912 22:05:52.853616       1 main.go:299] handling current node
	I0912 22:05:52.853631       1 main.go:295] Handling node with IPs: map[192.168.39.222:{}]
	I0912 22:05:52.853637       1 main.go:322] Node ha-475401-m02 has CIDR [10.244.1.0/24] 
	I0912 22:05:52.853768       1 main.go:295] Handling node with IPs: map[192.168.39.113:{}]
	I0912 22:05:52.853791       1 main.go:322] Node ha-475401-m03 has CIDR [10.244.3.0/24] 
	I0912 22:05:52.853848       1 main.go:295] Handling node with IPs: map[192.168.39.76:{}]
	I0912 22:05:52.853853       1 main.go:322] Node ha-475401-m04 has CIDR [10.244.4.0/24] 
	
	
	==> kube-apiserver [08d058679eafb2dbca1bc2dfb3dfe0fe416163dba6d00f6ec942f2a53bc02ae2] <==
	I0912 22:08:13.620015       1 crd_finalizer.go:269] Starting CRDFinalizer
	I0912 22:08:13.638601       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0912 22:08:13.638637       1 policy_source.go:224] refreshing policies
	I0912 22:08:13.652499       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0912 22:08:13.696839       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0912 22:08:13.697276       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0912 22:08:13.698031       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0912 22:08:13.698584       1 shared_informer.go:320] Caches are synced for configmaps
	I0912 22:08:13.698699       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0912 22:08:13.698717       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0912 22:08:13.699554       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0912 22:08:13.707867       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	W0912 22:08:13.714470       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.113 192.168.39.222]
	I0912 22:08:13.718262       1 controller.go:615] quota admission added evaluator for: endpoints
	I0912 22:08:13.721425       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0912 22:08:13.721684       1 aggregator.go:171] initial CRD sync complete...
	I0912 22:08:13.721749       1 autoregister_controller.go:144] Starting autoregister controller
	I0912 22:08:13.721832       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0912 22:08:13.721995       1 cache.go:39] Caches are synced for autoregister controller
	I0912 22:08:13.728592       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0912 22:08:13.728775       1 shared_informer.go:320] Caches are synced for node_authorizer
	E0912 22:08:13.731701       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0912 22:08:14.603377       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0912 22:08:14.949235       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.113 192.168.39.203 192.168.39.222]
	W0912 22:08:24.950668       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.203 192.168.39.222]
	
	
	==> kube-apiserver [21b27af5812da51165304d6948b93ce25cffa267f34847a15febc75cb59f84b5] <==
	I0912 22:07:29.808996       1 options.go:228] external host was not specified, using 192.168.39.203
	I0912 22:07:29.818782       1 server.go:142] Version: v1.31.1
	I0912 22:07:29.818823       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0912 22:07:31.155835       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0912 22:07:31.160280       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0912 22:07:31.172513       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0912 22:07:31.172553       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0912 22:07:31.173353       1 instance.go:232] Using reconciler: lease
	W0912 22:07:51.144688       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0912 22:07:51.144778       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0912 22:07:51.175072       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [1d31b278af3adedc4eaca27db99510c99bdd7dcc10da7656a3b85767b493ae3a] <==
	I0912 22:07:30.998628       1 serving.go:386] Generated self-signed cert in-memory
	I0912 22:07:31.529943       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I0912 22:07:31.530032       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0912 22:07:31.541736       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0912 22:07:31.541984       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0912 22:07:31.542002       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0912 22:07:31.542025       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0912 22:07:52.181911       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.203:8443/healthz\": dial tcp 192.168.39.203:8443: connect: connection refused"
	
	
	==> kube-controller-manager [3756c86b696c4e8fd3e7463b7270af1f104f371066ce814e4ff7c11fa40d2931] <==
	I0912 22:11:02.017991       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-475401"
	I0912 22:11:02.040724       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-475401"
	I0912 22:11:02.124788       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-475401-m04"
	I0912 22:11:02.179607       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="12.623184ms"
	I0912 22:11:02.180036       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="192.769µs"
	I0912 22:11:02.224629       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="36.098242ms"
	I0912 22:11:02.251285       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="26.561506ms"
	I0912 22:11:02.252485       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="63.741µs"
	I0912 22:11:02.296872       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="14.207413ms"
	I0912 22:11:02.297739       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="150.521µs"
	I0912 22:11:07.169463       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-475401"
	I0912 22:11:12.195419       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-475401"
	I0912 22:11:16.584670       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-wddfb EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-wddfb\": the object has been modified; please apply your changes to the latest version and try again"
	I0912 22:11:16.585038       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"7e5174b7-58f3-4ecd-a718-b4ec7c46855b", APIVersion:"v1", ResourceVersion:"264", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-wddfb EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-wddfb": the object has been modified; please apply your changes to the latest version and try again
	I0912 22:11:16.626180       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-wddfb EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-wddfb\": the object has been modified; please apply your changes to the latest version and try again"
	I0912 22:11:16.629053       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"7e5174b7-58f3-4ecd-a718-b4ec7c46855b", APIVersion:"v1", ResourceVersion:"264", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-wddfb EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-wddfb": the object has been modified; please apply your changes to the latest version and try again
	I0912 22:11:16.646711       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="85.358022ms"
	I0912 22:11:16.646828       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="69.827µs"
	I0912 22:11:16.690955       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="43.554973ms"
	I0912 22:11:16.691215       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="122.953µs"
	I0912 22:11:16.805486       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="26.959191ms"
	I0912 22:11:16.805572       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="43.309µs"
	I0912 22:11:19.642301       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-475401"
	I0912 22:11:19.665662       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-475401"
	I0912 22:11:22.075692       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-475401"
	
	
	==> kube-proxy [0891cec467fda03cc10ec8bf4db216ce7cae379bd093917e008b90cc96d90c49] <==
	E0912 22:04:37.013792       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-475401&resourceVersion=1816\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0912 22:04:40.083594       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1885": dial tcp 192.168.39.254:8443: connect: no route to host
	E0912 22:04:40.083672       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1885\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0912 22:04:40.083815       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-475401&resourceVersion=1816": dial tcp 192.168.39.254:8443: connect: no route to host
	E0912 22:04:40.083921       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-475401&resourceVersion=1816\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0912 22:04:43.156420       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1870": dial tcp 192.168.39.254:8443: connect: no route to host
	E0912 22:04:43.156502       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1870\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0912 22:04:46.228446       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1885": dial tcp 192.168.39.254:8443: connect: no route to host
	E0912 22:04:46.228675       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1885\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0912 22:04:46.227584       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-475401&resourceVersion=1816": dial tcp 192.168.39.254:8443: connect: no route to host
	E0912 22:04:46.228810       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-475401&resourceVersion=1816\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0912 22:04:49.301362       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1870": dial tcp 192.168.39.254:8443: connect: no route to host
	E0912 22:04:49.301596       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1870\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0912 22:04:58.519291       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-475401&resourceVersion=1816": dial tcp 192.168.39.254:8443: connect: no route to host
	E0912 22:04:58.519446       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-475401&resourceVersion=1816\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0912 22:04:58.519612       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1885": dial tcp 192.168.39.254:8443: connect: no route to host
	E0912 22:04:58.519667       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1885\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0912 22:05:01.589439       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1870": dial tcp 192.168.39.254:8443: connect: no route to host
	E0912 22:05:01.589513       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1870\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0912 22:05:20.020625       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1885": dial tcp 192.168.39.254:8443: connect: no route to host
	E0912 22:05:20.020699       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1885\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0912 22:05:23.091906       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-475401&resourceVersion=1816": dial tcp 192.168.39.254:8443: connect: no route to host
	E0912 22:05:23.091990       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-475401&resourceVersion=1816\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0912 22:05:29.235948       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1870": dial tcp 192.168.39.254:8443: connect: no route to host
	E0912 22:05:29.236398       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1870\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	
	
	==> kube-proxy [3ef34e41bb3ddb710bf398433b9169ba5f99e663f39a763a0e3afc0073f3f7c8] <==
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0912 22:07:32.115746       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-475401\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0912 22:07:35.187699       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-475401\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0912 22:07:38.259559       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-475401\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0912 22:07:44.406508       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-475401\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0912 22:07:53.619715       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-475401\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0912 22:08:15.127151       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-475401\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0912 22:08:15.127284       1 server.go:646] "Can't determine this node's IP, assuming loopback; if this is incorrect, please set the --bind-address flag"
	E0912 22:08:15.127367       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0912 22:08:15.203246       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0912 22:08:15.203314       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0912 22:08:15.203348       1 server_linux.go:169] "Using iptables Proxier"
	I0912 22:08:15.213062       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0912 22:08:15.214172       1 server.go:483] "Version info" version="v1.31.1"
	I0912 22:08:15.214207       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0912 22:08:15.217247       1 config.go:199] "Starting service config controller"
	I0912 22:08:15.217353       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0912 22:08:15.217455       1 config.go:105] "Starting endpoint slice config controller"
	I0912 22:08:15.217472       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0912 22:08:15.221019       1 config.go:328] "Starting node config controller"
	I0912 22:08:15.222444       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0912 22:08:15.318446       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0912 22:08:15.318456       1 shared_informer.go:320] Caches are synced for service config
	I0912 22:08:15.324732       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [17a4293d12cac1604693dea12017381d2df6f0c1ced577d1d846d40e66520818] <==
	E0912 21:59:45.491176       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 21f2175a-f898-4059-ae91-9df7019f8cdb(kube-system/kube-proxy-fvw4x) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-fvw4x"
	E0912 21:59:45.492064       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-fvw4x\": pod kube-proxy-fvw4x is already assigned to node \"ha-475401-m04\"" pod="kube-system/kube-proxy-fvw4x"
	E0912 21:59:45.490969       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-2bvcz\": pod kindnet-2bvcz is already assigned to node \"ha-475401-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-2bvcz" node="ha-475401-m04"
	E0912 21:59:45.493554       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod d40bd7a6-62a0-4e2d-b6eb-2ec57e8eea0f(kube-system/kindnet-2bvcz) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-2bvcz"
	E0912 21:59:45.493577       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-2bvcz\": pod kindnet-2bvcz is already assigned to node \"ha-475401-m04\"" pod="kube-system/kindnet-2bvcz"
	I0912 21:59:45.493620       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-2bvcz" node="ha-475401-m04"
	I0912 21:59:45.493727       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-fvw4x" node="ha-475401-m04"
	E0912 22:05:32.870502       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces)" logger="UnhandledError"
	E0912 22:05:32.870603       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)" logger="UnhandledError"
	E0912 22:05:39.944299       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)" logger="UnhandledError"
	E0912 22:05:42.283628       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)" logger="UnhandledError"
	E0912 22:05:43.008046       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes)" logger="UnhandledError"
	E0912 22:05:43.186989       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)" logger="UnhandledError"
	E0912 22:05:43.500608       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services)" logger="UnhandledError"
	E0912 22:05:44.018356       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError"
	E0912 22:05:46.250331       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)" logger="UnhandledError"
	E0912 22:05:46.988757       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)" logger="UnhandledError"
	E0912 22:05:49.945850       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)" logger="UnhandledError"
	E0912 22:05:50.100150       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)" logger="UnhandledError"
	E0912 22:05:51.189304       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)" logger="UnhandledError"
	W0912 22:05:52.183460       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0912 22:05:52.183632       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0912 22:05:54.193231       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)" logger="UnhandledError"
	E0912 22:05:54.775785       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods)" logger="UnhandledError"
	E0912 22:05:55.208908       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [7bd2f2d4b23f5227aba2f8d0b375b6980f4e8d9699dc8e0a15167b8caee35a90] <==
	W0912 22:08:08.560736       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.203:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.203:8443: connect: connection refused
	E0912 22:08:08.560852       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.39.203:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.39.203:8443: connect: connection refused" logger="UnhandledError"
	W0912 22:08:09.137533       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.203:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.203:8443: connect: connection refused
	E0912 22:08:09.137604       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.168.39.203:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.39.203:8443: connect: connection refused" logger="UnhandledError"
	W0912 22:08:09.137612       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.203:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.203:8443: connect: connection refused
	E0912 22:08:09.137644       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.168.39.203:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.39.203:8443: connect: connection refused" logger="UnhandledError"
	W0912 22:08:09.266726       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.203:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.203:8443: connect: connection refused
	E0912 22:08:09.266793       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.39.203:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.39.203:8443: connect: connection refused" logger="UnhandledError"
	W0912 22:08:09.552325       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.203:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.203:8443: connect: connection refused
	E0912 22:08:09.552369       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.168.39.203:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.39.203:8443: connect: connection refused" logger="UnhandledError"
	W0912 22:08:11.167472       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.203:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.203:8443: connect: connection refused
	E0912 22:08:11.167602       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.168.39.203:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.203:8443: connect: connection refused" logger="UnhandledError"
	W0912 22:08:13.620823       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0912 22:08:13.620920       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0912 22:08:13.621073       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0912 22:08:13.621905       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0912 22:08:13.632452       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0912 22:08:13.633205       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0912 22:08:38.787684       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0912 22:10:05.028712       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-s6sjm\": pod busybox-7dff88458-s6sjm is already assigned to node \"ha-475401-m04\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-s6sjm" node="ha-475401-m04"
	E0912 22:10:05.033461       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 2df98519-4f61-4b40-858a-e75b5aba6012(default/busybox-7dff88458-s6sjm) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-s6sjm"
	E0912 22:10:05.034844       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-s6sjm\": pod busybox-7dff88458-s6sjm is already assigned to node \"ha-475401-m04\"" pod="default/busybox-7dff88458-s6sjm"
	I0912 22:10:05.035052       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-s6sjm" node="ha-475401-m04"
	E0912 22:10:05.091562       1 schedule_one.go:1078] "Error occurred" err="Pod default/busybox-7dff88458-hw2vx is already present in the active queue" pod="default/busybox-7dff88458-hw2vx"
	E0912 22:10:05.110424       1 schedule_one.go:1106] "Error updating pod" err="pods \"busybox-7dff88458-hw2vx\" not found" pod="default/busybox-7dff88458-hw2vx"
	
	
	==> kubelet <==
	Sep 12 22:11:26 ha-475401 kubelet[1305]: E0912 22:11:26.748785    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726179086748392112,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 22:11:36 ha-475401 kubelet[1305]: E0912 22:11:36.500457    1305 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 12 22:11:36 ha-475401 kubelet[1305]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 12 22:11:36 ha-475401 kubelet[1305]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 12 22:11:36 ha-475401 kubelet[1305]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 12 22:11:36 ha-475401 kubelet[1305]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 12 22:11:36 ha-475401 kubelet[1305]: E0912 22:11:36.751318    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726179096750884686,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 22:11:36 ha-475401 kubelet[1305]: E0912 22:11:36.751381    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726179096750884686,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 22:11:46 ha-475401 kubelet[1305]: E0912 22:11:46.753939    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726179106753572005,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 22:11:46 ha-475401 kubelet[1305]: E0912 22:11:46.753981    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726179106753572005,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 22:11:56 ha-475401 kubelet[1305]: E0912 22:11:56.758335    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726179116757924822,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 22:11:56 ha-475401 kubelet[1305]: E0912 22:11:56.760010    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726179116757924822,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 22:12:06 ha-475401 kubelet[1305]: E0912 22:12:06.763047    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726179126762021022,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 22:12:06 ha-475401 kubelet[1305]: E0912 22:12:06.763088    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726179126762021022,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 22:12:16 ha-475401 kubelet[1305]: E0912 22:12:16.765450    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726179136764050235,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 22:12:16 ha-475401 kubelet[1305]: E0912 22:12:16.766907    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726179136764050235,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 22:12:26 ha-475401 kubelet[1305]: E0912 22:12:26.769489    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726179146768947119,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 22:12:26 ha-475401 kubelet[1305]: E0912 22:12:26.769527    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726179146768947119,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 22:12:36 ha-475401 kubelet[1305]: E0912 22:12:36.500210    1305 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 12 22:12:36 ha-475401 kubelet[1305]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 12 22:12:36 ha-475401 kubelet[1305]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 12 22:12:36 ha-475401 kubelet[1305]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 12 22:12:36 ha-475401 kubelet[1305]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 12 22:12:36 ha-475401 kubelet[1305]: E0912 22:12:36.771636    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726179156770842849,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 22:12:36 ha-475401 kubelet[1305]: E0912 22:12:36.771730    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726179156770842849,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0912 22:12:41.013508   34332 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19616-5891/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
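The "bufio.Scanner: token too long" error in the stderr above is a Go-level limit rather than a missing file: bufio.Scanner will not scan any single line longer than its buffer, which defaults to 64 KiB, and lastStart.txt contains log lines (such as the cluster-config dumps later in this report) that exceed that. A minimal sketch of reading such a file with an enlarged scanner buffer; the local path "lastStart.txt" and the buffer sizes are illustrative, not the test harness's own code:

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		// Hypothetical local copy; the report reads .minikube/logs/lastStart.txt.
		f, err := os.Open("lastStart.txt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// The default cap is bufio.MaxScanTokenSize (64 KiB); a longer line makes
		// Scan stop with "bufio.Scanner: token too long". Allow up to 10 MiB here.
		sc.Buffer(make([]byte, 0, 1024*1024), 10*1024*1024)
		for sc.Scan() {
			fmt.Println(sc.Text())
		}
		if err := sc.Err(); err != nil {
			fmt.Fprintln(os.Stderr, "scan error:", err)
		}
	}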
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-475401 -n ha-475401
helpers_test.go:261: (dbg) Run:  kubectl --context ha-475401 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (141.62s)
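The recurring kubelet "Could not set up iptables canary" messages in the log above come from the guest having no ip6tables nat table (the ip6table_nat module is unavailable), so creating the KUBE-KUBELET-CANARY chain fails; they are background noise rather than the cause of this failure. A small diagnostic sketch, assuming it is run inside the guest with ip6tables installed, that reproduces the same check:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Listing the nat table fails on kernels without ip6table_nat, which is
		// the same condition the kubelet canary trips over.
		out, err := exec.Command("ip6tables", "-t", "nat", "-L", "-n").CombinedOutput()
		if err != nil {
			fmt.Printf("ip6tables nat table unavailable: %v\n%s", err, out)
			return
		}
		fmt.Println("ip6tables nat table present")
	}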

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (331.82s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-768483
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-768483
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-768483: exit status 82 (2m1.778583113s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-768483-m03"  ...
	* Stopping node "multinode-768483-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
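In this run the stop command exited with status 82 after reporting GUEST_STOP_TIMEOUT: minikube gave up while a VM was still in state "Running". A hedged sketch of how a caller could drive the same command under its own deadline and distinguish that exit code; the timeout, binary path, and profile name are illustrative, and this is not the integration test's own code:

	package main

	import (
		"context"
		"errors"
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 3*time.Minute)
		defer cancel()

		// Illustrative invocation matching the command shown in this report.
		cmd := exec.CommandContext(ctx, "out/minikube-linux-amd64", "stop", "-p", "multinode-768483")
		out, err := cmd.CombinedOutput()
		if err != nil {
			var ee *exec.ExitError
			if errors.As(err, &ee) && ee.ExitCode() == 82 {
				// Stop timed out; capture the output (and minikube logs) before retrying.
				fmt.Printf("stop timed out (exit 82):\n%s", out)
				return
			}
			fmt.Println("stop failed:", err)
			return
		}
		fmt.Println("stopped cleanly")
	}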
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-768483" : exit status 82
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-768483 --wait=true -v=8 --alsologtostderr
E0912 22:30:05.703685   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/functional-657409/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:32:07.200131   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:33:08.772610   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/functional-657409/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-768483 --wait=true -v=8 --alsologtostderr: (3m27.84875218s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-768483
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-768483 -n multinode-768483
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-768483 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-768483 logs -n 25: (1.45025186s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-768483 ssh -n                                                                 | multinode-768483 | jenkins | v1.34.0 | 12 Sep 24 22:27 UTC | 12 Sep 24 22:27 UTC |
	|         | multinode-768483-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-768483 cp multinode-768483-m02:/home/docker/cp-test.txt                       | multinode-768483 | jenkins | v1.34.0 | 12 Sep 24 22:27 UTC | 12 Sep 24 22:27 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3696931795/001/cp-test_multinode-768483-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-768483 ssh -n                                                                 | multinode-768483 | jenkins | v1.34.0 | 12 Sep 24 22:27 UTC | 12 Sep 24 22:27 UTC |
	|         | multinode-768483-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-768483 cp multinode-768483-m02:/home/docker/cp-test.txt                       | multinode-768483 | jenkins | v1.34.0 | 12 Sep 24 22:27 UTC | 12 Sep 24 22:27 UTC |
	|         | multinode-768483:/home/docker/cp-test_multinode-768483-m02_multinode-768483.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-768483 ssh -n                                                                 | multinode-768483 | jenkins | v1.34.0 | 12 Sep 24 22:27 UTC | 12 Sep 24 22:27 UTC |
	|         | multinode-768483-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-768483 ssh -n multinode-768483 sudo cat                                       | multinode-768483 | jenkins | v1.34.0 | 12 Sep 24 22:27 UTC | 12 Sep 24 22:27 UTC |
	|         | /home/docker/cp-test_multinode-768483-m02_multinode-768483.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-768483 cp multinode-768483-m02:/home/docker/cp-test.txt                       | multinode-768483 | jenkins | v1.34.0 | 12 Sep 24 22:27 UTC | 12 Sep 24 22:27 UTC |
	|         | multinode-768483-m03:/home/docker/cp-test_multinode-768483-m02_multinode-768483-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-768483 ssh -n                                                                 | multinode-768483 | jenkins | v1.34.0 | 12 Sep 24 22:27 UTC | 12 Sep 24 22:27 UTC |
	|         | multinode-768483-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-768483 ssh -n multinode-768483-m03 sudo cat                                   | multinode-768483 | jenkins | v1.34.0 | 12 Sep 24 22:27 UTC | 12 Sep 24 22:27 UTC |
	|         | /home/docker/cp-test_multinode-768483-m02_multinode-768483-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-768483 cp testdata/cp-test.txt                                                | multinode-768483 | jenkins | v1.34.0 | 12 Sep 24 22:27 UTC | 12 Sep 24 22:27 UTC |
	|         | multinode-768483-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-768483 ssh -n                                                                 | multinode-768483 | jenkins | v1.34.0 | 12 Sep 24 22:27 UTC | 12 Sep 24 22:27 UTC |
	|         | multinode-768483-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-768483 cp multinode-768483-m03:/home/docker/cp-test.txt                       | multinode-768483 | jenkins | v1.34.0 | 12 Sep 24 22:27 UTC | 12 Sep 24 22:27 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3696931795/001/cp-test_multinode-768483-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-768483 ssh -n                                                                 | multinode-768483 | jenkins | v1.34.0 | 12 Sep 24 22:27 UTC | 12 Sep 24 22:27 UTC |
	|         | multinode-768483-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-768483 cp multinode-768483-m03:/home/docker/cp-test.txt                       | multinode-768483 | jenkins | v1.34.0 | 12 Sep 24 22:27 UTC | 12 Sep 24 22:27 UTC |
	|         | multinode-768483:/home/docker/cp-test_multinode-768483-m03_multinode-768483.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-768483 ssh -n                                                                 | multinode-768483 | jenkins | v1.34.0 | 12 Sep 24 22:27 UTC | 12 Sep 24 22:27 UTC |
	|         | multinode-768483-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-768483 ssh -n multinode-768483 sudo cat                                       | multinode-768483 | jenkins | v1.34.0 | 12 Sep 24 22:27 UTC | 12 Sep 24 22:27 UTC |
	|         | /home/docker/cp-test_multinode-768483-m03_multinode-768483.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-768483 cp multinode-768483-m03:/home/docker/cp-test.txt                       | multinode-768483 | jenkins | v1.34.0 | 12 Sep 24 22:27 UTC | 12 Sep 24 22:27 UTC |
	|         | multinode-768483-m02:/home/docker/cp-test_multinode-768483-m03_multinode-768483-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-768483 ssh -n                                                                 | multinode-768483 | jenkins | v1.34.0 | 12 Sep 24 22:27 UTC | 12 Sep 24 22:27 UTC |
	|         | multinode-768483-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-768483 ssh -n multinode-768483-m02 sudo cat                                   | multinode-768483 | jenkins | v1.34.0 | 12 Sep 24 22:27 UTC | 12 Sep 24 22:27 UTC |
	|         | /home/docker/cp-test_multinode-768483-m03_multinode-768483-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-768483 node stop m03                                                          | multinode-768483 | jenkins | v1.34.0 | 12 Sep 24 22:27 UTC | 12 Sep 24 22:27 UTC |
	| node    | multinode-768483 node start                                                             | multinode-768483 | jenkins | v1.34.0 | 12 Sep 24 22:27 UTC | 12 Sep 24 22:27 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-768483                                                                | multinode-768483 | jenkins | v1.34.0 | 12 Sep 24 22:27 UTC |                     |
	| stop    | -p multinode-768483                                                                     | multinode-768483 | jenkins | v1.34.0 | 12 Sep 24 22:27 UTC |                     |
	| start   | -p multinode-768483                                                                     | multinode-768483 | jenkins | v1.34.0 | 12 Sep 24 22:29 UTC | 12 Sep 24 22:33 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-768483                                                                | multinode-768483 | jenkins | v1.34.0 | 12 Sep 24 22:33 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/12 22:29:50
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0912 22:29:50.429762   44139 out.go:345] Setting OutFile to fd 1 ...
	I0912 22:29:50.429993   44139 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 22:29:50.430001   44139 out.go:358] Setting ErrFile to fd 2...
	I0912 22:29:50.430005   44139 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 22:29:50.430204   44139 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-5891/.minikube/bin
	I0912 22:29:50.430741   44139 out.go:352] Setting JSON to false
	I0912 22:29:50.431633   44139 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4332,"bootTime":1726175858,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0912 22:29:50.431696   44139 start.go:139] virtualization: kvm guest
	I0912 22:29:50.434750   44139 out.go:177] * [multinode-768483] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0912 22:29:50.436223   44139 out.go:177]   - MINIKUBE_LOCATION=19616
	I0912 22:29:50.436224   44139 notify.go:220] Checking for updates...
	I0912 22:29:50.438708   44139 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 22:29:50.440557   44139 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19616-5891/kubeconfig
	I0912 22:29:50.442044   44139 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19616-5891/.minikube
	I0912 22:29:50.443350   44139 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0912 22:29:50.444575   44139 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 22:29:50.446101   44139 config.go:182] Loaded profile config "multinode-768483": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 22:29:50.446193   44139 driver.go:394] Setting default libvirt URI to qemu:///system
	I0912 22:29:50.446601   44139 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:29:50.446656   44139 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:29:50.461730   44139 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42765
	I0912 22:29:50.462158   44139 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:29:50.462620   44139 main.go:141] libmachine: Using API Version  1
	I0912 22:29:50.462638   44139 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:29:50.462992   44139 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:29:50.463171   44139 main.go:141] libmachine: (multinode-768483) Calling .DriverName
	I0912 22:29:50.498694   44139 out.go:177] * Using the kvm2 driver based on existing profile
	I0912 22:29:50.499889   44139 start.go:297] selected driver: kvm2
	I0912 22:29:50.499907   44139 start.go:901] validating driver "kvm2" against &{Name:multinode-768483 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-768483 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.28 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.230 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.92 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 22:29:50.500105   44139 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 22:29:50.500518   44139 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 22:29:50.500606   44139 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19616-5891/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0912 22:29:50.515636   44139 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0912 22:29:50.516288   44139 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 22:29:50.516347   44139 cni.go:84] Creating CNI manager for ""
	I0912 22:29:50.516356   44139 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0912 22:29:50.516408   44139 start.go:340] cluster config:
	{Name:multinode-768483 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-768483 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.28 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.230 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.92 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 22:29:50.516515   44139 iso.go:125] acquiring lock: {Name:mk3ec3c4afd4210b7425f6425f55e7f581d9a5a9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 22:29:50.520807   44139 out.go:177] * Starting "multinode-768483" primary control-plane node in "multinode-768483" cluster
	I0912 22:29:50.524835   44139 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0912 22:29:50.524888   44139 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0912 22:29:50.524897   44139 cache.go:56] Caching tarball of preloaded images
	I0912 22:29:50.524983   44139 preload.go:172] Found /home/jenkins/minikube-integration/19616-5891/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0912 22:29:50.524994   44139 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0912 22:29:50.525119   44139 profile.go:143] Saving config to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/multinode-768483/config.json ...
	I0912 22:29:50.525359   44139 start.go:360] acquireMachinesLock for multinode-768483: {Name:mkbb0a9e58b1349e86a63b6069c42d4248d92c3b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 22:29:50.525413   44139 start.go:364] duration metric: took 25.593µs to acquireMachinesLock for "multinode-768483"
	I0912 22:29:50.525426   44139 start.go:96] Skipping create...Using existing machine configuration
	I0912 22:29:50.525431   44139 fix.go:54] fixHost starting: 
	I0912 22:29:50.525742   44139 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:29:50.525775   44139 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:29:50.540264   44139 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45135
	I0912 22:29:50.540652   44139 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:29:50.541085   44139 main.go:141] libmachine: Using API Version  1
	I0912 22:29:50.541107   44139 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:29:50.541416   44139 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:29:50.541573   44139 main.go:141] libmachine: (multinode-768483) Calling .DriverName
	I0912 22:29:50.541738   44139 main.go:141] libmachine: (multinode-768483) Calling .GetState
	I0912 22:29:50.543765   44139 fix.go:112] recreateIfNeeded on multinode-768483: state=Running err=<nil>
	W0912 22:29:50.543799   44139 fix.go:138] unexpected machine state, will restart: <nil>
	I0912 22:29:50.546441   44139 out.go:177] * Updating the running kvm2 "multinode-768483" VM ...
	I0912 22:29:50.547797   44139 machine.go:93] provisionDockerMachine start ...
	I0912 22:29:50.547816   44139 main.go:141] libmachine: (multinode-768483) Calling .DriverName
	I0912 22:29:50.548011   44139 main.go:141] libmachine: (multinode-768483) Calling .GetSSHHostname
	I0912 22:29:50.550826   44139 main.go:141] libmachine: (multinode-768483) DBG | domain multinode-768483 has defined MAC address 52:54:00:e5:c3:ae in network mk-multinode-768483
	I0912 22:29:50.551220   44139 main.go:141] libmachine: (multinode-768483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:c3:ae", ip: ""} in network mk-multinode-768483: {Iface:virbr1 ExpiryTime:2024-09-12 23:24:27 +0000 UTC Type:0 Mac:52:54:00:e5:c3:ae Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:multinode-768483 Clientid:01:52:54:00:e5:c3:ae}
	I0912 22:29:50.551262   44139 main.go:141] libmachine: (multinode-768483) DBG | domain multinode-768483 has defined IP address 192.168.39.28 and MAC address 52:54:00:e5:c3:ae in network mk-multinode-768483
	I0912 22:29:50.551418   44139 main.go:141] libmachine: (multinode-768483) Calling .GetSSHPort
	I0912 22:29:50.551563   44139 main.go:141] libmachine: (multinode-768483) Calling .GetSSHKeyPath
	I0912 22:29:50.551708   44139 main.go:141] libmachine: (multinode-768483) Calling .GetSSHKeyPath
	I0912 22:29:50.551828   44139 main.go:141] libmachine: (multinode-768483) Calling .GetSSHUsername
	I0912 22:29:50.551951   44139 main.go:141] libmachine: Using SSH client type: native
	I0912 22:29:50.552185   44139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.28 22 <nil> <nil>}
	I0912 22:29:50.552200   44139 main.go:141] libmachine: About to run SSH command:
	hostname
	I0912 22:29:50.671150   44139 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-768483
	
	I0912 22:29:50.671187   44139 main.go:141] libmachine: (multinode-768483) Calling .GetMachineName
	I0912 22:29:50.671471   44139 buildroot.go:166] provisioning hostname "multinode-768483"
	I0912 22:29:50.671501   44139 main.go:141] libmachine: (multinode-768483) Calling .GetMachineName
	I0912 22:29:50.671681   44139 main.go:141] libmachine: (multinode-768483) Calling .GetSSHHostname
	I0912 22:29:50.674468   44139 main.go:141] libmachine: (multinode-768483) DBG | domain multinode-768483 has defined MAC address 52:54:00:e5:c3:ae in network mk-multinode-768483
	I0912 22:29:50.675007   44139 main.go:141] libmachine: (multinode-768483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:c3:ae", ip: ""} in network mk-multinode-768483: {Iface:virbr1 ExpiryTime:2024-09-12 23:24:27 +0000 UTC Type:0 Mac:52:54:00:e5:c3:ae Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:multinode-768483 Clientid:01:52:54:00:e5:c3:ae}
	I0912 22:29:50.675039   44139 main.go:141] libmachine: (multinode-768483) DBG | domain multinode-768483 has defined IP address 192.168.39.28 and MAC address 52:54:00:e5:c3:ae in network mk-multinode-768483
	I0912 22:29:50.675273   44139 main.go:141] libmachine: (multinode-768483) Calling .GetSSHPort
	I0912 22:29:50.675549   44139 main.go:141] libmachine: (multinode-768483) Calling .GetSSHKeyPath
	I0912 22:29:50.675738   44139 main.go:141] libmachine: (multinode-768483) Calling .GetSSHKeyPath
	I0912 22:29:50.675926   44139 main.go:141] libmachine: (multinode-768483) Calling .GetSSHUsername
	I0912 22:29:50.676170   44139 main.go:141] libmachine: Using SSH client type: native
	I0912 22:29:50.676343   44139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.28 22 <nil> <nil>}
	I0912 22:29:50.676360   44139 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-768483 && echo "multinode-768483" | sudo tee /etc/hostname
	I0912 22:29:50.805530   44139 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-768483
	
	I0912 22:29:50.805554   44139 main.go:141] libmachine: (multinode-768483) Calling .GetSSHHostname
	I0912 22:29:50.808566   44139 main.go:141] libmachine: (multinode-768483) DBG | domain multinode-768483 has defined MAC address 52:54:00:e5:c3:ae in network mk-multinode-768483
	I0912 22:29:50.808987   44139 main.go:141] libmachine: (multinode-768483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:c3:ae", ip: ""} in network mk-multinode-768483: {Iface:virbr1 ExpiryTime:2024-09-12 23:24:27 +0000 UTC Type:0 Mac:52:54:00:e5:c3:ae Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:multinode-768483 Clientid:01:52:54:00:e5:c3:ae}
	I0912 22:29:50.809013   44139 main.go:141] libmachine: (multinode-768483) DBG | domain multinode-768483 has defined IP address 192.168.39.28 and MAC address 52:54:00:e5:c3:ae in network mk-multinode-768483
	I0912 22:29:50.809134   44139 main.go:141] libmachine: (multinode-768483) Calling .GetSSHPort
	I0912 22:29:50.809314   44139 main.go:141] libmachine: (multinode-768483) Calling .GetSSHKeyPath
	I0912 22:29:50.809560   44139 main.go:141] libmachine: (multinode-768483) Calling .GetSSHKeyPath
	I0912 22:29:50.809702   44139 main.go:141] libmachine: (multinode-768483) Calling .GetSSHUsername
	I0912 22:29:50.809873   44139 main.go:141] libmachine: Using SSH client type: native
	I0912 22:29:50.810129   44139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.28 22 <nil> <nil>}
	I0912 22:29:50.810149   44139 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-768483' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-768483/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-768483' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0912 22:29:50.922502   44139 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0912 22:29:50.922532   44139 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19616-5891/.minikube CaCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19616-5891/.minikube}
	I0912 22:29:50.922561   44139 buildroot.go:174] setting up certificates
	I0912 22:29:50.922573   44139 provision.go:84] configureAuth start
	I0912 22:29:50.922587   44139 main.go:141] libmachine: (multinode-768483) Calling .GetMachineName
	I0912 22:29:50.922864   44139 main.go:141] libmachine: (multinode-768483) Calling .GetIP
	I0912 22:29:50.925734   44139 main.go:141] libmachine: (multinode-768483) DBG | domain multinode-768483 has defined MAC address 52:54:00:e5:c3:ae in network mk-multinode-768483
	I0912 22:29:50.926104   44139 main.go:141] libmachine: (multinode-768483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:c3:ae", ip: ""} in network mk-multinode-768483: {Iface:virbr1 ExpiryTime:2024-09-12 23:24:27 +0000 UTC Type:0 Mac:52:54:00:e5:c3:ae Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:multinode-768483 Clientid:01:52:54:00:e5:c3:ae}
	I0912 22:29:50.926124   44139 main.go:141] libmachine: (multinode-768483) DBG | domain multinode-768483 has defined IP address 192.168.39.28 and MAC address 52:54:00:e5:c3:ae in network mk-multinode-768483
	I0912 22:29:50.926288   44139 main.go:141] libmachine: (multinode-768483) Calling .GetSSHHostname
	I0912 22:29:50.928446   44139 main.go:141] libmachine: (multinode-768483) DBG | domain multinode-768483 has defined MAC address 52:54:00:e5:c3:ae in network mk-multinode-768483
	I0912 22:29:50.928761   44139 main.go:141] libmachine: (multinode-768483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:c3:ae", ip: ""} in network mk-multinode-768483: {Iface:virbr1 ExpiryTime:2024-09-12 23:24:27 +0000 UTC Type:0 Mac:52:54:00:e5:c3:ae Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:multinode-768483 Clientid:01:52:54:00:e5:c3:ae}
	I0912 22:29:50.928797   44139 main.go:141] libmachine: (multinode-768483) DBG | domain multinode-768483 has defined IP address 192.168.39.28 and MAC address 52:54:00:e5:c3:ae in network mk-multinode-768483
	I0912 22:29:50.928903   44139 provision.go:143] copyHostCerts
	I0912 22:29:50.928938   44139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem
	I0912 22:29:50.928974   44139 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem, removing ...
	I0912 22:29:50.928990   44139 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem
	I0912 22:29:50.929075   44139 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem (1082 bytes)
	I0912 22:29:50.929243   44139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem
	I0912 22:29:50.929273   44139 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem, removing ...
	I0912 22:29:50.929282   44139 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem
	I0912 22:29:50.929335   44139 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem (1123 bytes)
	I0912 22:29:50.929402   44139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem
	I0912 22:29:50.929429   44139 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem, removing ...
	I0912 22:29:50.929438   44139 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem
	I0912 22:29:50.929474   44139 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem (1679 bytes)
	I0912 22:29:50.929536   44139 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem org=jenkins.multinode-768483 san=[127.0.0.1 192.168.39.28 localhost minikube multinode-768483]
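	The server certificate generated in the step above carries the SAN list that provision.go logs (127.0.0.1, 192.168.39.28, localhost, minikube, multinode-768483). A purely illustrative way to confirm those SANs on the Jenkins host, assuming openssl is installed there (this is not something the test itself runs):
	openssl x509 -noout -text -in /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem | grep -A1 'Subject Alternative Name'
	# roughly: DNS:localhost, DNS:minikube, DNS:multinode-768483, IP Address:127.0.0.1, IP Address:192.168.39.28 (order may vary)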
	I0912 22:29:51.144081   44139 provision.go:177] copyRemoteCerts
	I0912 22:29:51.144136   44139 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0912 22:29:51.144158   44139 main.go:141] libmachine: (multinode-768483) Calling .GetSSHHostname
	I0912 22:29:51.146729   44139 main.go:141] libmachine: (multinode-768483) DBG | domain multinode-768483 has defined MAC address 52:54:00:e5:c3:ae in network mk-multinode-768483
	I0912 22:29:51.147085   44139 main.go:141] libmachine: (multinode-768483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:c3:ae", ip: ""} in network mk-multinode-768483: {Iface:virbr1 ExpiryTime:2024-09-12 23:24:27 +0000 UTC Type:0 Mac:52:54:00:e5:c3:ae Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:multinode-768483 Clientid:01:52:54:00:e5:c3:ae}
	I0912 22:29:51.147128   44139 main.go:141] libmachine: (multinode-768483) DBG | domain multinode-768483 has defined IP address 192.168.39.28 and MAC address 52:54:00:e5:c3:ae in network mk-multinode-768483
	I0912 22:29:51.147245   44139 main.go:141] libmachine: (multinode-768483) Calling .GetSSHPort
	I0912 22:29:51.147419   44139 main.go:141] libmachine: (multinode-768483) Calling .GetSSHKeyPath
	I0912 22:29:51.147564   44139 main.go:141] libmachine: (multinode-768483) Calling .GetSSHUsername
	I0912 22:29:51.147665   44139 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/multinode-768483/id_rsa Username:docker}
	I0912 22:29:51.231700   44139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0912 22:29:51.231773   44139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0912 22:29:51.255442   44139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0912 22:29:51.255506   44139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0912 22:29:51.279989   44139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0912 22:29:51.280063   44139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0912 22:29:51.302250   44139 provision.go:87] duration metric: took 379.665576ms to configureAuth
	I0912 22:29:51.302275   44139 buildroot.go:189] setting minikube options for container-runtime
	I0912 22:29:51.302498   44139 config.go:182] Loaded profile config "multinode-768483": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 22:29:51.302559   44139 main.go:141] libmachine: (multinode-768483) Calling .GetSSHHostname
	I0912 22:29:51.305557   44139 main.go:141] libmachine: (multinode-768483) DBG | domain multinode-768483 has defined MAC address 52:54:00:e5:c3:ae in network mk-multinode-768483
	I0912 22:29:51.306105   44139 main.go:141] libmachine: (multinode-768483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:c3:ae", ip: ""} in network mk-multinode-768483: {Iface:virbr1 ExpiryTime:2024-09-12 23:24:27 +0000 UTC Type:0 Mac:52:54:00:e5:c3:ae Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:multinode-768483 Clientid:01:52:54:00:e5:c3:ae}
	I0912 22:29:51.306131   44139 main.go:141] libmachine: (multinode-768483) DBG | domain multinode-768483 has defined IP address 192.168.39.28 and MAC address 52:54:00:e5:c3:ae in network mk-multinode-768483
	I0912 22:29:51.306317   44139 main.go:141] libmachine: (multinode-768483) Calling .GetSSHPort
	I0912 22:29:51.306529   44139 main.go:141] libmachine: (multinode-768483) Calling .GetSSHKeyPath
	I0912 22:29:51.306711   44139 main.go:141] libmachine: (multinode-768483) Calling .GetSSHKeyPath
	I0912 22:29:51.306877   44139 main.go:141] libmachine: (multinode-768483) Calling .GetSSHUsername
	I0912 22:29:51.307042   44139 main.go:141] libmachine: Using SSH client type: native
	I0912 22:29:51.307236   44139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.28 22 <nil> <nil>}
	I0912 22:29:51.307254   44139 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0912 22:31:21.951473   44139 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0912 22:31:21.951499   44139 machine.go:96] duration metric: took 1m31.403688955s to provisionDockerMachine
	I0912 22:31:21.951522   44139 start.go:293] postStartSetup for "multinode-768483" (driver="kvm2")
	I0912 22:31:21.951533   44139 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0912 22:31:21.951548   44139 main.go:141] libmachine: (multinode-768483) Calling .DriverName
	I0912 22:31:21.951849   44139 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0912 22:31:21.951874   44139 main.go:141] libmachine: (multinode-768483) Calling .GetSSHHostname
	I0912 22:31:21.955323   44139 main.go:141] libmachine: (multinode-768483) DBG | domain multinode-768483 has defined MAC address 52:54:00:e5:c3:ae in network mk-multinode-768483
	I0912 22:31:21.955965   44139 main.go:141] libmachine: (multinode-768483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:c3:ae", ip: ""} in network mk-multinode-768483: {Iface:virbr1 ExpiryTime:2024-09-12 23:24:27 +0000 UTC Type:0 Mac:52:54:00:e5:c3:ae Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:multinode-768483 Clientid:01:52:54:00:e5:c3:ae}
	I0912 22:31:21.955991   44139 main.go:141] libmachine: (multinode-768483) DBG | domain multinode-768483 has defined IP address 192.168.39.28 and MAC address 52:54:00:e5:c3:ae in network mk-multinode-768483
	I0912 22:31:21.956187   44139 main.go:141] libmachine: (multinode-768483) Calling .GetSSHPort
	I0912 22:31:21.956423   44139 main.go:141] libmachine: (multinode-768483) Calling .GetSSHKeyPath
	I0912 22:31:21.956603   44139 main.go:141] libmachine: (multinode-768483) Calling .GetSSHUsername
	I0912 22:31:21.956788   44139 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/multinode-768483/id_rsa Username:docker}
	I0912 22:31:22.046112   44139 ssh_runner.go:195] Run: cat /etc/os-release
	I0912 22:31:22.050120   44139 command_runner.go:130] > NAME=Buildroot
	I0912 22:31:22.050137   44139 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0912 22:31:22.050144   44139 command_runner.go:130] > ID=buildroot
	I0912 22:31:22.050150   44139 command_runner.go:130] > VERSION_ID=2023.02.9
	I0912 22:31:22.050158   44139 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0912 22:31:22.050315   44139 info.go:137] Remote host: Buildroot 2023.02.9
	I0912 22:31:22.050338   44139 filesync.go:126] Scanning /home/jenkins/minikube-integration/19616-5891/.minikube/addons for local assets ...
	I0912 22:31:22.050412   44139 filesync.go:126] Scanning /home/jenkins/minikube-integration/19616-5891/.minikube/files for local assets ...
	I0912 22:31:22.050492   44139 filesync.go:149] local asset: /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem -> 130832.pem in /etc/ssl/certs
	I0912 22:31:22.050509   44139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem -> /etc/ssl/certs/130832.pem
	I0912 22:31:22.050605   44139 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0912 22:31:22.061250   44139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem --> /etc/ssl/certs/130832.pem (1708 bytes)
	I0912 22:31:22.084881   44139 start.go:296] duration metric: took 133.324038ms for postStartSetup
	I0912 22:31:22.084929   44139 fix.go:56] duration metric: took 1m31.559496697s for fixHost
	I0912 22:31:22.084953   44139 main.go:141] libmachine: (multinode-768483) Calling .GetSSHHostname
	I0912 22:31:22.087609   44139 main.go:141] libmachine: (multinode-768483) DBG | domain multinode-768483 has defined MAC address 52:54:00:e5:c3:ae in network mk-multinode-768483
	I0912 22:31:22.087983   44139 main.go:141] libmachine: (multinode-768483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:c3:ae", ip: ""} in network mk-multinode-768483: {Iface:virbr1 ExpiryTime:2024-09-12 23:24:27 +0000 UTC Type:0 Mac:52:54:00:e5:c3:ae Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:multinode-768483 Clientid:01:52:54:00:e5:c3:ae}
	I0912 22:31:22.088008   44139 main.go:141] libmachine: (multinode-768483) DBG | domain multinode-768483 has defined IP address 192.168.39.28 and MAC address 52:54:00:e5:c3:ae in network mk-multinode-768483
	I0912 22:31:22.088163   44139 main.go:141] libmachine: (multinode-768483) Calling .GetSSHPort
	I0912 22:31:22.088382   44139 main.go:141] libmachine: (multinode-768483) Calling .GetSSHKeyPath
	I0912 22:31:22.088532   44139 main.go:141] libmachine: (multinode-768483) Calling .GetSSHKeyPath
	I0912 22:31:22.088646   44139 main.go:141] libmachine: (multinode-768483) Calling .GetSSHUsername
	I0912 22:31:22.088814   44139 main.go:141] libmachine: Using SSH client type: native
	I0912 22:31:22.088985   44139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.28 22 <nil> <nil>}
	I0912 22:31:22.088996   44139 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0912 22:31:22.198081   44139 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726180282.174502281
	
	I0912 22:31:22.198112   44139 fix.go:216] guest clock: 1726180282.174502281
	I0912 22:31:22.198124   44139 fix.go:229] Guest: 2024-09-12 22:31:22.174502281 +0000 UTC Remote: 2024-09-12 22:31:22.084933745 +0000 UTC m=+91.690259611 (delta=89.568536ms)
	I0912 22:31:22.198153   44139 fix.go:200] guest clock delta is within tolerance: 89.568536ms
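	The delta that fix.go reports is simply the guest timestamp minus the host-side timestamp recorded for the same instant; redoing the subtraction with the two values logged above (illustrative only, assuming bc is available):
	echo '1726180282.174502281 - 1726180282.084933745' | bc
	# .089568536 seconds, i.e. the 89.568536ms delta reported as within tolerance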
	I0912 22:31:22.198165   44139 start.go:83] releasing machines lock for "multinode-768483", held for 1m31.67274177s
	I0912 22:31:22.198193   44139 main.go:141] libmachine: (multinode-768483) Calling .DriverName
	I0912 22:31:22.198478   44139 main.go:141] libmachine: (multinode-768483) Calling .GetIP
	I0912 22:31:22.201101   44139 main.go:141] libmachine: (multinode-768483) DBG | domain multinode-768483 has defined MAC address 52:54:00:e5:c3:ae in network mk-multinode-768483
	I0912 22:31:22.201455   44139 main.go:141] libmachine: (multinode-768483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:c3:ae", ip: ""} in network mk-multinode-768483: {Iface:virbr1 ExpiryTime:2024-09-12 23:24:27 +0000 UTC Type:0 Mac:52:54:00:e5:c3:ae Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:multinode-768483 Clientid:01:52:54:00:e5:c3:ae}
	I0912 22:31:22.201491   44139 main.go:141] libmachine: (multinode-768483) DBG | domain multinode-768483 has defined IP address 192.168.39.28 and MAC address 52:54:00:e5:c3:ae in network mk-multinode-768483
	I0912 22:31:22.201636   44139 main.go:141] libmachine: (multinode-768483) Calling .DriverName
	I0912 22:31:22.202340   44139 main.go:141] libmachine: (multinode-768483) Calling .DriverName
	I0912 22:31:22.202481   44139 main.go:141] libmachine: (multinode-768483) Calling .DriverName
	I0912 22:31:22.202552   44139 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0912 22:31:22.202599   44139 main.go:141] libmachine: (multinode-768483) Calling .GetSSHHostname
	I0912 22:31:22.202713   44139 ssh_runner.go:195] Run: cat /version.json
	I0912 22:31:22.202747   44139 main.go:141] libmachine: (multinode-768483) Calling .GetSSHHostname
	I0912 22:31:22.205335   44139 main.go:141] libmachine: (multinode-768483) DBG | domain multinode-768483 has defined MAC address 52:54:00:e5:c3:ae in network mk-multinode-768483
	I0912 22:31:22.205711   44139 main.go:141] libmachine: (multinode-768483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:c3:ae", ip: ""} in network mk-multinode-768483: {Iface:virbr1 ExpiryTime:2024-09-12 23:24:27 +0000 UTC Type:0 Mac:52:54:00:e5:c3:ae Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:multinode-768483 Clientid:01:52:54:00:e5:c3:ae}
	I0912 22:31:22.205741   44139 main.go:141] libmachine: (multinode-768483) DBG | domain multinode-768483 has defined IP address 192.168.39.28 and MAC address 52:54:00:e5:c3:ae in network mk-multinode-768483
	I0912 22:31:22.205894   44139 main.go:141] libmachine: (multinode-768483) DBG | domain multinode-768483 has defined MAC address 52:54:00:e5:c3:ae in network mk-multinode-768483
	I0912 22:31:22.205901   44139 main.go:141] libmachine: (multinode-768483) Calling .GetSSHPort
	I0912 22:31:22.206098   44139 main.go:141] libmachine: (multinode-768483) Calling .GetSSHKeyPath
	I0912 22:31:22.206256   44139 main.go:141] libmachine: (multinode-768483) Calling .GetSSHUsername
	I0912 22:31:22.206391   44139 main.go:141] libmachine: (multinode-768483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:c3:ae", ip: ""} in network mk-multinode-768483: {Iface:virbr1 ExpiryTime:2024-09-12 23:24:27 +0000 UTC Type:0 Mac:52:54:00:e5:c3:ae Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:multinode-768483 Clientid:01:52:54:00:e5:c3:ae}
	I0912 22:31:22.206395   44139 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/multinode-768483/id_rsa Username:docker}
	I0912 22:31:22.206418   44139 main.go:141] libmachine: (multinode-768483) DBG | domain multinode-768483 has defined IP address 192.168.39.28 and MAC address 52:54:00:e5:c3:ae in network mk-multinode-768483
	I0912 22:31:22.206567   44139 main.go:141] libmachine: (multinode-768483) Calling .GetSSHPort
	I0912 22:31:22.206715   44139 main.go:141] libmachine: (multinode-768483) Calling .GetSSHKeyPath
	I0912 22:31:22.206900   44139 main.go:141] libmachine: (multinode-768483) Calling .GetSSHUsername
	I0912 22:31:22.207019   44139 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/multinode-768483/id_rsa Username:docker}
	I0912 22:31:22.323124   44139 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0912 22:31:22.323971   44139 command_runner.go:130] > {"iso_version": "v1.34.0-1726156389-19616", "kicbase_version": "v0.0.45-1725963390-19606", "minikube_version": "v1.34.0", "commit": "5022c44a3509464df545efc115fbb6c3f1b5e972"}
	I0912 22:31:22.324135   44139 ssh_runner.go:195] Run: systemctl --version
	I0912 22:31:22.329664   44139 command_runner.go:130] > systemd 252 (252)
	I0912 22:31:22.329692   44139 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0912 22:31:22.329888   44139 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0912 22:31:22.493499   44139 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0912 22:31:22.499028   44139 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0912 22:31:22.499134   44139 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0912 22:31:22.499192   44139 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0912 22:31:22.508504   44139 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0912 22:31:22.508526   44139 start.go:495] detecting cgroup driver to use...
	I0912 22:31:22.508623   44139 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0912 22:31:22.525119   44139 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0912 22:31:22.538949   44139 docker.go:217] disabling cri-docker service (if available) ...
	I0912 22:31:22.539049   44139 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0912 22:31:22.553483   44139 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0912 22:31:22.568286   44139 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0912 22:31:22.721088   44139 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0912 22:31:22.866542   44139 docker.go:233] disabling docker service ...
	I0912 22:31:22.866607   44139 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0912 22:31:22.889787   44139 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0912 22:31:22.903862   44139 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0912 22:31:23.045114   44139 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0912 22:31:23.183642   44139 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0912 22:31:23.197976   44139 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0912 22:31:23.215227   44139 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0912 22:31:23.215272   44139 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0912 22:31:23.215331   44139 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 22:31:23.225423   44139 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0912 22:31:23.225486   44139 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 22:31:23.235791   44139 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 22:31:23.245642   44139 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 22:31:23.255728   44139 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0912 22:31:23.266576   44139 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 22:31:23.276598   44139 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 22:31:23.286432   44139 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 22:31:23.296427   44139 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0912 22:31:23.305495   44139 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0912 22:31:23.305601   44139 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0912 22:31:23.314684   44139 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 22:31:23.449770   44139 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0912 22:31:31.772339   44139 ssh_runner.go:235] Completed: sudo systemctl restart crio: (8.322537669s)
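	Taken together, the tee/sed edits above point crictl at the CRI-O socket, set the pause image to registry.k8s.io/pause:3.10, switch CRI-O to the cgroupfs cgroup manager with a per-pod conmon cgroup, and allow unprivileged low ports via default_sysctls. An illustrative way to double-check the result from the host after the restart, using minikube ssh against this profile (the test itself does not run this):
	minikube -p multinode-768483 ssh -- sudo grep -e pause_image -e cgroup_manager -e conmon_cgroup -e ip_unprivileged_port_start /etc/crio/crio.conf.d/02-crio.conf
	# roughly: pause_image = "registry.k8s.io/pause:3.10", cgroup_manager = "cgroupfs", conmon_cgroup = "pod", "net.ipv4.ip_unprivileged_port_start=0",
	minikube -p multinode-768483 ssh -- cat /etc/crictl.yaml
	# runtime-endpoint: unix:///var/run/crio/crio.sock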
	I0912 22:31:31.772367   44139 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0912 22:31:31.772413   44139 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0912 22:31:31.777317   44139 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0912 22:31:31.777348   44139 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0912 22:31:31.777358   44139 command_runner.go:130] > Device: 0,22	Inode: 1339        Links: 1
	I0912 22:31:31.777368   44139 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0912 22:31:31.777376   44139 command_runner.go:130] > Access: 2024-09-12 22:31:31.645494151 +0000
	I0912 22:31:31.777387   44139 command_runner.go:130] > Modify: 2024-09-12 22:31:31.645494151 +0000
	I0912 22:31:31.777395   44139 command_runner.go:130] > Change: 2024-09-12 22:31:31.645494151 +0000
	I0912 22:31:31.777400   44139 command_runner.go:130] >  Birth: -
	I0912 22:31:31.777420   44139 start.go:563] Will wait 60s for crictl version
	I0912 22:31:31.777463   44139 ssh_runner.go:195] Run: which crictl
	I0912 22:31:31.780962   44139 command_runner.go:130] > /usr/bin/crictl
	I0912 22:31:31.781015   44139 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0912 22:31:31.816889   44139 command_runner.go:130] > Version:  0.1.0
	I0912 22:31:31.816912   44139 command_runner.go:130] > RuntimeName:  cri-o
	I0912 22:31:31.816918   44139 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0912 22:31:31.816925   44139 command_runner.go:130] > RuntimeApiVersion:  v1
	I0912 22:31:31.816992   44139 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0912 22:31:31.817091   44139 ssh_runner.go:195] Run: crio --version
	I0912 22:31:31.843722   44139 command_runner.go:130] > crio version 1.29.1
	I0912 22:31:31.843743   44139 command_runner.go:130] > Version:        1.29.1
	I0912 22:31:31.843751   44139 command_runner.go:130] > GitCommit:      unknown
	I0912 22:31:31.843755   44139 command_runner.go:130] > GitCommitDate:  unknown
	I0912 22:31:31.843759   44139 command_runner.go:130] > GitTreeState:   clean
	I0912 22:31:31.843765   44139 command_runner.go:130] > BuildDate:      2024-09-12T19:33:02Z
	I0912 22:31:31.843769   44139 command_runner.go:130] > GoVersion:      go1.21.6
	I0912 22:31:31.843773   44139 command_runner.go:130] > Compiler:       gc
	I0912 22:31:31.843777   44139 command_runner.go:130] > Platform:       linux/amd64
	I0912 22:31:31.843787   44139 command_runner.go:130] > Linkmode:       dynamic
	I0912 22:31:31.843800   44139 command_runner.go:130] > BuildTags:      
	I0912 22:31:31.843807   44139 command_runner.go:130] >   containers_image_ostree_stub
	I0912 22:31:31.843816   44139 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0912 22:31:31.843824   44139 command_runner.go:130] >   btrfs_noversion
	I0912 22:31:31.843832   44139 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0912 22:31:31.843843   44139 command_runner.go:130] >   libdm_no_deferred_remove
	I0912 22:31:31.843847   44139 command_runner.go:130] >   seccomp
	I0912 22:31:31.843852   44139 command_runner.go:130] > LDFlags:          unknown
	I0912 22:31:31.843855   44139 command_runner.go:130] > SeccompEnabled:   true
	I0912 22:31:31.843860   44139 command_runner.go:130] > AppArmorEnabled:  false
	I0912 22:31:31.843939   44139 ssh_runner.go:195] Run: crio --version
	I0912 22:31:31.874909   44139 command_runner.go:130] > crio version 1.29.1
	I0912 22:31:31.874934   44139 command_runner.go:130] > Version:        1.29.1
	I0912 22:31:31.874940   44139 command_runner.go:130] > GitCommit:      unknown
	I0912 22:31:31.874944   44139 command_runner.go:130] > GitCommitDate:  unknown
	I0912 22:31:31.874948   44139 command_runner.go:130] > GitTreeState:   clean
	I0912 22:31:31.874954   44139 command_runner.go:130] > BuildDate:      2024-09-12T19:33:02Z
	I0912 22:31:31.874958   44139 command_runner.go:130] > GoVersion:      go1.21.6
	I0912 22:31:31.874963   44139 command_runner.go:130] > Compiler:       gc
	I0912 22:31:31.874967   44139 command_runner.go:130] > Platform:       linux/amd64
	I0912 22:31:31.874971   44139 command_runner.go:130] > Linkmode:       dynamic
	I0912 22:31:31.874976   44139 command_runner.go:130] > BuildTags:      
	I0912 22:31:31.874983   44139 command_runner.go:130] >   containers_image_ostree_stub
	I0912 22:31:31.874990   44139 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0912 22:31:31.874995   44139 command_runner.go:130] >   btrfs_noversion
	I0912 22:31:31.875002   44139 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0912 22:31:31.875013   44139 command_runner.go:130] >   libdm_no_deferred_remove
	I0912 22:31:31.875019   44139 command_runner.go:130] >   seccomp
	I0912 22:31:31.875026   44139 command_runner.go:130] > LDFlags:          unknown
	I0912 22:31:31.875034   44139 command_runner.go:130] > SeccompEnabled:   true
	I0912 22:31:31.875038   44139 command_runner.go:130] > AppArmorEnabled:  false
	I0912 22:31:31.878333   44139 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0912 22:31:31.879817   44139 main.go:141] libmachine: (multinode-768483) Calling .GetIP
	I0912 22:31:31.882687   44139 main.go:141] libmachine: (multinode-768483) DBG | domain multinode-768483 has defined MAC address 52:54:00:e5:c3:ae in network mk-multinode-768483
	I0912 22:31:31.883054   44139 main.go:141] libmachine: (multinode-768483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:c3:ae", ip: ""} in network mk-multinode-768483: {Iface:virbr1 ExpiryTime:2024-09-12 23:24:27 +0000 UTC Type:0 Mac:52:54:00:e5:c3:ae Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:multinode-768483 Clientid:01:52:54:00:e5:c3:ae}
	I0912 22:31:31.883081   44139 main.go:141] libmachine: (multinode-768483) DBG | domain multinode-768483 has defined IP address 192.168.39.28 and MAC address 52:54:00:e5:c3:ae in network mk-multinode-768483
	I0912 22:31:31.883271   44139 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0912 22:31:31.887368   44139 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0912 22:31:31.887481   44139 kubeadm.go:883] updating cluster {Name:multinode-768483 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-768483 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.28 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.230 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.92 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0912 22:31:31.887718   44139 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0912 22:31:31.887767   44139 ssh_runner.go:195] Run: sudo crictl images --output json
	I0912 22:31:31.930477   44139 command_runner.go:130] > {
	I0912 22:31:31.930505   44139 command_runner.go:130] >   "images": [
	I0912 22:31:31.930530   44139 command_runner.go:130] >     {
	I0912 22:31:31.930543   44139 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0912 22:31:31.930550   44139 command_runner.go:130] >       "repoTags": [
	I0912 22:31:31.930559   44139 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0912 22:31:31.930565   44139 command_runner.go:130] >       ],
	I0912 22:31:31.930572   44139 command_runner.go:130] >       "repoDigests": [
	I0912 22:31:31.930585   44139 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0912 22:31:31.930601   44139 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0912 22:31:31.930608   44139 command_runner.go:130] >       ],
	I0912 22:31:31.930615   44139 command_runner.go:130] >       "size": "87190579",
	I0912 22:31:31.930621   44139 command_runner.go:130] >       "uid": null,
	I0912 22:31:31.930629   44139 command_runner.go:130] >       "username": "",
	I0912 22:31:31.930664   44139 command_runner.go:130] >       "spec": null,
	I0912 22:31:31.930674   44139 command_runner.go:130] >       "pinned": false
	I0912 22:31:31.930680   44139 command_runner.go:130] >     },
	I0912 22:31:31.930688   44139 command_runner.go:130] >     {
	I0912 22:31:31.930697   44139 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0912 22:31:31.930707   44139 command_runner.go:130] >       "repoTags": [
	I0912 22:31:31.930716   44139 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0912 22:31:31.930725   44139 command_runner.go:130] >       ],
	I0912 22:31:31.930732   44139 command_runner.go:130] >       "repoDigests": [
	I0912 22:31:31.930746   44139 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0912 22:31:31.930761   44139 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0912 22:31:31.930770   44139 command_runner.go:130] >       ],
	I0912 22:31:31.930779   44139 command_runner.go:130] >       "size": "1363676",
	I0912 22:31:31.930788   44139 command_runner.go:130] >       "uid": null,
	I0912 22:31:31.930812   44139 command_runner.go:130] >       "username": "",
	I0912 22:31:31.930821   44139 command_runner.go:130] >       "spec": null,
	I0912 22:31:31.930828   44139 command_runner.go:130] >       "pinned": false
	I0912 22:31:31.930833   44139 command_runner.go:130] >     },
	I0912 22:31:31.930838   44139 command_runner.go:130] >     {
	I0912 22:31:31.930847   44139 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0912 22:31:31.930855   44139 command_runner.go:130] >       "repoTags": [
	I0912 22:31:31.930871   44139 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0912 22:31:31.930879   44139 command_runner.go:130] >       ],
	I0912 22:31:31.930887   44139 command_runner.go:130] >       "repoDigests": [
	I0912 22:31:31.930900   44139 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0912 22:31:31.930914   44139 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0912 22:31:31.930924   44139 command_runner.go:130] >       ],
	I0912 22:31:31.930934   44139 command_runner.go:130] >       "size": "31470524",
	I0912 22:31:31.930943   44139 command_runner.go:130] >       "uid": null,
	I0912 22:31:31.930948   44139 command_runner.go:130] >       "username": "",
	I0912 22:31:31.930957   44139 command_runner.go:130] >       "spec": null,
	I0912 22:31:31.930963   44139 command_runner.go:130] >       "pinned": false
	I0912 22:31:31.930971   44139 command_runner.go:130] >     },
	I0912 22:31:31.930977   44139 command_runner.go:130] >     {
	I0912 22:31:31.930988   44139 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0912 22:31:31.930997   44139 command_runner.go:130] >       "repoTags": [
	I0912 22:31:31.931005   44139 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0912 22:31:31.931012   44139 command_runner.go:130] >       ],
	I0912 22:31:31.931017   44139 command_runner.go:130] >       "repoDigests": [
	I0912 22:31:31.931030   44139 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0912 22:31:31.931051   44139 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0912 22:31:31.931060   44139 command_runner.go:130] >       ],
	I0912 22:31:31.931066   44139 command_runner.go:130] >       "size": "63273227",
	I0912 22:31:31.931075   44139 command_runner.go:130] >       "uid": null,
	I0912 22:31:31.931082   44139 command_runner.go:130] >       "username": "nonroot",
	I0912 22:31:31.931090   44139 command_runner.go:130] >       "spec": null,
	I0912 22:31:31.931098   44139 command_runner.go:130] >       "pinned": false
	I0912 22:31:31.931103   44139 command_runner.go:130] >     },
	I0912 22:31:31.931110   44139 command_runner.go:130] >     {
	I0912 22:31:31.931119   44139 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0912 22:31:31.931127   44139 command_runner.go:130] >       "repoTags": [
	I0912 22:31:31.931133   44139 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0912 22:31:31.931141   44139 command_runner.go:130] >       ],
	I0912 22:31:31.931150   44139 command_runner.go:130] >       "repoDigests": [
	I0912 22:31:31.931163   44139 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0912 22:31:31.931179   44139 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0912 22:31:31.931188   44139 command_runner.go:130] >       ],
	I0912 22:31:31.931202   44139 command_runner.go:130] >       "size": "149009664",
	I0912 22:31:31.931211   44139 command_runner.go:130] >       "uid": {
	I0912 22:31:31.931221   44139 command_runner.go:130] >         "value": "0"
	I0912 22:31:31.931229   44139 command_runner.go:130] >       },
	I0912 22:31:31.931234   44139 command_runner.go:130] >       "username": "",
	I0912 22:31:31.931242   44139 command_runner.go:130] >       "spec": null,
	I0912 22:31:31.931248   44139 command_runner.go:130] >       "pinned": false
	I0912 22:31:31.931255   44139 command_runner.go:130] >     },
	I0912 22:31:31.931260   44139 command_runner.go:130] >     {
	I0912 22:31:31.931271   44139 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0912 22:31:31.931280   44139 command_runner.go:130] >       "repoTags": [
	I0912 22:31:31.931288   44139 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0912 22:31:31.931297   44139 command_runner.go:130] >       ],
	I0912 22:31:31.931305   44139 command_runner.go:130] >       "repoDigests": [
	I0912 22:31:31.931319   44139 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0912 22:31:31.931333   44139 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0912 22:31:31.931342   44139 command_runner.go:130] >       ],
	I0912 22:31:31.931349   44139 command_runner.go:130] >       "size": "95237600",
	I0912 22:31:31.931358   44139 command_runner.go:130] >       "uid": {
	I0912 22:31:31.931365   44139 command_runner.go:130] >         "value": "0"
	I0912 22:31:31.931374   44139 command_runner.go:130] >       },
	I0912 22:31:31.931381   44139 command_runner.go:130] >       "username": "",
	I0912 22:31:31.931390   44139 command_runner.go:130] >       "spec": null,
	I0912 22:31:31.931399   44139 command_runner.go:130] >       "pinned": false
	I0912 22:31:31.931406   44139 command_runner.go:130] >     },
	I0912 22:31:31.931411   44139 command_runner.go:130] >     {
	I0912 22:31:31.931423   44139 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0912 22:31:31.931433   44139 command_runner.go:130] >       "repoTags": [
	I0912 22:31:31.931442   44139 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0912 22:31:31.931450   44139 command_runner.go:130] >       ],
	I0912 22:31:31.931457   44139 command_runner.go:130] >       "repoDigests": [
	I0912 22:31:31.931471   44139 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0912 22:31:31.931486   44139 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0912 22:31:31.931495   44139 command_runner.go:130] >       ],
	I0912 22:31:31.931505   44139 command_runner.go:130] >       "size": "89437508",
	I0912 22:31:31.931518   44139 command_runner.go:130] >       "uid": {
	I0912 22:31:31.931536   44139 command_runner.go:130] >         "value": "0"
	I0912 22:31:31.931545   44139 command_runner.go:130] >       },
	I0912 22:31:31.931552   44139 command_runner.go:130] >       "username": "",
	I0912 22:31:31.931562   44139 command_runner.go:130] >       "spec": null,
	I0912 22:31:31.931571   44139 command_runner.go:130] >       "pinned": false
	I0912 22:31:31.931578   44139 command_runner.go:130] >     },
	I0912 22:31:31.931586   44139 command_runner.go:130] >     {
	I0912 22:31:31.931595   44139 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0912 22:31:31.931601   44139 command_runner.go:130] >       "repoTags": [
	I0912 22:31:31.931611   44139 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0912 22:31:31.931619   44139 command_runner.go:130] >       ],
	I0912 22:31:31.931628   44139 command_runner.go:130] >       "repoDigests": [
	I0912 22:31:31.931656   44139 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0912 22:31:31.931670   44139 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0912 22:31:31.931676   44139 command_runner.go:130] >       ],
	I0912 22:31:31.931684   44139 command_runner.go:130] >       "size": "92733849",
	I0912 22:31:31.931693   44139 command_runner.go:130] >       "uid": null,
	I0912 22:31:31.931699   44139 command_runner.go:130] >       "username": "",
	I0912 22:31:31.931705   44139 command_runner.go:130] >       "spec": null,
	I0912 22:31:31.931711   44139 command_runner.go:130] >       "pinned": false
	I0912 22:31:31.931717   44139 command_runner.go:130] >     },
	I0912 22:31:31.931721   44139 command_runner.go:130] >     {
	I0912 22:31:31.931729   44139 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0912 22:31:31.931734   44139 command_runner.go:130] >       "repoTags": [
	I0912 22:31:31.931741   44139 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0912 22:31:31.931746   44139 command_runner.go:130] >       ],
	I0912 22:31:31.931751   44139 command_runner.go:130] >       "repoDigests": [
	I0912 22:31:31.931761   44139 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0912 22:31:31.931771   44139 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0912 22:31:31.931775   44139 command_runner.go:130] >       ],
	I0912 22:31:31.931782   44139 command_runner.go:130] >       "size": "68420934",
	I0912 22:31:31.931787   44139 command_runner.go:130] >       "uid": {
	I0912 22:31:31.931793   44139 command_runner.go:130] >         "value": "0"
	I0912 22:31:31.931798   44139 command_runner.go:130] >       },
	I0912 22:31:31.931804   44139 command_runner.go:130] >       "username": "",
	I0912 22:31:31.931814   44139 command_runner.go:130] >       "spec": null,
	I0912 22:31:31.931831   44139 command_runner.go:130] >       "pinned": false
	I0912 22:31:31.931840   44139 command_runner.go:130] >     },
	I0912 22:31:31.931846   44139 command_runner.go:130] >     {
	I0912 22:31:31.931857   44139 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0912 22:31:31.931866   44139 command_runner.go:130] >       "repoTags": [
	I0912 22:31:31.931880   44139 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0912 22:31:31.931889   44139 command_runner.go:130] >       ],
	I0912 22:31:31.931897   44139 command_runner.go:130] >       "repoDigests": [
	I0912 22:31:31.931909   44139 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0912 22:31:31.931922   44139 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0912 22:31:31.931930   44139 command_runner.go:130] >       ],
	I0912 22:31:31.931936   44139 command_runner.go:130] >       "size": "742080",
	I0912 22:31:31.931943   44139 command_runner.go:130] >       "uid": {
	I0912 22:31:31.931949   44139 command_runner.go:130] >         "value": "65535"
	I0912 22:31:31.931957   44139 command_runner.go:130] >       },
	I0912 22:31:31.931963   44139 command_runner.go:130] >       "username": "",
	I0912 22:31:31.931969   44139 command_runner.go:130] >       "spec": null,
	I0912 22:31:31.931977   44139 command_runner.go:130] >       "pinned": true
	I0912 22:31:31.931982   44139 command_runner.go:130] >     }
	I0912 22:31:31.931990   44139 command_runner.go:130] >   ]
	I0912 22:31:31.931996   44139 command_runner.go:130] > }
	I0912 22:31:31.932311   44139 crio.go:514] all images are preloaded for cri-o runtime.
	I0912 22:31:31.932336   44139 crio.go:433] Images already preloaded, skipping extraction
	I0912 22:31:31.932388   44139 ssh_runner.go:195] Run: sudo crictl images --output json
	I0912 22:31:31.968061   44139 command_runner.go:130] > {
	I0912 22:31:31.968103   44139 command_runner.go:130] >   "images": [
	I0912 22:31:31.968109   44139 command_runner.go:130] >     {
	I0912 22:31:31.968117   44139 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0912 22:31:31.968122   44139 command_runner.go:130] >       "repoTags": [
	I0912 22:31:31.968129   44139 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0912 22:31:31.968133   44139 command_runner.go:130] >       ],
	I0912 22:31:31.968137   44139 command_runner.go:130] >       "repoDigests": [
	I0912 22:31:31.968147   44139 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0912 22:31:31.968154   44139 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0912 22:31:31.968158   44139 command_runner.go:130] >       ],
	I0912 22:31:31.968163   44139 command_runner.go:130] >       "size": "87190579",
	I0912 22:31:31.968167   44139 command_runner.go:130] >       "uid": null,
	I0912 22:31:31.968171   44139 command_runner.go:130] >       "username": "",
	I0912 22:31:31.968176   44139 command_runner.go:130] >       "spec": null,
	I0912 22:31:31.968184   44139 command_runner.go:130] >       "pinned": false
	I0912 22:31:31.968187   44139 command_runner.go:130] >     },
	I0912 22:31:31.968190   44139 command_runner.go:130] >     {
	I0912 22:31:31.968196   44139 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0912 22:31:31.968202   44139 command_runner.go:130] >       "repoTags": [
	I0912 22:31:31.968207   44139 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0912 22:31:31.968211   44139 command_runner.go:130] >       ],
	I0912 22:31:31.968215   44139 command_runner.go:130] >       "repoDigests": [
	I0912 22:31:31.968221   44139 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0912 22:31:31.968232   44139 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0912 22:31:31.968236   44139 command_runner.go:130] >       ],
	I0912 22:31:31.968244   44139 command_runner.go:130] >       "size": "1363676",
	I0912 22:31:31.968248   44139 command_runner.go:130] >       "uid": null,
	I0912 22:31:31.968261   44139 command_runner.go:130] >       "username": "",
	I0912 22:31:31.968271   44139 command_runner.go:130] >       "spec": null,
	I0912 22:31:31.968278   44139 command_runner.go:130] >       "pinned": false
	I0912 22:31:31.968287   44139 command_runner.go:130] >     },
	I0912 22:31:31.968290   44139 command_runner.go:130] >     {
	I0912 22:31:31.968297   44139 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0912 22:31:31.968302   44139 command_runner.go:130] >       "repoTags": [
	I0912 22:31:31.968309   44139 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0912 22:31:31.968314   44139 command_runner.go:130] >       ],
	I0912 22:31:31.968318   44139 command_runner.go:130] >       "repoDigests": [
	I0912 22:31:31.968326   44139 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0912 22:31:31.968336   44139 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0912 22:31:31.968340   44139 command_runner.go:130] >       ],
	I0912 22:31:31.968346   44139 command_runner.go:130] >       "size": "31470524",
	I0912 22:31:31.968350   44139 command_runner.go:130] >       "uid": null,
	I0912 22:31:31.968355   44139 command_runner.go:130] >       "username": "",
	I0912 22:31:31.968361   44139 command_runner.go:130] >       "spec": null,
	I0912 22:31:31.968365   44139 command_runner.go:130] >       "pinned": false
	I0912 22:31:31.968369   44139 command_runner.go:130] >     },
	I0912 22:31:31.968372   44139 command_runner.go:130] >     {
	I0912 22:31:31.968380   44139 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0912 22:31:31.968385   44139 command_runner.go:130] >       "repoTags": [
	I0912 22:31:31.968391   44139 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0912 22:31:31.968395   44139 command_runner.go:130] >       ],
	I0912 22:31:31.968399   44139 command_runner.go:130] >       "repoDigests": [
	I0912 22:31:31.968409   44139 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0912 22:31:31.968426   44139 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0912 22:31:31.968434   44139 command_runner.go:130] >       ],
	I0912 22:31:31.968438   44139 command_runner.go:130] >       "size": "63273227",
	I0912 22:31:31.968441   44139 command_runner.go:130] >       "uid": null,
	I0912 22:31:31.968445   44139 command_runner.go:130] >       "username": "nonroot",
	I0912 22:31:31.968449   44139 command_runner.go:130] >       "spec": null,
	I0912 22:31:31.968454   44139 command_runner.go:130] >       "pinned": false
	I0912 22:31:31.968457   44139 command_runner.go:130] >     },
	I0912 22:31:31.968461   44139 command_runner.go:130] >     {
	I0912 22:31:31.968467   44139 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0912 22:31:31.968473   44139 command_runner.go:130] >       "repoTags": [
	I0912 22:31:31.968478   44139 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0912 22:31:31.968484   44139 command_runner.go:130] >       ],
	I0912 22:31:31.968488   44139 command_runner.go:130] >       "repoDigests": [
	I0912 22:31:31.968495   44139 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0912 22:31:31.968511   44139 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0912 22:31:31.968518   44139 command_runner.go:130] >       ],
	I0912 22:31:31.968523   44139 command_runner.go:130] >       "size": "149009664",
	I0912 22:31:31.968530   44139 command_runner.go:130] >       "uid": {
	I0912 22:31:31.968534   44139 command_runner.go:130] >         "value": "0"
	I0912 22:31:31.968541   44139 command_runner.go:130] >       },
	I0912 22:31:31.968545   44139 command_runner.go:130] >       "username": "",
	I0912 22:31:31.968551   44139 command_runner.go:130] >       "spec": null,
	I0912 22:31:31.968556   44139 command_runner.go:130] >       "pinned": false
	I0912 22:31:31.968563   44139 command_runner.go:130] >     },
	I0912 22:31:31.968566   44139 command_runner.go:130] >     {
	I0912 22:31:31.968572   44139 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0912 22:31:31.968579   44139 command_runner.go:130] >       "repoTags": [
	I0912 22:31:31.968584   44139 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0912 22:31:31.968591   44139 command_runner.go:130] >       ],
	I0912 22:31:31.968595   44139 command_runner.go:130] >       "repoDigests": [
	I0912 22:31:31.968602   44139 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0912 22:31:31.968612   44139 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0912 22:31:31.968615   44139 command_runner.go:130] >       ],
	I0912 22:31:31.968620   44139 command_runner.go:130] >       "size": "95237600",
	I0912 22:31:31.968627   44139 command_runner.go:130] >       "uid": {
	I0912 22:31:31.968631   44139 command_runner.go:130] >         "value": "0"
	I0912 22:31:31.968635   44139 command_runner.go:130] >       },
	I0912 22:31:31.968639   44139 command_runner.go:130] >       "username": "",
	I0912 22:31:31.968643   44139 command_runner.go:130] >       "spec": null,
	I0912 22:31:31.968647   44139 command_runner.go:130] >       "pinned": false
	I0912 22:31:31.968650   44139 command_runner.go:130] >     },
	I0912 22:31:31.968653   44139 command_runner.go:130] >     {
	I0912 22:31:31.968663   44139 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0912 22:31:31.968667   44139 command_runner.go:130] >       "repoTags": [
	I0912 22:31:31.968675   44139 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0912 22:31:31.968679   44139 command_runner.go:130] >       ],
	I0912 22:31:31.968683   44139 command_runner.go:130] >       "repoDigests": [
	I0912 22:31:31.968690   44139 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0912 22:31:31.968700   44139 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0912 22:31:31.968704   44139 command_runner.go:130] >       ],
	I0912 22:31:31.968709   44139 command_runner.go:130] >       "size": "89437508",
	I0912 22:31:31.968715   44139 command_runner.go:130] >       "uid": {
	I0912 22:31:31.968719   44139 command_runner.go:130] >         "value": "0"
	I0912 22:31:31.968722   44139 command_runner.go:130] >       },
	I0912 22:31:31.968726   44139 command_runner.go:130] >       "username": "",
	I0912 22:31:31.968730   44139 command_runner.go:130] >       "spec": null,
	I0912 22:31:31.968734   44139 command_runner.go:130] >       "pinned": false
	I0912 22:31:31.968737   44139 command_runner.go:130] >     },
	I0912 22:31:31.968740   44139 command_runner.go:130] >     {
	I0912 22:31:31.968748   44139 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0912 22:31:31.968752   44139 command_runner.go:130] >       "repoTags": [
	I0912 22:31:31.968758   44139 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0912 22:31:31.968766   44139 command_runner.go:130] >       ],
	I0912 22:31:31.968771   44139 command_runner.go:130] >       "repoDigests": [
	I0912 22:31:31.968791   44139 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0912 22:31:31.968800   44139 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0912 22:31:31.968804   44139 command_runner.go:130] >       ],
	I0912 22:31:31.968809   44139 command_runner.go:130] >       "size": "92733849",
	I0912 22:31:31.968815   44139 command_runner.go:130] >       "uid": null,
	I0912 22:31:31.968819   44139 command_runner.go:130] >       "username": "",
	I0912 22:31:31.968825   44139 command_runner.go:130] >       "spec": null,
	I0912 22:31:31.968829   44139 command_runner.go:130] >       "pinned": false
	I0912 22:31:31.968833   44139 command_runner.go:130] >     },
	I0912 22:31:31.968837   44139 command_runner.go:130] >     {
	I0912 22:31:31.968842   44139 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0912 22:31:31.968849   44139 command_runner.go:130] >       "repoTags": [
	I0912 22:31:31.968855   44139 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0912 22:31:31.968865   44139 command_runner.go:130] >       ],
	I0912 22:31:31.968878   44139 command_runner.go:130] >       "repoDigests": [
	I0912 22:31:31.968894   44139 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0912 22:31:31.968912   44139 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0912 22:31:31.968920   44139 command_runner.go:130] >       ],
	I0912 22:31:31.968925   44139 command_runner.go:130] >       "size": "68420934",
	I0912 22:31:31.968928   44139 command_runner.go:130] >       "uid": {
	I0912 22:31:31.968933   44139 command_runner.go:130] >         "value": "0"
	I0912 22:31:31.968940   44139 command_runner.go:130] >       },
	I0912 22:31:31.968944   44139 command_runner.go:130] >       "username": "",
	I0912 22:31:31.968948   44139 command_runner.go:130] >       "spec": null,
	I0912 22:31:31.968952   44139 command_runner.go:130] >       "pinned": false
	I0912 22:31:31.968956   44139 command_runner.go:130] >     },
	I0912 22:31:31.968959   44139 command_runner.go:130] >     {
	I0912 22:31:31.968968   44139 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0912 22:31:31.968972   44139 command_runner.go:130] >       "repoTags": [
	I0912 22:31:31.968980   44139 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0912 22:31:31.968985   44139 command_runner.go:130] >       ],
	I0912 22:31:31.968993   44139 command_runner.go:130] >       "repoDigests": [
	I0912 22:31:31.969001   44139 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0912 22:31:31.969011   44139 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0912 22:31:31.969017   44139 command_runner.go:130] >       ],
	I0912 22:31:31.969021   44139 command_runner.go:130] >       "size": "742080",
	I0912 22:31:31.969028   44139 command_runner.go:130] >       "uid": {
	I0912 22:31:31.969032   44139 command_runner.go:130] >         "value": "65535"
	I0912 22:31:31.969036   44139 command_runner.go:130] >       },
	I0912 22:31:31.969040   44139 command_runner.go:130] >       "username": "",
	I0912 22:31:31.969047   44139 command_runner.go:130] >       "spec": null,
	I0912 22:31:31.969052   44139 command_runner.go:130] >       "pinned": true
	I0912 22:31:31.969055   44139 command_runner.go:130] >     }
	I0912 22:31:31.969058   44139 command_runner.go:130] >   ]
	I0912 22:31:31.969062   44139 command_runner.go:130] > }
	I0912 22:31:31.969183   44139 crio.go:514] all images are preloaded for cri-o runtime.
	I0912 22:31:31.969194   44139 cache_images.go:84] Images are preloaded, skipping loading
	I0912 22:31:31.969202   44139 kubeadm.go:934] updating node { 192.168.39.28 8443 v1.31.1 crio true true} ...
	I0912 22:31:31.969308   44139 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-768483 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.28
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-768483 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0912 22:31:31.969386   44139 ssh_runner.go:195] Run: crio config
	I0912 22:31:32.001728   44139 command_runner.go:130] ! time="2024-09-12 22:31:31.977601025Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0912 22:31:32.007980   44139 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0912 22:31:32.015771   44139 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0912 22:31:32.015789   44139 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0912 22:31:32.015799   44139 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0912 22:31:32.015804   44139 command_runner.go:130] > #
	I0912 22:31:32.015810   44139 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0912 22:31:32.015816   44139 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0912 22:31:32.015822   44139 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0912 22:31:32.015829   44139 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0912 22:31:32.015834   44139 command_runner.go:130] > # reload'.
	I0912 22:31:32.015840   44139 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0912 22:31:32.015849   44139 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0912 22:31:32.015861   44139 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0912 22:31:32.015872   44139 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0912 22:31:32.015880   44139 command_runner.go:130] > [crio]
	I0912 22:31:32.015888   44139 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0912 22:31:32.015898   44139 command_runner.go:130] > # containers images, in this directory.
	I0912 22:31:32.015905   44139 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0912 22:31:32.015918   44139 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0912 22:31:32.015929   44139 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0912 22:31:32.015942   44139 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0912 22:31:32.015949   44139 command_runner.go:130] > # imagestore = ""
	I0912 22:31:32.015959   44139 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0912 22:31:32.015972   44139 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0912 22:31:32.015981   44139 command_runner.go:130] > storage_driver = "overlay"
	I0912 22:31:32.015989   44139 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0912 22:31:32.015999   44139 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0912 22:31:32.016004   44139 command_runner.go:130] > storage_option = [
	I0912 22:31:32.016008   44139 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0912 22:31:32.016014   44139 command_runner.go:130] > ]
	I0912 22:31:32.016020   44139 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0912 22:31:32.016028   44139 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0912 22:31:32.016032   44139 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0912 22:31:32.016041   44139 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0912 22:31:32.016048   44139 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0912 22:31:32.016052   44139 command_runner.go:130] > # always happen on a node reboot
	I0912 22:31:32.016057   44139 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0912 22:31:32.016068   44139 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0912 22:31:32.016076   44139 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0912 22:31:32.016082   44139 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0912 22:31:32.016090   44139 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0912 22:31:32.016097   44139 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0912 22:31:32.016106   44139 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0912 22:31:32.016113   44139 command_runner.go:130] > # internal_wipe = true
	I0912 22:31:32.016120   44139 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0912 22:31:32.016143   44139 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0912 22:31:32.016158   44139 command_runner.go:130] > # internal_repair = false
	I0912 22:31:32.016163   44139 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0912 22:31:32.016170   44139 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0912 22:31:32.016178   44139 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0912 22:31:32.016183   44139 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0912 22:31:32.016191   44139 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0912 22:31:32.016197   44139 command_runner.go:130] > [crio.api]
	I0912 22:31:32.016202   44139 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0912 22:31:32.016208   44139 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0912 22:31:32.016213   44139 command_runner.go:130] > # IP address on which the stream server will listen.
	I0912 22:31:32.016217   44139 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0912 22:31:32.016224   44139 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0912 22:31:32.016229   44139 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0912 22:31:32.016234   44139 command_runner.go:130] > # stream_port = "0"
	I0912 22:31:32.016239   44139 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0912 22:31:32.016244   44139 command_runner.go:130] > # stream_enable_tls = false
	I0912 22:31:32.016250   44139 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0912 22:31:32.016256   44139 command_runner.go:130] > # stream_idle_timeout = ""
	I0912 22:31:32.016264   44139 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0912 22:31:32.016272   44139 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0912 22:31:32.016276   44139 command_runner.go:130] > # minutes.
	I0912 22:31:32.016282   44139 command_runner.go:130] > # stream_tls_cert = ""
	I0912 22:31:32.016288   44139 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0912 22:31:32.016296   44139 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0912 22:31:32.016300   44139 command_runner.go:130] > # stream_tls_key = ""
	I0912 22:31:32.016306   44139 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0912 22:31:32.016314   44139 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0912 22:31:32.016327   44139 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0912 22:31:32.016333   44139 command_runner.go:130] > # stream_tls_ca = ""
	I0912 22:31:32.016342   44139 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0912 22:31:32.016349   44139 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0912 22:31:32.016356   44139 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0912 22:31:32.016363   44139 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0912 22:31:32.016368   44139 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0912 22:31:32.016376   44139 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0912 22:31:32.016380   44139 command_runner.go:130] > [crio.runtime]
	I0912 22:31:32.016388   44139 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0912 22:31:32.016393   44139 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0912 22:31:32.016397   44139 command_runner.go:130] > # "nofile=1024:2048"
	I0912 22:31:32.016403   44139 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0912 22:31:32.016410   44139 command_runner.go:130] > # default_ulimits = [
	I0912 22:31:32.016414   44139 command_runner.go:130] > # ]
	I0912 22:31:32.016420   44139 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0912 22:31:32.016427   44139 command_runner.go:130] > # no_pivot = false
	I0912 22:31:32.016433   44139 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0912 22:31:32.016442   44139 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0912 22:31:32.016447   44139 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0912 22:31:32.016455   44139 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0912 22:31:32.016460   44139 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0912 22:31:32.016466   44139 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0912 22:31:32.016472   44139 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0912 22:31:32.016477   44139 command_runner.go:130] > # Cgroup setting for conmon
	I0912 22:31:32.016485   44139 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0912 22:31:32.016488   44139 command_runner.go:130] > conmon_cgroup = "pod"
	I0912 22:31:32.016499   44139 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0912 22:31:32.016506   44139 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0912 22:31:32.016519   44139 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0912 22:31:32.016525   44139 command_runner.go:130] > conmon_env = [
	I0912 22:31:32.016531   44139 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0912 22:31:32.016536   44139 command_runner.go:130] > ]
	I0912 22:31:32.016542   44139 command_runner.go:130] > # Additional environment variables to set for all the
	I0912 22:31:32.016546   44139 command_runner.go:130] > # containers. These are overridden if set in the
	I0912 22:31:32.016554   44139 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0912 22:31:32.016559   44139 command_runner.go:130] > # default_env = [
	I0912 22:31:32.016565   44139 command_runner.go:130] > # ]
	I0912 22:31:32.016574   44139 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0912 22:31:32.016583   44139 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0912 22:31:32.016590   44139 command_runner.go:130] > # selinux = false
	I0912 22:31:32.016596   44139 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0912 22:31:32.016604   44139 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0912 22:31:32.016610   44139 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0912 22:31:32.016614   44139 command_runner.go:130] > # seccomp_profile = ""
	I0912 22:31:32.016620   44139 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0912 22:31:32.016627   44139 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0912 22:31:32.016633   44139 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0912 22:31:32.016640   44139 command_runner.go:130] > # which might increase security.
	I0912 22:31:32.016644   44139 command_runner.go:130] > # This option is currently deprecated,
	I0912 22:31:32.016650   44139 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0912 22:31:32.016655   44139 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0912 22:31:32.016662   44139 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0912 22:31:32.016670   44139 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0912 22:31:32.016677   44139 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0912 22:31:32.016685   44139 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0912 22:31:32.016690   44139 command_runner.go:130] > # This option supports live configuration reload.
	I0912 22:31:32.016697   44139 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0912 22:31:32.016703   44139 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0912 22:31:32.016710   44139 command_runner.go:130] > # the cgroup blockio controller.
	I0912 22:31:32.016714   44139 command_runner.go:130] > # blockio_config_file = ""
	I0912 22:31:32.016720   44139 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0912 22:31:32.016726   44139 command_runner.go:130] > # blockio parameters.
	I0912 22:31:32.016730   44139 command_runner.go:130] > # blockio_reload = false
	I0912 22:31:32.016736   44139 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0912 22:31:32.016742   44139 command_runner.go:130] > # irqbalance daemon.
	I0912 22:31:32.016747   44139 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0912 22:31:32.016753   44139 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0912 22:31:32.016761   44139 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0912 22:31:32.016767   44139 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0912 22:31:32.016775   44139 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0912 22:31:32.016782   44139 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0912 22:31:32.016788   44139 command_runner.go:130] > # This option supports live configuration reload.
	I0912 22:31:32.016793   44139 command_runner.go:130] > # rdt_config_file = ""
	I0912 22:31:32.016801   44139 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0912 22:31:32.016805   44139 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0912 22:31:32.016821   44139 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0912 22:31:32.016827   44139 command_runner.go:130] > # separate_pull_cgroup = ""
	I0912 22:31:32.016833   44139 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0912 22:31:32.016841   44139 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0912 22:31:32.016845   44139 command_runner.go:130] > # will be added.
	I0912 22:31:32.016850   44139 command_runner.go:130] > # default_capabilities = [
	I0912 22:31:32.016854   44139 command_runner.go:130] > # 	"CHOWN",
	I0912 22:31:32.016860   44139 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0912 22:31:32.016864   44139 command_runner.go:130] > # 	"FSETID",
	I0912 22:31:32.016867   44139 command_runner.go:130] > # 	"FOWNER",
	I0912 22:31:32.016871   44139 command_runner.go:130] > # 	"SETGID",
	I0912 22:31:32.016874   44139 command_runner.go:130] > # 	"SETUID",
	I0912 22:31:32.016878   44139 command_runner.go:130] > # 	"SETPCAP",
	I0912 22:31:32.016882   44139 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0912 22:31:32.016886   44139 command_runner.go:130] > # 	"KILL",
	I0912 22:31:32.016890   44139 command_runner.go:130] > # ]
	I0912 22:31:32.016897   44139 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0912 22:31:32.016906   44139 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0912 22:31:32.016911   44139 command_runner.go:130] > # add_inheritable_capabilities = false
	I0912 22:31:32.016917   44139 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0912 22:31:32.016923   44139 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0912 22:31:32.016929   44139 command_runner.go:130] > default_sysctls = [
	I0912 22:31:32.016934   44139 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0912 22:31:32.016937   44139 command_runner.go:130] > ]
	I0912 22:31:32.016942   44139 command_runner.go:130] > # List of devices on the host that a
	I0912 22:31:32.016950   44139 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0912 22:31:32.016954   44139 command_runner.go:130] > # allowed_devices = [
	I0912 22:31:32.016959   44139 command_runner.go:130] > # 	"/dev/fuse",
	I0912 22:31:32.016962   44139 command_runner.go:130] > # ]
	I0912 22:31:32.016967   44139 command_runner.go:130] > # List of additional devices. specified as
	I0912 22:31:32.016973   44139 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0912 22:31:32.016980   44139 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0912 22:31:32.016986   44139 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0912 22:31:32.016992   44139 command_runner.go:130] > # additional_devices = [
	I0912 22:31:32.016996   44139 command_runner.go:130] > # ]
	I0912 22:31:32.017001   44139 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0912 22:31:32.017007   44139 command_runner.go:130] > # cdi_spec_dirs = [
	I0912 22:31:32.017011   44139 command_runner.go:130] > # 	"/etc/cdi",
	I0912 22:31:32.017014   44139 command_runner.go:130] > # 	"/var/run/cdi",
	I0912 22:31:32.017020   44139 command_runner.go:130] > # ]
	I0912 22:31:32.017026   44139 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0912 22:31:32.017033   44139 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0912 22:31:32.017038   44139 command_runner.go:130] > # Defaults to false.
	I0912 22:31:32.017045   44139 command_runner.go:130] > # device_ownership_from_security_context = false
	I0912 22:31:32.017051   44139 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0912 22:31:32.017056   44139 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0912 22:31:32.017061   44139 command_runner.go:130] > # hooks_dir = [
	I0912 22:31:32.017065   44139 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0912 22:31:32.017071   44139 command_runner.go:130] > # ]
	I0912 22:31:32.017076   44139 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0912 22:31:32.017085   44139 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0912 22:31:32.017090   44139 command_runner.go:130] > # its default mounts from the following two files:
	I0912 22:31:32.017095   44139 command_runner.go:130] > #
	I0912 22:31:32.017101   44139 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0912 22:31:32.017108   44139 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0912 22:31:32.017113   44139 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0912 22:31:32.017118   44139 command_runner.go:130] > #
	I0912 22:31:32.017124   44139 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0912 22:31:32.017132   44139 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0912 22:31:32.017138   44139 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0912 22:31:32.017143   44139 command_runner.go:130] > #      only add mounts it finds in this file.
	I0912 22:31:32.017148   44139 command_runner.go:130] > #
	I0912 22:31:32.017152   44139 command_runner.go:130] > # default_mounts_file = ""
	I0912 22:31:32.017159   44139 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0912 22:31:32.017165   44139 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0912 22:31:32.017171   44139 command_runner.go:130] > pids_limit = 1024
	I0912 22:31:32.017177   44139 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0912 22:31:32.017184   44139 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0912 22:31:32.017190   44139 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0912 22:31:32.017200   44139 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0912 22:31:32.017204   44139 command_runner.go:130] > # log_size_max = -1
	I0912 22:31:32.017211   44139 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0912 22:31:32.017218   44139 command_runner.go:130] > # log_to_journald = false
	I0912 22:31:32.017223   44139 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0912 22:31:32.017228   44139 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0912 22:31:32.017233   44139 command_runner.go:130] > # Path to directory for container attach sockets.
	I0912 22:31:32.017238   44139 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0912 22:31:32.017246   44139 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0912 22:31:32.017250   44139 command_runner.go:130] > # bind_mount_prefix = ""
	I0912 22:31:32.017258   44139 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0912 22:31:32.017264   44139 command_runner.go:130] > # read_only = false
	I0912 22:31:32.017275   44139 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0912 22:31:32.017285   44139 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0912 22:31:32.017294   44139 command_runner.go:130] > # live configuration reload.
	I0912 22:31:32.017300   44139 command_runner.go:130] > # log_level = "info"
	I0912 22:31:32.017311   44139 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0912 22:31:32.017321   44139 command_runner.go:130] > # This option supports live configuration reload.
	I0912 22:31:32.017327   44139 command_runner.go:130] > # log_filter = ""
	I0912 22:31:32.017339   44139 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0912 22:31:32.017353   44139 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0912 22:31:32.017362   44139 command_runner.go:130] > # separated by comma.
	I0912 22:31:32.017373   44139 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0912 22:31:32.017383   44139 command_runner.go:130] > # uid_mappings = ""
	I0912 22:31:32.017392   44139 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0912 22:31:32.017404   44139 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0912 22:31:32.017413   44139 command_runner.go:130] > # separated by comma.
	I0912 22:31:32.017420   44139 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0912 22:31:32.017427   44139 command_runner.go:130] > # gid_mappings = ""
	I0912 22:31:32.017433   44139 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0912 22:31:32.017441   44139 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0912 22:31:32.017447   44139 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0912 22:31:32.017456   44139 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0912 22:31:32.017462   44139 command_runner.go:130] > # minimum_mappable_uid = -1
	I0912 22:31:32.017470   44139 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0912 22:31:32.017476   44139 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0912 22:31:32.017483   44139 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0912 22:31:32.017493   44139 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0912 22:31:32.017499   44139 command_runner.go:130] > # minimum_mappable_gid = -1
	I0912 22:31:32.017506   44139 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0912 22:31:32.017518   44139 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0912 22:31:32.017526   44139 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0912 22:31:32.017531   44139 command_runner.go:130] > # ctr_stop_timeout = 30
	I0912 22:31:32.017539   44139 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0912 22:31:32.017545   44139 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0912 22:31:32.017549   44139 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0912 22:31:32.017557   44139 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0912 22:31:32.017562   44139 command_runner.go:130] > drop_infra_ctr = false
	I0912 22:31:32.017567   44139 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0912 22:31:32.017575   44139 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0912 22:31:32.017582   44139 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0912 22:31:32.017587   44139 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0912 22:31:32.017594   44139 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0912 22:31:32.017601   44139 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0912 22:31:32.017607   44139 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0912 22:31:32.017622   44139 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0912 22:31:32.017626   44139 command_runner.go:130] > # shared_cpuset = ""
	I0912 22:31:32.017632   44139 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0912 22:31:32.017639   44139 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0912 22:31:32.017644   44139 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0912 22:31:32.017653   44139 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0912 22:31:32.017659   44139 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0912 22:31:32.017665   44139 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0912 22:31:32.017685   44139 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0912 22:31:32.017695   44139 command_runner.go:130] > # enable_criu_support = false
	I0912 22:31:32.017701   44139 command_runner.go:130] > # Enable/disable the generation of the container,
	I0912 22:31:32.017707   44139 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0912 22:31:32.017711   44139 command_runner.go:130] > # enable_pod_events = false
	I0912 22:31:32.017717   44139 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0912 22:31:32.017725   44139 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0912 22:31:32.017731   44139 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0912 22:31:32.017737   44139 command_runner.go:130] > # default_runtime = "runc"
	I0912 22:31:32.017742   44139 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0912 22:31:32.017751   44139 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0912 22:31:32.017762   44139 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0912 22:31:32.017770   44139 command_runner.go:130] > # creation as a file is not desired either.
	I0912 22:31:32.017777   44139 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0912 22:31:32.017785   44139 command_runner.go:130] > # the hostname is being managed dynamically.
	I0912 22:31:32.017790   44139 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0912 22:31:32.017795   44139 command_runner.go:130] > # ]
	I0912 22:31:32.017801   44139 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0912 22:31:32.017809   44139 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0912 22:31:32.017815   44139 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0912 22:31:32.017822   44139 command_runner.go:130] > # Each entry in the table should follow the format:
	I0912 22:31:32.017826   44139 command_runner.go:130] > #
	I0912 22:31:32.017831   44139 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0912 22:31:32.017838   44139 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0912 22:31:32.017885   44139 command_runner.go:130] > # runtime_type = "oci"
	I0912 22:31:32.017893   44139 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0912 22:31:32.017897   44139 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0912 22:31:32.017901   44139 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0912 22:31:32.017905   44139 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0912 22:31:32.017909   44139 command_runner.go:130] > # monitor_env = []
	I0912 22:31:32.017914   44139 command_runner.go:130] > # privileged_without_host_devices = false
	I0912 22:31:32.017918   44139 command_runner.go:130] > # allowed_annotations = []
	I0912 22:31:32.017922   44139 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0912 22:31:32.017928   44139 command_runner.go:130] > # Where:
	I0912 22:31:32.017933   44139 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0912 22:31:32.017941   44139 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0912 22:31:32.017947   44139 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0912 22:31:32.017956   44139 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0912 22:31:32.017960   44139 command_runner.go:130] > #   in $PATH.
	I0912 22:31:32.017966   44139 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0912 22:31:32.017973   44139 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0912 22:31:32.017983   44139 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0912 22:31:32.017988   44139 command_runner.go:130] > #   state.
	I0912 22:31:32.017994   44139 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0912 22:31:32.018002   44139 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0912 22:31:32.018008   44139 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0912 22:31:32.018017   44139 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0912 22:31:32.018023   44139 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0912 22:31:32.018032   44139 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0912 22:31:32.018036   44139 command_runner.go:130] > #   The currently recognized values are:
	I0912 22:31:32.018044   44139 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0912 22:31:32.018052   44139 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0912 22:31:32.018059   44139 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0912 22:31:32.018067   44139 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0912 22:31:32.018076   44139 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0912 22:31:32.018083   44139 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0912 22:31:32.018091   44139 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0912 22:31:32.018097   44139 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0912 22:31:32.018105   44139 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0912 22:31:32.018111   44139 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0912 22:31:32.018118   44139 command_runner.go:130] > #   deprecated option "conmon".
	I0912 22:31:32.018124   44139 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0912 22:31:32.018131   44139 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0912 22:31:32.018137   44139 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0912 22:31:32.018142   44139 command_runner.go:130] > #   should be moved to the container's cgroup
	I0912 22:31:32.018150   44139 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the montior.
	I0912 22:31:32.018155   44139 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0912 22:31:32.018163   44139 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0912 22:31:32.018168   44139 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0912 22:31:32.018172   44139 command_runner.go:130] > #
	I0912 22:31:32.018177   44139 command_runner.go:130] > # Using the seccomp notifier feature:
	I0912 22:31:32.018182   44139 command_runner.go:130] > #
	I0912 22:31:32.018188   44139 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0912 22:31:32.018195   44139 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0912 22:31:32.018200   44139 command_runner.go:130] > #
	I0912 22:31:32.018206   44139 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0912 22:31:32.018214   44139 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0912 22:31:32.018217   44139 command_runner.go:130] > #
	I0912 22:31:32.018223   44139 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0912 22:31:32.018228   44139 command_runner.go:130] > # feature.
	I0912 22:31:32.018231   44139 command_runner.go:130] > #
	I0912 22:31:32.018237   44139 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0912 22:31:32.018246   44139 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0912 22:31:32.018252   44139 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0912 22:31:32.018259   44139 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0912 22:31:32.018267   44139 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0912 22:31:32.018270   44139 command_runner.go:130] > #
	I0912 22:31:32.018277   44139 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0912 22:31:32.018284   44139 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0912 22:31:32.018287   44139 command_runner.go:130] > #
	I0912 22:31:32.018293   44139 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0912 22:31:32.018301   44139 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0912 22:31:32.018304   44139 command_runner.go:130] > #
	I0912 22:31:32.018310   44139 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0912 22:31:32.018316   44139 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0912 22:31:32.018319   44139 command_runner.go:130] > # limitation.
	I0912 22:31:32.018325   44139 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0912 22:31:32.018332   44139 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0912 22:31:32.018336   44139 command_runner.go:130] > runtime_type = "oci"
	I0912 22:31:32.018340   44139 command_runner.go:130] > runtime_root = "/run/runc"
	I0912 22:31:32.018344   44139 command_runner.go:130] > runtime_config_path = ""
	I0912 22:31:32.018349   44139 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0912 22:31:32.018353   44139 command_runner.go:130] > monitor_cgroup = "pod"
	I0912 22:31:32.018357   44139 command_runner.go:130] > monitor_exec_cgroup = ""
	I0912 22:31:32.018363   44139 command_runner.go:130] > monitor_env = [
	I0912 22:31:32.018369   44139 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0912 22:31:32.018373   44139 command_runner.go:130] > ]
	I0912 22:31:32.018378   44139 command_runner.go:130] > privileged_without_host_devices = false
	I0912 22:31:32.018386   44139 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0912 22:31:32.018391   44139 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0912 22:31:32.018397   44139 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0912 22:31:32.018407   44139 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0912 22:31:32.018414   44139 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0912 22:31:32.018422   44139 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0912 22:31:32.018430   44139 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0912 22:31:32.018439   44139 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0912 22:31:32.018445   44139 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0912 22:31:32.018452   44139 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0912 22:31:32.018459   44139 command_runner.go:130] > # Example:
	I0912 22:31:32.018463   44139 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0912 22:31:32.018470   44139 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0912 22:31:32.018475   44139 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0912 22:31:32.018479   44139 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0912 22:31:32.018482   44139 command_runner.go:130] > # cpuset = 0
	I0912 22:31:32.018486   44139 command_runner.go:130] > # cpushares = "0-1"
	I0912 22:31:32.018490   44139 command_runner.go:130] > # Where:
	I0912 22:31:32.018495   44139 command_runner.go:130] > # The workload name is workload-type.
	I0912 22:31:32.018503   44139 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0912 22:31:32.018508   44139 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0912 22:31:32.018517   44139 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0912 22:31:32.018525   44139 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0912 22:31:32.018533   44139 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0912 22:31:32.018538   44139 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0912 22:31:32.018545   44139 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0912 22:31:32.018552   44139 command_runner.go:130] > # Default value is set to true
	I0912 22:31:32.018556   44139 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0912 22:31:32.018561   44139 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0912 22:31:32.018568   44139 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0912 22:31:32.018573   44139 command_runner.go:130] > # Default value is set to 'false'
	I0912 22:31:32.018579   44139 command_runner.go:130] > # disable_hostport_mapping = false
	I0912 22:31:32.018586   44139 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0912 22:31:32.018589   44139 command_runner.go:130] > #
	I0912 22:31:32.018594   44139 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0912 22:31:32.018600   44139 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0912 22:31:32.018605   44139 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0912 22:31:32.018611   44139 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0912 22:31:32.018617   44139 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0912 22:31:32.018621   44139 command_runner.go:130] > [crio.image]
	I0912 22:31:32.018627   44139 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0912 22:31:32.018631   44139 command_runner.go:130] > # default_transport = "docker://"
	I0912 22:31:32.018637   44139 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0912 22:31:32.018643   44139 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0912 22:31:32.018646   44139 command_runner.go:130] > # global_auth_file = ""
	I0912 22:31:32.018651   44139 command_runner.go:130] > # The image used to instantiate infra containers.
	I0912 22:31:32.018656   44139 command_runner.go:130] > # This option supports live configuration reload.
	I0912 22:31:32.018661   44139 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0912 22:31:32.018667   44139 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0912 22:31:32.018672   44139 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0912 22:31:32.018677   44139 command_runner.go:130] > # This option supports live configuration reload.
	I0912 22:31:32.018681   44139 command_runner.go:130] > # pause_image_auth_file = ""
	I0912 22:31:32.018686   44139 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0912 22:31:32.018692   44139 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0912 22:31:32.018697   44139 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0912 22:31:32.018702   44139 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0912 22:31:32.018707   44139 command_runner.go:130] > # pause_command = "/pause"
	I0912 22:31:32.018712   44139 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0912 22:31:32.018717   44139 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0912 22:31:32.018722   44139 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0912 22:31:32.018729   44139 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0912 22:31:32.018734   44139 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0912 22:31:32.018740   44139 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0912 22:31:32.018744   44139 command_runner.go:130] > # pinned_images = [
	I0912 22:31:32.018747   44139 command_runner.go:130] > # ]
	I0912 22:31:32.018753   44139 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0912 22:31:32.018759   44139 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0912 22:31:32.018764   44139 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0912 22:31:32.018772   44139 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0912 22:31:32.018777   44139 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0912 22:31:32.018781   44139 command_runner.go:130] > # signature_policy = ""
	I0912 22:31:32.018786   44139 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0912 22:31:32.018792   44139 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0912 22:31:32.018798   44139 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0912 22:31:32.018803   44139 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0912 22:31:32.018809   44139 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0912 22:31:32.018816   44139 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0912 22:31:32.018822   44139 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0912 22:31:32.018829   44139 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0912 22:31:32.018834   44139 command_runner.go:130] > # changing them here.
	I0912 22:31:32.018838   44139 command_runner.go:130] > # insecure_registries = [
	I0912 22:31:32.018841   44139 command_runner.go:130] > # ]
	I0912 22:31:32.018848   44139 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0912 22:31:32.018856   44139 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0912 22:31:32.018860   44139 command_runner.go:130] > # image_volumes = "mkdir"
	I0912 22:31:32.018865   44139 command_runner.go:130] > # Temporary directory to use for storing big files
	I0912 22:31:32.018871   44139 command_runner.go:130] > # big_files_temporary_dir = ""
	I0912 22:31:32.018876   44139 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0912 22:31:32.018880   44139 command_runner.go:130] > # CNI plugins.
	I0912 22:31:32.018884   44139 command_runner.go:130] > [crio.network]
	I0912 22:31:32.018889   44139 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0912 22:31:32.018897   44139 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0912 22:31:32.018901   44139 command_runner.go:130] > # cni_default_network = ""
	I0912 22:31:32.018908   44139 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0912 22:31:32.018913   44139 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0912 22:31:32.018920   44139 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0912 22:31:32.018925   44139 command_runner.go:130] > # plugin_dirs = [
	I0912 22:31:32.018931   44139 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0912 22:31:32.018935   44139 command_runner.go:130] > # ]
	I0912 22:31:32.018940   44139 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0912 22:31:32.018946   44139 command_runner.go:130] > [crio.metrics]
	I0912 22:31:32.018950   44139 command_runner.go:130] > # Globally enable or disable metrics support.
	I0912 22:31:32.018955   44139 command_runner.go:130] > enable_metrics = true
	I0912 22:31:32.018959   44139 command_runner.go:130] > # Specify enabled metrics collectors.
	I0912 22:31:32.018965   44139 command_runner.go:130] > # Per default all metrics are enabled.
	I0912 22:31:32.018971   44139 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0912 22:31:32.018980   44139 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0912 22:31:32.018985   44139 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0912 22:31:32.018991   44139 command_runner.go:130] > # metrics_collectors = [
	I0912 22:31:32.018995   44139 command_runner.go:130] > # 	"operations",
	I0912 22:31:32.019002   44139 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0912 22:31:32.019006   44139 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0912 22:31:32.019014   44139 command_runner.go:130] > # 	"operations_errors",
	I0912 22:31:32.019018   44139 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0912 22:31:32.019021   44139 command_runner.go:130] > # 	"image_pulls_by_name",
	I0912 22:31:32.019026   44139 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0912 22:31:32.019030   44139 command_runner.go:130] > # 	"image_pulls_failures",
	I0912 22:31:32.019034   44139 command_runner.go:130] > # 	"image_pulls_successes",
	I0912 22:31:32.019040   44139 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0912 22:31:32.019046   44139 command_runner.go:130] > # 	"image_layer_reuse",
	I0912 22:31:32.019051   44139 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0912 22:31:32.019056   44139 command_runner.go:130] > # 	"containers_oom_total",
	I0912 22:31:32.019060   44139 command_runner.go:130] > # 	"containers_oom",
	I0912 22:31:32.019066   44139 command_runner.go:130] > # 	"processes_defunct",
	I0912 22:31:32.019071   44139 command_runner.go:130] > # 	"operations_total",
	I0912 22:31:32.019075   44139 command_runner.go:130] > # 	"operations_latency_seconds",
	I0912 22:31:32.019081   44139 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0912 22:31:32.019086   44139 command_runner.go:130] > # 	"operations_errors_total",
	I0912 22:31:32.019092   44139 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0912 22:31:32.019096   44139 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0912 22:31:32.019101   44139 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0912 22:31:32.019105   44139 command_runner.go:130] > # 	"image_pulls_success_total",
	I0912 22:31:32.019109   44139 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0912 22:31:32.019113   44139 command_runner.go:130] > # 	"containers_oom_count_total",
	I0912 22:31:32.019118   44139 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0912 22:31:32.019124   44139 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0912 22:31:32.019128   44139 command_runner.go:130] > # ]
	I0912 22:31:32.019132   44139 command_runner.go:130] > # The port on which the metrics server will listen.
	I0912 22:31:32.019137   44139 command_runner.go:130] > # metrics_port = 9090
	I0912 22:31:32.019142   44139 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0912 22:31:32.019148   44139 command_runner.go:130] > # metrics_socket = ""
	I0912 22:31:32.019153   44139 command_runner.go:130] > # The certificate for the secure metrics server.
	I0912 22:31:32.019161   44139 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0912 22:31:32.019167   44139 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0912 22:31:32.019173   44139 command_runner.go:130] > # certificate on any modification event.
	I0912 22:31:32.019177   44139 command_runner.go:130] > # metrics_cert = ""
	I0912 22:31:32.019184   44139 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0912 22:31:32.019189   44139 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0912 22:31:32.019193   44139 command_runner.go:130] > # metrics_key = ""
	I0912 22:31:32.019198   44139 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0912 22:31:32.019204   44139 command_runner.go:130] > [crio.tracing]
	I0912 22:31:32.019210   44139 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0912 22:31:32.019215   44139 command_runner.go:130] > # enable_tracing = false
	I0912 22:31:32.019220   44139 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0912 22:31:32.019231   44139 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0912 22:31:32.019238   44139 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0912 22:31:32.019246   44139 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0912 22:31:32.019250   44139 command_runner.go:130] > # CRI-O NRI configuration.
	I0912 22:31:32.019256   44139 command_runner.go:130] > [crio.nri]
	I0912 22:31:32.019260   44139 command_runner.go:130] > # Globally enable or disable NRI.
	I0912 22:31:32.019267   44139 command_runner.go:130] > # enable_nri = false
	I0912 22:31:32.019272   44139 command_runner.go:130] > # NRI socket to listen on.
	I0912 22:31:32.019276   44139 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0912 22:31:32.019281   44139 command_runner.go:130] > # NRI plugin directory to use.
	I0912 22:31:32.019288   44139 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0912 22:31:32.019293   44139 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0912 22:31:32.019299   44139 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0912 22:31:32.019305   44139 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0912 22:31:32.019311   44139 command_runner.go:130] > # nri_disable_connections = false
	I0912 22:31:32.019316   44139 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0912 22:31:32.019324   44139 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0912 22:31:32.019329   44139 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0912 22:31:32.019335   44139 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0912 22:31:32.019341   44139 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0912 22:31:32.019347   44139 command_runner.go:130] > [crio.stats]
	I0912 22:31:32.019352   44139 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0912 22:31:32.019359   44139 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0912 22:31:32.019363   44139 command_runner.go:130] > # stats_collection_period = 0
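The configuration dump above is CRI-O's effective config as minikube reads it back from the node. As a rough cross-check (a sketch, assuming shell access to the node and that the crio binary and curl are installed there), the same information can be inspected directly; enable_metrics = true is set above, so the exporter should be answering on the default metrics_port of 9090:

    # Print the configuration CRI-O is actually running with and locate the metrics section.
    sudo crio config | grep -A 3 '\[crio.metrics\]'
    # With enable_metrics = true, the Prometheus endpoint answers on metrics_port (default 9090).
    curl -s http://127.0.0.1:9090/metrics | head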
	I0912 22:31:32.019468   44139 cni.go:84] Creating CNI manager for ""
	I0912 22:31:32.019478   44139 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0912 22:31:32.019485   44139 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0912 22:31:32.019503   44139 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.28 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-768483 NodeName:multinode-768483 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.28"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.28 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0912 22:31:32.019637   44139 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.28
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-768483"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.28
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.28"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0912 22:31:32.019692   44139 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0912 22:31:32.029344   44139 command_runner.go:130] > kubeadm
	I0912 22:31:32.029363   44139 command_runner.go:130] > kubectl
	I0912 22:31:32.029368   44139 command_runner.go:130] > kubelet
	I0912 22:31:32.029399   44139 binaries.go:44] Found k8s binaries, skipping transfer
	I0912 22:31:32.029448   44139 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0912 22:31:32.038147   44139 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0912 22:31:32.053371   44139 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0912 22:31:32.069237   44139 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
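The 2157-byte kubeadm.yaml.new written here is the rendered config shown above. A quick sanity check is to feed it back to the kubeadm binary minikube already found under /var/lib/minikube/binaries/v1.31.1 (a sketch; the `config validate` subcommand is only present in reasonably recent kubeadm releases):

    # Inspect the file minikube just wrote, then have kubeadm validate it.
    sudo cat /var/tmp/minikube/kubeadm.yaml.new
    sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new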
	I0912 22:31:32.085783   44139 ssh_runner.go:195] Run: grep 192.168.39.28	control-plane.minikube.internal$ /etc/hosts
	I0912 22:31:32.089428   44139 command_runner.go:130] > 192.168.39.28	control-plane.minikube.internal
	I0912 22:31:32.089534   44139 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 22:31:32.233049   44139 ssh_runner.go:195] Run: sudo systemctl start kubelet
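After the daemon-reload and kubelet start above, the unit's state can be confirmed with systemd's own tooling (a minimal sketch, assuming SSH access to the node):

    # Confirm the kubelet unit is active and peek at its most recent log lines.
    sudo systemctl is-active kubelet
    sudo journalctl -u kubelet --no-pager -n 20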
	I0912 22:31:32.248312   44139 certs.go:68] Setting up /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/multinode-768483 for IP: 192.168.39.28
	I0912 22:31:32.248336   44139 certs.go:194] generating shared ca certs ...
	I0912 22:31:32.248360   44139 certs.go:226] acquiring lock for ca certs: {Name:mk5e19f2c2757edc3fcb6b43a39efee0885b349c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 22:31:32.248532   44139 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.key
	I0912 22:31:32.248595   44139 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.key
	I0912 22:31:32.248607   44139 certs.go:256] generating profile certs ...
	I0912 22:31:32.248701   44139 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/multinode-768483/client.key
	I0912 22:31:32.248798   44139 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/multinode-768483/apiserver.key.832235e5
	I0912 22:31:32.248853   44139 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/multinode-768483/proxy-client.key
	I0912 22:31:32.248867   44139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0912 22:31:32.248880   44139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0912 22:31:32.248895   44139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0912 22:31:32.248908   44139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0912 22:31:32.248918   44139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/multinode-768483/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0912 22:31:32.248931   44139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/multinode-768483/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0912 22:31:32.248943   44139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/multinode-768483/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0912 22:31:32.248955   44139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/multinode-768483/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0912 22:31:32.249002   44139 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083.pem (1338 bytes)
	W0912 22:31:32.249030   44139 certs.go:480] ignoring /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083_empty.pem, impossibly tiny 0 bytes
	I0912 22:31:32.249039   44139 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem (1679 bytes)
	I0912 22:31:32.249062   44139 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem (1082 bytes)
	I0912 22:31:32.249086   44139 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem (1123 bytes)
	I0912 22:31:32.249112   44139 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem (1679 bytes)
	I0912 22:31:32.249162   44139 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem (1708 bytes)
	I0912 22:31:32.249192   44139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083.pem -> /usr/share/ca-certificates/13083.pem
	I0912 22:31:32.249205   44139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem -> /usr/share/ca-certificates/130832.pem
	I0912 22:31:32.249218   44139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0912 22:31:32.249842   44139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0912 22:31:32.272501   44139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0912 22:31:32.294737   44139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0912 22:31:32.317199   44139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0912 22:31:32.340117   44139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/multinode-768483/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0912 22:31:32.361857   44139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/multinode-768483/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0912 22:31:32.384027   44139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/multinode-768483/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0912 22:31:32.407198   44139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/multinode-768483/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0912 22:31:32.429914   44139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083.pem --> /usr/share/ca-certificates/13083.pem (1338 bytes)
	I0912 22:31:32.451815   44139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem --> /usr/share/ca-certificates/130832.pem (1708 bytes)
	I0912 22:31:32.473951   44139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0912 22:31:32.495404   44139 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0912 22:31:32.510703   44139 ssh_runner.go:195] Run: openssl version
	I0912 22:31:32.516096   44139 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0912 22:31:32.516195   44139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13083.pem && ln -fs /usr/share/ca-certificates/13083.pem /etc/ssl/certs/13083.pem"
	I0912 22:31:32.526140   44139 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13083.pem
	I0912 22:31:32.530111   44139 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 12 21:47 /usr/share/ca-certificates/13083.pem
	I0912 22:31:32.530201   44139 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 12 21:47 /usr/share/ca-certificates/13083.pem
	I0912 22:31:32.530246   44139 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13083.pem
	I0912 22:31:32.535219   44139 command_runner.go:130] > 51391683
	I0912 22:31:32.535312   44139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13083.pem /etc/ssl/certs/51391683.0"
	I0912 22:31:32.544361   44139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130832.pem && ln -fs /usr/share/ca-certificates/130832.pem /etc/ssl/certs/130832.pem"
	I0912 22:31:32.555352   44139 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130832.pem
	I0912 22:31:32.559360   44139 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 12 21:47 /usr/share/ca-certificates/130832.pem
	I0912 22:31:32.559401   44139 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 12 21:47 /usr/share/ca-certificates/130832.pem
	I0912 22:31:32.559447   44139 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130832.pem
	I0912 22:31:32.564696   44139 command_runner.go:130] > 3ec20f2e
	I0912 22:31:32.564785   44139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130832.pem /etc/ssl/certs/3ec20f2e.0"
	I0912 22:31:32.574117   44139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0912 22:31:32.584589   44139 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0912 22:31:32.588793   44139 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 12 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0912 22:31:32.588826   44139 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 12 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0912 22:31:32.588872   44139 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0912 22:31:32.594011   44139 command_runner.go:130] > b5213941
	I0912 22:31:32.594170   44139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
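The openssl/ln sequence above is the standard OpenSSL subject-hash layout: each CA certificate installed under /usr/share/ca-certificates gets a symlink named <subject-hash>.0 in /etc/ssl/certs (51391683.0, 3ec20f2e.0 and b5213941.0 here) so OpenSSL can look it up by hash. The same check, sketched over the three certificates the log just linked:

    # Recompute each certificate's subject hash and confirm the matching symlink exists.
    for pem in /usr/share/ca-certificates/13083.pem \
               /usr/share/ca-certificates/130832.pem \
               /usr/share/ca-certificates/minikubeCA.pem; do
      hash=$(openssl x509 -hash -noout -in "$pem")
      ls -l "/etc/ssl/certs/${hash}.0"
    done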
	I0912 22:31:32.603511   44139 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0912 22:31:32.607489   44139 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0912 22:31:32.607513   44139 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0912 22:31:32.607521   44139 command_runner.go:130] > Device: 253,1	Inode: 4195880     Links: 1
	I0912 22:31:32.607530   44139 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0912 22:31:32.607541   44139 command_runner.go:130] > Access: 2024-09-12 22:24:41.686828618 +0000
	I0912 22:31:32.607549   44139 command_runner.go:130] > Modify: 2024-09-12 22:24:41.686828618 +0000
	I0912 22:31:32.607561   44139 command_runner.go:130] > Change: 2024-09-12 22:24:41.686828618 +0000
	I0912 22:31:32.607576   44139 command_runner.go:130] >  Birth: 2024-09-12 22:24:41.686828618 +0000
	I0912 22:31:32.607717   44139 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0912 22:31:32.612924   44139 command_runner.go:130] > Certificate will not expire
	I0912 22:31:32.613042   44139 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0912 22:31:32.618236   44139 command_runner.go:130] > Certificate will not expire
	I0912 22:31:32.618302   44139 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0912 22:31:32.623484   44139 command_runner.go:130] > Certificate will not expire
	I0912 22:31:32.623543   44139 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0912 22:31:32.628627   44139 command_runner.go:130] > Certificate will not expire
	I0912 22:31:32.628783   44139 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0912 22:31:32.634124   44139 command_runner.go:130] > Certificate will not expire
	I0912 22:31:32.634182   44139 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0912 22:31:32.639789   44139 command_runner.go:130] > Certificate will not expire
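Each `-checkend 86400` probe above asks openssl whether the certificate will still be valid 86400 seconds (24 hours) from now; it prints "Certificate will not expire" and exits 0 when it will, which is why every check here passes. Run in isolation against one of the certificates from the log, the probe looks like this (a sketch):

    # Exit status 0: still valid 24h from now; 1: expires within the next 24h.
    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
    echo "exit status: $?"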
	I0912 22:31:32.639858   44139 kubeadm.go:392] StartCluster: {Name:multinode-768483 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
1 ClusterName:multinode-768483 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.28 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.230 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.92 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false i
nspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 22:31:32.639954   44139 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0912 22:31:32.639996   44139 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0912 22:31:32.674400   44139 command_runner.go:130] > 839ededeb42c7fe56fe0af98d96e4b810825db084871453bdbf1e330f313f11b
	I0912 22:31:32.674422   44139 command_runner.go:130] > e7c17ba1a9c6116b065e200c70dde80d097578700da517f9acb2ca265d842bdd
	I0912 22:31:32.674429   44139 command_runner.go:130] > 804ba8843e87765fc62adc0cfcd7000f8c06a2c98b9c7396a913ff6a5f930a1c
	I0912 22:31:32.674453   44139 command_runner.go:130] > 843730a4cdb964ae88e322e3da7b4037f1e64f5a4948be394cefb651ceb02679
	I0912 22:31:32.674459   44139 command_runner.go:130] > 6505c2c378ff70fae34c9f006c44d5dc7e4ffd9480237e82899d87e8c8161693
	I0912 22:31:32.674465   44139 command_runner.go:130] > f24ee99de69eefbc84e7df7bc3eea3428a8844074a499bc601e3ded4bb4e9510
	I0912 22:31:32.674470   44139 command_runner.go:130] > c489f2027465c018d7eac2e25eeaae7802e0ff1176c5691d3f69ddf1bf4b947b
	I0912 22:31:32.674479   44139 command_runner.go:130] > f0aae551b7315d864d4e52b385c6d09427fcdc78d4ec5a0b5e854363d2131943
	I0912 22:31:32.675873   44139 cri.go:89] found id: "839ededeb42c7fe56fe0af98d96e4b810825db084871453bdbf1e330f313f11b"
	I0912 22:31:32.675889   44139 cri.go:89] found id: "e7c17ba1a9c6116b065e200c70dde80d097578700da517f9acb2ca265d842bdd"
	I0912 22:31:32.675892   44139 cri.go:89] found id: "804ba8843e87765fc62adc0cfcd7000f8c06a2c98b9c7396a913ff6a5f930a1c"
	I0912 22:31:32.675895   44139 cri.go:89] found id: "843730a4cdb964ae88e322e3da7b4037f1e64f5a4948be394cefb651ceb02679"
	I0912 22:31:32.675898   44139 cri.go:89] found id: "6505c2c378ff70fae34c9f006c44d5dc7e4ffd9480237e82899d87e8c8161693"
	I0912 22:31:32.675901   44139 cri.go:89] found id: "f24ee99de69eefbc84e7df7bc3eea3428a8844074a499bc601e3ded4bb4e9510"
	I0912 22:31:32.675904   44139 cri.go:89] found id: "c489f2027465c018d7eac2e25eeaae7802e0ff1176c5691d3f69ddf1bf4b947b"
	I0912 22:31:32.675906   44139 cri.go:89] found id: "f0aae551b7315d864d4e52b385c6d09427fcdc78d4ec5a0b5e854363d2131943"
	I0912 22:31:32.675908   44139 cri.go:89] found id: ""
	I0912 22:31:32.675947   44139 ssh_runner.go:195] Run: sudo runc list -f json
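The container IDs returned above come from the label-filtered crictl query minikube runs, and the follow-up `runc list -f json` asks the low-level OCI runtime for its own view. Both can be reproduced by hand on the node (a sketch using the same commands shown in the log):

    # Every kube-system container CRI-O knows about, running or exited.
    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
    # The OCI runtime's list of containers, as JSON.
    sudo runc list -f json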
	
	
	==> CRI-O <==
	Sep 12 22:33:18 multinode-768483 crio[2713]: time="2024-09-12 22:33:18.874097937Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726180398874073439,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=04a7cff5-cda7-45f4-8229-a052092e1b1d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 22:33:18 multinode-768483 crio[2713]: time="2024-09-12 22:33:18.874823298Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9e980831-881e-435b-a9b3-d21a67ed2a17 name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 22:33:18 multinode-768483 crio[2713]: time="2024-09-12 22:33:18.874933180Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9e980831-881e-435b-a9b3-d21a67ed2a17 name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 22:33:18 multinode-768483 crio[2713]: time="2024-09-12 22:33:18.875355882Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:930d1eb00fcd21018473d187f1b5bdd6fc27daf70eb0f804df8104804497cc13,PodSandboxId:c6239dc721426f56c075b0663ff81d756798b98533c230ee53fa840a966d74ee,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726180332375080601,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-2jcd4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d7874a33-b52f-451b-8713-bae3c8ec17a8,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aabf50e29ecef4fe750319f2168330d8818b650a87fafdb92a07495f86e5c5ba,PodSandboxId:b603ca5480f2f96558d31545e06b3f26e828758e36e1dbc16728b76e494e0519,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726180298869535596,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tt4f9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa619f45-dfb9-4552-bacb-661f79cde4f6,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c17eceb01c7dbd954d6a09482161b6d885e552639def6c4e60de2348a5c97f4f,PodSandboxId:756201f6d3b7a292f6b5e58b7a1728612c1fb40bc34dcbe5281c9b237fb48e19,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726180298750190891,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-w278g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20e1929b-38ac-48af-8b79-c509239e17b8,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d62b93bbc679b765a589b99394ad8b21d32551806afcf44f52ac8cd35367011e,PodSandboxId:a00b3cfc40e629dfeed3555f1842485747ea42a2181fc0e16b18fdff5f49d392,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726180298677157438,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4deaf81-faf5-43ce-a749-795eb9f371af,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc2e313e025e87acfc620ea53bd1ce094d12d54fc15b58cebe8a8d77908b5759,PodSandboxId:016f43c033e89af0b5c5cefdfb21b38c7c34249bdcf245821ae831b13f27946e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726180298608983619,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b2w9d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 825e8f9f-58fd-496f-a248-70560c4476b8,},Annotations:map[string]string{io.ku
bernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b127cdd9f72b89e5289c96eebf5d02acc071ed5ee9e73360d2757c2c3e35873,PodSandboxId:7449a7ae76b79799552482dfc8ed6b15505c61cddfa6a4090e31fc0301af7ff8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726180294812610738,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-768483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ac9bad0f8b2f7ba888206420e7344f4,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3473f46e4274f525159872fdc01fb5c1a5b9503ad68c9a35390e3220e05ca47,PodSandboxId:ff2bc6d006554860ffb8bb51d6c5bd4d3f419e416ab908666baeb9ae6286a564,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726180294812365435,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-768483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 780a9cbe76741d4b5b1a8e6a72ff3261,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d68e596667e0da00d1983ac59c09742c64f760660d9c346c97fbfe656dfca97,PodSandboxId:04fbbb040cb54cf92ab5fb6659676e87412332a56c38d51f9c8afb8ec85b5208,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726180294779970366,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-768483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20dcda561f841c49b92bb743541540a6,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b83f7247201cd05701c223ccb523fed94c6147f010245105f1f321b4519a6f58,PodSandboxId:9bc9674b70411c4b05a546189ac1765104e8557d558a4198b3c9b46b1f5abc23,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726180294768298184,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-768483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3a2758ab799d806f1782008297e8c44,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbe44370991681155ae8ed22879cba8054fedfb236507195aa20d687e65678d4,PodSandboxId:0d89618e7dc5c0853a0788b683c015ed66169976615655aa786db93523529ad8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726179969974291413,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-2jcd4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d7874a33-b52f-451b-8713-bae3c8ec17a8,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:839ededeb42c7fe56fe0af98d96e4b810825db084871453bdbf1e330f313f11b,PodSandboxId:04fac0aee67c0f950c1294befc487d5122076819b9a0c73b39218dd7976f5b5b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726179910063866210,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-w278g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20e1929b-38ac-48af-8b79-c509239e17b8,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7c17ba1a9c6116b065e200c70dde80d097578700da517f9acb2ca265d842bdd,PodSandboxId:66bc1a0adc24b6cc46938afb36a4f1953051814ffde811bfdd25c1801ee2c186,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726179909132622563,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: f4deaf81-faf5-43ce-a749-795eb9f371af,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:804ba8843e87765fc62adc0cfcd7000f8c06a2c98b9c7396a913ff6a5f930a1c,PodSandboxId:b7e0e7dd96357f54d1bf3f85393ab2e08a53ee317418e2d7ac01a6c2aa0d5b39,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726179897702018256,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b2w9d,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 825e8f9f-58fd-496f-a248-70560c4476b8,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:843730a4cdb964ae88e322e3da7b4037f1e64f5a4948be394cefb651ceb02679,PodSandboxId:1e9212d7a6491394ae383087b13bb8f45ea0ff34d55437ff096ea1cead68e4e0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726179897128914199,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tt4f9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa619f45-dfb9-4552-bacb
-661f79cde4f6,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6505c2c378ff70fae34c9f006c44d5dc7e4ffd9480237e82899d87e8c8161693,PodSandboxId:efb782701ae2bbc77f1bd3e27d7cb2e929d7e3a3c950626976dd5badfa7a512b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726179885916478220,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-768483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 780a9cbe76741d4b5b1a8e6a72ff3261,},Annotations:map[string]string
{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f24ee99de69eefbc84e7df7bc3eea3428a8844074a499bc601e3ded4bb4e9510,PodSandboxId:5846ebd5f084d4fd8b3c0ab569dda506db7e83704dfb53aa044e3d85befc72a8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726179885887874029,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-768483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3a2758ab799d806f1782008297e8c44,},Annotations:map[s
tring]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c489f2027465c018d7eac2e25eeaae7802e0ff1176c5691d3f69ddf1bf4b947b,PodSandboxId:2e90396064c68d066f53ea8eaca7f7b5b0b611cf98763ee1d4626f24d68ea1ed,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726179885865309742,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-768483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20dcda561f841c49b92bb743541540a6,},Annotations:map[string]string{io
.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0aae551b7315d864d4e52b385c6d09427fcdc78d4ec5a0b5e854363d2131943,PodSandboxId:0df074f42ec7d9de8e45f22f1abe16013c51467aab40146a0bf5d5e546aca2ed,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726179885834009473,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-768483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ac9bad0f8b2f7ba888206420e7344f4,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9e980831-881e-435b-a9b3-d21a67ed2a17 name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 22:33:18 multinode-768483 crio[2713]: time="2024-09-12 22:33:18.916333434Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0d04de36-6e5c-4cff-962c-936035204fd5 name=/runtime.v1.RuntimeService/Version
	Sep 12 22:33:18 multinode-768483 crio[2713]: time="2024-09-12 22:33:18.916411211Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0d04de36-6e5c-4cff-962c-936035204fd5 name=/runtime.v1.RuntimeService/Version
	Sep 12 22:33:18 multinode-768483 crio[2713]: time="2024-09-12 22:33:18.917764477Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2b12024c-187f-4663-b449-5a311c244bee name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 22:33:18 multinode-768483 crio[2713]: time="2024-09-12 22:33:18.918182699Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726180398918157648,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2b12024c-187f-4663-b449-5a311c244bee name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 22:33:18 multinode-768483 crio[2713]: time="2024-09-12 22:33:18.919140113Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fcc8a970-0bef-4ff4-ba97-1cd067fa105b name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 22:33:18 multinode-768483 crio[2713]: time="2024-09-12 22:33:18.919197589Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fcc8a970-0bef-4ff4-ba97-1cd067fa105b name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 22:33:18 multinode-768483 crio[2713]: time="2024-09-12 22:33:18.919538855Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:930d1eb00fcd21018473d187f1b5bdd6fc27daf70eb0f804df8104804497cc13,PodSandboxId:c6239dc721426f56c075b0663ff81d756798b98533c230ee53fa840a966d74ee,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726180332375080601,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-2jcd4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d7874a33-b52f-451b-8713-bae3c8ec17a8,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aabf50e29ecef4fe750319f2168330d8818b650a87fafdb92a07495f86e5c5ba,PodSandboxId:b603ca5480f2f96558d31545e06b3f26e828758e36e1dbc16728b76e494e0519,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726180298869535596,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tt4f9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa619f45-dfb9-4552-bacb-661f79cde4f6,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c17eceb01c7dbd954d6a09482161b6d885e552639def6c4e60de2348a5c97f4f,PodSandboxId:756201f6d3b7a292f6b5e58b7a1728612c1fb40bc34dcbe5281c9b237fb48e19,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726180298750190891,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-w278g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20e1929b-38ac-48af-8b79-c509239e17b8,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d62b93bbc679b765a589b99394ad8b21d32551806afcf44f52ac8cd35367011e,PodSandboxId:a00b3cfc40e629dfeed3555f1842485747ea42a2181fc0e16b18fdff5f49d392,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726180298677157438,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4deaf81-faf5-43ce-a749-795eb9f371af,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc2e313e025e87acfc620ea53bd1ce094d12d54fc15b58cebe8a8d77908b5759,PodSandboxId:016f43c033e89af0b5c5cefdfb21b38c7c34249bdcf245821ae831b13f27946e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726180298608983619,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b2w9d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 825e8f9f-58fd-496f-a248-70560c4476b8,},Annotations:map[string]string{io.ku
bernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b127cdd9f72b89e5289c96eebf5d02acc071ed5ee9e73360d2757c2c3e35873,PodSandboxId:7449a7ae76b79799552482dfc8ed6b15505c61cddfa6a4090e31fc0301af7ff8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726180294812610738,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-768483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ac9bad0f8b2f7ba888206420e7344f4,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3473f46e4274f525159872fdc01fb5c1a5b9503ad68c9a35390e3220e05ca47,PodSandboxId:ff2bc6d006554860ffb8bb51d6c5bd4d3f419e416ab908666baeb9ae6286a564,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726180294812365435,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-768483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 780a9cbe76741d4b5b1a8e6a72ff3261,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d68e596667e0da00d1983ac59c09742c64f760660d9c346c97fbfe656dfca97,PodSandboxId:04fbbb040cb54cf92ab5fb6659676e87412332a56c38d51f9c8afb8ec85b5208,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726180294779970366,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-768483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20dcda561f841c49b92bb743541540a6,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b83f7247201cd05701c223ccb523fed94c6147f010245105f1f321b4519a6f58,PodSandboxId:9bc9674b70411c4b05a546189ac1765104e8557d558a4198b3c9b46b1f5abc23,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726180294768298184,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-768483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3a2758ab799d806f1782008297e8c44,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbe44370991681155ae8ed22879cba8054fedfb236507195aa20d687e65678d4,PodSandboxId:0d89618e7dc5c0853a0788b683c015ed66169976615655aa786db93523529ad8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726179969974291413,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-2jcd4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d7874a33-b52f-451b-8713-bae3c8ec17a8,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:839ededeb42c7fe56fe0af98d96e4b810825db084871453bdbf1e330f313f11b,PodSandboxId:04fac0aee67c0f950c1294befc487d5122076819b9a0c73b39218dd7976f5b5b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726179910063866210,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-w278g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20e1929b-38ac-48af-8b79-c509239e17b8,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7c17ba1a9c6116b065e200c70dde80d097578700da517f9acb2ca265d842bdd,PodSandboxId:66bc1a0adc24b6cc46938afb36a4f1953051814ffde811bfdd25c1801ee2c186,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726179909132622563,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: f4deaf81-faf5-43ce-a749-795eb9f371af,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:804ba8843e87765fc62adc0cfcd7000f8c06a2c98b9c7396a913ff6a5f930a1c,PodSandboxId:b7e0e7dd96357f54d1bf3f85393ab2e08a53ee317418e2d7ac01a6c2aa0d5b39,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726179897702018256,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b2w9d,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 825e8f9f-58fd-496f-a248-70560c4476b8,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:843730a4cdb964ae88e322e3da7b4037f1e64f5a4948be394cefb651ceb02679,PodSandboxId:1e9212d7a6491394ae383087b13bb8f45ea0ff34d55437ff096ea1cead68e4e0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726179897128914199,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tt4f9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa619f45-dfb9-4552-bacb
-661f79cde4f6,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6505c2c378ff70fae34c9f006c44d5dc7e4ffd9480237e82899d87e8c8161693,PodSandboxId:efb782701ae2bbc77f1bd3e27d7cb2e929d7e3a3c950626976dd5badfa7a512b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726179885916478220,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-768483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 780a9cbe76741d4b5b1a8e6a72ff3261,},Annotations:map[string]string
{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f24ee99de69eefbc84e7df7bc3eea3428a8844074a499bc601e3ded4bb4e9510,PodSandboxId:5846ebd5f084d4fd8b3c0ab569dda506db7e83704dfb53aa044e3d85befc72a8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726179885887874029,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-768483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3a2758ab799d806f1782008297e8c44,},Annotations:map[s
tring]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c489f2027465c018d7eac2e25eeaae7802e0ff1176c5691d3f69ddf1bf4b947b,PodSandboxId:2e90396064c68d066f53ea8eaca7f7b5b0b611cf98763ee1d4626f24d68ea1ed,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726179885865309742,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-768483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20dcda561f841c49b92bb743541540a6,},Annotations:map[string]string{io
.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0aae551b7315d864d4e52b385c6d09427fcdc78d4ec5a0b5e854363d2131943,PodSandboxId:0df074f42ec7d9de8e45f22f1abe16013c51467aab40146a0bf5d5e546aca2ed,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726179885834009473,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-768483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ac9bad0f8b2f7ba888206420e7344f4,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fcc8a970-0bef-4ff4-ba97-1cd067fa105b name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 22:33:18 multinode-768483 crio[2713]: time="2024-09-12 22:33:18.959118305Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a37851b3-1da4-48c1-926f-612e0e4cd8c0 name=/runtime.v1.RuntimeService/Version
	Sep 12 22:33:18 multinode-768483 crio[2713]: time="2024-09-12 22:33:18.959191519Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a37851b3-1da4-48c1-926f-612e0e4cd8c0 name=/runtime.v1.RuntimeService/Version
	Sep 12 22:33:18 multinode-768483 crio[2713]: time="2024-09-12 22:33:18.960205814Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ecdeb614-5da8-4aa2-b5ef-db977f9f4f7c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 22:33:18 multinode-768483 crio[2713]: time="2024-09-12 22:33:18.960593122Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726180398960573569,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ecdeb614-5da8-4aa2-b5ef-db977f9f4f7c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 22:33:18 multinode-768483 crio[2713]: time="2024-09-12 22:33:18.961249560Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e00777f4-e00b-47cf-b4fd-12b1ef863fa9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 22:33:18 multinode-768483 crio[2713]: time="2024-09-12 22:33:18.961301861Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e00777f4-e00b-47cf-b4fd-12b1ef863fa9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 22:33:18 multinode-768483 crio[2713]: time="2024-09-12 22:33:18.961705672Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:930d1eb00fcd21018473d187f1b5bdd6fc27daf70eb0f804df8104804497cc13,PodSandboxId:c6239dc721426f56c075b0663ff81d756798b98533c230ee53fa840a966d74ee,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726180332375080601,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-2jcd4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d7874a33-b52f-451b-8713-bae3c8ec17a8,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aabf50e29ecef4fe750319f2168330d8818b650a87fafdb92a07495f86e5c5ba,PodSandboxId:b603ca5480f2f96558d31545e06b3f26e828758e36e1dbc16728b76e494e0519,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726180298869535596,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tt4f9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa619f45-dfb9-4552-bacb-661f79cde4f6,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c17eceb01c7dbd954d6a09482161b6d885e552639def6c4e60de2348a5c97f4f,PodSandboxId:756201f6d3b7a292f6b5e58b7a1728612c1fb40bc34dcbe5281c9b237fb48e19,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726180298750190891,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-w278g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20e1929b-38ac-48af-8b79-c509239e17b8,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d62b93bbc679b765a589b99394ad8b21d32551806afcf44f52ac8cd35367011e,PodSandboxId:a00b3cfc40e629dfeed3555f1842485747ea42a2181fc0e16b18fdff5f49d392,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726180298677157438,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4deaf81-faf5-43ce-a749-795eb9f371af,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc2e313e025e87acfc620ea53bd1ce094d12d54fc15b58cebe8a8d77908b5759,PodSandboxId:016f43c033e89af0b5c5cefdfb21b38c7c34249bdcf245821ae831b13f27946e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726180298608983619,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b2w9d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 825e8f9f-58fd-496f-a248-70560c4476b8,},Annotations:map[string]string{io.ku
bernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b127cdd9f72b89e5289c96eebf5d02acc071ed5ee9e73360d2757c2c3e35873,PodSandboxId:7449a7ae76b79799552482dfc8ed6b15505c61cddfa6a4090e31fc0301af7ff8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726180294812610738,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-768483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ac9bad0f8b2f7ba888206420e7344f4,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3473f46e4274f525159872fdc01fb5c1a5b9503ad68c9a35390e3220e05ca47,PodSandboxId:ff2bc6d006554860ffb8bb51d6c5bd4d3f419e416ab908666baeb9ae6286a564,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726180294812365435,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-768483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 780a9cbe76741d4b5b1a8e6a72ff3261,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d68e596667e0da00d1983ac59c09742c64f760660d9c346c97fbfe656dfca97,PodSandboxId:04fbbb040cb54cf92ab5fb6659676e87412332a56c38d51f9c8afb8ec85b5208,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726180294779970366,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-768483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20dcda561f841c49b92bb743541540a6,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b83f7247201cd05701c223ccb523fed94c6147f010245105f1f321b4519a6f58,PodSandboxId:9bc9674b70411c4b05a546189ac1765104e8557d558a4198b3c9b46b1f5abc23,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726180294768298184,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-768483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3a2758ab799d806f1782008297e8c44,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbe44370991681155ae8ed22879cba8054fedfb236507195aa20d687e65678d4,PodSandboxId:0d89618e7dc5c0853a0788b683c015ed66169976615655aa786db93523529ad8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726179969974291413,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-2jcd4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d7874a33-b52f-451b-8713-bae3c8ec17a8,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:839ededeb42c7fe56fe0af98d96e4b810825db084871453bdbf1e330f313f11b,PodSandboxId:04fac0aee67c0f950c1294befc487d5122076819b9a0c73b39218dd7976f5b5b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726179910063866210,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-w278g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20e1929b-38ac-48af-8b79-c509239e17b8,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7c17ba1a9c6116b065e200c70dde80d097578700da517f9acb2ca265d842bdd,PodSandboxId:66bc1a0adc24b6cc46938afb36a4f1953051814ffde811bfdd25c1801ee2c186,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726179909132622563,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: f4deaf81-faf5-43ce-a749-795eb9f371af,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:804ba8843e87765fc62adc0cfcd7000f8c06a2c98b9c7396a913ff6a5f930a1c,PodSandboxId:b7e0e7dd96357f54d1bf3f85393ab2e08a53ee317418e2d7ac01a6c2aa0d5b39,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726179897702018256,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b2w9d,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 825e8f9f-58fd-496f-a248-70560c4476b8,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:843730a4cdb964ae88e322e3da7b4037f1e64f5a4948be394cefb651ceb02679,PodSandboxId:1e9212d7a6491394ae383087b13bb8f45ea0ff34d55437ff096ea1cead68e4e0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726179897128914199,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tt4f9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa619f45-dfb9-4552-bacb
-661f79cde4f6,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6505c2c378ff70fae34c9f006c44d5dc7e4ffd9480237e82899d87e8c8161693,PodSandboxId:efb782701ae2bbc77f1bd3e27d7cb2e929d7e3a3c950626976dd5badfa7a512b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726179885916478220,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-768483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 780a9cbe76741d4b5b1a8e6a72ff3261,},Annotations:map[string]string
{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f24ee99de69eefbc84e7df7bc3eea3428a8844074a499bc601e3ded4bb4e9510,PodSandboxId:5846ebd5f084d4fd8b3c0ab569dda506db7e83704dfb53aa044e3d85befc72a8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726179885887874029,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-768483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3a2758ab799d806f1782008297e8c44,},Annotations:map[s
tring]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c489f2027465c018d7eac2e25eeaae7802e0ff1176c5691d3f69ddf1bf4b947b,PodSandboxId:2e90396064c68d066f53ea8eaca7f7b5b0b611cf98763ee1d4626f24d68ea1ed,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726179885865309742,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-768483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20dcda561f841c49b92bb743541540a6,},Annotations:map[string]string{io
.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0aae551b7315d864d4e52b385c6d09427fcdc78d4ec5a0b5e854363d2131943,PodSandboxId:0df074f42ec7d9de8e45f22f1abe16013c51467aab40146a0bf5d5e546aca2ed,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726179885834009473,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-768483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ac9bad0f8b2f7ba888206420e7344f4,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e00777f4-e00b-47cf-b4fd-12b1ef863fa9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 22:33:19 multinode-768483 crio[2713]: time="2024-09-12 22:33:19.005076708Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c11f19d2-e519-4679-aa14-5d5a61d57343 name=/runtime.v1.RuntimeService/Version
	Sep 12 22:33:19 multinode-768483 crio[2713]: time="2024-09-12 22:33:19.005155104Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c11f19d2-e519-4679-aa14-5d5a61d57343 name=/runtime.v1.RuntimeService/Version
	Sep 12 22:33:19 multinode-768483 crio[2713]: time="2024-09-12 22:33:19.006395300Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=76206546-b807-4932-8c45-f5314aea06bc name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 22:33:19 multinode-768483 crio[2713]: time="2024-09-12 22:33:19.006853532Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726180399006829635,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=76206546-b807-4932-8c45-f5314aea06bc name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 22:33:19 multinode-768483 crio[2713]: time="2024-09-12 22:33:19.007299071Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f91aece1-79dc-44fb-8c4a-1bd8c2e49ecc name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 22:33:19 multinode-768483 crio[2713]: time="2024-09-12 22:33:19.007390664Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f91aece1-79dc-44fb-8c4a-1bd8c2e49ecc name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 22:33:19 multinode-768483 crio[2713]: time="2024-09-12 22:33:19.007846070Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:930d1eb00fcd21018473d187f1b5bdd6fc27daf70eb0f804df8104804497cc13,PodSandboxId:c6239dc721426f56c075b0663ff81d756798b98533c230ee53fa840a966d74ee,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726180332375080601,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-2jcd4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d7874a33-b52f-451b-8713-bae3c8ec17a8,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aabf50e29ecef4fe750319f2168330d8818b650a87fafdb92a07495f86e5c5ba,PodSandboxId:b603ca5480f2f96558d31545e06b3f26e828758e36e1dbc16728b76e494e0519,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726180298869535596,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tt4f9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa619f45-dfb9-4552-bacb-661f79cde4f6,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c17eceb01c7dbd954d6a09482161b6d885e552639def6c4e60de2348a5c97f4f,PodSandboxId:756201f6d3b7a292f6b5e58b7a1728612c1fb40bc34dcbe5281c9b237fb48e19,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726180298750190891,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-w278g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20e1929b-38ac-48af-8b79-c509239e17b8,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d62b93bbc679b765a589b99394ad8b21d32551806afcf44f52ac8cd35367011e,PodSandboxId:a00b3cfc40e629dfeed3555f1842485747ea42a2181fc0e16b18fdff5f49d392,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726180298677157438,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4deaf81-faf5-43ce-a749-795eb9f371af,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc2e313e025e87acfc620ea53bd1ce094d12d54fc15b58cebe8a8d77908b5759,PodSandboxId:016f43c033e89af0b5c5cefdfb21b38c7c34249bdcf245821ae831b13f27946e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726180298608983619,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b2w9d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 825e8f9f-58fd-496f-a248-70560c4476b8,},Annotations:map[string]string{io.ku
bernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b127cdd9f72b89e5289c96eebf5d02acc071ed5ee9e73360d2757c2c3e35873,PodSandboxId:7449a7ae76b79799552482dfc8ed6b15505c61cddfa6a4090e31fc0301af7ff8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726180294812610738,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-768483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ac9bad0f8b2f7ba888206420e7344f4,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3473f46e4274f525159872fdc01fb5c1a5b9503ad68c9a35390e3220e05ca47,PodSandboxId:ff2bc6d006554860ffb8bb51d6c5bd4d3f419e416ab908666baeb9ae6286a564,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726180294812365435,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-768483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 780a9cbe76741d4b5b1a8e6a72ff3261,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d68e596667e0da00d1983ac59c09742c64f760660d9c346c97fbfe656dfca97,PodSandboxId:04fbbb040cb54cf92ab5fb6659676e87412332a56c38d51f9c8afb8ec85b5208,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726180294779970366,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-768483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20dcda561f841c49b92bb743541540a6,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b83f7247201cd05701c223ccb523fed94c6147f010245105f1f321b4519a6f58,PodSandboxId:9bc9674b70411c4b05a546189ac1765104e8557d558a4198b3c9b46b1f5abc23,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726180294768298184,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-768483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3a2758ab799d806f1782008297e8c44,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbe44370991681155ae8ed22879cba8054fedfb236507195aa20d687e65678d4,PodSandboxId:0d89618e7dc5c0853a0788b683c015ed66169976615655aa786db93523529ad8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726179969974291413,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-2jcd4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d7874a33-b52f-451b-8713-bae3c8ec17a8,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:839ededeb42c7fe56fe0af98d96e4b810825db084871453bdbf1e330f313f11b,PodSandboxId:04fac0aee67c0f950c1294befc487d5122076819b9a0c73b39218dd7976f5b5b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726179910063866210,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-w278g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20e1929b-38ac-48af-8b79-c509239e17b8,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7c17ba1a9c6116b065e200c70dde80d097578700da517f9acb2ca265d842bdd,PodSandboxId:66bc1a0adc24b6cc46938afb36a4f1953051814ffde811bfdd25c1801ee2c186,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726179909132622563,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: f4deaf81-faf5-43ce-a749-795eb9f371af,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:804ba8843e87765fc62adc0cfcd7000f8c06a2c98b9c7396a913ff6a5f930a1c,PodSandboxId:b7e0e7dd96357f54d1bf3f85393ab2e08a53ee317418e2d7ac01a6c2aa0d5b39,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726179897702018256,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b2w9d,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 825e8f9f-58fd-496f-a248-70560c4476b8,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:843730a4cdb964ae88e322e3da7b4037f1e64f5a4948be394cefb651ceb02679,PodSandboxId:1e9212d7a6491394ae383087b13bb8f45ea0ff34d55437ff096ea1cead68e4e0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726179897128914199,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tt4f9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa619f45-dfb9-4552-bacb
-661f79cde4f6,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6505c2c378ff70fae34c9f006c44d5dc7e4ffd9480237e82899d87e8c8161693,PodSandboxId:efb782701ae2bbc77f1bd3e27d7cb2e929d7e3a3c950626976dd5badfa7a512b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726179885916478220,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-768483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 780a9cbe76741d4b5b1a8e6a72ff3261,},Annotations:map[string]string
{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f24ee99de69eefbc84e7df7bc3eea3428a8844074a499bc601e3ded4bb4e9510,PodSandboxId:5846ebd5f084d4fd8b3c0ab569dda506db7e83704dfb53aa044e3d85befc72a8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726179885887874029,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-768483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3a2758ab799d806f1782008297e8c44,},Annotations:map[s
tring]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c489f2027465c018d7eac2e25eeaae7802e0ff1176c5691d3f69ddf1bf4b947b,PodSandboxId:2e90396064c68d066f53ea8eaca7f7b5b0b611cf98763ee1d4626f24d68ea1ed,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726179885865309742,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-768483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20dcda561f841c49b92bb743541540a6,},Annotations:map[string]string{io
.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0aae551b7315d864d4e52b385c6d09427fcdc78d4ec5a0b5e854363d2131943,PodSandboxId:0df074f42ec7d9de8e45f22f1abe16013c51467aab40146a0bf5d5e546aca2ed,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726179885834009473,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-768483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ac9bad0f8b2f7ba888206420e7344f4,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f91aece1-79dc-44fb-8c4a-1bd8c2e49ecc name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	930d1eb00fcd2       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   c6239dc721426       busybox-7dff88458-2jcd4
	aabf50e29ecef       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      About a minute ago   Running             kindnet-cni               1                   b603ca5480f2f       kindnet-tt4f9
	c17eceb01c7db       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      About a minute ago   Running             coredns                   1                   756201f6d3b7a       coredns-7c65d6cfc9-w278g
	d62b93bbc679b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   a00b3cfc40e62       storage-provisioner
	fc2e313e025e8       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      About a minute ago   Running             kube-proxy                1                   016f43c033e89       kube-proxy-b2w9d
	2b127cdd9f72b       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      About a minute ago   Running             kube-scheduler            1                   7449a7ae76b79       kube-scheduler-multinode-768483
	f3473f46e4274       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      About a minute ago   Running             etcd                      1                   ff2bc6d006554       etcd-multinode-768483
	5d68e596667e0       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      About a minute ago   Running             kube-apiserver            1                   04fbbb040cb54       kube-apiserver-multinode-768483
	b83f7247201cd       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      About a minute ago   Running             kube-controller-manager   1                   9bc9674b70411       kube-controller-manager-multinode-768483
	dbe4437099168       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   7 minutes ago        Exited              busybox                   0                   0d89618e7dc5c       busybox-7dff88458-2jcd4
	839ededeb42c7       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      8 minutes ago        Exited              coredns                   0                   04fac0aee67c0       coredns-7c65d6cfc9-w278g
	e7c17ba1a9c61       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      8 minutes ago        Exited              storage-provisioner       0                   66bc1a0adc24b       storage-provisioner
	804ba8843e877       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      8 minutes ago        Exited              kube-proxy                0                   b7e0e7dd96357       kube-proxy-b2w9d
	843730a4cdb96       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      8 minutes ago        Exited              kindnet-cni               0                   1e9212d7a6491       kindnet-tt4f9
	6505c2c378ff7       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      8 minutes ago        Exited              etcd                      0                   efb782701ae2b       etcd-multinode-768483
	f24ee99de69ee       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      8 minutes ago        Exited              kube-controller-manager   0                   5846ebd5f084d       kube-controller-manager-multinode-768483
	c489f2027465c       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      8 minutes ago        Exited              kube-apiserver            0                   2e90396064c68       kube-apiserver-multinode-768483
	f0aae551b7315       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      8 minutes ago        Exited              kube-scheduler            0                   0df074f42ec7d       kube-scheduler-multinode-768483
	
	
	==> coredns [839ededeb42c7fe56fe0af98d96e4b810825db084871453bdbf1e330f313f11b] <==
	[INFO] 10.244.1.2:59743 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001640569s
	[INFO] 10.244.1.2:32830 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000105275s
	[INFO] 10.244.1.2:41988 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000064553s
	[INFO] 10.244.1.2:55407 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001061251s
	[INFO] 10.244.1.2:48895 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000123349s
	[INFO] 10.244.1.2:50858 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000062737s
	[INFO] 10.244.1.2:43375 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000117987s
	[INFO] 10.244.0.3:48213 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000087894s
	[INFO] 10.244.0.3:34262 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000064363s
	[INFO] 10.244.0.3:35462 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000055507s
	[INFO] 10.244.0.3:42971 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000038951s
	[INFO] 10.244.1.2:41497 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115935s
	[INFO] 10.244.1.2:48860 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000079415s
	[INFO] 10.244.1.2:46246 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000067749s
	[INFO] 10.244.1.2:45271 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000113036s
	[INFO] 10.244.0.3:45433 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000201166s
	[INFO] 10.244.0.3:50895 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00020706s
	[INFO] 10.244.0.3:41793 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000193207s
	[INFO] 10.244.0.3:57569 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00009896s
	[INFO] 10.244.1.2:55627 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000318814s
	[INFO] 10.244.1.2:55647 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000118089s
	[INFO] 10.244.1.2:45492 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000235538s
	[INFO] 10.244.1.2:50200 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000128568s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [c17eceb01c7dbd954d6a09482161b6d885e552639def6c4e60de2348a5c97f4f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:48371 - 29527 "HINFO IN 8955004022018942478.7541837519683124185. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009838992s
	
	
	==> describe nodes <==
	Name:               multinode-768483
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-768483
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f6bc674a17941874d4e5b792b09c1791d30622b8
	                    minikube.k8s.io/name=multinode-768483
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_12T22_24_52_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 12 Sep 2024 22:24:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-768483
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 12 Sep 2024 22:33:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 12 Sep 2024 22:31:37 +0000   Thu, 12 Sep 2024 22:24:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 12 Sep 2024 22:31:37 +0000   Thu, 12 Sep 2024 22:24:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 12 Sep 2024 22:31:37 +0000   Thu, 12 Sep 2024 22:24:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 12 Sep 2024 22:31:37 +0000   Thu, 12 Sep 2024 22:25:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.28
	  Hostname:    multinode-768483
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3335155d761542d493be0a366578b8a5
	  System UUID:                3335155d-7615-42d4-93be-0a366578b8a5
	  Boot ID:                    fb2d6d38-d168-4770-8b0c-5984543b5d6d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-2jcd4                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m13s
	  kube-system                 coredns-7c65d6cfc9-w278g                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m23s
	  kube-system                 etcd-multinode-768483                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m28s
	  kube-system                 kindnet-tt4f9                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m23s
	  kube-system                 kube-apiserver-multinode-768483             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m28s
	  kube-system                 kube-controller-manager-multinode-768483    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m28s
	  kube-system                 kube-proxy-b2w9d                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m23s
	  kube-system                 kube-scheduler-multinode-768483             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m28s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 8m21s                kube-proxy       
	  Normal  Starting                 100s                 kube-proxy       
	  Normal  NodeHasSufficientPID     8m28s                kubelet          Node multinode-768483 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m28s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m28s                kubelet          Node multinode-768483 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m28s                kubelet          Node multinode-768483 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 8m28s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           8m24s                node-controller  Node multinode-768483 event: Registered Node multinode-768483 in Controller
	  Normal  NodeReady                8m11s                kubelet          Node multinode-768483 status is now: NodeReady
	  Normal  Starting                 105s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  105s (x8 over 105s)  kubelet          Node multinode-768483 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    105s (x8 over 105s)  kubelet          Node multinode-768483 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     105s (x7 over 105s)  kubelet          Node multinode-768483 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  105s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           98s                  node-controller  Node multinode-768483 event: Registered Node multinode-768483 in Controller
	
	
	Name:               multinode-768483-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-768483-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f6bc674a17941874d4e5b792b09c1791d30622b8
	                    minikube.k8s.io/name=multinode-768483
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_12T22_32_19_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 12 Sep 2024 22:32:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-768483-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 12 Sep 2024 22:33:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 12 Sep 2024 22:32:49 +0000   Thu, 12 Sep 2024 22:32:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 12 Sep 2024 22:32:49 +0000   Thu, 12 Sep 2024 22:32:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 12 Sep 2024 22:32:49 +0000   Thu, 12 Sep 2024 22:32:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 12 Sep 2024 22:32:49 +0000   Thu, 12 Sep 2024 22:32:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.230
	  Hostname:    multinode-768483-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 603d35bbfc7d4cebbc8046ff6b53473e
	  System UUID:                603d35bb-fc7d-4ceb-bc80-46ff6b53473e
	  Boot ID:                    dffccd93-f8ba-4a53-a12f-fc6950a8098a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-l5ssl    0 (0%)        0 (0%)      0 (0%)           0 (0%)         65s
	  kube-system                 kindnet-x4s75              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m36s
	  kube-system                 kube-proxy-75v26           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m36s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 7m30s                  kube-proxy  
	  Normal  Starting                 55s                    kube-proxy  
	  Normal  NodeHasSufficientMemory  7m36s (x2 over 7m36s)  kubelet     Node multinode-768483-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m36s (x2 over 7m36s)  kubelet     Node multinode-768483-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m36s (x2 over 7m36s)  kubelet     Node multinode-768483-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m36s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                7m15s                  kubelet     Node multinode-768483-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  61s (x2 over 61s)      kubelet     Node multinode-768483-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    61s (x2 over 61s)      kubelet     Node multinode-768483-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     61s (x2 over 61s)      kubelet     Node multinode-768483-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  61s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                41s                    kubelet     Node multinode-768483-m02 status is now: NodeReady
	
	
	Name:               multinode-768483-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-768483-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f6bc674a17941874d4e5b792b09c1791d30622b8
	                    minikube.k8s.io/name=multinode-768483
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_12T22_32_58_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 12 Sep 2024 22:32:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-768483-m03
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 12 Sep 2024 22:33:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 12 Sep 2024 22:33:16 +0000   Thu, 12 Sep 2024 22:32:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 12 Sep 2024 22:33:16 +0000   Thu, 12 Sep 2024 22:32:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 12 Sep 2024 22:33:16 +0000   Thu, 12 Sep 2024 22:32:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 12 Sep 2024 22:33:16 +0000   Thu, 12 Sep 2024 22:33:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.92
	  Hostname:    multinode-768483-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 68ac3806ab2f4002af12786fe3ed0428
	  System UUID:                68ac3806-ab2f-4002-af12-786fe3ed0428
	  Boot ID:                    8ddd89e9-962d-48f3-90bf-494a525517db
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-zmnq6       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m41s
	  kube-system                 kube-proxy-2p9pp    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m41s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m48s                  kube-proxy       
	  Normal  Starting                 6m36s                  kube-proxy       
	  Normal  Starting                 18s                    kube-proxy       
	  Normal  NodeAllocatableEnforced  6m42s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m41s (x2 over 6m42s)  kubelet          Node multinode-768483-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m41s (x2 over 6m42s)  kubelet          Node multinode-768483-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m41s (x2 over 6m42s)  kubelet          Node multinode-768483-m03 status is now: NodeHasSufficientPID
	  Normal  NodeReady                6m22s                  kubelet          Node multinode-768483-m03 status is now: NodeReady
	  Normal  NodeHasSufficientPID     5m53s (x2 over 5m53s)  kubelet          Node multinode-768483-m03 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    5m53s (x2 over 5m53s)  kubelet          Node multinode-768483-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  5m53s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m53s (x2 over 5m53s)  kubelet          Node multinode-768483-m03 status is now: NodeHasSufficientMemory
	  Normal  Starting                 5m53s                  kubelet          Starting kubelet.
	  Normal  NodeReady                5m34s                  kubelet          Node multinode-768483-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  22s (x2 over 22s)      kubelet          Node multinode-768483-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x2 over 22s)      kubelet          Node multinode-768483-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x2 over 22s)      kubelet          Node multinode-768483-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           18s                    node-controller  Node multinode-768483-m03 event: Registered Node multinode-768483-m03 in Controller
	  Normal  NodeReady                3s                     kubelet          Node multinode-768483-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.053585] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.180018] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.117938] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.263596] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[  +3.773036] systemd-fstab-generator[744]: Ignoring "noauto" option for root device
	[  +4.385091] systemd-fstab-generator[880]: Ignoring "noauto" option for root device
	[  +0.057611] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.980668] systemd-fstab-generator[1213]: Ignoring "noauto" option for root device
	[  +0.086782] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.076054] systemd-fstab-generator[1320]: Ignoring "noauto" option for root device
	[  +0.128788] kauditd_printk_skb: 18 callbacks suppressed
	[Sep12 22:25] kauditd_printk_skb: 69 callbacks suppressed
	[Sep12 22:26] kauditd_printk_skb: 14 callbacks suppressed
	[Sep12 22:31] systemd-fstab-generator[2636]: Ignoring "noauto" option for root device
	[  +0.159317] systemd-fstab-generator[2649]: Ignoring "noauto" option for root device
	[  +0.174713] systemd-fstab-generator[2664]: Ignoring "noauto" option for root device
	[  +0.139241] systemd-fstab-generator[2676]: Ignoring "noauto" option for root device
	[  +0.268406] systemd-fstab-generator[2704]: Ignoring "noauto" option for root device
	[  +8.777049] systemd-fstab-generator[2800]: Ignoring "noauto" option for root device
	[  +0.081529] kauditd_printk_skb: 100 callbacks suppressed
	[  +1.699709] systemd-fstab-generator[2923]: Ignoring "noauto" option for root device
	[  +4.623363] kauditd_printk_skb: 74 callbacks suppressed
	[  +7.193181] kauditd_printk_skb: 34 callbacks suppressed
	[  +8.724662] systemd-fstab-generator[3776]: Ignoring "noauto" option for root device
	[Sep12 22:32] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [6505c2c378ff70fae34c9f006c44d5dc7e4ffd9480237e82899d87e8c8161693] <==
	{"level":"warn","ts":"2024-09-12T22:25:45.765935Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"281.058737ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-768483-m02\" ","response":"range_response_count:1 size:2894"}
	{"level":"info","ts":"2024-09-12T22:25:45.766018Z","caller":"traceutil/trace.go:171","msg":"trace[161465610] transaction","detail":"{read_only:false; response_revision:465; number_of_response:1; }","duration":"437.665438ms","start":"2024-09-12T22:25:45.328339Z","end":"2024-09-12T22:25:45.766005Z","steps":["trace[161465610] 'process raft request'  (duration: 179.937854ms)","trace[161465610] 'compare'  (duration: 256.629504ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-12T22:25:45.765736Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"332.460896ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kindnet-x4s75\" ","response":"range_response_count:1 size:3703"}
	{"level":"info","ts":"2024-09-12T22:25:45.768808Z","caller":"traceutil/trace.go:171","msg":"trace[1693125820] range","detail":"{range_begin:/registry/pods/kube-system/kindnet-x4s75; range_end:; response_count:1; response_revision:465; }","duration":"335.517706ms","start":"2024-09-12T22:25:45.433265Z","end":"2024-09-12T22:25:45.768783Z","steps":["trace[1693125820] 'agreement among raft nodes before linearized reading'  (duration: 332.434065ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-12T22:25:45.770730Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-12T22:25:45.433234Z","time spent":"337.47928ms","remote":"127.0.0.1:57390","response type":"/etcdserverpb.KV/Range","request count":0,"request size":42,"response count":1,"response size":3726,"request content":"key:\"/registry/pods/kube-system/kindnet-x4s75\" "}
	{"level":"info","ts":"2024-09-12T22:25:45.768889Z","caller":"traceutil/trace.go:171","msg":"trace[2023500463] range","detail":"{range_begin:/registry/minions/multinode-768483-m02; range_end:; response_count:1; response_revision:465; }","duration":"284.007218ms","start":"2024-09-12T22:25:45.484873Z","end":"2024-09-12T22:25:45.768881Z","steps":["trace[2023500463] 'agreement among raft nodes before linearized reading'  (duration: 281.045215ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-12T22:25:45.769040Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-12T22:25:45.328321Z","time spent":"440.677208ms","remote":"127.0.0.1:57382","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":2879,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/minions/multinode-768483-m02\" mod_revision:463 > success:<request_put:<key:\"/registry/minions/multinode-768483-m02\" value_size:2833 >> failure:<request_range:<key:\"/registry/minions/multinode-768483-m02\" > >"}
	{"level":"info","ts":"2024-09-12T22:25:45.910075Z","caller":"traceutil/trace.go:171","msg":"trace[1023654244] transaction","detail":"{read_only:false; response_revision:466; number_of_response:1; }","duration":"134.941312ms","start":"2024-09-12T22:25:45.775114Z","end":"2024-09-12T22:25:45.910055Z","steps":["trace[1023654244] 'process raft request'  (duration: 134.465985ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-12T22:25:45.910485Z","caller":"traceutil/trace.go:171","msg":"trace[1478652559] transaction","detail":"{read_only:false; response_revision:467; number_of_response:1; }","duration":"128.33284ms","start":"2024-09-12T22:25:45.782142Z","end":"2024-09-12T22:25:45.910475Z","steps":["trace[1478652559] 'process raft request'  (duration: 127.610679ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-12T22:25:52.689286Z","caller":"traceutil/trace.go:171","msg":"trace[1220903943] transaction","detail":"{read_only:false; response_revision:482; number_of_response:1; }","duration":"105.000507ms","start":"2024-09-12T22:25:52.584268Z","end":"2024-09-12T22:25:52.689269Z","steps":["trace[1220903943] 'process raft request'  (duration: 104.902744ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-12T22:26:37.836533Z","caller":"traceutil/trace.go:171","msg":"trace[370385528] linearizableReadLoop","detail":"{readStateIndex:605; appliedIndex:604; }","duration":"127.60833ms","start":"2024-09-12T22:26:37.708897Z","end":"2024-09-12T22:26:37.836505Z","steps":["trace[370385528] 'read index received'  (duration: 44.20338ms)","trace[370385528] 'applied index is now lower than readState.Index'  (duration: 83.404334ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-12T22:26:37.836723Z","caller":"traceutil/trace.go:171","msg":"trace[1425116809] transaction","detail":"{read_only:false; response_revision:572; number_of_response:1; }","duration":"161.672594ms","start":"2024-09-12T22:26:37.675038Z","end":"2024-09-12T22:26:37.836711Z","steps":["trace[1425116809] 'process raft request'  (duration: 78.131529ms)","trace[1425116809] 'compare'  (duration: 83.220543ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-12T22:26:37.836932Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"127.999222ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-768483-m03\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-12T22:26:37.837006Z","caller":"traceutil/trace.go:171","msg":"trace[1703510972] range","detail":"{range_begin:/registry/minions/multinode-768483-m03; range_end:; response_count:0; response_revision:572; }","duration":"128.106787ms","start":"2024-09-12T22:26:37.708893Z","end":"2024-09-12T22:26:37.836999Z","steps":["trace[1703510972] 'agreement among raft nodes before linearized reading'  (duration: 127.954145ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-12T22:27:34.672142Z","caller":"traceutil/trace.go:171","msg":"trace[1773283161] transaction","detail":"{read_only:false; response_revision:706; number_of_response:1; }","duration":"200.403014ms","start":"2024-09-12T22:27:34.471712Z","end":"2024-09-12T22:27:34.672115Z","steps":["trace[1773283161] 'process raft request'  (duration: 199.977275ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-12T22:29:51.439017Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-12T22:29:51.439121Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"multinode-768483","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.28:2380"],"advertise-client-urls":["https://192.168.39.28:2379"]}
	{"level":"warn","ts":"2024-09-12T22:29:51.439205Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-12T22:29:51.439333Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-12T22:29:51.523565Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.28:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-12T22:29:51.523639Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.28:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-12T22:29:51.523798Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"2fa11d851b98b853","current-leader-member-id":"2fa11d851b98b853"}
	{"level":"info","ts":"2024-09-12T22:29:51.526265Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.28:2380"}
	{"level":"info","ts":"2024-09-12T22:29:51.526365Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.28:2380"}
	{"level":"info","ts":"2024-09-12T22:29:51.526389Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"multinode-768483","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.28:2380"],"advertise-client-urls":["https://192.168.39.28:2379"]}
	
	
	==> etcd [f3473f46e4274f525159872fdc01fb5c1a5b9503ad68c9a35390e3220e05ca47] <==
	{"level":"info","ts":"2024-09-12T22:31:35.199039Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-12T22:31:35.199693Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-12T22:31:35.185922Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2fa11d851b98b853 switched to configuration voters=(3432056848563877971)"}
	{"level":"info","ts":"2024-09-12T22:31:35.185091Z","caller":"etcdserver/server.go:767","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-09-12T22:31:35.217072Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"8fc02aca6c76ee1e","local-member-id":"2fa11d851b98b853","added-peer-id":"2fa11d851b98b853","added-peer-peer-urls":["https://192.168.39.28:2380"]}
	{"level":"info","ts":"2024-09-12T22:31:35.217486Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"8fc02aca6c76ee1e","local-member-id":"2fa11d851b98b853","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-12T22:31:35.226922Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-12T22:31:36.245711Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2fa11d851b98b853 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-12T22:31:36.245808Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2fa11d851b98b853 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-12T22:31:36.245868Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2fa11d851b98b853 received MsgPreVoteResp from 2fa11d851b98b853 at term 2"}
	{"level":"info","ts":"2024-09-12T22:31:36.245913Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2fa11d851b98b853 became candidate at term 3"}
	{"level":"info","ts":"2024-09-12T22:31:36.245938Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2fa11d851b98b853 received MsgVoteResp from 2fa11d851b98b853 at term 3"}
	{"level":"info","ts":"2024-09-12T22:31:36.245972Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2fa11d851b98b853 became leader at term 3"}
	{"level":"info","ts":"2024-09-12T22:31:36.245997Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 2fa11d851b98b853 elected leader 2fa11d851b98b853 at term 3"}
	{"level":"info","ts":"2024-09-12T22:31:36.250719Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"2fa11d851b98b853","local-member-attributes":"{Name:multinode-768483 ClientURLs:[https://192.168.39.28:2379]}","request-path":"/0/members/2fa11d851b98b853/attributes","cluster-id":"8fc02aca6c76ee1e","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-12T22:31:36.251433Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-12T22:31:36.251700Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-12T22:31:36.256391Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-12T22:31:36.259442Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-12T22:31:36.264332Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-12T22:31:36.267349Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.28:2379"}
	{"level":"info","ts":"2024-09-12T22:31:36.270689Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-12T22:31:36.270729Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-12T22:33:00.769928Z","caller":"traceutil/trace.go:171","msg":"trace[69052628] transaction","detail":"{read_only:false; response_revision:1128; number_of_response:1; }","duration":"107.661581ms","start":"2024-09-12T22:33:00.662232Z","end":"2024-09-12T22:33:00.769893Z","steps":["trace[69052628] 'process raft request'  (duration: 107.543545ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-12T22:33:04.444060Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"177.681872ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13282120153224572176 > lease_revoke:<id:385391e85d01ec5e>","response":"size:28"}
	
	
	==> kernel <==
	 22:33:19 up 9 min,  0 users,  load average: 0.14, 0.18, 0.10
	Linux multinode-768483 5.10.207 #1 SMP Thu Sep 12 19:03:33 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [843730a4cdb964ae88e322e3da7b4037f1e64f5a4948be394cefb651ceb02679] <==
	I0912 22:29:08.143424       1 main.go:322] Node multinode-768483-m03 has CIDR [10.244.3.0/24] 
	I0912 22:29:18.141184       1 main.go:295] Handling node with IPs: map[192.168.39.230:{}]
	I0912 22:29:18.141230       1 main.go:322] Node multinode-768483-m02 has CIDR [10.244.1.0/24] 
	I0912 22:29:18.141416       1 main.go:295] Handling node with IPs: map[192.168.39.92:{}]
	I0912 22:29:18.141442       1 main.go:322] Node multinode-768483-m03 has CIDR [10.244.3.0/24] 
	I0912 22:29:18.141511       1 main.go:295] Handling node with IPs: map[192.168.39.28:{}]
	I0912 22:29:18.141532       1 main.go:299] handling current node
	I0912 22:29:28.142891       1 main.go:295] Handling node with IPs: map[192.168.39.92:{}]
	I0912 22:29:28.143056       1 main.go:322] Node multinode-768483-m03 has CIDR [10.244.3.0/24] 
	I0912 22:29:28.143223       1 main.go:295] Handling node with IPs: map[192.168.39.28:{}]
	I0912 22:29:28.143247       1 main.go:299] handling current node
	I0912 22:29:28.143279       1 main.go:295] Handling node with IPs: map[192.168.39.230:{}]
	I0912 22:29:28.143296       1 main.go:322] Node multinode-768483-m02 has CIDR [10.244.1.0/24] 
	I0912 22:29:38.139432       1 main.go:295] Handling node with IPs: map[192.168.39.28:{}]
	I0912 22:29:38.139569       1 main.go:299] handling current node
	I0912 22:29:38.139601       1 main.go:295] Handling node with IPs: map[192.168.39.230:{}]
	I0912 22:29:38.139620       1 main.go:322] Node multinode-768483-m02 has CIDR [10.244.1.0/24] 
	I0912 22:29:38.139875       1 main.go:295] Handling node with IPs: map[192.168.39.92:{}]
	I0912 22:29:38.139923       1 main.go:322] Node multinode-768483-m03 has CIDR [10.244.3.0/24] 
	I0912 22:29:48.142840       1 main.go:295] Handling node with IPs: map[192.168.39.28:{}]
	I0912 22:29:48.142910       1 main.go:299] handling current node
	I0912 22:29:48.142941       1 main.go:295] Handling node with IPs: map[192.168.39.230:{}]
	I0912 22:29:48.142947       1 main.go:322] Node multinode-768483-m02 has CIDR [10.244.1.0/24] 
	I0912 22:29:48.143077       1 main.go:295] Handling node with IPs: map[192.168.39.92:{}]
	I0912 22:29:48.143082       1 main.go:322] Node multinode-768483-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [aabf50e29ecef4fe750319f2168330d8818b650a87fafdb92a07495f86e5c5ba] <==
	I0912 22:32:29.738102       1 main.go:322] Node multinode-768483-m02 has CIDR [10.244.1.0/24] 
	I0912 22:32:39.738587       1 main.go:295] Handling node with IPs: map[192.168.39.92:{}]
	I0912 22:32:39.738715       1 main.go:322] Node multinode-768483-m03 has CIDR [10.244.3.0/24] 
	I0912 22:32:39.738899       1 main.go:295] Handling node with IPs: map[192.168.39.28:{}]
	I0912 22:32:39.738939       1 main.go:299] handling current node
	I0912 22:32:39.738984       1 main.go:295] Handling node with IPs: map[192.168.39.230:{}]
	I0912 22:32:39.738995       1 main.go:322] Node multinode-768483-m02 has CIDR [10.244.1.0/24] 
	I0912 22:32:49.737919       1 main.go:295] Handling node with IPs: map[192.168.39.230:{}]
	I0912 22:32:49.737996       1 main.go:322] Node multinode-768483-m02 has CIDR [10.244.1.0/24] 
	I0912 22:32:49.738234       1 main.go:295] Handling node with IPs: map[192.168.39.92:{}]
	I0912 22:32:49.738251       1 main.go:322] Node multinode-768483-m03 has CIDR [10.244.3.0/24] 
	I0912 22:32:49.738404       1 main.go:295] Handling node with IPs: map[192.168.39.28:{}]
	I0912 22:32:49.738434       1 main.go:299] handling current node
	I0912 22:32:59.737830       1 main.go:295] Handling node with IPs: map[192.168.39.28:{}]
	I0912 22:32:59.737971       1 main.go:299] handling current node
	I0912 22:32:59.738008       1 main.go:295] Handling node with IPs: map[192.168.39.230:{}]
	I0912 22:32:59.738072       1 main.go:322] Node multinode-768483-m02 has CIDR [10.244.1.0/24] 
	I0912 22:32:59.738260       1 main.go:295] Handling node with IPs: map[192.168.39.92:{}]
	I0912 22:32:59.738311       1 main.go:322] Node multinode-768483-m03 has CIDR [10.244.2.0/24] 
	I0912 22:33:09.737774       1 main.go:295] Handling node with IPs: map[192.168.39.28:{}]
	I0912 22:33:09.737951       1 main.go:299] handling current node
	I0912 22:33:09.738026       1 main.go:295] Handling node with IPs: map[192.168.39.230:{}]
	I0912 22:33:09.738051       1 main.go:322] Node multinode-768483-m02 has CIDR [10.244.1.0/24] 
	I0912 22:33:09.738221       1 main.go:295] Handling node with IPs: map[192.168.39.92:{}]
	I0912 22:33:09.738274       1 main.go:322] Node multinode-768483-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [5d68e596667e0da00d1983ac59c09742c64f760660d9c346c97fbfe656dfca97] <==
	I0912 22:31:37.825526       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0912 22:31:37.831960       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0912 22:31:37.831988       1 policy_source.go:224] refreshing policies
	I0912 22:31:37.840528       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0912 22:31:37.840745       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0912 22:31:37.845040       1 shared_informer.go:320] Caches are synced for configmaps
	I0912 22:31:37.845779       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0912 22:31:37.845857       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0912 22:31:37.846443       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0912 22:31:37.846497       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0912 22:31:37.855552       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0912 22:31:37.863921       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0912 22:31:37.864815       1 aggregator.go:171] initial CRD sync complete...
	I0912 22:31:37.864844       1 autoregister_controller.go:144] Starting autoregister controller
	I0912 22:31:37.864851       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0912 22:31:37.864857       1 cache.go:39] Caches are synced for autoregister controller
	I0912 22:31:37.909215       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0912 22:31:38.754369       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0912 22:31:40.168129       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0912 22:31:40.283498       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0912 22:31:40.300972       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0912 22:31:40.381219       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0912 22:31:40.389987       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0912 22:31:41.356884       1 controller.go:615] quota admission added evaluator for: endpoints
	I0912 22:31:41.550822       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [c489f2027465c018d7eac2e25eeaae7802e0ff1176c5691d3f69ddf1bf4b947b] <==
	W0912 22:29:51.460816       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 22:29:51.460901       1 logging.go:55] [core] [Channel #6 SubChannel #7]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 22:29:51.460956       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 22:29:51.461334       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 22:29:51.461447       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 22:29:51.461797       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 22:29:51.461968       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 22:29:51.463549       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 22:29:51.463729       1 logging.go:55] [core] [Channel #10 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 22:29:51.463933       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 22:29:51.464002       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 22:29:51.464060       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 22:29:51.464118       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 22:29:51.464174       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 22:29:51.464223       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 22:29:51.464270       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 22:29:51.464324       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 22:29:51.464438       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 22:29:51.466118       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 22:29:51.466465       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 22:29:51.466870       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 22:29:51.467600       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 22:29:51.471952       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 22:29:51.472059       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 22:29:51.472131       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [b83f7247201cd05701c223ccb523fed94c6147f010245105f1f321b4519a6f58] <==
	I0912 22:32:38.267944       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-768483-m02"
	I0912 22:32:38.283810       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-768483-m02"
	I0912 22:32:38.289987       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="99.838µs"
	I0912 22:32:38.303501       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="49.628µs"
	I0912 22:32:41.433168       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-768483-m02"
	I0912 22:32:42.222846       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="10.52763ms"
	I0912 22:32:42.224809       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="47.433µs"
	I0912 22:32:49.480008       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-768483-m02"
	I0912 22:32:56.028736       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-768483-m03"
	I0912 22:32:56.045887       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-768483-m03"
	I0912 22:32:56.287464       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-768483-m03"
	I0912 22:32:56.288089       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-768483-m02"
	I0912 22:32:57.435012       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-768483-m03\" does not exist"
	I0912 22:32:57.435118       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-768483-m02"
	I0912 22:32:57.461434       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-768483-m03" podCIDRs=["10.244.2.0/24"]
	I0912 22:32:57.461472       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-768483-m03"
	I0912 22:32:57.461495       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-768483-m03"
	I0912 22:32:57.826157       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-768483-m03"
	I0912 22:32:58.181798       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-768483-m03"
	I0912 22:33:01.539009       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-768483-m03"
	I0912 22:33:07.819145       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-768483-m03"
	I0912 22:33:16.114723       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-768483-m03"
	I0912 22:33:16.114839       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-768483-m02"
	I0912 22:33:16.125086       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-768483-m03"
	I0912 22:33:16.451322       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-768483-m03"
	
	
	==> kube-controller-manager [f24ee99de69eefbc84e7df7bc3eea3428a8844074a499bc601e3ded4bb4e9510] <==
	I0912 22:27:25.624163       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-768483-m03"
	I0912 22:27:25.624284       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-768483-m02"
	I0912 22:27:26.658241       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-768483-m03\" does not exist"
	I0912 22:27:26.659052       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-768483-m02"
	I0912 22:27:26.674567       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-768483-m03" podCIDRs=["10.244.3.0/24"]
	I0912 22:27:26.674604       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-768483-m03"
	I0912 22:27:26.675851       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-768483-m03"
	I0912 22:27:26.682583       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-768483-m03"
	I0912 22:27:27.137102       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-768483-m03"
	I0912 22:27:27.487838       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-768483-m03"
	I0912 22:27:30.438446       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-768483-m03"
	I0912 22:27:36.906360       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-768483-m03"
	I0912 22:27:46.008915       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-768483-m03"
	I0912 22:27:46.009103       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-768483-m03"
	I0912 22:27:46.021511       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-768483-m03"
	I0912 22:27:50.374248       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-768483-m03"
	I0912 22:28:30.390516       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-768483-m03"
	I0912 22:28:30.392299       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-768483-m02"
	I0912 22:28:30.394553       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-768483-m02"
	I0912 22:28:30.416355       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-768483-m02"
	I0912 22:28:30.416538       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-768483-m03"
	I0912 22:28:30.462398       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="20.563805ms"
	I0912 22:28:30.462471       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="34.841µs"
	I0912 22:28:35.538583       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-768483-m02"
	I0912 22:28:45.615696       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-768483-m03"
	
	
	==> kube-proxy [804ba8843e87765fc62adc0cfcd7000f8c06a2c98b9c7396a913ff6a5f930a1c] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0912 22:24:57.879896       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0912 22:24:57.895620       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.28"]
	E0912 22:24:57.895785       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0912 22:24:57.926877       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0912 22:24:57.926923       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0912 22:24:57.926946       1 server_linux.go:169] "Using iptables Proxier"
	I0912 22:24:57.929564       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0912 22:24:57.929955       1 server.go:483] "Version info" version="v1.31.1"
	I0912 22:24:57.930012       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0912 22:24:57.931358       1 config.go:199] "Starting service config controller"
	I0912 22:24:57.931417       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0912 22:24:57.931460       1 config.go:105] "Starting endpoint slice config controller"
	I0912 22:24:57.931477       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0912 22:24:57.936383       1 config.go:328] "Starting node config controller"
	I0912 22:24:57.936409       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0912 22:24:58.032486       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0912 22:24:58.032622       1 shared_informer.go:320] Caches are synced for service config
	I0912 22:24:58.036936       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [fc2e313e025e87acfc620ea53bd1ce094d12d54fc15b58cebe8a8d77908b5759] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0912 22:31:38.987278       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0912 22:31:39.008402       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.28"]
	E0912 22:31:39.008488       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0912 22:31:39.072632       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0912 22:31:39.072761       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0912 22:31:39.072789       1 server_linux.go:169] "Using iptables Proxier"
	I0912 22:31:39.076346       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0912 22:31:39.076617       1 server.go:483] "Version info" version="v1.31.1"
	I0912 22:31:39.076694       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0912 22:31:39.078392       1 config.go:199] "Starting service config controller"
	I0912 22:31:39.078438       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0912 22:31:39.078479       1 config.go:105] "Starting endpoint slice config controller"
	I0912 22:31:39.078484       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0912 22:31:39.079359       1 config.go:328] "Starting node config controller"
	I0912 22:31:39.079383       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0912 22:31:39.179300       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0912 22:31:39.179364       1 shared_informer.go:320] Caches are synced for service config
	I0912 22:31:39.179616       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [2b127cdd9f72b89e5289c96eebf5d02acc071ed5ee9e73360d2757c2c3e35873] <==
	I0912 22:31:36.026564       1 serving.go:386] Generated self-signed cert in-memory
	W0912 22:31:37.806879       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0912 22:31:37.806960       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0912 22:31:37.806970       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0912 22:31:37.806985       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0912 22:31:37.853581       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0912 22:31:37.853623       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0912 22:31:37.857409       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0912 22:31:37.857570       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0912 22:31:37.857804       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0912 22:31:37.857902       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0912 22:31:37.958191       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [f0aae551b7315d864d4e52b385c6d09427fcdc78d4ec5a0b5e854363d2131943] <==
	E0912 22:24:48.620854       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0912 22:24:48.620999       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0912 22:24:48.621036       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0912 22:24:48.621094       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0912 22:24:48.621128       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0912 22:24:48.621112       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0912 22:24:48.621202       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0912 22:24:49.543091       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0912 22:24:49.543138       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0912 22:24:49.654323       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0912 22:24:49.654383       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0912 22:24:49.660413       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0912 22:24:49.660456       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0912 22:24:49.778825       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0912 22:24:49.778881       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0912 22:24:49.876869       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0912 22:24:49.876926       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0912 22:24:49.879275       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0912 22:24:49.882042       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0912 22:24:49.882417       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0912 22:24:49.882495       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0912 22:24:49.918149       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0912 22:24:49.918215       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0912 22:24:52.215351       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0912 22:29:51.435110       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 12 22:31:44 multinode-768483 kubelet[2930]: E0912 22:31:44.201394    2930 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726180304199681077,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 22:31:45 multinode-768483 kubelet[2930]: I0912 22:31:45.685402    2930 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Sep 12 22:31:54 multinode-768483 kubelet[2930]: E0912 22:31:54.203036    2930 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726180314202616369,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 22:31:54 multinode-768483 kubelet[2930]: E0912 22:31:54.203075    2930 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726180314202616369,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 22:32:04 multinode-768483 kubelet[2930]: E0912 22:32:04.204583    2930 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726180324204335426,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 22:32:04 multinode-768483 kubelet[2930]: E0912 22:32:04.204623    2930 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726180324204335426,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 22:32:14 multinode-768483 kubelet[2930]: E0912 22:32:14.207420    2930 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726180334206972069,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 22:32:14 multinode-768483 kubelet[2930]: E0912 22:32:14.207526    2930 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726180334206972069,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 22:32:24 multinode-768483 kubelet[2930]: E0912 22:32:24.210303    2930 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726180344209957338,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 22:32:24 multinode-768483 kubelet[2930]: E0912 22:32:24.210360    2930 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726180344209957338,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 22:32:34 multinode-768483 kubelet[2930]: E0912 22:32:34.158385    2930 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 12 22:32:34 multinode-768483 kubelet[2930]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 12 22:32:34 multinode-768483 kubelet[2930]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 12 22:32:34 multinode-768483 kubelet[2930]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 12 22:32:34 multinode-768483 kubelet[2930]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 12 22:32:34 multinode-768483 kubelet[2930]: E0912 22:32:34.213014    2930 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726180354211608962,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 22:32:34 multinode-768483 kubelet[2930]: E0912 22:32:34.213168    2930 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726180354211608962,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 22:32:44 multinode-768483 kubelet[2930]: E0912 22:32:44.214603    2930 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726180364214351122,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 22:32:44 multinode-768483 kubelet[2930]: E0912 22:32:44.214638    2930 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726180364214351122,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 22:32:54 multinode-768483 kubelet[2930]: E0912 22:32:54.216860    2930 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726180374216448966,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 22:32:54 multinode-768483 kubelet[2930]: E0912 22:32:54.216905    2930 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726180374216448966,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 22:33:04 multinode-768483 kubelet[2930]: E0912 22:33:04.219708    2930 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726180384218293878,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 22:33:04 multinode-768483 kubelet[2930]: E0912 22:33:04.220034    2930 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726180384218293878,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 22:33:14 multinode-768483 kubelet[2930]: E0912 22:33:14.221370    2930 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726180394221022605,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 22:33:14 multinode-768483 kubelet[2930]: E0912 22:33:14.221407    2930 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726180394221022605,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0912 22:33:18.605409   45293 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19616-5891/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-768483 -n multinode-768483
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-768483 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (331.82s)
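
Note on the "bufio.Scanner: token too long" error captured in the stderr block above (logs.go:258): Go's bufio.Scanner rejects any token larger than its buffer, which defaults to 64 KiB (bufio.MaxScanTokenSize), so a single over-long line in lastStart.txt is enough to abort the read of the post-mortem log. The sketch below is a minimal, self-contained illustration of that standard-library behavior, not minikube's actual logs.go code; it reproduces the error and shows how Scanner.Buffer raises the limit (the 1 MiB cap is an arbitrary illustrative value).

    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    func main() {
        // One "line" longer than the default 64 KiB token limit.
        long := strings.Repeat("x", bufio.MaxScanTokenSize+1)

        s := bufio.NewScanner(strings.NewReader(long))
        for s.Scan() {
        }
        fmt.Println(s.Err()) // bufio.Scanner: token too long

        // Raising the limit lets the same input scan cleanly.
        s = bufio.NewScanner(strings.NewReader(long))
        s.Buffer(make([]byte, 64*1024), 1<<20)
        for s.Scan() {
        }
        fmt.Println(s.Err()) // <nil>
    }

Reading the file with os.ReadFile or bufio.Reader.ReadString instead of a Scanner would also avoid the per-token cap; either way, this particular failure is in the post-mortem log collection, not in the cluster under test.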

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (141.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-768483 stop
E0912 22:35:05.704208   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/functional-657409/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-768483 stop: exit status 82 (2m0.464494385s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-768483-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-768483 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-768483 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-768483 status: exit status 3 (18.706536972s)

                                                
                                                
-- stdout --
	multinode-768483
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-768483-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0912 22:35:41.561985   45953 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.230:22: connect: no route to host
	E0912 22:35:41.562024   45953 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.230:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-768483 status" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-768483 -n multinode-768483
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-768483 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-768483 logs -n 25: (1.396812455s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-768483 ssh -n                                                                 | multinode-768483 | jenkins | v1.34.0 | 12 Sep 24 22:27 UTC | 12 Sep 24 22:27 UTC |
	|         | multinode-768483-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-768483 cp multinode-768483-m02:/home/docker/cp-test.txt                       | multinode-768483 | jenkins | v1.34.0 | 12 Sep 24 22:27 UTC | 12 Sep 24 22:27 UTC |
	|         | multinode-768483:/home/docker/cp-test_multinode-768483-m02_multinode-768483.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-768483 ssh -n                                                                 | multinode-768483 | jenkins | v1.34.0 | 12 Sep 24 22:27 UTC | 12 Sep 24 22:27 UTC |
	|         | multinode-768483-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-768483 ssh -n multinode-768483 sudo cat                                       | multinode-768483 | jenkins | v1.34.0 | 12 Sep 24 22:27 UTC | 12 Sep 24 22:27 UTC |
	|         | /home/docker/cp-test_multinode-768483-m02_multinode-768483.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-768483 cp multinode-768483-m02:/home/docker/cp-test.txt                       | multinode-768483 | jenkins | v1.34.0 | 12 Sep 24 22:27 UTC | 12 Sep 24 22:27 UTC |
	|         | multinode-768483-m03:/home/docker/cp-test_multinode-768483-m02_multinode-768483-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-768483 ssh -n                                                                 | multinode-768483 | jenkins | v1.34.0 | 12 Sep 24 22:27 UTC | 12 Sep 24 22:27 UTC |
	|         | multinode-768483-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-768483 ssh -n multinode-768483-m03 sudo cat                                   | multinode-768483 | jenkins | v1.34.0 | 12 Sep 24 22:27 UTC | 12 Sep 24 22:27 UTC |
	|         | /home/docker/cp-test_multinode-768483-m02_multinode-768483-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-768483 cp testdata/cp-test.txt                                                | multinode-768483 | jenkins | v1.34.0 | 12 Sep 24 22:27 UTC | 12 Sep 24 22:27 UTC |
	|         | multinode-768483-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-768483 ssh -n                                                                 | multinode-768483 | jenkins | v1.34.0 | 12 Sep 24 22:27 UTC | 12 Sep 24 22:27 UTC |
	|         | multinode-768483-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-768483 cp multinode-768483-m03:/home/docker/cp-test.txt                       | multinode-768483 | jenkins | v1.34.0 | 12 Sep 24 22:27 UTC | 12 Sep 24 22:27 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3696931795/001/cp-test_multinode-768483-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-768483 ssh -n                                                                 | multinode-768483 | jenkins | v1.34.0 | 12 Sep 24 22:27 UTC | 12 Sep 24 22:27 UTC |
	|         | multinode-768483-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-768483 cp multinode-768483-m03:/home/docker/cp-test.txt                       | multinode-768483 | jenkins | v1.34.0 | 12 Sep 24 22:27 UTC | 12 Sep 24 22:27 UTC |
	|         | multinode-768483:/home/docker/cp-test_multinode-768483-m03_multinode-768483.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-768483 ssh -n                                                                 | multinode-768483 | jenkins | v1.34.0 | 12 Sep 24 22:27 UTC | 12 Sep 24 22:27 UTC |
	|         | multinode-768483-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-768483 ssh -n multinode-768483 sudo cat                                       | multinode-768483 | jenkins | v1.34.0 | 12 Sep 24 22:27 UTC | 12 Sep 24 22:27 UTC |
	|         | /home/docker/cp-test_multinode-768483-m03_multinode-768483.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-768483 cp multinode-768483-m03:/home/docker/cp-test.txt                       | multinode-768483 | jenkins | v1.34.0 | 12 Sep 24 22:27 UTC | 12 Sep 24 22:27 UTC |
	|         | multinode-768483-m02:/home/docker/cp-test_multinode-768483-m03_multinode-768483-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-768483 ssh -n                                                                 | multinode-768483 | jenkins | v1.34.0 | 12 Sep 24 22:27 UTC | 12 Sep 24 22:27 UTC |
	|         | multinode-768483-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-768483 ssh -n multinode-768483-m02 sudo cat                                   | multinode-768483 | jenkins | v1.34.0 | 12 Sep 24 22:27 UTC | 12 Sep 24 22:27 UTC |
	|         | /home/docker/cp-test_multinode-768483-m03_multinode-768483-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-768483 node stop m03                                                          | multinode-768483 | jenkins | v1.34.0 | 12 Sep 24 22:27 UTC | 12 Sep 24 22:27 UTC |
	| node    | multinode-768483 node start                                                             | multinode-768483 | jenkins | v1.34.0 | 12 Sep 24 22:27 UTC | 12 Sep 24 22:27 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-768483                                                                | multinode-768483 | jenkins | v1.34.0 | 12 Sep 24 22:27 UTC |                     |
	| stop    | -p multinode-768483                                                                     | multinode-768483 | jenkins | v1.34.0 | 12 Sep 24 22:27 UTC |                     |
	| start   | -p multinode-768483                                                                     | multinode-768483 | jenkins | v1.34.0 | 12 Sep 24 22:29 UTC | 12 Sep 24 22:33 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-768483                                                                | multinode-768483 | jenkins | v1.34.0 | 12 Sep 24 22:33 UTC |                     |
	| node    | multinode-768483 node delete                                                            | multinode-768483 | jenkins | v1.34.0 | 12 Sep 24 22:33 UTC | 12 Sep 24 22:33 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-768483 stop                                                                   | multinode-768483 | jenkins | v1.34.0 | 12 Sep 24 22:33 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/12 22:29:50
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0912 22:29:50.429762   44139 out.go:345] Setting OutFile to fd 1 ...
	I0912 22:29:50.429993   44139 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 22:29:50.430001   44139 out.go:358] Setting ErrFile to fd 2...
	I0912 22:29:50.430005   44139 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 22:29:50.430204   44139 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-5891/.minikube/bin
	I0912 22:29:50.430741   44139 out.go:352] Setting JSON to false
	I0912 22:29:50.431633   44139 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4332,"bootTime":1726175858,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0912 22:29:50.431696   44139 start.go:139] virtualization: kvm guest
	I0912 22:29:50.434750   44139 out.go:177] * [multinode-768483] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0912 22:29:50.436223   44139 out.go:177]   - MINIKUBE_LOCATION=19616
	I0912 22:29:50.436224   44139 notify.go:220] Checking for updates...
	I0912 22:29:50.438708   44139 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 22:29:50.440557   44139 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19616-5891/kubeconfig
	I0912 22:29:50.442044   44139 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19616-5891/.minikube
	I0912 22:29:50.443350   44139 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0912 22:29:50.444575   44139 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 22:29:50.446101   44139 config.go:182] Loaded profile config "multinode-768483": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 22:29:50.446193   44139 driver.go:394] Setting default libvirt URI to qemu:///system
	I0912 22:29:50.446601   44139 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:29:50.446656   44139 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:29:50.461730   44139 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42765
	I0912 22:29:50.462158   44139 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:29:50.462620   44139 main.go:141] libmachine: Using API Version  1
	I0912 22:29:50.462638   44139 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:29:50.462992   44139 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:29:50.463171   44139 main.go:141] libmachine: (multinode-768483) Calling .DriverName
	I0912 22:29:50.498694   44139 out.go:177] * Using the kvm2 driver based on existing profile
	I0912 22:29:50.499889   44139 start.go:297] selected driver: kvm2
	I0912 22:29:50.499907   44139 start.go:901] validating driver "kvm2" against &{Name:multinode-768483 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.31.1 ClusterName:multinode-768483 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.28 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.230 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.92 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingre
ss-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMir
ror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 22:29:50.500105   44139 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 22:29:50.500518   44139 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 22:29:50.500606   44139 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19616-5891/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0912 22:29:50.515636   44139 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0912 22:29:50.516288   44139 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 22:29:50.516347   44139 cni.go:84] Creating CNI manager for ""
	I0912 22:29:50.516356   44139 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0912 22:29:50.516408   44139 start.go:340] cluster config:
	{Name:multinode-768483 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-768483 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.28 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.230 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.92 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 22:29:50.516515   44139 iso.go:125] acquiring lock: {Name:mk3ec3c4afd4210b7425f6425f55e7f581d9a5a9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 22:29:50.520807   44139 out.go:177] * Starting "multinode-768483" primary control-plane node in "multinode-768483" cluster
	I0912 22:29:50.524835   44139 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0912 22:29:50.524888   44139 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0912 22:29:50.524897   44139 cache.go:56] Caching tarball of preloaded images
	I0912 22:29:50.524983   44139 preload.go:172] Found /home/jenkins/minikube-integration/19616-5891/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0912 22:29:50.524994   44139 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0912 22:29:50.525119   44139 profile.go:143] Saving config to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/multinode-768483/config.json ...
	I0912 22:29:50.525359   44139 start.go:360] acquireMachinesLock for multinode-768483: {Name:mkbb0a9e58b1349e86a63b6069c42d4248d92c3b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 22:29:50.525413   44139 start.go:364] duration metric: took 25.593µs to acquireMachinesLock for "multinode-768483"
	I0912 22:29:50.525426   44139 start.go:96] Skipping create...Using existing machine configuration
	I0912 22:29:50.525431   44139 fix.go:54] fixHost starting: 
	I0912 22:29:50.525742   44139 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:29:50.525775   44139 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:29:50.540264   44139 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45135
	I0912 22:29:50.540652   44139 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:29:50.541085   44139 main.go:141] libmachine: Using API Version  1
	I0912 22:29:50.541107   44139 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:29:50.541416   44139 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:29:50.541573   44139 main.go:141] libmachine: (multinode-768483) Calling .DriverName
	I0912 22:29:50.541738   44139 main.go:141] libmachine: (multinode-768483) Calling .GetState
	I0912 22:29:50.543765   44139 fix.go:112] recreateIfNeeded on multinode-768483: state=Running err=<nil>
	W0912 22:29:50.543799   44139 fix.go:138] unexpected machine state, will restart: <nil>
	I0912 22:29:50.546441   44139 out.go:177] * Updating the running kvm2 "multinode-768483" VM ...
	I0912 22:29:50.547797   44139 machine.go:93] provisionDockerMachine start ...
	I0912 22:29:50.547816   44139 main.go:141] libmachine: (multinode-768483) Calling .DriverName
	I0912 22:29:50.548011   44139 main.go:141] libmachine: (multinode-768483) Calling .GetSSHHostname
	I0912 22:29:50.550826   44139 main.go:141] libmachine: (multinode-768483) DBG | domain multinode-768483 has defined MAC address 52:54:00:e5:c3:ae in network mk-multinode-768483
	I0912 22:29:50.551220   44139 main.go:141] libmachine: (multinode-768483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:c3:ae", ip: ""} in network mk-multinode-768483: {Iface:virbr1 ExpiryTime:2024-09-12 23:24:27 +0000 UTC Type:0 Mac:52:54:00:e5:c3:ae Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:multinode-768483 Clientid:01:52:54:00:e5:c3:ae}
	I0912 22:29:50.551262   44139 main.go:141] libmachine: (multinode-768483) DBG | domain multinode-768483 has defined IP address 192.168.39.28 and MAC address 52:54:00:e5:c3:ae in network mk-multinode-768483
	I0912 22:29:50.551418   44139 main.go:141] libmachine: (multinode-768483) Calling .GetSSHPort
	I0912 22:29:50.551563   44139 main.go:141] libmachine: (multinode-768483) Calling .GetSSHKeyPath
	I0912 22:29:50.551708   44139 main.go:141] libmachine: (multinode-768483) Calling .GetSSHKeyPath
	I0912 22:29:50.551828   44139 main.go:141] libmachine: (multinode-768483) Calling .GetSSHUsername
	I0912 22:29:50.551951   44139 main.go:141] libmachine: Using SSH client type: native
	I0912 22:29:50.552185   44139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.28 22 <nil> <nil>}
	I0912 22:29:50.552200   44139 main.go:141] libmachine: About to run SSH command:
	hostname
	I0912 22:29:50.671150   44139 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-768483
	
	I0912 22:29:50.671187   44139 main.go:141] libmachine: (multinode-768483) Calling .GetMachineName
	I0912 22:29:50.671471   44139 buildroot.go:166] provisioning hostname "multinode-768483"
	I0912 22:29:50.671501   44139 main.go:141] libmachine: (multinode-768483) Calling .GetMachineName
	I0912 22:29:50.671681   44139 main.go:141] libmachine: (multinode-768483) Calling .GetSSHHostname
	I0912 22:29:50.674468   44139 main.go:141] libmachine: (multinode-768483) DBG | domain multinode-768483 has defined MAC address 52:54:00:e5:c3:ae in network mk-multinode-768483
	I0912 22:29:50.675007   44139 main.go:141] libmachine: (multinode-768483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:c3:ae", ip: ""} in network mk-multinode-768483: {Iface:virbr1 ExpiryTime:2024-09-12 23:24:27 +0000 UTC Type:0 Mac:52:54:00:e5:c3:ae Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:multinode-768483 Clientid:01:52:54:00:e5:c3:ae}
	I0912 22:29:50.675039   44139 main.go:141] libmachine: (multinode-768483) DBG | domain multinode-768483 has defined IP address 192.168.39.28 and MAC address 52:54:00:e5:c3:ae in network mk-multinode-768483
	I0912 22:29:50.675273   44139 main.go:141] libmachine: (multinode-768483) Calling .GetSSHPort
	I0912 22:29:50.675549   44139 main.go:141] libmachine: (multinode-768483) Calling .GetSSHKeyPath
	I0912 22:29:50.675738   44139 main.go:141] libmachine: (multinode-768483) Calling .GetSSHKeyPath
	I0912 22:29:50.675926   44139 main.go:141] libmachine: (multinode-768483) Calling .GetSSHUsername
	I0912 22:29:50.676170   44139 main.go:141] libmachine: Using SSH client type: native
	I0912 22:29:50.676343   44139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.28 22 <nil> <nil>}
	I0912 22:29:50.676360   44139 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-768483 && echo "multinode-768483" | sudo tee /etc/hostname
	I0912 22:29:50.805530   44139 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-768483
	
	I0912 22:29:50.805554   44139 main.go:141] libmachine: (multinode-768483) Calling .GetSSHHostname
	I0912 22:29:50.808566   44139 main.go:141] libmachine: (multinode-768483) DBG | domain multinode-768483 has defined MAC address 52:54:00:e5:c3:ae in network mk-multinode-768483
	I0912 22:29:50.808987   44139 main.go:141] libmachine: (multinode-768483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:c3:ae", ip: ""} in network mk-multinode-768483: {Iface:virbr1 ExpiryTime:2024-09-12 23:24:27 +0000 UTC Type:0 Mac:52:54:00:e5:c3:ae Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:multinode-768483 Clientid:01:52:54:00:e5:c3:ae}
	I0912 22:29:50.809013   44139 main.go:141] libmachine: (multinode-768483) DBG | domain multinode-768483 has defined IP address 192.168.39.28 and MAC address 52:54:00:e5:c3:ae in network mk-multinode-768483
	I0912 22:29:50.809134   44139 main.go:141] libmachine: (multinode-768483) Calling .GetSSHPort
	I0912 22:29:50.809314   44139 main.go:141] libmachine: (multinode-768483) Calling .GetSSHKeyPath
	I0912 22:29:50.809560   44139 main.go:141] libmachine: (multinode-768483) Calling .GetSSHKeyPath
	I0912 22:29:50.809702   44139 main.go:141] libmachine: (multinode-768483) Calling .GetSSHUsername
	I0912 22:29:50.809873   44139 main.go:141] libmachine: Using SSH client type: native
	I0912 22:29:50.810129   44139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.28 22 <nil> <nil>}
	I0912 22:29:50.810149   44139 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-768483' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-768483/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-768483' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0912 22:29:50.922502   44139 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0912 22:29:50.922532   44139 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19616-5891/.minikube CaCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19616-5891/.minikube}
	I0912 22:29:50.922561   44139 buildroot.go:174] setting up certificates
	I0912 22:29:50.922573   44139 provision.go:84] configureAuth start
	I0912 22:29:50.922587   44139 main.go:141] libmachine: (multinode-768483) Calling .GetMachineName
	I0912 22:29:50.922864   44139 main.go:141] libmachine: (multinode-768483) Calling .GetIP
	I0912 22:29:50.925734   44139 main.go:141] libmachine: (multinode-768483) DBG | domain multinode-768483 has defined MAC address 52:54:00:e5:c3:ae in network mk-multinode-768483
	I0912 22:29:50.926104   44139 main.go:141] libmachine: (multinode-768483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:c3:ae", ip: ""} in network mk-multinode-768483: {Iface:virbr1 ExpiryTime:2024-09-12 23:24:27 +0000 UTC Type:0 Mac:52:54:00:e5:c3:ae Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:multinode-768483 Clientid:01:52:54:00:e5:c3:ae}
	I0912 22:29:50.926124   44139 main.go:141] libmachine: (multinode-768483) DBG | domain multinode-768483 has defined IP address 192.168.39.28 and MAC address 52:54:00:e5:c3:ae in network mk-multinode-768483
	I0912 22:29:50.926288   44139 main.go:141] libmachine: (multinode-768483) Calling .GetSSHHostname
	I0912 22:29:50.928446   44139 main.go:141] libmachine: (multinode-768483) DBG | domain multinode-768483 has defined MAC address 52:54:00:e5:c3:ae in network mk-multinode-768483
	I0912 22:29:50.928761   44139 main.go:141] libmachine: (multinode-768483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:c3:ae", ip: ""} in network mk-multinode-768483: {Iface:virbr1 ExpiryTime:2024-09-12 23:24:27 +0000 UTC Type:0 Mac:52:54:00:e5:c3:ae Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:multinode-768483 Clientid:01:52:54:00:e5:c3:ae}
	I0912 22:29:50.928797   44139 main.go:141] libmachine: (multinode-768483) DBG | domain multinode-768483 has defined IP address 192.168.39.28 and MAC address 52:54:00:e5:c3:ae in network mk-multinode-768483
	I0912 22:29:50.928903   44139 provision.go:143] copyHostCerts
	I0912 22:29:50.928938   44139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem
	I0912 22:29:50.928974   44139 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem, removing ...
	I0912 22:29:50.928990   44139 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem
	I0912 22:29:50.929075   44139 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem (1082 bytes)
	I0912 22:29:50.929243   44139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem
	I0912 22:29:50.929273   44139 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem, removing ...
	I0912 22:29:50.929282   44139 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem
	I0912 22:29:50.929335   44139 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem (1123 bytes)
	I0912 22:29:50.929402   44139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem
	I0912 22:29:50.929429   44139 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem, removing ...
	I0912 22:29:50.929438   44139 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem
	I0912 22:29:50.929474   44139 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem (1679 bytes)
	I0912 22:29:50.929536   44139 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem org=jenkins.multinode-768483 san=[127.0.0.1 192.168.39.28 localhost minikube multinode-768483]
	I0912 22:29:51.144081   44139 provision.go:177] copyRemoteCerts
	I0912 22:29:51.144136   44139 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0912 22:29:51.144158   44139 main.go:141] libmachine: (multinode-768483) Calling .GetSSHHostname
	I0912 22:29:51.146729   44139 main.go:141] libmachine: (multinode-768483) DBG | domain multinode-768483 has defined MAC address 52:54:00:e5:c3:ae in network mk-multinode-768483
	I0912 22:29:51.147085   44139 main.go:141] libmachine: (multinode-768483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:c3:ae", ip: ""} in network mk-multinode-768483: {Iface:virbr1 ExpiryTime:2024-09-12 23:24:27 +0000 UTC Type:0 Mac:52:54:00:e5:c3:ae Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:multinode-768483 Clientid:01:52:54:00:e5:c3:ae}
	I0912 22:29:51.147128   44139 main.go:141] libmachine: (multinode-768483) DBG | domain multinode-768483 has defined IP address 192.168.39.28 and MAC address 52:54:00:e5:c3:ae in network mk-multinode-768483
	I0912 22:29:51.147245   44139 main.go:141] libmachine: (multinode-768483) Calling .GetSSHPort
	I0912 22:29:51.147419   44139 main.go:141] libmachine: (multinode-768483) Calling .GetSSHKeyPath
	I0912 22:29:51.147564   44139 main.go:141] libmachine: (multinode-768483) Calling .GetSSHUsername
	I0912 22:29:51.147665   44139 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/multinode-768483/id_rsa Username:docker}
	I0912 22:29:51.231700   44139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0912 22:29:51.231773   44139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0912 22:29:51.255442   44139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0912 22:29:51.255506   44139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0912 22:29:51.279989   44139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0912 22:29:51.280063   44139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0912 22:29:51.302250   44139 provision.go:87] duration metric: took 379.665576ms to configureAuth
	I0912 22:29:51.302275   44139 buildroot.go:189] setting minikube options for container-runtime
	I0912 22:29:51.302498   44139 config.go:182] Loaded profile config "multinode-768483": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 22:29:51.302559   44139 main.go:141] libmachine: (multinode-768483) Calling .GetSSHHostname
	I0912 22:29:51.305557   44139 main.go:141] libmachine: (multinode-768483) DBG | domain multinode-768483 has defined MAC address 52:54:00:e5:c3:ae in network mk-multinode-768483
	I0912 22:29:51.306105   44139 main.go:141] libmachine: (multinode-768483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:c3:ae", ip: ""} in network mk-multinode-768483: {Iface:virbr1 ExpiryTime:2024-09-12 23:24:27 +0000 UTC Type:0 Mac:52:54:00:e5:c3:ae Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:multinode-768483 Clientid:01:52:54:00:e5:c3:ae}
	I0912 22:29:51.306131   44139 main.go:141] libmachine: (multinode-768483) DBG | domain multinode-768483 has defined IP address 192.168.39.28 and MAC address 52:54:00:e5:c3:ae in network mk-multinode-768483
	I0912 22:29:51.306317   44139 main.go:141] libmachine: (multinode-768483) Calling .GetSSHPort
	I0912 22:29:51.306529   44139 main.go:141] libmachine: (multinode-768483) Calling .GetSSHKeyPath
	I0912 22:29:51.306711   44139 main.go:141] libmachine: (multinode-768483) Calling .GetSSHKeyPath
	I0912 22:29:51.306877   44139 main.go:141] libmachine: (multinode-768483) Calling .GetSSHUsername
	I0912 22:29:51.307042   44139 main.go:141] libmachine: Using SSH client type: native
	I0912 22:29:51.307236   44139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.28 22 <nil> <nil>}
	I0912 22:29:51.307254   44139 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0912 22:31:21.951473   44139 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0912 22:31:21.951499   44139 machine.go:96] duration metric: took 1m31.403688955s to provisionDockerMachine
	I0912 22:31:21.951522   44139 start.go:293] postStartSetup for "multinode-768483" (driver="kvm2")
	I0912 22:31:21.951533   44139 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0912 22:31:21.951548   44139 main.go:141] libmachine: (multinode-768483) Calling .DriverName
	I0912 22:31:21.951849   44139 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0912 22:31:21.951874   44139 main.go:141] libmachine: (multinode-768483) Calling .GetSSHHostname
	I0912 22:31:21.955323   44139 main.go:141] libmachine: (multinode-768483) DBG | domain multinode-768483 has defined MAC address 52:54:00:e5:c3:ae in network mk-multinode-768483
	I0912 22:31:21.955965   44139 main.go:141] libmachine: (multinode-768483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:c3:ae", ip: ""} in network mk-multinode-768483: {Iface:virbr1 ExpiryTime:2024-09-12 23:24:27 +0000 UTC Type:0 Mac:52:54:00:e5:c3:ae Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:multinode-768483 Clientid:01:52:54:00:e5:c3:ae}
	I0912 22:31:21.955991   44139 main.go:141] libmachine: (multinode-768483) DBG | domain multinode-768483 has defined IP address 192.168.39.28 and MAC address 52:54:00:e5:c3:ae in network mk-multinode-768483
	I0912 22:31:21.956187   44139 main.go:141] libmachine: (multinode-768483) Calling .GetSSHPort
	I0912 22:31:21.956423   44139 main.go:141] libmachine: (multinode-768483) Calling .GetSSHKeyPath
	I0912 22:31:21.956603   44139 main.go:141] libmachine: (multinode-768483) Calling .GetSSHUsername
	I0912 22:31:21.956788   44139 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/multinode-768483/id_rsa Username:docker}
	I0912 22:31:22.046112   44139 ssh_runner.go:195] Run: cat /etc/os-release
	I0912 22:31:22.050120   44139 command_runner.go:130] > NAME=Buildroot
	I0912 22:31:22.050137   44139 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0912 22:31:22.050144   44139 command_runner.go:130] > ID=buildroot
	I0912 22:31:22.050150   44139 command_runner.go:130] > VERSION_ID=2023.02.9
	I0912 22:31:22.050158   44139 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0912 22:31:22.050315   44139 info.go:137] Remote host: Buildroot 2023.02.9
	I0912 22:31:22.050338   44139 filesync.go:126] Scanning /home/jenkins/minikube-integration/19616-5891/.minikube/addons for local assets ...
	I0912 22:31:22.050412   44139 filesync.go:126] Scanning /home/jenkins/minikube-integration/19616-5891/.minikube/files for local assets ...
	I0912 22:31:22.050492   44139 filesync.go:149] local asset: /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem -> 130832.pem in /etc/ssl/certs
	I0912 22:31:22.050509   44139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem -> /etc/ssl/certs/130832.pem
	I0912 22:31:22.050605   44139 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0912 22:31:22.061250   44139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem --> /etc/ssl/certs/130832.pem (1708 bytes)
	I0912 22:31:22.084881   44139 start.go:296] duration metric: took 133.324038ms for postStartSetup
	I0912 22:31:22.084929   44139 fix.go:56] duration metric: took 1m31.559496697s for fixHost
	I0912 22:31:22.084953   44139 main.go:141] libmachine: (multinode-768483) Calling .GetSSHHostname
	I0912 22:31:22.087609   44139 main.go:141] libmachine: (multinode-768483) DBG | domain multinode-768483 has defined MAC address 52:54:00:e5:c3:ae in network mk-multinode-768483
	I0912 22:31:22.087983   44139 main.go:141] libmachine: (multinode-768483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:c3:ae", ip: ""} in network mk-multinode-768483: {Iface:virbr1 ExpiryTime:2024-09-12 23:24:27 +0000 UTC Type:0 Mac:52:54:00:e5:c3:ae Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:multinode-768483 Clientid:01:52:54:00:e5:c3:ae}
	I0912 22:31:22.088008   44139 main.go:141] libmachine: (multinode-768483) DBG | domain multinode-768483 has defined IP address 192.168.39.28 and MAC address 52:54:00:e5:c3:ae in network mk-multinode-768483
	I0912 22:31:22.088163   44139 main.go:141] libmachine: (multinode-768483) Calling .GetSSHPort
	I0912 22:31:22.088382   44139 main.go:141] libmachine: (multinode-768483) Calling .GetSSHKeyPath
	I0912 22:31:22.088532   44139 main.go:141] libmachine: (multinode-768483) Calling .GetSSHKeyPath
	I0912 22:31:22.088646   44139 main.go:141] libmachine: (multinode-768483) Calling .GetSSHUsername
	I0912 22:31:22.088814   44139 main.go:141] libmachine: Using SSH client type: native
	I0912 22:31:22.088985   44139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.28 22 <nil> <nil>}
	I0912 22:31:22.088996   44139 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0912 22:31:22.198081   44139 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726180282.174502281
	
	I0912 22:31:22.198112   44139 fix.go:216] guest clock: 1726180282.174502281
	I0912 22:31:22.198124   44139 fix.go:229] Guest: 2024-09-12 22:31:22.174502281 +0000 UTC Remote: 2024-09-12 22:31:22.084933745 +0000 UTC m=+91.690259611 (delta=89.568536ms)
	I0912 22:31:22.198153   44139 fix.go:200] guest clock delta is within tolerance: 89.568536ms
	I0912 22:31:22.198165   44139 start.go:83] releasing machines lock for "multinode-768483", held for 1m31.67274177s
	I0912 22:31:22.198193   44139 main.go:141] libmachine: (multinode-768483) Calling .DriverName
	I0912 22:31:22.198478   44139 main.go:141] libmachine: (multinode-768483) Calling .GetIP
	I0912 22:31:22.201101   44139 main.go:141] libmachine: (multinode-768483) DBG | domain multinode-768483 has defined MAC address 52:54:00:e5:c3:ae in network mk-multinode-768483
	I0912 22:31:22.201455   44139 main.go:141] libmachine: (multinode-768483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:c3:ae", ip: ""} in network mk-multinode-768483: {Iface:virbr1 ExpiryTime:2024-09-12 23:24:27 +0000 UTC Type:0 Mac:52:54:00:e5:c3:ae Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:multinode-768483 Clientid:01:52:54:00:e5:c3:ae}
	I0912 22:31:22.201491   44139 main.go:141] libmachine: (multinode-768483) DBG | domain multinode-768483 has defined IP address 192.168.39.28 and MAC address 52:54:00:e5:c3:ae in network mk-multinode-768483
	I0912 22:31:22.201636   44139 main.go:141] libmachine: (multinode-768483) Calling .DriverName
	I0912 22:31:22.202340   44139 main.go:141] libmachine: (multinode-768483) Calling .DriverName
	I0912 22:31:22.202481   44139 main.go:141] libmachine: (multinode-768483) Calling .DriverName
	I0912 22:31:22.202552   44139 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0912 22:31:22.202599   44139 main.go:141] libmachine: (multinode-768483) Calling .GetSSHHostname
	I0912 22:31:22.202713   44139 ssh_runner.go:195] Run: cat /version.json
	I0912 22:31:22.202747   44139 main.go:141] libmachine: (multinode-768483) Calling .GetSSHHostname
	I0912 22:31:22.205335   44139 main.go:141] libmachine: (multinode-768483) DBG | domain multinode-768483 has defined MAC address 52:54:00:e5:c3:ae in network mk-multinode-768483
	I0912 22:31:22.205711   44139 main.go:141] libmachine: (multinode-768483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:c3:ae", ip: ""} in network mk-multinode-768483: {Iface:virbr1 ExpiryTime:2024-09-12 23:24:27 +0000 UTC Type:0 Mac:52:54:00:e5:c3:ae Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:multinode-768483 Clientid:01:52:54:00:e5:c3:ae}
	I0912 22:31:22.205741   44139 main.go:141] libmachine: (multinode-768483) DBG | domain multinode-768483 has defined IP address 192.168.39.28 and MAC address 52:54:00:e5:c3:ae in network mk-multinode-768483
	I0912 22:31:22.205894   44139 main.go:141] libmachine: (multinode-768483) DBG | domain multinode-768483 has defined MAC address 52:54:00:e5:c3:ae in network mk-multinode-768483
	I0912 22:31:22.205901   44139 main.go:141] libmachine: (multinode-768483) Calling .GetSSHPort
	I0912 22:31:22.206098   44139 main.go:141] libmachine: (multinode-768483) Calling .GetSSHKeyPath
	I0912 22:31:22.206256   44139 main.go:141] libmachine: (multinode-768483) Calling .GetSSHUsername
	I0912 22:31:22.206391   44139 main.go:141] libmachine: (multinode-768483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:c3:ae", ip: ""} in network mk-multinode-768483: {Iface:virbr1 ExpiryTime:2024-09-12 23:24:27 +0000 UTC Type:0 Mac:52:54:00:e5:c3:ae Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:multinode-768483 Clientid:01:52:54:00:e5:c3:ae}
	I0912 22:31:22.206395   44139 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/multinode-768483/id_rsa Username:docker}
	I0912 22:31:22.206418   44139 main.go:141] libmachine: (multinode-768483) DBG | domain multinode-768483 has defined IP address 192.168.39.28 and MAC address 52:54:00:e5:c3:ae in network mk-multinode-768483
	I0912 22:31:22.206567   44139 main.go:141] libmachine: (multinode-768483) Calling .GetSSHPort
	I0912 22:31:22.206715   44139 main.go:141] libmachine: (multinode-768483) Calling .GetSSHKeyPath
	I0912 22:31:22.206900   44139 main.go:141] libmachine: (multinode-768483) Calling .GetSSHUsername
	I0912 22:31:22.207019   44139 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/multinode-768483/id_rsa Username:docker}
	I0912 22:31:22.323124   44139 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0912 22:31:22.323971   44139 command_runner.go:130] > {"iso_version": "v1.34.0-1726156389-19616", "kicbase_version": "v0.0.45-1725963390-19606", "minikube_version": "v1.34.0", "commit": "5022c44a3509464df545efc115fbb6c3f1b5e972"}
	I0912 22:31:22.324135   44139 ssh_runner.go:195] Run: systemctl --version
	I0912 22:31:22.329664   44139 command_runner.go:130] > systemd 252 (252)
	I0912 22:31:22.329692   44139 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0912 22:31:22.329888   44139 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0912 22:31:22.493499   44139 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0912 22:31:22.499028   44139 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0912 22:31:22.499134   44139 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0912 22:31:22.499192   44139 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0912 22:31:22.508504   44139 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0912 22:31:22.508526   44139 start.go:495] detecting cgroup driver to use...
	I0912 22:31:22.508623   44139 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0912 22:31:22.525119   44139 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0912 22:31:22.538949   44139 docker.go:217] disabling cri-docker service (if available) ...
	I0912 22:31:22.539049   44139 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0912 22:31:22.553483   44139 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0912 22:31:22.568286   44139 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0912 22:31:22.721088   44139 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0912 22:31:22.866542   44139 docker.go:233] disabling docker service ...
	I0912 22:31:22.866607   44139 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0912 22:31:22.889787   44139 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0912 22:31:22.903862   44139 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0912 22:31:23.045114   44139 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0912 22:31:23.183642   44139 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0912 22:31:23.197976   44139 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0912 22:31:23.215227   44139 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0912 22:31:23.215272   44139 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0912 22:31:23.215331   44139 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 22:31:23.225423   44139 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0912 22:31:23.225486   44139 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 22:31:23.235791   44139 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 22:31:23.245642   44139 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 22:31:23.255728   44139 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0912 22:31:23.266576   44139 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 22:31:23.276598   44139 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 22:31:23.286432   44139 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 22:31:23.296427   44139 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0912 22:31:23.305495   44139 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0912 22:31:23.305601   44139 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0912 22:31:23.314684   44139 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 22:31:23.449770   44139 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0912 22:31:31.772339   44139 ssh_runner.go:235] Completed: sudo systemctl restart crio: (8.322537669s)
	I0912 22:31:31.772367   44139 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0912 22:31:31.772413   44139 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0912 22:31:31.777317   44139 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0912 22:31:31.777348   44139 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0912 22:31:31.777358   44139 command_runner.go:130] > Device: 0,22	Inode: 1339        Links: 1
	I0912 22:31:31.777368   44139 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0912 22:31:31.777376   44139 command_runner.go:130] > Access: 2024-09-12 22:31:31.645494151 +0000
	I0912 22:31:31.777387   44139 command_runner.go:130] > Modify: 2024-09-12 22:31:31.645494151 +0000
	I0912 22:31:31.777395   44139 command_runner.go:130] > Change: 2024-09-12 22:31:31.645494151 +0000
	I0912 22:31:31.777400   44139 command_runner.go:130] >  Birth: -
	I0912 22:31:31.777420   44139 start.go:563] Will wait 60s for crictl version
	I0912 22:31:31.777463   44139 ssh_runner.go:195] Run: which crictl
	I0912 22:31:31.780962   44139 command_runner.go:130] > /usr/bin/crictl
	I0912 22:31:31.781015   44139 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0912 22:31:31.816889   44139 command_runner.go:130] > Version:  0.1.0
	I0912 22:31:31.816912   44139 command_runner.go:130] > RuntimeName:  cri-o
	I0912 22:31:31.816918   44139 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0912 22:31:31.816925   44139 command_runner.go:130] > RuntimeApiVersion:  v1
	I0912 22:31:31.816992   44139 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0912 22:31:31.817091   44139 ssh_runner.go:195] Run: crio --version
	I0912 22:31:31.843722   44139 command_runner.go:130] > crio version 1.29.1
	I0912 22:31:31.843743   44139 command_runner.go:130] > Version:        1.29.1
	I0912 22:31:31.843751   44139 command_runner.go:130] > GitCommit:      unknown
	I0912 22:31:31.843755   44139 command_runner.go:130] > GitCommitDate:  unknown
	I0912 22:31:31.843759   44139 command_runner.go:130] > GitTreeState:   clean
	I0912 22:31:31.843765   44139 command_runner.go:130] > BuildDate:      2024-09-12T19:33:02Z
	I0912 22:31:31.843769   44139 command_runner.go:130] > GoVersion:      go1.21.6
	I0912 22:31:31.843773   44139 command_runner.go:130] > Compiler:       gc
	I0912 22:31:31.843777   44139 command_runner.go:130] > Platform:       linux/amd64
	I0912 22:31:31.843787   44139 command_runner.go:130] > Linkmode:       dynamic
	I0912 22:31:31.843800   44139 command_runner.go:130] > BuildTags:      
	I0912 22:31:31.843807   44139 command_runner.go:130] >   containers_image_ostree_stub
	I0912 22:31:31.843816   44139 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0912 22:31:31.843824   44139 command_runner.go:130] >   btrfs_noversion
	I0912 22:31:31.843832   44139 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0912 22:31:31.843843   44139 command_runner.go:130] >   libdm_no_deferred_remove
	I0912 22:31:31.843847   44139 command_runner.go:130] >   seccomp
	I0912 22:31:31.843852   44139 command_runner.go:130] > LDFlags:          unknown
	I0912 22:31:31.843855   44139 command_runner.go:130] > SeccompEnabled:   true
	I0912 22:31:31.843860   44139 command_runner.go:130] > AppArmorEnabled:  false
	I0912 22:31:31.843939   44139 ssh_runner.go:195] Run: crio --version
	I0912 22:31:31.874909   44139 command_runner.go:130] > crio version 1.29.1
	I0912 22:31:31.874934   44139 command_runner.go:130] > Version:        1.29.1
	I0912 22:31:31.874940   44139 command_runner.go:130] > GitCommit:      unknown
	I0912 22:31:31.874944   44139 command_runner.go:130] > GitCommitDate:  unknown
	I0912 22:31:31.874948   44139 command_runner.go:130] > GitTreeState:   clean
	I0912 22:31:31.874954   44139 command_runner.go:130] > BuildDate:      2024-09-12T19:33:02Z
	I0912 22:31:31.874958   44139 command_runner.go:130] > GoVersion:      go1.21.6
	I0912 22:31:31.874963   44139 command_runner.go:130] > Compiler:       gc
	I0912 22:31:31.874967   44139 command_runner.go:130] > Platform:       linux/amd64
	I0912 22:31:31.874971   44139 command_runner.go:130] > Linkmode:       dynamic
	I0912 22:31:31.874976   44139 command_runner.go:130] > BuildTags:      
	I0912 22:31:31.874983   44139 command_runner.go:130] >   containers_image_ostree_stub
	I0912 22:31:31.874990   44139 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0912 22:31:31.874995   44139 command_runner.go:130] >   btrfs_noversion
	I0912 22:31:31.875002   44139 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0912 22:31:31.875013   44139 command_runner.go:130] >   libdm_no_deferred_remove
	I0912 22:31:31.875019   44139 command_runner.go:130] >   seccomp
	I0912 22:31:31.875026   44139 command_runner.go:130] > LDFlags:          unknown
	I0912 22:31:31.875034   44139 command_runner.go:130] > SeccompEnabled:   true
	I0912 22:31:31.875038   44139 command_runner.go:130] > AppArmorEnabled:  false
	I0912 22:31:31.878333   44139 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0912 22:31:31.879817   44139 main.go:141] libmachine: (multinode-768483) Calling .GetIP
	I0912 22:31:31.882687   44139 main.go:141] libmachine: (multinode-768483) DBG | domain multinode-768483 has defined MAC address 52:54:00:e5:c3:ae in network mk-multinode-768483
	I0912 22:31:31.883054   44139 main.go:141] libmachine: (multinode-768483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:c3:ae", ip: ""} in network mk-multinode-768483: {Iface:virbr1 ExpiryTime:2024-09-12 23:24:27 +0000 UTC Type:0 Mac:52:54:00:e5:c3:ae Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:multinode-768483 Clientid:01:52:54:00:e5:c3:ae}
	I0912 22:31:31.883081   44139 main.go:141] libmachine: (multinode-768483) DBG | domain multinode-768483 has defined IP address 192.168.39.28 and MAC address 52:54:00:e5:c3:ae in network mk-multinode-768483
	I0912 22:31:31.883271   44139 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0912 22:31:31.887368   44139 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0912 22:31:31.887481   44139 kubeadm.go:883] updating cluster {Name:multinode-768483 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-768483 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.28 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.230 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.92 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0912 22:31:31.887718   44139 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0912 22:31:31.887767   44139 ssh_runner.go:195] Run: sudo crictl images --output json
	I0912 22:31:31.930477   44139 command_runner.go:130] > {
	I0912 22:31:31.930505   44139 command_runner.go:130] >   "images": [
	I0912 22:31:31.930530   44139 command_runner.go:130] >     {
	I0912 22:31:31.930543   44139 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0912 22:31:31.930550   44139 command_runner.go:130] >       "repoTags": [
	I0912 22:31:31.930559   44139 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0912 22:31:31.930565   44139 command_runner.go:130] >       ],
	I0912 22:31:31.930572   44139 command_runner.go:130] >       "repoDigests": [
	I0912 22:31:31.930585   44139 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0912 22:31:31.930601   44139 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0912 22:31:31.930608   44139 command_runner.go:130] >       ],
	I0912 22:31:31.930615   44139 command_runner.go:130] >       "size": "87190579",
	I0912 22:31:31.930621   44139 command_runner.go:130] >       "uid": null,
	I0912 22:31:31.930629   44139 command_runner.go:130] >       "username": "",
	I0912 22:31:31.930664   44139 command_runner.go:130] >       "spec": null,
	I0912 22:31:31.930674   44139 command_runner.go:130] >       "pinned": false
	I0912 22:31:31.930680   44139 command_runner.go:130] >     },
	I0912 22:31:31.930688   44139 command_runner.go:130] >     {
	I0912 22:31:31.930697   44139 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0912 22:31:31.930707   44139 command_runner.go:130] >       "repoTags": [
	I0912 22:31:31.930716   44139 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0912 22:31:31.930725   44139 command_runner.go:130] >       ],
	I0912 22:31:31.930732   44139 command_runner.go:130] >       "repoDigests": [
	I0912 22:31:31.930746   44139 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0912 22:31:31.930761   44139 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0912 22:31:31.930770   44139 command_runner.go:130] >       ],
	I0912 22:31:31.930779   44139 command_runner.go:130] >       "size": "1363676",
	I0912 22:31:31.930788   44139 command_runner.go:130] >       "uid": null,
	I0912 22:31:31.930812   44139 command_runner.go:130] >       "username": "",
	I0912 22:31:31.930821   44139 command_runner.go:130] >       "spec": null,
	I0912 22:31:31.930828   44139 command_runner.go:130] >       "pinned": false
	I0912 22:31:31.930833   44139 command_runner.go:130] >     },
	I0912 22:31:31.930838   44139 command_runner.go:130] >     {
	I0912 22:31:31.930847   44139 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0912 22:31:31.930855   44139 command_runner.go:130] >       "repoTags": [
	I0912 22:31:31.930871   44139 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0912 22:31:31.930879   44139 command_runner.go:130] >       ],
	I0912 22:31:31.930887   44139 command_runner.go:130] >       "repoDigests": [
	I0912 22:31:31.930900   44139 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0912 22:31:31.930914   44139 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0912 22:31:31.930924   44139 command_runner.go:130] >       ],
	I0912 22:31:31.930934   44139 command_runner.go:130] >       "size": "31470524",
	I0912 22:31:31.930943   44139 command_runner.go:130] >       "uid": null,
	I0912 22:31:31.930948   44139 command_runner.go:130] >       "username": "",
	I0912 22:31:31.930957   44139 command_runner.go:130] >       "spec": null,
	I0912 22:31:31.930963   44139 command_runner.go:130] >       "pinned": false
	I0912 22:31:31.930971   44139 command_runner.go:130] >     },
	I0912 22:31:31.930977   44139 command_runner.go:130] >     {
	I0912 22:31:31.930988   44139 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0912 22:31:31.930997   44139 command_runner.go:130] >       "repoTags": [
	I0912 22:31:31.931005   44139 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0912 22:31:31.931012   44139 command_runner.go:130] >       ],
	I0912 22:31:31.931017   44139 command_runner.go:130] >       "repoDigests": [
	I0912 22:31:31.931030   44139 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0912 22:31:31.931051   44139 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0912 22:31:31.931060   44139 command_runner.go:130] >       ],
	I0912 22:31:31.931066   44139 command_runner.go:130] >       "size": "63273227",
	I0912 22:31:31.931075   44139 command_runner.go:130] >       "uid": null,
	I0912 22:31:31.931082   44139 command_runner.go:130] >       "username": "nonroot",
	I0912 22:31:31.931090   44139 command_runner.go:130] >       "spec": null,
	I0912 22:31:31.931098   44139 command_runner.go:130] >       "pinned": false
	I0912 22:31:31.931103   44139 command_runner.go:130] >     },
	I0912 22:31:31.931110   44139 command_runner.go:130] >     {
	I0912 22:31:31.931119   44139 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0912 22:31:31.931127   44139 command_runner.go:130] >       "repoTags": [
	I0912 22:31:31.931133   44139 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0912 22:31:31.931141   44139 command_runner.go:130] >       ],
	I0912 22:31:31.931150   44139 command_runner.go:130] >       "repoDigests": [
	I0912 22:31:31.931163   44139 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0912 22:31:31.931179   44139 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0912 22:31:31.931188   44139 command_runner.go:130] >       ],
	I0912 22:31:31.931202   44139 command_runner.go:130] >       "size": "149009664",
	I0912 22:31:31.931211   44139 command_runner.go:130] >       "uid": {
	I0912 22:31:31.931221   44139 command_runner.go:130] >         "value": "0"
	I0912 22:31:31.931229   44139 command_runner.go:130] >       },
	I0912 22:31:31.931234   44139 command_runner.go:130] >       "username": "",
	I0912 22:31:31.931242   44139 command_runner.go:130] >       "spec": null,
	I0912 22:31:31.931248   44139 command_runner.go:130] >       "pinned": false
	I0912 22:31:31.931255   44139 command_runner.go:130] >     },
	I0912 22:31:31.931260   44139 command_runner.go:130] >     {
	I0912 22:31:31.931271   44139 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0912 22:31:31.931280   44139 command_runner.go:130] >       "repoTags": [
	I0912 22:31:31.931288   44139 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0912 22:31:31.931297   44139 command_runner.go:130] >       ],
	I0912 22:31:31.931305   44139 command_runner.go:130] >       "repoDigests": [
	I0912 22:31:31.931319   44139 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0912 22:31:31.931333   44139 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0912 22:31:31.931342   44139 command_runner.go:130] >       ],
	I0912 22:31:31.931349   44139 command_runner.go:130] >       "size": "95237600",
	I0912 22:31:31.931358   44139 command_runner.go:130] >       "uid": {
	I0912 22:31:31.931365   44139 command_runner.go:130] >         "value": "0"
	I0912 22:31:31.931374   44139 command_runner.go:130] >       },
	I0912 22:31:31.931381   44139 command_runner.go:130] >       "username": "",
	I0912 22:31:31.931390   44139 command_runner.go:130] >       "spec": null,
	I0912 22:31:31.931399   44139 command_runner.go:130] >       "pinned": false
	I0912 22:31:31.931406   44139 command_runner.go:130] >     },
	I0912 22:31:31.931411   44139 command_runner.go:130] >     {
	I0912 22:31:31.931423   44139 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0912 22:31:31.931433   44139 command_runner.go:130] >       "repoTags": [
	I0912 22:31:31.931442   44139 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0912 22:31:31.931450   44139 command_runner.go:130] >       ],
	I0912 22:31:31.931457   44139 command_runner.go:130] >       "repoDigests": [
	I0912 22:31:31.931471   44139 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0912 22:31:31.931486   44139 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0912 22:31:31.931495   44139 command_runner.go:130] >       ],
	I0912 22:31:31.931505   44139 command_runner.go:130] >       "size": "89437508",
	I0912 22:31:31.931518   44139 command_runner.go:130] >       "uid": {
	I0912 22:31:31.931536   44139 command_runner.go:130] >         "value": "0"
	I0912 22:31:31.931545   44139 command_runner.go:130] >       },
	I0912 22:31:31.931552   44139 command_runner.go:130] >       "username": "",
	I0912 22:31:31.931562   44139 command_runner.go:130] >       "spec": null,
	I0912 22:31:31.931571   44139 command_runner.go:130] >       "pinned": false
	I0912 22:31:31.931578   44139 command_runner.go:130] >     },
	I0912 22:31:31.931586   44139 command_runner.go:130] >     {
	I0912 22:31:31.931595   44139 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0912 22:31:31.931601   44139 command_runner.go:130] >       "repoTags": [
	I0912 22:31:31.931611   44139 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0912 22:31:31.931619   44139 command_runner.go:130] >       ],
	I0912 22:31:31.931628   44139 command_runner.go:130] >       "repoDigests": [
	I0912 22:31:31.931656   44139 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0912 22:31:31.931670   44139 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0912 22:31:31.931676   44139 command_runner.go:130] >       ],
	I0912 22:31:31.931684   44139 command_runner.go:130] >       "size": "92733849",
	I0912 22:31:31.931693   44139 command_runner.go:130] >       "uid": null,
	I0912 22:31:31.931699   44139 command_runner.go:130] >       "username": "",
	I0912 22:31:31.931705   44139 command_runner.go:130] >       "spec": null,
	I0912 22:31:31.931711   44139 command_runner.go:130] >       "pinned": false
	I0912 22:31:31.931717   44139 command_runner.go:130] >     },
	I0912 22:31:31.931721   44139 command_runner.go:130] >     {
	I0912 22:31:31.931729   44139 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0912 22:31:31.931734   44139 command_runner.go:130] >       "repoTags": [
	I0912 22:31:31.931741   44139 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0912 22:31:31.931746   44139 command_runner.go:130] >       ],
	I0912 22:31:31.931751   44139 command_runner.go:130] >       "repoDigests": [
	I0912 22:31:31.931761   44139 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0912 22:31:31.931771   44139 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0912 22:31:31.931775   44139 command_runner.go:130] >       ],
	I0912 22:31:31.931782   44139 command_runner.go:130] >       "size": "68420934",
	I0912 22:31:31.931787   44139 command_runner.go:130] >       "uid": {
	I0912 22:31:31.931793   44139 command_runner.go:130] >         "value": "0"
	I0912 22:31:31.931798   44139 command_runner.go:130] >       },
	I0912 22:31:31.931804   44139 command_runner.go:130] >       "username": "",
	I0912 22:31:31.931814   44139 command_runner.go:130] >       "spec": null,
	I0912 22:31:31.931831   44139 command_runner.go:130] >       "pinned": false
	I0912 22:31:31.931840   44139 command_runner.go:130] >     },
	I0912 22:31:31.931846   44139 command_runner.go:130] >     {
	I0912 22:31:31.931857   44139 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0912 22:31:31.931866   44139 command_runner.go:130] >       "repoTags": [
	I0912 22:31:31.931880   44139 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0912 22:31:31.931889   44139 command_runner.go:130] >       ],
	I0912 22:31:31.931897   44139 command_runner.go:130] >       "repoDigests": [
	I0912 22:31:31.931909   44139 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0912 22:31:31.931922   44139 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0912 22:31:31.931930   44139 command_runner.go:130] >       ],
	I0912 22:31:31.931936   44139 command_runner.go:130] >       "size": "742080",
	I0912 22:31:31.931943   44139 command_runner.go:130] >       "uid": {
	I0912 22:31:31.931949   44139 command_runner.go:130] >         "value": "65535"
	I0912 22:31:31.931957   44139 command_runner.go:130] >       },
	I0912 22:31:31.931963   44139 command_runner.go:130] >       "username": "",
	I0912 22:31:31.931969   44139 command_runner.go:130] >       "spec": null,
	I0912 22:31:31.931977   44139 command_runner.go:130] >       "pinned": true
	I0912 22:31:31.931982   44139 command_runner.go:130] >     }
	I0912 22:31:31.931990   44139 command_runner.go:130] >   ]
	I0912 22:31:31.931996   44139 command_runner.go:130] > }
	I0912 22:31:31.932311   44139 crio.go:514] all images are preloaded for cri-o runtime.
	I0912 22:31:31.932336   44139 crio.go:433] Images already preloaded, skipping extraction
	I0912 22:31:31.932388   44139 ssh_runner.go:195] Run: sudo crictl images --output json
	I0912 22:31:31.968061   44139 command_runner.go:130] > {
	I0912 22:31:31.968103   44139 command_runner.go:130] >   "images": [
	I0912 22:31:31.968109   44139 command_runner.go:130] >     {
	I0912 22:31:31.968117   44139 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0912 22:31:31.968122   44139 command_runner.go:130] >       "repoTags": [
	I0912 22:31:31.968129   44139 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0912 22:31:31.968133   44139 command_runner.go:130] >       ],
	I0912 22:31:31.968137   44139 command_runner.go:130] >       "repoDigests": [
	I0912 22:31:31.968147   44139 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0912 22:31:31.968154   44139 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0912 22:31:31.968158   44139 command_runner.go:130] >       ],
	I0912 22:31:31.968163   44139 command_runner.go:130] >       "size": "87190579",
	I0912 22:31:31.968167   44139 command_runner.go:130] >       "uid": null,
	I0912 22:31:31.968171   44139 command_runner.go:130] >       "username": "",
	I0912 22:31:31.968176   44139 command_runner.go:130] >       "spec": null,
	I0912 22:31:31.968184   44139 command_runner.go:130] >       "pinned": false
	I0912 22:31:31.968187   44139 command_runner.go:130] >     },
	I0912 22:31:31.968190   44139 command_runner.go:130] >     {
	I0912 22:31:31.968196   44139 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0912 22:31:31.968202   44139 command_runner.go:130] >       "repoTags": [
	I0912 22:31:31.968207   44139 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0912 22:31:31.968211   44139 command_runner.go:130] >       ],
	I0912 22:31:31.968215   44139 command_runner.go:130] >       "repoDigests": [
	I0912 22:31:31.968221   44139 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0912 22:31:31.968232   44139 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0912 22:31:31.968236   44139 command_runner.go:130] >       ],
	I0912 22:31:31.968244   44139 command_runner.go:130] >       "size": "1363676",
	I0912 22:31:31.968248   44139 command_runner.go:130] >       "uid": null,
	I0912 22:31:31.968261   44139 command_runner.go:130] >       "username": "",
	I0912 22:31:31.968271   44139 command_runner.go:130] >       "spec": null,
	I0912 22:31:31.968278   44139 command_runner.go:130] >       "pinned": false
	I0912 22:31:31.968287   44139 command_runner.go:130] >     },
	I0912 22:31:31.968290   44139 command_runner.go:130] >     {
	I0912 22:31:31.968297   44139 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0912 22:31:31.968302   44139 command_runner.go:130] >       "repoTags": [
	I0912 22:31:31.968309   44139 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0912 22:31:31.968314   44139 command_runner.go:130] >       ],
	I0912 22:31:31.968318   44139 command_runner.go:130] >       "repoDigests": [
	I0912 22:31:31.968326   44139 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0912 22:31:31.968336   44139 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0912 22:31:31.968340   44139 command_runner.go:130] >       ],
	I0912 22:31:31.968346   44139 command_runner.go:130] >       "size": "31470524",
	I0912 22:31:31.968350   44139 command_runner.go:130] >       "uid": null,
	I0912 22:31:31.968355   44139 command_runner.go:130] >       "username": "",
	I0912 22:31:31.968361   44139 command_runner.go:130] >       "spec": null,
	I0912 22:31:31.968365   44139 command_runner.go:130] >       "pinned": false
	I0912 22:31:31.968369   44139 command_runner.go:130] >     },
	I0912 22:31:31.968372   44139 command_runner.go:130] >     {
	I0912 22:31:31.968380   44139 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0912 22:31:31.968385   44139 command_runner.go:130] >       "repoTags": [
	I0912 22:31:31.968391   44139 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0912 22:31:31.968395   44139 command_runner.go:130] >       ],
	I0912 22:31:31.968399   44139 command_runner.go:130] >       "repoDigests": [
	I0912 22:31:31.968409   44139 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0912 22:31:31.968426   44139 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0912 22:31:31.968434   44139 command_runner.go:130] >       ],
	I0912 22:31:31.968438   44139 command_runner.go:130] >       "size": "63273227",
	I0912 22:31:31.968441   44139 command_runner.go:130] >       "uid": null,
	I0912 22:31:31.968445   44139 command_runner.go:130] >       "username": "nonroot",
	I0912 22:31:31.968449   44139 command_runner.go:130] >       "spec": null,
	I0912 22:31:31.968454   44139 command_runner.go:130] >       "pinned": false
	I0912 22:31:31.968457   44139 command_runner.go:130] >     },
	I0912 22:31:31.968461   44139 command_runner.go:130] >     {
	I0912 22:31:31.968467   44139 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0912 22:31:31.968473   44139 command_runner.go:130] >       "repoTags": [
	I0912 22:31:31.968478   44139 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0912 22:31:31.968484   44139 command_runner.go:130] >       ],
	I0912 22:31:31.968488   44139 command_runner.go:130] >       "repoDigests": [
	I0912 22:31:31.968495   44139 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0912 22:31:31.968511   44139 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0912 22:31:31.968518   44139 command_runner.go:130] >       ],
	I0912 22:31:31.968523   44139 command_runner.go:130] >       "size": "149009664",
	I0912 22:31:31.968530   44139 command_runner.go:130] >       "uid": {
	I0912 22:31:31.968534   44139 command_runner.go:130] >         "value": "0"
	I0912 22:31:31.968541   44139 command_runner.go:130] >       },
	I0912 22:31:31.968545   44139 command_runner.go:130] >       "username": "",
	I0912 22:31:31.968551   44139 command_runner.go:130] >       "spec": null,
	I0912 22:31:31.968556   44139 command_runner.go:130] >       "pinned": false
	I0912 22:31:31.968563   44139 command_runner.go:130] >     },
	I0912 22:31:31.968566   44139 command_runner.go:130] >     {
	I0912 22:31:31.968572   44139 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0912 22:31:31.968579   44139 command_runner.go:130] >       "repoTags": [
	I0912 22:31:31.968584   44139 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0912 22:31:31.968591   44139 command_runner.go:130] >       ],
	I0912 22:31:31.968595   44139 command_runner.go:130] >       "repoDigests": [
	I0912 22:31:31.968602   44139 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0912 22:31:31.968612   44139 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0912 22:31:31.968615   44139 command_runner.go:130] >       ],
	I0912 22:31:31.968620   44139 command_runner.go:130] >       "size": "95237600",
	I0912 22:31:31.968627   44139 command_runner.go:130] >       "uid": {
	I0912 22:31:31.968631   44139 command_runner.go:130] >         "value": "0"
	I0912 22:31:31.968635   44139 command_runner.go:130] >       },
	I0912 22:31:31.968639   44139 command_runner.go:130] >       "username": "",
	I0912 22:31:31.968643   44139 command_runner.go:130] >       "spec": null,
	I0912 22:31:31.968647   44139 command_runner.go:130] >       "pinned": false
	I0912 22:31:31.968650   44139 command_runner.go:130] >     },
	I0912 22:31:31.968653   44139 command_runner.go:130] >     {
	I0912 22:31:31.968663   44139 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0912 22:31:31.968667   44139 command_runner.go:130] >       "repoTags": [
	I0912 22:31:31.968675   44139 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0912 22:31:31.968679   44139 command_runner.go:130] >       ],
	I0912 22:31:31.968683   44139 command_runner.go:130] >       "repoDigests": [
	I0912 22:31:31.968690   44139 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0912 22:31:31.968700   44139 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0912 22:31:31.968704   44139 command_runner.go:130] >       ],
	I0912 22:31:31.968709   44139 command_runner.go:130] >       "size": "89437508",
	I0912 22:31:31.968715   44139 command_runner.go:130] >       "uid": {
	I0912 22:31:31.968719   44139 command_runner.go:130] >         "value": "0"
	I0912 22:31:31.968722   44139 command_runner.go:130] >       },
	I0912 22:31:31.968726   44139 command_runner.go:130] >       "username": "",
	I0912 22:31:31.968730   44139 command_runner.go:130] >       "spec": null,
	I0912 22:31:31.968734   44139 command_runner.go:130] >       "pinned": false
	I0912 22:31:31.968737   44139 command_runner.go:130] >     },
	I0912 22:31:31.968740   44139 command_runner.go:130] >     {
	I0912 22:31:31.968748   44139 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0912 22:31:31.968752   44139 command_runner.go:130] >       "repoTags": [
	I0912 22:31:31.968758   44139 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0912 22:31:31.968766   44139 command_runner.go:130] >       ],
	I0912 22:31:31.968771   44139 command_runner.go:130] >       "repoDigests": [
	I0912 22:31:31.968791   44139 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0912 22:31:31.968800   44139 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0912 22:31:31.968804   44139 command_runner.go:130] >       ],
	I0912 22:31:31.968809   44139 command_runner.go:130] >       "size": "92733849",
	I0912 22:31:31.968815   44139 command_runner.go:130] >       "uid": null,
	I0912 22:31:31.968819   44139 command_runner.go:130] >       "username": "",
	I0912 22:31:31.968825   44139 command_runner.go:130] >       "spec": null,
	I0912 22:31:31.968829   44139 command_runner.go:130] >       "pinned": false
	I0912 22:31:31.968833   44139 command_runner.go:130] >     },
	I0912 22:31:31.968837   44139 command_runner.go:130] >     {
	I0912 22:31:31.968842   44139 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0912 22:31:31.968849   44139 command_runner.go:130] >       "repoTags": [
	I0912 22:31:31.968855   44139 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0912 22:31:31.968865   44139 command_runner.go:130] >       ],
	I0912 22:31:31.968878   44139 command_runner.go:130] >       "repoDigests": [
	I0912 22:31:31.968894   44139 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0912 22:31:31.968912   44139 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0912 22:31:31.968920   44139 command_runner.go:130] >       ],
	I0912 22:31:31.968925   44139 command_runner.go:130] >       "size": "68420934",
	I0912 22:31:31.968928   44139 command_runner.go:130] >       "uid": {
	I0912 22:31:31.968933   44139 command_runner.go:130] >         "value": "0"
	I0912 22:31:31.968940   44139 command_runner.go:130] >       },
	I0912 22:31:31.968944   44139 command_runner.go:130] >       "username": "",
	I0912 22:31:31.968948   44139 command_runner.go:130] >       "spec": null,
	I0912 22:31:31.968952   44139 command_runner.go:130] >       "pinned": false
	I0912 22:31:31.968956   44139 command_runner.go:130] >     },
	I0912 22:31:31.968959   44139 command_runner.go:130] >     {
	I0912 22:31:31.968968   44139 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0912 22:31:31.968972   44139 command_runner.go:130] >       "repoTags": [
	I0912 22:31:31.968980   44139 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0912 22:31:31.968985   44139 command_runner.go:130] >       ],
	I0912 22:31:31.968993   44139 command_runner.go:130] >       "repoDigests": [
	I0912 22:31:31.969001   44139 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0912 22:31:31.969011   44139 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0912 22:31:31.969017   44139 command_runner.go:130] >       ],
	I0912 22:31:31.969021   44139 command_runner.go:130] >       "size": "742080",
	I0912 22:31:31.969028   44139 command_runner.go:130] >       "uid": {
	I0912 22:31:31.969032   44139 command_runner.go:130] >         "value": "65535"
	I0912 22:31:31.969036   44139 command_runner.go:130] >       },
	I0912 22:31:31.969040   44139 command_runner.go:130] >       "username": "",
	I0912 22:31:31.969047   44139 command_runner.go:130] >       "spec": null,
	I0912 22:31:31.969052   44139 command_runner.go:130] >       "pinned": true
	I0912 22:31:31.969055   44139 command_runner.go:130] >     }
	I0912 22:31:31.969058   44139 command_runner.go:130] >   ]
	I0912 22:31:31.969062   44139 command_runner.go:130] > }
	I0912 22:31:31.969183   44139 crio.go:514] all images are preloaded for cri-o runtime.
	I0912 22:31:31.969194   44139 cache_images.go:84] Images are preloaded, skipping loading
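	(Editor's note, a minimal sketch and not part of the captured log: the two crictl dumps above are what minikube parses before concluding "all images are preloaded" and skipping the load step. To reproduce the same listing by hand one could re-run the command shown in the log and filter the tags; jq is assumed to be installed on the host and is not shown anywhere in this log:

	    minikube ssh -p multinode-768483 -- sudo crictl images --output json | jq -r '.images[].repoTags[]'

	The pipe runs jq locally on the ssh output, so nothing extra is needed inside the node.)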
	I0912 22:31:31.969202   44139 kubeadm.go:934] updating node { 192.168.39.28 8443 v1.31.1 crio true true} ...
	I0912 22:31:31.969308   44139 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-768483 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.28
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-768483 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
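	(Editor's note, a hedged sketch rather than captured output: the kubeadm.go:946 entry above is the kubelet systemd drop-in minikube generates for this node, with --hostname-override=multinode-768483 and --node-ip=192.168.39.28 taken from the cluster config. Assuming the usual minikube layout where this drop-in is attached to kubelet.service, the rendered unit can be inspected on the node with:

	    minikube ssh -p multinode-768483 -- sudo systemctl cat kubelet

	systemctl cat prints the base unit followed by any drop-in files, which should include the ExecStart line logged above.)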
	I0912 22:31:31.969386   44139 ssh_runner.go:195] Run: crio config
	I0912 22:31:32.001728   44139 command_runner.go:130] ! time="2024-09-12 22:31:31.977601025Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0912 22:31:32.007980   44139 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0912 22:31:32.015771   44139 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0912 22:31:32.015789   44139 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0912 22:31:32.015799   44139 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0912 22:31:32.015804   44139 command_runner.go:130] > #
	I0912 22:31:32.015810   44139 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0912 22:31:32.015816   44139 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0912 22:31:32.015822   44139 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0912 22:31:32.015829   44139 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0912 22:31:32.015834   44139 command_runner.go:130] > # reload'.
	I0912 22:31:32.015840   44139 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0912 22:31:32.015849   44139 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0912 22:31:32.015861   44139 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0912 22:31:32.015872   44139 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0912 22:31:32.015880   44139 command_runner.go:130] > [crio]
	I0912 22:31:32.015888   44139 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0912 22:31:32.015898   44139 command_runner.go:130] > # containers images, in this directory.
	I0912 22:31:32.015905   44139 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0912 22:31:32.015918   44139 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0912 22:31:32.015929   44139 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0912 22:31:32.015942   44139 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0912 22:31:32.015949   44139 command_runner.go:130] > # imagestore = ""
	I0912 22:31:32.015959   44139 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0912 22:31:32.015972   44139 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0912 22:31:32.015981   44139 command_runner.go:130] > storage_driver = "overlay"
	I0912 22:31:32.015989   44139 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0912 22:31:32.015999   44139 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0912 22:31:32.016004   44139 command_runner.go:130] > storage_option = [
	I0912 22:31:32.016008   44139 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0912 22:31:32.016014   44139 command_runner.go:130] > ]
	I0912 22:31:32.016020   44139 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0912 22:31:32.016028   44139 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0912 22:31:32.016032   44139 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0912 22:31:32.016041   44139 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0912 22:31:32.016048   44139 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0912 22:31:32.016052   44139 command_runner.go:130] > # always happen on a node reboot
	I0912 22:31:32.016057   44139 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0912 22:31:32.016068   44139 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0912 22:31:32.016076   44139 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0912 22:31:32.016082   44139 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0912 22:31:32.016090   44139 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0912 22:31:32.016097   44139 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0912 22:31:32.016106   44139 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0912 22:31:32.016113   44139 command_runner.go:130] > # internal_wipe = true
	I0912 22:31:32.016120   44139 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0912 22:31:32.016143   44139 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0912 22:31:32.016158   44139 command_runner.go:130] > # internal_repair = false
	I0912 22:31:32.016163   44139 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0912 22:31:32.016170   44139 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0912 22:31:32.016178   44139 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0912 22:31:32.016183   44139 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0912 22:31:32.016191   44139 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0912 22:31:32.016197   44139 command_runner.go:130] > [crio.api]
	I0912 22:31:32.016202   44139 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0912 22:31:32.016208   44139 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0912 22:31:32.016213   44139 command_runner.go:130] > # IP address on which the stream server will listen.
	I0912 22:31:32.016217   44139 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0912 22:31:32.016224   44139 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0912 22:31:32.016229   44139 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0912 22:31:32.016234   44139 command_runner.go:130] > # stream_port = "0"
	I0912 22:31:32.016239   44139 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0912 22:31:32.016244   44139 command_runner.go:130] > # stream_enable_tls = false
	I0912 22:31:32.016250   44139 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0912 22:31:32.016256   44139 command_runner.go:130] > # stream_idle_timeout = ""
	I0912 22:31:32.016264   44139 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0912 22:31:32.016272   44139 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0912 22:31:32.016276   44139 command_runner.go:130] > # minutes.
	I0912 22:31:32.016282   44139 command_runner.go:130] > # stream_tls_cert = ""
	I0912 22:31:32.016288   44139 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0912 22:31:32.016296   44139 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0912 22:31:32.016300   44139 command_runner.go:130] > # stream_tls_key = ""
	I0912 22:31:32.016306   44139 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0912 22:31:32.016314   44139 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0912 22:31:32.016327   44139 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0912 22:31:32.016333   44139 command_runner.go:130] > # stream_tls_ca = ""
	I0912 22:31:32.016342   44139 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0912 22:31:32.016349   44139 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0912 22:31:32.016356   44139 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0912 22:31:32.016363   44139 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0912 22:31:32.016368   44139 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0912 22:31:32.016376   44139 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0912 22:31:32.016380   44139 command_runner.go:130] > [crio.runtime]
	I0912 22:31:32.016388   44139 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0912 22:31:32.016393   44139 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0912 22:31:32.016397   44139 command_runner.go:130] > # "nofile=1024:2048"
	I0912 22:31:32.016403   44139 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0912 22:31:32.016410   44139 command_runner.go:130] > # default_ulimits = [
	I0912 22:31:32.016414   44139 command_runner.go:130] > # ]
	I0912 22:31:32.016420   44139 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0912 22:31:32.016427   44139 command_runner.go:130] > # no_pivot = false
	I0912 22:31:32.016433   44139 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0912 22:31:32.016442   44139 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0912 22:31:32.016447   44139 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0912 22:31:32.016455   44139 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0912 22:31:32.016460   44139 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0912 22:31:32.016466   44139 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0912 22:31:32.016472   44139 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0912 22:31:32.016477   44139 command_runner.go:130] > # Cgroup setting for conmon
	I0912 22:31:32.016485   44139 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0912 22:31:32.016488   44139 command_runner.go:130] > conmon_cgroup = "pod"
	I0912 22:31:32.016499   44139 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0912 22:31:32.016506   44139 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0912 22:31:32.016519   44139 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0912 22:31:32.016525   44139 command_runner.go:130] > conmon_env = [
	I0912 22:31:32.016531   44139 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0912 22:31:32.016536   44139 command_runner.go:130] > ]
	I0912 22:31:32.016542   44139 command_runner.go:130] > # Additional environment variables to set for all the
	I0912 22:31:32.016546   44139 command_runner.go:130] > # containers. These are overridden if set in the
	I0912 22:31:32.016554   44139 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0912 22:31:32.016559   44139 command_runner.go:130] > # default_env = [
	I0912 22:31:32.016565   44139 command_runner.go:130] > # ]
	I0912 22:31:32.016574   44139 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0912 22:31:32.016583   44139 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0912 22:31:32.016590   44139 command_runner.go:130] > # selinux = false
	I0912 22:31:32.016596   44139 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0912 22:31:32.016604   44139 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0912 22:31:32.016610   44139 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0912 22:31:32.016614   44139 command_runner.go:130] > # seccomp_profile = ""
	I0912 22:31:32.016620   44139 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0912 22:31:32.016627   44139 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0912 22:31:32.016633   44139 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0912 22:31:32.016640   44139 command_runner.go:130] > # which might increase security.
	I0912 22:31:32.016644   44139 command_runner.go:130] > # This option is currently deprecated,
	I0912 22:31:32.016650   44139 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0912 22:31:32.016655   44139 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0912 22:31:32.016662   44139 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0912 22:31:32.016670   44139 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0912 22:31:32.016677   44139 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0912 22:31:32.016685   44139 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0912 22:31:32.016690   44139 command_runner.go:130] > # This option supports live configuration reload.
	I0912 22:31:32.016697   44139 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0912 22:31:32.016703   44139 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0912 22:31:32.016710   44139 command_runner.go:130] > # the cgroup blockio controller.
	I0912 22:31:32.016714   44139 command_runner.go:130] > # blockio_config_file = ""
	I0912 22:31:32.016720   44139 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0912 22:31:32.016726   44139 command_runner.go:130] > # blockio parameters.
	I0912 22:31:32.016730   44139 command_runner.go:130] > # blockio_reload = false
	I0912 22:31:32.016736   44139 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0912 22:31:32.016742   44139 command_runner.go:130] > # irqbalance daemon.
	I0912 22:31:32.016747   44139 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0912 22:31:32.016753   44139 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0912 22:31:32.016761   44139 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0912 22:31:32.016767   44139 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0912 22:31:32.016775   44139 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0912 22:31:32.016782   44139 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0912 22:31:32.016788   44139 command_runner.go:130] > # This option supports live configuration reload.
	I0912 22:31:32.016793   44139 command_runner.go:130] > # rdt_config_file = ""
	I0912 22:31:32.016801   44139 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0912 22:31:32.016805   44139 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0912 22:31:32.016821   44139 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0912 22:31:32.016827   44139 command_runner.go:130] > # separate_pull_cgroup = ""
	I0912 22:31:32.016833   44139 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0912 22:31:32.016841   44139 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0912 22:31:32.016845   44139 command_runner.go:130] > # will be added.
	I0912 22:31:32.016850   44139 command_runner.go:130] > # default_capabilities = [
	I0912 22:31:32.016854   44139 command_runner.go:130] > # 	"CHOWN",
	I0912 22:31:32.016860   44139 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0912 22:31:32.016864   44139 command_runner.go:130] > # 	"FSETID",
	I0912 22:31:32.016867   44139 command_runner.go:130] > # 	"FOWNER",
	I0912 22:31:32.016871   44139 command_runner.go:130] > # 	"SETGID",
	I0912 22:31:32.016874   44139 command_runner.go:130] > # 	"SETUID",
	I0912 22:31:32.016878   44139 command_runner.go:130] > # 	"SETPCAP",
	I0912 22:31:32.016882   44139 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0912 22:31:32.016886   44139 command_runner.go:130] > # 	"KILL",
	I0912 22:31:32.016890   44139 command_runner.go:130] > # ]
	I0912 22:31:32.016897   44139 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0912 22:31:32.016906   44139 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0912 22:31:32.016911   44139 command_runner.go:130] > # add_inheritable_capabilities = false
	I0912 22:31:32.016917   44139 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0912 22:31:32.016923   44139 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0912 22:31:32.016929   44139 command_runner.go:130] > default_sysctls = [
	I0912 22:31:32.016934   44139 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0912 22:31:32.016937   44139 command_runner.go:130] > ]
	I0912 22:31:32.016942   44139 command_runner.go:130] > # List of devices on the host that a
	I0912 22:31:32.016950   44139 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0912 22:31:32.016954   44139 command_runner.go:130] > # allowed_devices = [
	I0912 22:31:32.016959   44139 command_runner.go:130] > # 	"/dev/fuse",
	I0912 22:31:32.016962   44139 command_runner.go:130] > # ]
	I0912 22:31:32.016967   44139 command_runner.go:130] > # List of additional devices. specified as
	I0912 22:31:32.016973   44139 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0912 22:31:32.016980   44139 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0912 22:31:32.016986   44139 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0912 22:31:32.016992   44139 command_runner.go:130] > # additional_devices = [
	I0912 22:31:32.016996   44139 command_runner.go:130] > # ]
	I0912 22:31:32.017001   44139 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0912 22:31:32.017007   44139 command_runner.go:130] > # cdi_spec_dirs = [
	I0912 22:31:32.017011   44139 command_runner.go:130] > # 	"/etc/cdi",
	I0912 22:31:32.017014   44139 command_runner.go:130] > # 	"/var/run/cdi",
	I0912 22:31:32.017020   44139 command_runner.go:130] > # ]
	I0912 22:31:32.017026   44139 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0912 22:31:32.017033   44139 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0912 22:31:32.017038   44139 command_runner.go:130] > # Defaults to false.
	I0912 22:31:32.017045   44139 command_runner.go:130] > # device_ownership_from_security_context = false
	I0912 22:31:32.017051   44139 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0912 22:31:32.017056   44139 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0912 22:31:32.017061   44139 command_runner.go:130] > # hooks_dir = [
	I0912 22:31:32.017065   44139 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0912 22:31:32.017071   44139 command_runner.go:130] > # ]
	I0912 22:31:32.017076   44139 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0912 22:31:32.017085   44139 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0912 22:31:32.017090   44139 command_runner.go:130] > # its default mounts from the following two files:
	I0912 22:31:32.017095   44139 command_runner.go:130] > #
	I0912 22:31:32.017101   44139 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0912 22:31:32.017108   44139 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0912 22:31:32.017113   44139 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0912 22:31:32.017118   44139 command_runner.go:130] > #
	I0912 22:31:32.017124   44139 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0912 22:31:32.017132   44139 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0912 22:31:32.017138   44139 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0912 22:31:32.017143   44139 command_runner.go:130] > #      only add mounts it finds in this file.
	I0912 22:31:32.017148   44139 command_runner.go:130] > #
	I0912 22:31:32.017152   44139 command_runner.go:130] > # default_mounts_file = ""
	I0912 22:31:32.017159   44139 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0912 22:31:32.017165   44139 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0912 22:31:32.017171   44139 command_runner.go:130] > pids_limit = 1024
	I0912 22:31:32.017177   44139 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0912 22:31:32.017184   44139 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0912 22:31:32.017190   44139 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0912 22:31:32.017200   44139 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0912 22:31:32.017204   44139 command_runner.go:130] > # log_size_max = -1
	I0912 22:31:32.017211   44139 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0912 22:31:32.017218   44139 command_runner.go:130] > # log_to_journald = false
	I0912 22:31:32.017223   44139 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0912 22:31:32.017228   44139 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0912 22:31:32.017233   44139 command_runner.go:130] > # Path to directory for container attach sockets.
	I0912 22:31:32.017238   44139 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0912 22:31:32.017246   44139 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0912 22:31:32.017250   44139 command_runner.go:130] > # bind_mount_prefix = ""
	I0912 22:31:32.017258   44139 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0912 22:31:32.017264   44139 command_runner.go:130] > # read_only = false
	I0912 22:31:32.017275   44139 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0912 22:31:32.017285   44139 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0912 22:31:32.017294   44139 command_runner.go:130] > # live configuration reload.
	I0912 22:31:32.017300   44139 command_runner.go:130] > # log_level = "info"
	I0912 22:31:32.017311   44139 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0912 22:31:32.017321   44139 command_runner.go:130] > # This option supports live configuration reload.
	I0912 22:31:32.017327   44139 command_runner.go:130] > # log_filter = ""
	I0912 22:31:32.017339   44139 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0912 22:31:32.017353   44139 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0912 22:31:32.017362   44139 command_runner.go:130] > # separated by comma.
	I0912 22:31:32.017373   44139 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0912 22:31:32.017383   44139 command_runner.go:130] > # uid_mappings = ""
	I0912 22:31:32.017392   44139 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0912 22:31:32.017404   44139 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0912 22:31:32.017413   44139 command_runner.go:130] > # separated by comma.
	I0912 22:31:32.017420   44139 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0912 22:31:32.017427   44139 command_runner.go:130] > # gid_mappings = ""
	I0912 22:31:32.017433   44139 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0912 22:31:32.017441   44139 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0912 22:31:32.017447   44139 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0912 22:31:32.017456   44139 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0912 22:31:32.017462   44139 command_runner.go:130] > # minimum_mappable_uid = -1
	I0912 22:31:32.017470   44139 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0912 22:31:32.017476   44139 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0912 22:31:32.017483   44139 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0912 22:31:32.017493   44139 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0912 22:31:32.017499   44139 command_runner.go:130] > # minimum_mappable_gid = -1
	I0912 22:31:32.017506   44139 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0912 22:31:32.017518   44139 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0912 22:31:32.017526   44139 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0912 22:31:32.017531   44139 command_runner.go:130] > # ctr_stop_timeout = 30
	I0912 22:31:32.017539   44139 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0912 22:31:32.017545   44139 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0912 22:31:32.017549   44139 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0912 22:31:32.017557   44139 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0912 22:31:32.017562   44139 command_runner.go:130] > drop_infra_ctr = false
	I0912 22:31:32.017567   44139 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0912 22:31:32.017575   44139 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0912 22:31:32.017582   44139 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0912 22:31:32.017587   44139 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0912 22:31:32.017594   44139 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0912 22:31:32.017601   44139 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0912 22:31:32.017607   44139 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0912 22:31:32.017622   44139 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0912 22:31:32.017626   44139 command_runner.go:130] > # shared_cpuset = ""
	I0912 22:31:32.017632   44139 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0912 22:31:32.017639   44139 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0912 22:31:32.017644   44139 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0912 22:31:32.017653   44139 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0912 22:31:32.017659   44139 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0912 22:31:32.017665   44139 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0912 22:31:32.017685   44139 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0912 22:31:32.017695   44139 command_runner.go:130] > # enable_criu_support = false
	I0912 22:31:32.017701   44139 command_runner.go:130] > # Enable/disable the generation of the container,
	I0912 22:31:32.017707   44139 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0912 22:31:32.017711   44139 command_runner.go:130] > # enable_pod_events = false
	I0912 22:31:32.017717   44139 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0912 22:31:32.017725   44139 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0912 22:31:32.017731   44139 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0912 22:31:32.017737   44139 command_runner.go:130] > # default_runtime = "runc"
	I0912 22:31:32.017742   44139 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0912 22:31:32.017751   44139 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0912 22:31:32.017762   44139 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0912 22:31:32.017770   44139 command_runner.go:130] > # creation as a file is not desired either.
	I0912 22:31:32.017777   44139 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0912 22:31:32.017785   44139 command_runner.go:130] > # the hostname is being managed dynamically.
	I0912 22:31:32.017790   44139 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0912 22:31:32.017795   44139 command_runner.go:130] > # ]
	I0912 22:31:32.017801   44139 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0912 22:31:32.017809   44139 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0912 22:31:32.017815   44139 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0912 22:31:32.017822   44139 command_runner.go:130] > # Each entry in the table should follow the format:
	I0912 22:31:32.017826   44139 command_runner.go:130] > #
	I0912 22:31:32.017831   44139 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0912 22:31:32.017838   44139 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0912 22:31:32.017885   44139 command_runner.go:130] > # runtime_type = "oci"
	I0912 22:31:32.017893   44139 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0912 22:31:32.017897   44139 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0912 22:31:32.017901   44139 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0912 22:31:32.017905   44139 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0912 22:31:32.017909   44139 command_runner.go:130] > # monitor_env = []
	I0912 22:31:32.017914   44139 command_runner.go:130] > # privileged_without_host_devices = false
	I0912 22:31:32.017918   44139 command_runner.go:130] > # allowed_annotations = []
	I0912 22:31:32.017922   44139 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0912 22:31:32.017928   44139 command_runner.go:130] > # Where:
	I0912 22:31:32.017933   44139 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0912 22:31:32.017941   44139 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0912 22:31:32.017947   44139 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0912 22:31:32.017956   44139 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0912 22:31:32.017960   44139 command_runner.go:130] > #   in $PATH.
	I0912 22:31:32.017966   44139 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0912 22:31:32.017973   44139 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0912 22:31:32.017983   44139 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0912 22:31:32.017988   44139 command_runner.go:130] > #   state.
	I0912 22:31:32.017994   44139 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0912 22:31:32.018002   44139 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0912 22:31:32.018008   44139 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0912 22:31:32.018017   44139 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0912 22:31:32.018023   44139 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0912 22:31:32.018032   44139 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0912 22:31:32.018036   44139 command_runner.go:130] > #   The currently recognized values are:
	I0912 22:31:32.018044   44139 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0912 22:31:32.018052   44139 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0912 22:31:32.018059   44139 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0912 22:31:32.018067   44139 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0912 22:31:32.018076   44139 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0912 22:31:32.018083   44139 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0912 22:31:32.018091   44139 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0912 22:31:32.018097   44139 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0912 22:31:32.018105   44139 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0912 22:31:32.018111   44139 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0912 22:31:32.018118   44139 command_runner.go:130] > #   deprecated option "conmon".
	I0912 22:31:32.018124   44139 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0912 22:31:32.018131   44139 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0912 22:31:32.018137   44139 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0912 22:31:32.018142   44139 command_runner.go:130] > #   should be moved to the container's cgroup
	I0912 22:31:32.018150   44139 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0912 22:31:32.018155   44139 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0912 22:31:32.018163   44139 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0912 22:31:32.018168   44139 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0912 22:31:32.018172   44139 command_runner.go:130] > #
	I0912 22:31:32.018177   44139 command_runner.go:130] > # Using the seccomp notifier feature:
	I0912 22:31:32.018182   44139 command_runner.go:130] > #
	I0912 22:31:32.018188   44139 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0912 22:31:32.018195   44139 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0912 22:31:32.018200   44139 command_runner.go:130] > #
	I0912 22:31:32.018206   44139 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0912 22:31:32.018214   44139 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0912 22:31:32.018217   44139 command_runner.go:130] > #
	I0912 22:31:32.018223   44139 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0912 22:31:32.018228   44139 command_runner.go:130] > # feature.
	I0912 22:31:32.018231   44139 command_runner.go:130] > #
	I0912 22:31:32.018237   44139 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0912 22:31:32.018246   44139 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0912 22:31:32.018252   44139 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0912 22:31:32.018259   44139 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0912 22:31:32.018267   44139 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0912 22:31:32.018270   44139 command_runner.go:130] > #
	I0912 22:31:32.018277   44139 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0912 22:31:32.018284   44139 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0912 22:31:32.018287   44139 command_runner.go:130] > #
	I0912 22:31:32.018293   44139 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0912 22:31:32.018301   44139 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0912 22:31:32.018304   44139 command_runner.go:130] > #
	I0912 22:31:32.018310   44139 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0912 22:31:32.018316   44139 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0912 22:31:32.018319   44139 command_runner.go:130] > # limitation.
	I0912 22:31:32.018325   44139 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0912 22:31:32.018332   44139 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0912 22:31:32.018336   44139 command_runner.go:130] > runtime_type = "oci"
	I0912 22:31:32.018340   44139 command_runner.go:130] > runtime_root = "/run/runc"
	I0912 22:31:32.018344   44139 command_runner.go:130] > runtime_config_path = ""
	I0912 22:31:32.018349   44139 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0912 22:31:32.018353   44139 command_runner.go:130] > monitor_cgroup = "pod"
	I0912 22:31:32.018357   44139 command_runner.go:130] > monitor_exec_cgroup = ""
	I0912 22:31:32.018363   44139 command_runner.go:130] > monitor_env = [
	I0912 22:31:32.018369   44139 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0912 22:31:32.018373   44139 command_runner.go:130] > ]
	I0912 22:31:32.018378   44139 command_runner.go:130] > privileged_without_host_devices = false
	I0912 22:31:32.018386   44139 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0912 22:31:32.018391   44139 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0912 22:31:32.018397   44139 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0912 22:31:32.018407   44139 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0912 22:31:32.018414   44139 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0912 22:31:32.018422   44139 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0912 22:31:32.018430   44139 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0912 22:31:32.018439   44139 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0912 22:31:32.018445   44139 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0912 22:31:32.018452   44139 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0912 22:31:32.018459   44139 command_runner.go:130] > # Example:
	I0912 22:31:32.018463   44139 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0912 22:31:32.018470   44139 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0912 22:31:32.018475   44139 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0912 22:31:32.018479   44139 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0912 22:31:32.018482   44139 command_runner.go:130] > # cpuset = 0
	I0912 22:31:32.018486   44139 command_runner.go:130] > # cpushares = "0-1"
	I0912 22:31:32.018490   44139 command_runner.go:130] > # Where:
	I0912 22:31:32.018495   44139 command_runner.go:130] > # The workload name is workload-type.
	I0912 22:31:32.018503   44139 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0912 22:31:32.018508   44139 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0912 22:31:32.018517   44139 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0912 22:31:32.018525   44139 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0912 22:31:32.018533   44139 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0912 22:31:32.018538   44139 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0912 22:31:32.018545   44139 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0912 22:31:32.018552   44139 command_runner.go:130] > # Default value is set to true
	I0912 22:31:32.018556   44139 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0912 22:31:32.018561   44139 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0912 22:31:32.018568   44139 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0912 22:31:32.018573   44139 command_runner.go:130] > # Default value is set to 'false'
	I0912 22:31:32.018579   44139 command_runner.go:130] > # disable_hostport_mapping = false
	I0912 22:31:32.018586   44139 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0912 22:31:32.018589   44139 command_runner.go:130] > #
	I0912 22:31:32.018594   44139 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0912 22:31:32.018600   44139 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0912 22:31:32.018605   44139 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0912 22:31:32.018611   44139 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0912 22:31:32.018617   44139 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0912 22:31:32.018621   44139 command_runner.go:130] > [crio.image]
	I0912 22:31:32.018627   44139 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0912 22:31:32.018631   44139 command_runner.go:130] > # default_transport = "docker://"
	I0912 22:31:32.018637   44139 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0912 22:31:32.018643   44139 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0912 22:31:32.018646   44139 command_runner.go:130] > # global_auth_file = ""
	I0912 22:31:32.018651   44139 command_runner.go:130] > # The image used to instantiate infra containers.
	I0912 22:31:32.018656   44139 command_runner.go:130] > # This option supports live configuration reload.
	I0912 22:31:32.018661   44139 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0912 22:31:32.018667   44139 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0912 22:31:32.018672   44139 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0912 22:31:32.018677   44139 command_runner.go:130] > # This option supports live configuration reload.
	I0912 22:31:32.018681   44139 command_runner.go:130] > # pause_image_auth_file = ""
	I0912 22:31:32.018686   44139 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0912 22:31:32.018692   44139 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0912 22:31:32.018697   44139 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0912 22:31:32.018702   44139 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0912 22:31:32.018707   44139 command_runner.go:130] > # pause_command = "/pause"
	I0912 22:31:32.018712   44139 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0912 22:31:32.018717   44139 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0912 22:31:32.018722   44139 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0912 22:31:32.018729   44139 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0912 22:31:32.018734   44139 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0912 22:31:32.018740   44139 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0912 22:31:32.018744   44139 command_runner.go:130] > # pinned_images = [
	I0912 22:31:32.018747   44139 command_runner.go:130] > # ]
	I0912 22:31:32.018753   44139 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0912 22:31:32.018759   44139 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0912 22:31:32.018764   44139 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0912 22:31:32.018772   44139 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0912 22:31:32.018777   44139 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0912 22:31:32.018781   44139 command_runner.go:130] > # signature_policy = ""
	I0912 22:31:32.018786   44139 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0912 22:31:32.018792   44139 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0912 22:31:32.018798   44139 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0912 22:31:32.018803   44139 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0912 22:31:32.018809   44139 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0912 22:31:32.018816   44139 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0912 22:31:32.018822   44139 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0912 22:31:32.018829   44139 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0912 22:31:32.018834   44139 command_runner.go:130] > # changing them here.
	I0912 22:31:32.018838   44139 command_runner.go:130] > # insecure_registries = [
	I0912 22:31:32.018841   44139 command_runner.go:130] > # ]
	I0912 22:31:32.018848   44139 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0912 22:31:32.018856   44139 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0912 22:31:32.018860   44139 command_runner.go:130] > # image_volumes = "mkdir"
	I0912 22:31:32.018865   44139 command_runner.go:130] > # Temporary directory to use for storing big files
	I0912 22:31:32.018871   44139 command_runner.go:130] > # big_files_temporary_dir = ""
	I0912 22:31:32.018876   44139 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0912 22:31:32.018880   44139 command_runner.go:130] > # CNI plugins.
	I0912 22:31:32.018884   44139 command_runner.go:130] > [crio.network]
	I0912 22:31:32.018889   44139 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0912 22:31:32.018897   44139 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0912 22:31:32.018901   44139 command_runner.go:130] > # cni_default_network = ""
	I0912 22:31:32.018908   44139 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0912 22:31:32.018913   44139 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0912 22:31:32.018920   44139 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0912 22:31:32.018925   44139 command_runner.go:130] > # plugin_dirs = [
	I0912 22:31:32.018931   44139 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0912 22:31:32.018935   44139 command_runner.go:130] > # ]
	I0912 22:31:32.018940   44139 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0912 22:31:32.018946   44139 command_runner.go:130] > [crio.metrics]
	I0912 22:31:32.018950   44139 command_runner.go:130] > # Globally enable or disable metrics support.
	I0912 22:31:32.018955   44139 command_runner.go:130] > enable_metrics = true
	I0912 22:31:32.018959   44139 command_runner.go:130] > # Specify enabled metrics collectors.
	I0912 22:31:32.018965   44139 command_runner.go:130] > # Per default all metrics are enabled.
	I0912 22:31:32.018971   44139 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0912 22:31:32.018980   44139 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0912 22:31:32.018985   44139 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0912 22:31:32.018991   44139 command_runner.go:130] > # metrics_collectors = [
	I0912 22:31:32.018995   44139 command_runner.go:130] > # 	"operations",
	I0912 22:31:32.019002   44139 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0912 22:31:32.019006   44139 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0912 22:31:32.019014   44139 command_runner.go:130] > # 	"operations_errors",
	I0912 22:31:32.019018   44139 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0912 22:31:32.019021   44139 command_runner.go:130] > # 	"image_pulls_by_name",
	I0912 22:31:32.019026   44139 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0912 22:31:32.019030   44139 command_runner.go:130] > # 	"image_pulls_failures",
	I0912 22:31:32.019034   44139 command_runner.go:130] > # 	"image_pulls_successes",
	I0912 22:31:32.019040   44139 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0912 22:31:32.019046   44139 command_runner.go:130] > # 	"image_layer_reuse",
	I0912 22:31:32.019051   44139 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0912 22:31:32.019056   44139 command_runner.go:130] > # 	"containers_oom_total",
	I0912 22:31:32.019060   44139 command_runner.go:130] > # 	"containers_oom",
	I0912 22:31:32.019066   44139 command_runner.go:130] > # 	"processes_defunct",
	I0912 22:31:32.019071   44139 command_runner.go:130] > # 	"operations_total",
	I0912 22:31:32.019075   44139 command_runner.go:130] > # 	"operations_latency_seconds",
	I0912 22:31:32.019081   44139 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0912 22:31:32.019086   44139 command_runner.go:130] > # 	"operations_errors_total",
	I0912 22:31:32.019092   44139 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0912 22:31:32.019096   44139 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0912 22:31:32.019101   44139 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0912 22:31:32.019105   44139 command_runner.go:130] > # 	"image_pulls_success_total",
	I0912 22:31:32.019109   44139 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0912 22:31:32.019113   44139 command_runner.go:130] > # 	"containers_oom_count_total",
	I0912 22:31:32.019118   44139 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0912 22:31:32.019124   44139 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0912 22:31:32.019128   44139 command_runner.go:130] > # ]
	I0912 22:31:32.019132   44139 command_runner.go:130] > # The port on which the metrics server will listen.
	I0912 22:31:32.019137   44139 command_runner.go:130] > # metrics_port = 9090
	I0912 22:31:32.019142   44139 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0912 22:31:32.019148   44139 command_runner.go:130] > # metrics_socket = ""
	I0912 22:31:32.019153   44139 command_runner.go:130] > # The certificate for the secure metrics server.
	I0912 22:31:32.019161   44139 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0912 22:31:32.019167   44139 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0912 22:31:32.019173   44139 command_runner.go:130] > # certificate on any modification event.
	I0912 22:31:32.019177   44139 command_runner.go:130] > # metrics_cert = ""
	I0912 22:31:32.019184   44139 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0912 22:31:32.019189   44139 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0912 22:31:32.019193   44139 command_runner.go:130] > # metrics_key = ""
	I0912 22:31:32.019198   44139 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0912 22:31:32.019204   44139 command_runner.go:130] > [crio.tracing]
	I0912 22:31:32.019210   44139 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0912 22:31:32.019215   44139 command_runner.go:130] > # enable_tracing = false
	I0912 22:31:32.019220   44139 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0912 22:31:32.019231   44139 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0912 22:31:32.019238   44139 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0912 22:31:32.019246   44139 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0912 22:31:32.019250   44139 command_runner.go:130] > # CRI-O NRI configuration.
	I0912 22:31:32.019256   44139 command_runner.go:130] > [crio.nri]
	I0912 22:31:32.019260   44139 command_runner.go:130] > # Globally enable or disable NRI.
	I0912 22:31:32.019267   44139 command_runner.go:130] > # enable_nri = false
	I0912 22:31:32.019272   44139 command_runner.go:130] > # NRI socket to listen on.
	I0912 22:31:32.019276   44139 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0912 22:31:32.019281   44139 command_runner.go:130] > # NRI plugin directory to use.
	I0912 22:31:32.019288   44139 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0912 22:31:32.019293   44139 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0912 22:31:32.019299   44139 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0912 22:31:32.019305   44139 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0912 22:31:32.019311   44139 command_runner.go:130] > # nri_disable_connections = false
	I0912 22:31:32.019316   44139 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0912 22:31:32.019324   44139 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0912 22:31:32.019329   44139 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0912 22:31:32.019335   44139 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0912 22:31:32.019341   44139 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0912 22:31:32.019347   44139 command_runner.go:130] > [crio.stats]
	I0912 22:31:32.019352   44139 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0912 22:31:32.019359   44139 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0912 22:31:32.019363   44139 command_runner.go:130] > # stats_collection_period = 0
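
The configuration dump above ends with metrics collection enabled (enable_metrics = true) on the default metrics_port of 9090. As a quick sanity check that the CRI-O metrics endpoint is actually serving on a node under test, a minimal Go sketch along the following lines could be run on the host; the 127.0.0.1:9090 address and the /metrics path are assumptions based on the defaults shown above, not something exercised in this run.

	package main

	import (
		"bufio"
		"fmt"
		"log"
		"net/http"
		"strings"
	)

	func main() {
		// Assumed endpoint: CRI-O Prometheus metrics on the default metrics_port (9090).
		resp, err := http.Get("http://127.0.0.1:9090/metrics")
		if err != nil {
			log.Fatalf("metrics endpoint not reachable: %v", err)
		}
		defer resp.Body.Close()

		// Print only the operations counters named in the metrics_collectors comment above.
		sc := bufio.NewScanner(resp.Body)
		for sc.Scan() {
			line := sc.Text()
			if strings.HasPrefix(line, "container_runtime_crio_operations") ||
				strings.HasPrefix(line, "crio_operations") {
				fmt.Println(line)
			}
		}
		if err := sc.Err(); err != nil {
			log.Fatal(err)
		}
	}
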
	I0912 22:31:32.019468   44139 cni.go:84] Creating CNI manager for ""
	I0912 22:31:32.019478   44139 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0912 22:31:32.019485   44139 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0912 22:31:32.019503   44139 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.28 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-768483 NodeName:multinode-768483 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.28"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.28 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0912 22:31:32.019637   44139 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.28
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-768483"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.28
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.28"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0912 22:31:32.019692   44139 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0912 22:31:32.029344   44139 command_runner.go:130] > kubeadm
	I0912 22:31:32.029363   44139 command_runner.go:130] > kubectl
	I0912 22:31:32.029368   44139 command_runner.go:130] > kubelet
	I0912 22:31:32.029399   44139 binaries.go:44] Found k8s binaries, skipping transfer
	I0912 22:31:32.029448   44139 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0912 22:31:32.038147   44139 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0912 22:31:32.053371   44139 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0912 22:31:32.069237   44139 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0912 22:31:32.085783   44139 ssh_runner.go:195] Run: grep 192.168.39.28	control-plane.minikube.internal$ /etc/hosts
	I0912 22:31:32.089428   44139 command_runner.go:130] > 192.168.39.28	control-plane.minikube.internal
	I0912 22:31:32.089534   44139 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 22:31:32.233049   44139 ssh_runner.go:195] Run: sudo systemctl start kubelet
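
Before restarting the kubelet, the log above greps /etc/hosts to confirm that control-plane.minikube.internal already maps to the node IP (192.168.39.28). A self-contained sketch of that check in Go, assuming the same "IP hostname" hosts-file layout the grep relies on, might look like this:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// hasHostEntry reports whether the hosts file already maps hostname to ip.
	func hasHostEntry(path, ip, hostname string) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		for _, line := range strings.Split(string(data), "\n") {
			fields := strings.Fields(line)
			if len(fields) >= 2 && fields[0] == ip {
				for _, h := range fields[1:] {
					if h == hostname {
						return true, nil
					}
				}
			}
		}
		return false, nil
	}

	func main() {
		ok, err := hasHostEntry("/etc/hosts", "192.168.39.28", "control-plane.minikube.internal")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("entry present:", ok)
	}
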
	I0912 22:31:32.248312   44139 certs.go:68] Setting up /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/multinode-768483 for IP: 192.168.39.28
	I0912 22:31:32.248336   44139 certs.go:194] generating shared ca certs ...
	I0912 22:31:32.248360   44139 certs.go:226] acquiring lock for ca certs: {Name:mk5e19f2c2757edc3fcb6b43a39efee0885b349c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 22:31:32.248532   44139 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.key
	I0912 22:31:32.248595   44139 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.key
	I0912 22:31:32.248607   44139 certs.go:256] generating profile certs ...
	I0912 22:31:32.248701   44139 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/multinode-768483/client.key
	I0912 22:31:32.248798   44139 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/multinode-768483/apiserver.key.832235e5
	I0912 22:31:32.248853   44139 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/multinode-768483/proxy-client.key
	I0912 22:31:32.248867   44139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0912 22:31:32.248880   44139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0912 22:31:32.248895   44139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0912 22:31:32.248908   44139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0912 22:31:32.248918   44139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/multinode-768483/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0912 22:31:32.248931   44139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/multinode-768483/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0912 22:31:32.248943   44139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/multinode-768483/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0912 22:31:32.248955   44139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/multinode-768483/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0912 22:31:32.249002   44139 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083.pem (1338 bytes)
	W0912 22:31:32.249030   44139 certs.go:480] ignoring /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083_empty.pem, impossibly tiny 0 bytes
	I0912 22:31:32.249039   44139 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem (1679 bytes)
	I0912 22:31:32.249062   44139 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem (1082 bytes)
	I0912 22:31:32.249086   44139 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem (1123 bytes)
	I0912 22:31:32.249112   44139 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem (1679 bytes)
	I0912 22:31:32.249162   44139 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem (1708 bytes)
	I0912 22:31:32.249192   44139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083.pem -> /usr/share/ca-certificates/13083.pem
	I0912 22:31:32.249205   44139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem -> /usr/share/ca-certificates/130832.pem
	I0912 22:31:32.249218   44139 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0912 22:31:32.249842   44139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0912 22:31:32.272501   44139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0912 22:31:32.294737   44139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0912 22:31:32.317199   44139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0912 22:31:32.340117   44139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/multinode-768483/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0912 22:31:32.361857   44139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/multinode-768483/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0912 22:31:32.384027   44139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/multinode-768483/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0912 22:31:32.407198   44139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/multinode-768483/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0912 22:31:32.429914   44139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083.pem --> /usr/share/ca-certificates/13083.pem (1338 bytes)
	I0912 22:31:32.451815   44139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem --> /usr/share/ca-certificates/130832.pem (1708 bytes)
	I0912 22:31:32.473951   44139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0912 22:31:32.495404   44139 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0912 22:31:32.510703   44139 ssh_runner.go:195] Run: openssl version
	I0912 22:31:32.516096   44139 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0912 22:31:32.516195   44139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13083.pem && ln -fs /usr/share/ca-certificates/13083.pem /etc/ssl/certs/13083.pem"
	I0912 22:31:32.526140   44139 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13083.pem
	I0912 22:31:32.530111   44139 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 12 21:47 /usr/share/ca-certificates/13083.pem
	I0912 22:31:32.530201   44139 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 12 21:47 /usr/share/ca-certificates/13083.pem
	I0912 22:31:32.530246   44139 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13083.pem
	I0912 22:31:32.535219   44139 command_runner.go:130] > 51391683
	I0912 22:31:32.535312   44139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13083.pem /etc/ssl/certs/51391683.0"
	I0912 22:31:32.544361   44139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130832.pem && ln -fs /usr/share/ca-certificates/130832.pem /etc/ssl/certs/130832.pem"
	I0912 22:31:32.555352   44139 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130832.pem
	I0912 22:31:32.559360   44139 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 12 21:47 /usr/share/ca-certificates/130832.pem
	I0912 22:31:32.559401   44139 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 12 21:47 /usr/share/ca-certificates/130832.pem
	I0912 22:31:32.559447   44139 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130832.pem
	I0912 22:31:32.564696   44139 command_runner.go:130] > 3ec20f2e
	I0912 22:31:32.564785   44139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130832.pem /etc/ssl/certs/3ec20f2e.0"
	I0912 22:31:32.574117   44139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0912 22:31:32.584589   44139 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0912 22:31:32.588793   44139 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 12 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0912 22:31:32.588826   44139 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 12 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0912 22:31:32.588872   44139 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0912 22:31:32.594011   44139 command_runner.go:130] > b5213941
	I0912 22:31:32.594170   44139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
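
The three blocks above install each CA bundle under /usr/share/ca-certificates and then link it into /etc/ssl/certs under its OpenSSL subject hash (51391683, 3ec20f2e and b5213941 in this run). A rough Go equivalent of one such step, shelling out to the same openssl invocation the log shows, is sketched below; the certificate path is the one from this run and the <hash>.0 link name follows the usual c_rehash convention (the symlink step needs root).

	package main

	import (
		"fmt"
		"log"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func main() {
		certPath := "/usr/share/ca-certificates/13083.pem"

		// Same command the log runs: openssl x509 -hash -noout -in <cert>
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			log.Fatalf("openssl hash failed: %v", err)
		}
		hash := strings.TrimSpace(string(out))

		// Link the certificate into /etc/ssl/certs as <subject-hash>.0,
		// mirroring the "ln -fs" step in the log.
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		_ = os.Remove(link) // ignore the error if the link does not exist yet
		if err := os.Symlink(certPath, link); err != nil {
			log.Fatal(err)
		}
		fmt.Println("linked", certPath, "->", link)
	}
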
	I0912 22:31:32.603511   44139 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0912 22:31:32.607489   44139 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0912 22:31:32.607513   44139 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0912 22:31:32.607521   44139 command_runner.go:130] > Device: 253,1	Inode: 4195880     Links: 1
	I0912 22:31:32.607530   44139 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0912 22:31:32.607541   44139 command_runner.go:130] > Access: 2024-09-12 22:24:41.686828618 +0000
	I0912 22:31:32.607549   44139 command_runner.go:130] > Modify: 2024-09-12 22:24:41.686828618 +0000
	I0912 22:31:32.607561   44139 command_runner.go:130] > Change: 2024-09-12 22:24:41.686828618 +0000
	I0912 22:31:32.607576   44139 command_runner.go:130] >  Birth: 2024-09-12 22:24:41.686828618 +0000
	I0912 22:31:32.607717   44139 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0912 22:31:32.612924   44139 command_runner.go:130] > Certificate will not expire
	I0912 22:31:32.613042   44139 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0912 22:31:32.618236   44139 command_runner.go:130] > Certificate will not expire
	I0912 22:31:32.618302   44139 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0912 22:31:32.623484   44139 command_runner.go:130] > Certificate will not expire
	I0912 22:31:32.623543   44139 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0912 22:31:32.628627   44139 command_runner.go:130] > Certificate will not expire
	I0912 22:31:32.628783   44139 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0912 22:31:32.634124   44139 command_runner.go:130] > Certificate will not expire
	I0912 22:31:32.634182   44139 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0912 22:31:32.639789   44139 command_runner.go:130] > Certificate will not expire
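
Each "openssl x509 -checkend 86400" call above exits non-zero if the certificate expires within the next 24 hours, which is how the tooling decides whether regeneration is needed. The following Go sketch repeats the same check over the certificate paths seen in this log; it assumes it is run on the node where those paths exist.

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		certs := []string{
			"/var/lib/minikube/certs/apiserver-etcd-client.crt",
			"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
			"/var/lib/minikube/certs/etcd/server.crt",
			"/var/lib/minikube/certs/etcd/healthcheck-client.crt",
			"/var/lib/minikube/certs/etcd/peer.crt",
			"/var/lib/minikube/certs/front-proxy-client.crt",
		}
		for _, c := range certs {
			// openssl exits non-zero if the cert expires within 86400 seconds (24h).
			cmd := exec.Command("openssl", "x509", "-noout", "-in", c, "-checkend", "86400")
			if err := cmd.Run(); err != nil {
				fmt.Printf("%s: will expire within 24h (or could not be read): %v\n", c, err)
				continue
			}
			fmt.Printf("%s: will not expire within 24h\n", c)
		}
	}

The 86400-second threshold mirrors the value used in the log; it can be raised to probe a longer expiry horizon with the same exit-status behavior.
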
	I0912 22:31:32.639858   44139 kubeadm.go:392] StartCluster: {Name:multinode-768483 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
1 ClusterName:multinode-768483 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.28 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.230 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.92 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false i
nspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 22:31:32.639954   44139 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0912 22:31:32.639996   44139 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0912 22:31:32.674400   44139 command_runner.go:130] > 839ededeb42c7fe56fe0af98d96e4b810825db084871453bdbf1e330f313f11b
	I0912 22:31:32.674422   44139 command_runner.go:130] > e7c17ba1a9c6116b065e200c70dde80d097578700da517f9acb2ca265d842bdd
	I0912 22:31:32.674429   44139 command_runner.go:130] > 804ba8843e87765fc62adc0cfcd7000f8c06a2c98b9c7396a913ff6a5f930a1c
	I0912 22:31:32.674453   44139 command_runner.go:130] > 843730a4cdb964ae88e322e3da7b4037f1e64f5a4948be394cefb651ceb02679
	I0912 22:31:32.674459   44139 command_runner.go:130] > 6505c2c378ff70fae34c9f006c44d5dc7e4ffd9480237e82899d87e8c8161693
	I0912 22:31:32.674465   44139 command_runner.go:130] > f24ee99de69eefbc84e7df7bc3eea3428a8844074a499bc601e3ded4bb4e9510
	I0912 22:31:32.674470   44139 command_runner.go:130] > c489f2027465c018d7eac2e25eeaae7802e0ff1176c5691d3f69ddf1bf4b947b
	I0912 22:31:32.674479   44139 command_runner.go:130] > f0aae551b7315d864d4e52b385c6d09427fcdc78d4ec5a0b5e854363d2131943
	I0912 22:31:32.675873   44139 cri.go:89] found id: "839ededeb42c7fe56fe0af98d96e4b810825db084871453bdbf1e330f313f11b"
	I0912 22:31:32.675889   44139 cri.go:89] found id: "e7c17ba1a9c6116b065e200c70dde80d097578700da517f9acb2ca265d842bdd"
	I0912 22:31:32.675892   44139 cri.go:89] found id: "804ba8843e87765fc62adc0cfcd7000f8c06a2c98b9c7396a913ff6a5f930a1c"
	I0912 22:31:32.675895   44139 cri.go:89] found id: "843730a4cdb964ae88e322e3da7b4037f1e64f5a4948be394cefb651ceb02679"
	I0912 22:31:32.675898   44139 cri.go:89] found id: "6505c2c378ff70fae34c9f006c44d5dc7e4ffd9480237e82899d87e8c8161693"
	I0912 22:31:32.675901   44139 cri.go:89] found id: "f24ee99de69eefbc84e7df7bc3eea3428a8844074a499bc601e3ded4bb4e9510"
	I0912 22:31:32.675904   44139 cri.go:89] found id: "c489f2027465c018d7eac2e25eeaae7802e0ff1176c5691d3f69ddf1bf4b947b"
	I0912 22:31:32.675906   44139 cri.go:89] found id: "f0aae551b7315d864d4e52b385c6d09427fcdc78d4ec5a0b5e854363d2131943"
	I0912 22:31:32.675908   44139 cri.go:89] found id: ""
	I0912 22:31:32.675947   44139 ssh_runner.go:195] Run: sudo runc list -f json
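
The StartCluster step above collects the kube-system container IDs by running crictl with a pod-namespace label filter. A stand-alone Go sketch of the same query, assuming crictl is on PATH and its default runtime endpoint is configured, could look like this:

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		// Same invocation the log shows (normally run with sudo on the node):
		// crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
		out, err := exec.Command("crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			log.Fatalf("crictl failed: %v", err)
		}
		ids := strings.Fields(strings.TrimSpace(string(out)))
		fmt.Printf("found %d kube-system containers\n", len(ids))
		for _, id := range ids {
			fmt.Println(id)
		}
	}
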
	
	
	==> CRI-O <==
	Sep 12 22:35:42 multinode-768483 crio[2713]: time="2024-09-12 22:35:42.154607401Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726180542154583755,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1716d2bf-ca3f-4361-acf4-1665ad9229eb name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 22:35:42 multinode-768483 crio[2713]: time="2024-09-12 22:35:42.155084040Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d4a9b9f3-ce29-4755-9328-e15ad0539f3c name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 22:35:42 multinode-768483 crio[2713]: time="2024-09-12 22:35:42.155139721Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d4a9b9f3-ce29-4755-9328-e15ad0539f3c name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 22:35:42 multinode-768483 crio[2713]: time="2024-09-12 22:35:42.155498263Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:930d1eb00fcd21018473d187f1b5bdd6fc27daf70eb0f804df8104804497cc13,PodSandboxId:c6239dc721426f56c075b0663ff81d756798b98533c230ee53fa840a966d74ee,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726180332375080601,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-2jcd4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d7874a33-b52f-451b-8713-bae3c8ec17a8,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aabf50e29ecef4fe750319f2168330d8818b650a87fafdb92a07495f86e5c5ba,PodSandboxId:b603ca5480f2f96558d31545e06b3f26e828758e36e1dbc16728b76e494e0519,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726180298869535596,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tt4f9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa619f45-dfb9-4552-bacb-661f79cde4f6,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c17eceb01c7dbd954d6a09482161b6d885e552639def6c4e60de2348a5c97f4f,PodSandboxId:756201f6d3b7a292f6b5e58b7a1728612c1fb40bc34dcbe5281c9b237fb48e19,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726180298750190891,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-w278g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20e1929b-38ac-48af-8b79-c509239e17b8,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d62b93bbc679b765a589b99394ad8b21d32551806afcf44f52ac8cd35367011e,PodSandboxId:a00b3cfc40e629dfeed3555f1842485747ea42a2181fc0e16b18fdff5f49d392,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726180298677157438,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4deaf81-faf5-43ce-a749-795eb9f371af,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc2e313e025e87acfc620ea53bd1ce094d12d54fc15b58cebe8a8d77908b5759,PodSandboxId:016f43c033e89af0b5c5cefdfb21b38c7c34249bdcf245821ae831b13f27946e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726180298608983619,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b2w9d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 825e8f9f-58fd-496f-a248-70560c4476b8,},Annotations:map[string]string{io.ku
bernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b127cdd9f72b89e5289c96eebf5d02acc071ed5ee9e73360d2757c2c3e35873,PodSandboxId:7449a7ae76b79799552482dfc8ed6b15505c61cddfa6a4090e31fc0301af7ff8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726180294812610738,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-768483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ac9bad0f8b2f7ba888206420e7344f4,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3473f46e4274f525159872fdc01fb5c1a5b9503ad68c9a35390e3220e05ca47,PodSandboxId:ff2bc6d006554860ffb8bb51d6c5bd4d3f419e416ab908666baeb9ae6286a564,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726180294812365435,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-768483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 780a9cbe76741d4b5b1a8e6a72ff3261,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d68e596667e0da00d1983ac59c09742c64f760660d9c346c97fbfe656dfca97,PodSandboxId:04fbbb040cb54cf92ab5fb6659676e87412332a56c38d51f9c8afb8ec85b5208,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726180294779970366,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-768483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20dcda561f841c49b92bb743541540a6,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b83f7247201cd05701c223ccb523fed94c6147f010245105f1f321b4519a6f58,PodSandboxId:9bc9674b70411c4b05a546189ac1765104e8557d558a4198b3c9b46b1f5abc23,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726180294768298184,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-768483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3a2758ab799d806f1782008297e8c44,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbe44370991681155ae8ed22879cba8054fedfb236507195aa20d687e65678d4,PodSandboxId:0d89618e7dc5c0853a0788b683c015ed66169976615655aa786db93523529ad8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726179969974291413,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-2jcd4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d7874a33-b52f-451b-8713-bae3c8ec17a8,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:839ededeb42c7fe56fe0af98d96e4b810825db084871453bdbf1e330f313f11b,PodSandboxId:04fac0aee67c0f950c1294befc487d5122076819b9a0c73b39218dd7976f5b5b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726179910063866210,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-w278g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20e1929b-38ac-48af-8b79-c509239e17b8,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7c17ba1a9c6116b065e200c70dde80d097578700da517f9acb2ca265d842bdd,PodSandboxId:66bc1a0adc24b6cc46938afb36a4f1953051814ffde811bfdd25c1801ee2c186,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726179909132622563,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: f4deaf81-faf5-43ce-a749-795eb9f371af,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:804ba8843e87765fc62adc0cfcd7000f8c06a2c98b9c7396a913ff6a5f930a1c,PodSandboxId:b7e0e7dd96357f54d1bf3f85393ab2e08a53ee317418e2d7ac01a6c2aa0d5b39,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726179897702018256,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b2w9d,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 825e8f9f-58fd-496f-a248-70560c4476b8,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:843730a4cdb964ae88e322e3da7b4037f1e64f5a4948be394cefb651ceb02679,PodSandboxId:1e9212d7a6491394ae383087b13bb8f45ea0ff34d55437ff096ea1cead68e4e0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726179897128914199,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tt4f9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa619f45-dfb9-4552-bacb
-661f79cde4f6,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6505c2c378ff70fae34c9f006c44d5dc7e4ffd9480237e82899d87e8c8161693,PodSandboxId:efb782701ae2bbc77f1bd3e27d7cb2e929d7e3a3c950626976dd5badfa7a512b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726179885916478220,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-768483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 780a9cbe76741d4b5b1a8e6a72ff3261,},Annotations:map[string]string
{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f24ee99de69eefbc84e7df7bc3eea3428a8844074a499bc601e3ded4bb4e9510,PodSandboxId:5846ebd5f084d4fd8b3c0ab569dda506db7e83704dfb53aa044e3d85befc72a8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726179885887874029,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-768483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3a2758ab799d806f1782008297e8c44,},Annotations:map[s
tring]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c489f2027465c018d7eac2e25eeaae7802e0ff1176c5691d3f69ddf1bf4b947b,PodSandboxId:2e90396064c68d066f53ea8eaca7f7b5b0b611cf98763ee1d4626f24d68ea1ed,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726179885865309742,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-768483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20dcda561f841c49b92bb743541540a6,},Annotations:map[string]string{io
.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0aae551b7315d864d4e52b385c6d09427fcdc78d4ec5a0b5e854363d2131943,PodSandboxId:0df074f42ec7d9de8e45f22f1abe16013c51467aab40146a0bf5d5e546aca2ed,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726179885834009473,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-768483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ac9bad0f8b2f7ba888206420e7344f4,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d4a9b9f3-ce29-4755-9328-e15ad0539f3c name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 22:35:42 multinode-768483 crio[2713]: time="2024-09-12 22:35:42.200835982Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9a46183d-9871-4d3b-8b97-6a47a765c1bf name=/runtime.v1.RuntimeService/Version
	Sep 12 22:35:42 multinode-768483 crio[2713]: time="2024-09-12 22:35:42.200910564Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9a46183d-9871-4d3b-8b97-6a47a765c1bf name=/runtime.v1.RuntimeService/Version
	Sep 12 22:35:42 multinode-768483 crio[2713]: time="2024-09-12 22:35:42.202177782Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0d993bb4-4c91-4e1d-b951-ab27b6f332eb name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 22:35:42 multinode-768483 crio[2713]: time="2024-09-12 22:35:42.202601317Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726180542202577389,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0d993bb4-4c91-4e1d-b951-ab27b6f332eb name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 22:35:42 multinode-768483 crio[2713]: time="2024-09-12 22:35:42.203119491Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6e6f5442-a216-4ba4-8ddd-9ca0d2f909ca name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 22:35:42 multinode-768483 crio[2713]: time="2024-09-12 22:35:42.203180498Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6e6f5442-a216-4ba4-8ddd-9ca0d2f909ca name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 22:35:42 multinode-768483 crio[2713]: time="2024-09-12 22:35:42.203517179Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:930d1eb00fcd21018473d187f1b5bdd6fc27daf70eb0f804df8104804497cc13,PodSandboxId:c6239dc721426f56c075b0663ff81d756798b98533c230ee53fa840a966d74ee,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726180332375080601,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-2jcd4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d7874a33-b52f-451b-8713-bae3c8ec17a8,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aabf50e29ecef4fe750319f2168330d8818b650a87fafdb92a07495f86e5c5ba,PodSandboxId:b603ca5480f2f96558d31545e06b3f26e828758e36e1dbc16728b76e494e0519,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726180298869535596,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tt4f9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa619f45-dfb9-4552-bacb-661f79cde4f6,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c17eceb01c7dbd954d6a09482161b6d885e552639def6c4e60de2348a5c97f4f,PodSandboxId:756201f6d3b7a292f6b5e58b7a1728612c1fb40bc34dcbe5281c9b237fb48e19,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726180298750190891,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-w278g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20e1929b-38ac-48af-8b79-c509239e17b8,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d62b93bbc679b765a589b99394ad8b21d32551806afcf44f52ac8cd35367011e,PodSandboxId:a00b3cfc40e629dfeed3555f1842485747ea42a2181fc0e16b18fdff5f49d392,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726180298677157438,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4deaf81-faf5-43ce-a749-795eb9f371af,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc2e313e025e87acfc620ea53bd1ce094d12d54fc15b58cebe8a8d77908b5759,PodSandboxId:016f43c033e89af0b5c5cefdfb21b38c7c34249bdcf245821ae831b13f27946e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726180298608983619,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b2w9d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 825e8f9f-58fd-496f-a248-70560c4476b8,},Annotations:map[string]string{io.ku
bernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b127cdd9f72b89e5289c96eebf5d02acc071ed5ee9e73360d2757c2c3e35873,PodSandboxId:7449a7ae76b79799552482dfc8ed6b15505c61cddfa6a4090e31fc0301af7ff8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726180294812610738,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-768483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ac9bad0f8b2f7ba888206420e7344f4,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3473f46e4274f525159872fdc01fb5c1a5b9503ad68c9a35390e3220e05ca47,PodSandboxId:ff2bc6d006554860ffb8bb51d6c5bd4d3f419e416ab908666baeb9ae6286a564,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726180294812365435,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-768483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 780a9cbe76741d4b5b1a8e6a72ff3261,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d68e596667e0da00d1983ac59c09742c64f760660d9c346c97fbfe656dfca97,PodSandboxId:04fbbb040cb54cf92ab5fb6659676e87412332a56c38d51f9c8afb8ec85b5208,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726180294779970366,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-768483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20dcda561f841c49b92bb743541540a6,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b83f7247201cd05701c223ccb523fed94c6147f010245105f1f321b4519a6f58,PodSandboxId:9bc9674b70411c4b05a546189ac1765104e8557d558a4198b3c9b46b1f5abc23,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726180294768298184,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-768483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3a2758ab799d806f1782008297e8c44,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbe44370991681155ae8ed22879cba8054fedfb236507195aa20d687e65678d4,PodSandboxId:0d89618e7dc5c0853a0788b683c015ed66169976615655aa786db93523529ad8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726179969974291413,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-2jcd4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d7874a33-b52f-451b-8713-bae3c8ec17a8,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:839ededeb42c7fe56fe0af98d96e4b810825db084871453bdbf1e330f313f11b,PodSandboxId:04fac0aee67c0f950c1294befc487d5122076819b9a0c73b39218dd7976f5b5b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726179910063866210,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-w278g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20e1929b-38ac-48af-8b79-c509239e17b8,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7c17ba1a9c6116b065e200c70dde80d097578700da517f9acb2ca265d842bdd,PodSandboxId:66bc1a0adc24b6cc46938afb36a4f1953051814ffde811bfdd25c1801ee2c186,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726179909132622563,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: f4deaf81-faf5-43ce-a749-795eb9f371af,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:804ba8843e87765fc62adc0cfcd7000f8c06a2c98b9c7396a913ff6a5f930a1c,PodSandboxId:b7e0e7dd96357f54d1bf3f85393ab2e08a53ee317418e2d7ac01a6c2aa0d5b39,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726179897702018256,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b2w9d,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 825e8f9f-58fd-496f-a248-70560c4476b8,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:843730a4cdb964ae88e322e3da7b4037f1e64f5a4948be394cefb651ceb02679,PodSandboxId:1e9212d7a6491394ae383087b13bb8f45ea0ff34d55437ff096ea1cead68e4e0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726179897128914199,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tt4f9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa619f45-dfb9-4552-bacb
-661f79cde4f6,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6505c2c378ff70fae34c9f006c44d5dc7e4ffd9480237e82899d87e8c8161693,PodSandboxId:efb782701ae2bbc77f1bd3e27d7cb2e929d7e3a3c950626976dd5badfa7a512b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726179885916478220,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-768483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 780a9cbe76741d4b5b1a8e6a72ff3261,},Annotations:map[string]string
{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f24ee99de69eefbc84e7df7bc3eea3428a8844074a499bc601e3ded4bb4e9510,PodSandboxId:5846ebd5f084d4fd8b3c0ab569dda506db7e83704dfb53aa044e3d85befc72a8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726179885887874029,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-768483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3a2758ab799d806f1782008297e8c44,},Annotations:map[s
tring]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c489f2027465c018d7eac2e25eeaae7802e0ff1176c5691d3f69ddf1bf4b947b,PodSandboxId:2e90396064c68d066f53ea8eaca7f7b5b0b611cf98763ee1d4626f24d68ea1ed,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726179885865309742,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-768483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20dcda561f841c49b92bb743541540a6,},Annotations:map[string]string{io
.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0aae551b7315d864d4e52b385c6d09427fcdc78d4ec5a0b5e854363d2131943,PodSandboxId:0df074f42ec7d9de8e45f22f1abe16013c51467aab40146a0bf5d5e546aca2ed,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726179885834009473,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-768483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ac9bad0f8b2f7ba888206420e7344f4,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6e6f5442-a216-4ba4-8ddd-9ca0d2f909ca name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 22:35:42 multinode-768483 crio[2713]: time="2024-09-12 22:35:42.246695418Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fc5394e9-674b-4653-934f-07c53ff62afc name=/runtime.v1.RuntimeService/Version
	Sep 12 22:35:42 multinode-768483 crio[2713]: time="2024-09-12 22:35:42.246770439Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fc5394e9-674b-4653-934f-07c53ff62afc name=/runtime.v1.RuntimeService/Version
	Sep 12 22:35:42 multinode-768483 crio[2713]: time="2024-09-12 22:35:42.247879046Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8c68d9a5-3f41-4598-bc5e-23e211e06307 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 22:35:42 multinode-768483 crio[2713]: time="2024-09-12 22:35:42.248301288Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726180542248276158,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8c68d9a5-3f41-4598-bc5e-23e211e06307 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 22:35:42 multinode-768483 crio[2713]: time="2024-09-12 22:35:42.248840436Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9e964e96-eef9-44bf-9d43-a131ee743d22 name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 22:35:42 multinode-768483 crio[2713]: time="2024-09-12 22:35:42.248914543Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9e964e96-eef9-44bf-9d43-a131ee743d22 name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 22:35:42 multinode-768483 crio[2713]: time="2024-09-12 22:35:42.249261978Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:930d1eb00fcd21018473d187f1b5bdd6fc27daf70eb0f804df8104804497cc13,PodSandboxId:c6239dc721426f56c075b0663ff81d756798b98533c230ee53fa840a966d74ee,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726180332375080601,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-2jcd4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d7874a33-b52f-451b-8713-bae3c8ec17a8,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aabf50e29ecef4fe750319f2168330d8818b650a87fafdb92a07495f86e5c5ba,PodSandboxId:b603ca5480f2f96558d31545e06b3f26e828758e36e1dbc16728b76e494e0519,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726180298869535596,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tt4f9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa619f45-dfb9-4552-bacb-661f79cde4f6,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c17eceb01c7dbd954d6a09482161b6d885e552639def6c4e60de2348a5c97f4f,PodSandboxId:756201f6d3b7a292f6b5e58b7a1728612c1fb40bc34dcbe5281c9b237fb48e19,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726180298750190891,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-w278g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20e1929b-38ac-48af-8b79-c509239e17b8,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d62b93bbc679b765a589b99394ad8b21d32551806afcf44f52ac8cd35367011e,PodSandboxId:a00b3cfc40e629dfeed3555f1842485747ea42a2181fc0e16b18fdff5f49d392,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726180298677157438,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4deaf81-faf5-43ce-a749-795eb9f371af,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc2e313e025e87acfc620ea53bd1ce094d12d54fc15b58cebe8a8d77908b5759,PodSandboxId:016f43c033e89af0b5c5cefdfb21b38c7c34249bdcf245821ae831b13f27946e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726180298608983619,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b2w9d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 825e8f9f-58fd-496f-a248-70560c4476b8,},Annotations:map[string]string{io.ku
bernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b127cdd9f72b89e5289c96eebf5d02acc071ed5ee9e73360d2757c2c3e35873,PodSandboxId:7449a7ae76b79799552482dfc8ed6b15505c61cddfa6a4090e31fc0301af7ff8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726180294812610738,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-768483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ac9bad0f8b2f7ba888206420e7344f4,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3473f46e4274f525159872fdc01fb5c1a5b9503ad68c9a35390e3220e05ca47,PodSandboxId:ff2bc6d006554860ffb8bb51d6c5bd4d3f419e416ab908666baeb9ae6286a564,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726180294812365435,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-768483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 780a9cbe76741d4b5b1a8e6a72ff3261,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d68e596667e0da00d1983ac59c09742c64f760660d9c346c97fbfe656dfca97,PodSandboxId:04fbbb040cb54cf92ab5fb6659676e87412332a56c38d51f9c8afb8ec85b5208,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726180294779970366,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-768483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20dcda561f841c49b92bb743541540a6,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b83f7247201cd05701c223ccb523fed94c6147f010245105f1f321b4519a6f58,PodSandboxId:9bc9674b70411c4b05a546189ac1765104e8557d558a4198b3c9b46b1f5abc23,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726180294768298184,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-768483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3a2758ab799d806f1782008297e8c44,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbe44370991681155ae8ed22879cba8054fedfb236507195aa20d687e65678d4,PodSandboxId:0d89618e7dc5c0853a0788b683c015ed66169976615655aa786db93523529ad8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726179969974291413,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-2jcd4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d7874a33-b52f-451b-8713-bae3c8ec17a8,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:839ededeb42c7fe56fe0af98d96e4b810825db084871453bdbf1e330f313f11b,PodSandboxId:04fac0aee67c0f950c1294befc487d5122076819b9a0c73b39218dd7976f5b5b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726179910063866210,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-w278g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20e1929b-38ac-48af-8b79-c509239e17b8,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7c17ba1a9c6116b065e200c70dde80d097578700da517f9acb2ca265d842bdd,PodSandboxId:66bc1a0adc24b6cc46938afb36a4f1953051814ffde811bfdd25c1801ee2c186,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726179909132622563,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: f4deaf81-faf5-43ce-a749-795eb9f371af,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:804ba8843e87765fc62adc0cfcd7000f8c06a2c98b9c7396a913ff6a5f930a1c,PodSandboxId:b7e0e7dd96357f54d1bf3f85393ab2e08a53ee317418e2d7ac01a6c2aa0d5b39,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726179897702018256,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b2w9d,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 825e8f9f-58fd-496f-a248-70560c4476b8,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:843730a4cdb964ae88e322e3da7b4037f1e64f5a4948be394cefb651ceb02679,PodSandboxId:1e9212d7a6491394ae383087b13bb8f45ea0ff34d55437ff096ea1cead68e4e0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726179897128914199,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tt4f9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa619f45-dfb9-4552-bacb
-661f79cde4f6,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6505c2c378ff70fae34c9f006c44d5dc7e4ffd9480237e82899d87e8c8161693,PodSandboxId:efb782701ae2bbc77f1bd3e27d7cb2e929d7e3a3c950626976dd5badfa7a512b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726179885916478220,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-768483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 780a9cbe76741d4b5b1a8e6a72ff3261,},Annotations:map[string]string
{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f24ee99de69eefbc84e7df7bc3eea3428a8844074a499bc601e3ded4bb4e9510,PodSandboxId:5846ebd5f084d4fd8b3c0ab569dda506db7e83704dfb53aa044e3d85befc72a8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726179885887874029,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-768483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3a2758ab799d806f1782008297e8c44,},Annotations:map[s
tring]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c489f2027465c018d7eac2e25eeaae7802e0ff1176c5691d3f69ddf1bf4b947b,PodSandboxId:2e90396064c68d066f53ea8eaca7f7b5b0b611cf98763ee1d4626f24d68ea1ed,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726179885865309742,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-768483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20dcda561f841c49b92bb743541540a6,},Annotations:map[string]string{io
.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0aae551b7315d864d4e52b385c6d09427fcdc78d4ec5a0b5e854363d2131943,PodSandboxId:0df074f42ec7d9de8e45f22f1abe16013c51467aab40146a0bf5d5e546aca2ed,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726179885834009473,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-768483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ac9bad0f8b2f7ba888206420e7344f4,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9e964e96-eef9-44bf-9d43-a131ee743d22 name=/runtime.v1.RuntimeService/ListContainers
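The journal entries above show a CRI client polling cri-o roughly every 50 ms while the failure diagnostics were captured: each cycle is a RuntimeService/Version call, an ImageService/ImageFsInfo call, and an unfiltered RuntimeService/ListContainers call whose response repeats the same container list. As a minimal sketch (assuming the multinode-768483 guest is still running and that crictl inside the minikube VM is already configured against the cri-o socket, as it is by default in these images), the same three endpoints can be queried by hand to reproduce these payloads:

	minikube ssh -p multinode-768483 -- sudo crictl version      # RuntimeService/Version
	minikube ssh -p multinode-768483 -- sudo crictl imagefsinfo  # ImageService/ImageFsInfo
	minikube ssh -p multinode-768483 -- sudo crictl ps -a        # RuntimeService/ListContainers (no filter)

These commands are read-only; the table printed by crictl ps -a corresponds to the Containers:[]*Container{...} payload logged above, which helps when cross-checking which control-plane containers were restarted after the node came back (Attempt:1, CONTAINER_RUNNING) versus left over from the first boot (Attempt:0, CONTAINER_EXITED).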
	Sep 12 22:35:42 multinode-768483 crio[2713]: time="2024-09-12 22:35:42.296330054Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d4e95f4d-bbe7-4e82-9d1e-85636e7346e8 name=/runtime.v1.RuntimeService/Version
	Sep 12 22:35:42 multinode-768483 crio[2713]: time="2024-09-12 22:35:42.296417742Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d4e95f4d-bbe7-4e82-9d1e-85636e7346e8 name=/runtime.v1.RuntimeService/Version
	Sep 12 22:35:42 multinode-768483 crio[2713]: time="2024-09-12 22:35:42.297917418Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c38bab7c-c254-487f-a4ab-d188f41a5459 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 22:35:42 multinode-768483 crio[2713]: time="2024-09-12 22:35:42.298554581Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726180542298529169,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c38bab7c-c254-487f-a4ab-d188f41a5459 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 22:35:42 multinode-768483 crio[2713]: time="2024-09-12 22:35:42.299233498Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=54c849a4-75db-4886-9d65-f96471b3bd7b name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 22:35:42 multinode-768483 crio[2713]: time="2024-09-12 22:35:42.299331128Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=54c849a4-75db-4886-9d65-f96471b3bd7b name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 22:35:42 multinode-768483 crio[2713]: time="2024-09-12 22:35:42.299741150Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:930d1eb00fcd21018473d187f1b5bdd6fc27daf70eb0f804df8104804497cc13,PodSandboxId:c6239dc721426f56c075b0663ff81d756798b98533c230ee53fa840a966d74ee,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726180332375080601,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-2jcd4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d7874a33-b52f-451b-8713-bae3c8ec17a8,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aabf50e29ecef4fe750319f2168330d8818b650a87fafdb92a07495f86e5c5ba,PodSandboxId:b603ca5480f2f96558d31545e06b3f26e828758e36e1dbc16728b76e494e0519,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726180298869535596,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tt4f9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa619f45-dfb9-4552-bacb-661f79cde4f6,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c17eceb01c7dbd954d6a09482161b6d885e552639def6c4e60de2348a5c97f4f,PodSandboxId:756201f6d3b7a292f6b5e58b7a1728612c1fb40bc34dcbe5281c9b237fb48e19,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726180298750190891,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-w278g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20e1929b-38ac-48af-8b79-c509239e17b8,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d62b93bbc679b765a589b99394ad8b21d32551806afcf44f52ac8cd35367011e,PodSandboxId:a00b3cfc40e629dfeed3555f1842485747ea42a2181fc0e16b18fdff5f49d392,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726180298677157438,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4deaf81-faf5-43ce-a749-795eb9f371af,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc2e313e025e87acfc620ea53bd1ce094d12d54fc15b58cebe8a8d77908b5759,PodSandboxId:016f43c033e89af0b5c5cefdfb21b38c7c34249bdcf245821ae831b13f27946e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726180298608983619,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b2w9d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 825e8f9f-58fd-496f-a248-70560c4476b8,},Annotations:map[string]string{io.ku
bernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b127cdd9f72b89e5289c96eebf5d02acc071ed5ee9e73360d2757c2c3e35873,PodSandboxId:7449a7ae76b79799552482dfc8ed6b15505c61cddfa6a4090e31fc0301af7ff8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726180294812610738,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-768483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ac9bad0f8b2f7ba888206420e7344f4,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3473f46e4274f525159872fdc01fb5c1a5b9503ad68c9a35390e3220e05ca47,PodSandboxId:ff2bc6d006554860ffb8bb51d6c5bd4d3f419e416ab908666baeb9ae6286a564,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726180294812365435,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-768483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 780a9cbe76741d4b5b1a8e6a72ff3261,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d68e596667e0da00d1983ac59c09742c64f760660d9c346c97fbfe656dfca97,PodSandboxId:04fbbb040cb54cf92ab5fb6659676e87412332a56c38d51f9c8afb8ec85b5208,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726180294779970366,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-768483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20dcda561f841c49b92bb743541540a6,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b83f7247201cd05701c223ccb523fed94c6147f010245105f1f321b4519a6f58,PodSandboxId:9bc9674b70411c4b05a546189ac1765104e8557d558a4198b3c9b46b1f5abc23,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726180294768298184,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-768483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3a2758ab799d806f1782008297e8c44,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbe44370991681155ae8ed22879cba8054fedfb236507195aa20d687e65678d4,PodSandboxId:0d89618e7dc5c0853a0788b683c015ed66169976615655aa786db93523529ad8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726179969974291413,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-2jcd4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d7874a33-b52f-451b-8713-bae3c8ec17a8,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:839ededeb42c7fe56fe0af98d96e4b810825db084871453bdbf1e330f313f11b,PodSandboxId:04fac0aee67c0f950c1294befc487d5122076819b9a0c73b39218dd7976f5b5b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726179910063866210,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-w278g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20e1929b-38ac-48af-8b79-c509239e17b8,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7c17ba1a9c6116b065e200c70dde80d097578700da517f9acb2ca265d842bdd,PodSandboxId:66bc1a0adc24b6cc46938afb36a4f1953051814ffde811bfdd25c1801ee2c186,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726179909132622563,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: f4deaf81-faf5-43ce-a749-795eb9f371af,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:804ba8843e87765fc62adc0cfcd7000f8c06a2c98b9c7396a913ff6a5f930a1c,PodSandboxId:b7e0e7dd96357f54d1bf3f85393ab2e08a53ee317418e2d7ac01a6c2aa0d5b39,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726179897702018256,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b2w9d,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 825e8f9f-58fd-496f-a248-70560c4476b8,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:843730a4cdb964ae88e322e3da7b4037f1e64f5a4948be394cefb651ceb02679,PodSandboxId:1e9212d7a6491394ae383087b13bb8f45ea0ff34d55437ff096ea1cead68e4e0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726179897128914199,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tt4f9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa619f45-dfb9-4552-bacb
-661f79cde4f6,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6505c2c378ff70fae34c9f006c44d5dc7e4ffd9480237e82899d87e8c8161693,PodSandboxId:efb782701ae2bbc77f1bd3e27d7cb2e929d7e3a3c950626976dd5badfa7a512b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726179885916478220,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-768483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 780a9cbe76741d4b5b1a8e6a72ff3261,},Annotations:map[string]string
{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f24ee99de69eefbc84e7df7bc3eea3428a8844074a499bc601e3ded4bb4e9510,PodSandboxId:5846ebd5f084d4fd8b3c0ab569dda506db7e83704dfb53aa044e3d85befc72a8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726179885887874029,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-768483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3a2758ab799d806f1782008297e8c44,},Annotations:map[s
tring]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c489f2027465c018d7eac2e25eeaae7802e0ff1176c5691d3f69ddf1bf4b947b,PodSandboxId:2e90396064c68d066f53ea8eaca7f7b5b0b611cf98763ee1d4626f24d68ea1ed,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726179885865309742,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-768483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20dcda561f841c49b92bb743541540a6,},Annotations:map[string]string{io
.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0aae551b7315d864d4e52b385c6d09427fcdc78d4ec5a0b5e854363d2131943,PodSandboxId:0df074f42ec7d9de8e45f22f1abe16013c51467aab40146a0bf5d5e546aca2ed,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726179885834009473,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-768483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ac9bad0f8b2f7ba888206420e7344f4,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=54c849a4-75db-4886-9d65-f96471b3bd7b name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	930d1eb00fcd2       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   c6239dc721426       busybox-7dff88458-2jcd4
	aabf50e29ecef       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      4 minutes ago       Running             kindnet-cni               1                   b603ca5480f2f       kindnet-tt4f9
	c17eceb01c7db       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      4 minutes ago       Running             coredns                   1                   756201f6d3b7a       coredns-7c65d6cfc9-w278g
	d62b93bbc679b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       1                   a00b3cfc40e62       storage-provisioner
	fc2e313e025e8       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      4 minutes ago       Running             kube-proxy                1                   016f43c033e89       kube-proxy-b2w9d
	2b127cdd9f72b       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      4 minutes ago       Running             kube-scheduler            1                   7449a7ae76b79       kube-scheduler-multinode-768483
	f3473f46e4274       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      4 minutes ago       Running             etcd                      1                   ff2bc6d006554       etcd-multinode-768483
	5d68e596667e0       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      4 minutes ago       Running             kube-apiserver            1                   04fbbb040cb54       kube-apiserver-multinode-768483
	b83f7247201cd       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      4 minutes ago       Running             kube-controller-manager   1                   9bc9674b70411       kube-controller-manager-multinode-768483
	dbe4437099168       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   9 minutes ago       Exited              busybox                   0                   0d89618e7dc5c       busybox-7dff88458-2jcd4
	839ededeb42c7       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      10 minutes ago      Exited              coredns                   0                   04fac0aee67c0       coredns-7c65d6cfc9-w278g
	e7c17ba1a9c61       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Exited              storage-provisioner       0                   66bc1a0adc24b       storage-provisioner
	804ba8843e877       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      10 minutes ago      Exited              kube-proxy                0                   b7e0e7dd96357       kube-proxy-b2w9d
	843730a4cdb96       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      10 minutes ago      Exited              kindnet-cni               0                   1e9212d7a6491       kindnet-tt4f9
	6505c2c378ff7       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      10 minutes ago      Exited              etcd                      0                   efb782701ae2b       etcd-multinode-768483
	f24ee99de69ee       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      10 minutes ago      Exited              kube-controller-manager   0                   5846ebd5f084d       kube-controller-manager-multinode-768483
	c489f2027465c       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      10 minutes ago      Exited              kube-apiserver            0                   2e90396064c68       kube-apiserver-multinode-768483
	f0aae551b7315       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      10 minutes ago      Exited              kube-scheduler            0                   0df074f42ec7d       kube-scheduler-multinode-768483
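	
	For reference (not part of the captured output): a container listing like the one above is typically produced on the node itself with crictl. The profile name below is taken from this report; the exact invocation is an illustrative sketch only.
	
	  $ minikube ssh -p multinode-768483 -- sudo crictl ps -a
	  $ minikube ssh -p multinode-768483 -- sudo crictl ps -a -o json   # roughly the same data as the ListContainers response logged above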
	
	
	==> coredns [839ededeb42c7fe56fe0af98d96e4b810825db084871453bdbf1e330f313f11b] <==
	[INFO] 10.244.1.2:59743 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001640569s
	[INFO] 10.244.1.2:32830 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000105275s
	[INFO] 10.244.1.2:41988 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000064553s
	[INFO] 10.244.1.2:55407 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001061251s
	[INFO] 10.244.1.2:48895 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000123349s
	[INFO] 10.244.1.2:50858 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000062737s
	[INFO] 10.244.1.2:43375 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000117987s
	[INFO] 10.244.0.3:48213 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000087894s
	[INFO] 10.244.0.3:34262 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000064363s
	[INFO] 10.244.0.3:35462 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000055507s
	[INFO] 10.244.0.3:42971 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000038951s
	[INFO] 10.244.1.2:41497 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115935s
	[INFO] 10.244.1.2:48860 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000079415s
	[INFO] 10.244.1.2:46246 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000067749s
	[INFO] 10.244.1.2:45271 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000113036s
	[INFO] 10.244.0.3:45433 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000201166s
	[INFO] 10.244.0.3:50895 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00020706s
	[INFO] 10.244.0.3:41793 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000193207s
	[INFO] 10.244.0.3:57569 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00009896s
	[INFO] 10.244.1.2:55627 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000318814s
	[INFO] 10.244.1.2:55647 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000118089s
	[INFO] 10.244.1.2:45492 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000235538s
	[INFO] 10.244.1.2:50200 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000128568s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [c17eceb01c7dbd954d6a09482161b6d885e552639def6c4e60de2348a5c97f4f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:48371 - 29527 "HINFO IN 8955004022018942478.7541837519683124185. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009838992s
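	
	The two coredns excerpts above correspond to the exited and the currently running container of the same pod after the node restart (restart count 0 vs. 1). A sketch of how such logs are usually pulled, with the pod name and context taken from this report; the commands themselves are illustrative, not captured output:
	
	  $ kubectl --context multinode-768483 -n kube-system logs coredns-7c65d6cfc9-w278g
	  $ kubectl --context multinode-768483 -n kube-system logs coredns-7c65d6cfc9-w278g --previous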
	
	
	==> describe nodes <==
	Name:               multinode-768483
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-768483
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f6bc674a17941874d4e5b792b09c1791d30622b8
	                    minikube.k8s.io/name=multinode-768483
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_12T22_24_52_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 12 Sep 2024 22:24:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-768483
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 12 Sep 2024 22:35:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 12 Sep 2024 22:31:37 +0000   Thu, 12 Sep 2024 22:24:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 12 Sep 2024 22:31:37 +0000   Thu, 12 Sep 2024 22:24:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 12 Sep 2024 22:31:37 +0000   Thu, 12 Sep 2024 22:24:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 12 Sep 2024 22:31:37 +0000   Thu, 12 Sep 2024 22:25:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.28
	  Hostname:    multinode-768483
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3335155d761542d493be0a366578b8a5
	  System UUID:                3335155d-7615-42d4-93be-0a366578b8a5
	  Boot ID:                    fb2d6d38-d168-4770-8b0c-5984543b5d6d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-2jcd4                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m36s
	  kube-system                 coredns-7c65d6cfc9-w278g                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 etcd-multinode-768483                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-tt4f9                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-multinode-768483             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-multinode-768483    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-b2w9d                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-multinode-768483             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 10m                  kube-proxy       
	  Normal  Starting                 4m3s                 kube-proxy       
	  Normal  NodeHasSufficientPID     10m                  kubelet          Node multinode-768483 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m                  kubelet          Node multinode-768483 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m                  kubelet          Node multinode-768483 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 10m                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           10m                  node-controller  Node multinode-768483 event: Registered Node multinode-768483 in Controller
	  Normal  NodeReady                10m                  kubelet          Node multinode-768483 status is now: NodeReady
	  Normal  Starting                 4m8s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m8s (x8 over 4m8s)  kubelet          Node multinode-768483 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m8s (x8 over 4m8s)  kubelet          Node multinode-768483 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m8s (x7 over 4m8s)  kubelet          Node multinode-768483 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m1s                 node-controller  Node multinode-768483 event: Registered Node multinode-768483 in Controller
	
	
	Name:               multinode-768483-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-768483-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f6bc674a17941874d4e5b792b09c1791d30622b8
	                    minikube.k8s.io/name=multinode-768483
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_12T22_32_19_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 12 Sep 2024 22:32:18 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-768483-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 12 Sep 2024 22:33:20 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Thu, 12 Sep 2024 22:32:49 +0000   Thu, 12 Sep 2024 22:34:01 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Thu, 12 Sep 2024 22:32:49 +0000   Thu, 12 Sep 2024 22:34:01 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Thu, 12 Sep 2024 22:32:49 +0000   Thu, 12 Sep 2024 22:34:01 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Thu, 12 Sep 2024 22:32:49 +0000   Thu, 12 Sep 2024 22:34:01 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.230
	  Hostname:    multinode-768483-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 603d35bbfc7d4cebbc8046ff6b53473e
	  System UUID:                603d35bb-fc7d-4ceb-bc80-46ff6b53473e
	  Boot ID:                    dffccd93-f8ba-4a53-a12f-fc6950a8098a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-l5ssl    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m28s
	  kube-system                 kindnet-x4s75              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m59s
	  kube-system                 kube-proxy-75v26           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m19s                  kube-proxy       
	  Normal  Starting                 9m53s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  9m59s (x2 over 9m59s)  kubelet          Node multinode-768483-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m59s (x2 over 9m59s)  kubelet          Node multinode-768483-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m59s (x2 over 9m59s)  kubelet          Node multinode-768483-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m59s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m38s                  kubelet          Node multinode-768483-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m24s (x2 over 3m24s)  kubelet          Node multinode-768483-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m24s (x2 over 3m24s)  kubelet          Node multinode-768483-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m24s (x2 over 3m24s)  kubelet          Node multinode-768483-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m4s                   kubelet          Node multinode-768483-m02 status is now: NodeReady
	  Normal  NodeNotReady             101s                   node-controller  Node multinode-768483-m02 status is now: NodeNotReady
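	
	Note that multinode-768483-m02 carries node.kubernetes.io/unreachable taints and its conditions are Unknown ("Kubelet stopped posting node status"), consistent with the NodeNotReady event above. A sketch of the usual way to regenerate this view (node and context names taken from this report; the commands are illustrative, not part of the captured output):
	
	  $ kubectl --context multinode-768483 get nodes -o wide
	  $ kubectl --context multinode-768483 describe node multinode-768483-m02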
	
	
	==> dmesg <==
	[  +0.053585] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.180018] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.117938] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.263596] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[  +3.773036] systemd-fstab-generator[744]: Ignoring "noauto" option for root device
	[  +4.385091] systemd-fstab-generator[880]: Ignoring "noauto" option for root device
	[  +0.057611] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.980668] systemd-fstab-generator[1213]: Ignoring "noauto" option for root device
	[  +0.086782] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.076054] systemd-fstab-generator[1320]: Ignoring "noauto" option for root device
	[  +0.128788] kauditd_printk_skb: 18 callbacks suppressed
	[Sep12 22:25] kauditd_printk_skb: 69 callbacks suppressed
	[Sep12 22:26] kauditd_printk_skb: 14 callbacks suppressed
	[Sep12 22:31] systemd-fstab-generator[2636]: Ignoring "noauto" option for root device
	[  +0.159317] systemd-fstab-generator[2649]: Ignoring "noauto" option for root device
	[  +0.174713] systemd-fstab-generator[2664]: Ignoring "noauto" option for root device
	[  +0.139241] systemd-fstab-generator[2676]: Ignoring "noauto" option for root device
	[  +0.268406] systemd-fstab-generator[2704]: Ignoring "noauto" option for root device
	[  +8.777049] systemd-fstab-generator[2800]: Ignoring "noauto" option for root device
	[  +0.081529] kauditd_printk_skb: 100 callbacks suppressed
	[  +1.699709] systemd-fstab-generator[2923]: Ignoring "noauto" option for root device
	[  +4.623363] kauditd_printk_skb: 74 callbacks suppressed
	[  +7.193181] kauditd_printk_skb: 34 callbacks suppressed
	[  +8.724662] systemd-fstab-generator[3776]: Ignoring "noauto" option for root device
	[Sep12 22:32] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [6505c2c378ff70fae34c9f006c44d5dc7e4ffd9480237e82899d87e8c8161693] <==
	{"level":"warn","ts":"2024-09-12T22:25:45.765935Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"281.058737ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-768483-m02\" ","response":"range_response_count:1 size:2894"}
	{"level":"info","ts":"2024-09-12T22:25:45.766018Z","caller":"traceutil/trace.go:171","msg":"trace[161465610] transaction","detail":"{read_only:false; response_revision:465; number_of_response:1; }","duration":"437.665438ms","start":"2024-09-12T22:25:45.328339Z","end":"2024-09-12T22:25:45.766005Z","steps":["trace[161465610] 'process raft request'  (duration: 179.937854ms)","trace[161465610] 'compare'  (duration: 256.629504ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-12T22:25:45.765736Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"332.460896ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kindnet-x4s75\" ","response":"range_response_count:1 size:3703"}
	{"level":"info","ts":"2024-09-12T22:25:45.768808Z","caller":"traceutil/trace.go:171","msg":"trace[1693125820] range","detail":"{range_begin:/registry/pods/kube-system/kindnet-x4s75; range_end:; response_count:1; response_revision:465; }","duration":"335.517706ms","start":"2024-09-12T22:25:45.433265Z","end":"2024-09-12T22:25:45.768783Z","steps":["trace[1693125820] 'agreement among raft nodes before linearized reading'  (duration: 332.434065ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-12T22:25:45.770730Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-12T22:25:45.433234Z","time spent":"337.47928ms","remote":"127.0.0.1:57390","response type":"/etcdserverpb.KV/Range","request count":0,"request size":42,"response count":1,"response size":3726,"request content":"key:\"/registry/pods/kube-system/kindnet-x4s75\" "}
	{"level":"info","ts":"2024-09-12T22:25:45.768889Z","caller":"traceutil/trace.go:171","msg":"trace[2023500463] range","detail":"{range_begin:/registry/minions/multinode-768483-m02; range_end:; response_count:1; response_revision:465; }","duration":"284.007218ms","start":"2024-09-12T22:25:45.484873Z","end":"2024-09-12T22:25:45.768881Z","steps":["trace[2023500463] 'agreement among raft nodes before linearized reading'  (duration: 281.045215ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-12T22:25:45.769040Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-12T22:25:45.328321Z","time spent":"440.677208ms","remote":"127.0.0.1:57382","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":2879,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/minions/multinode-768483-m02\" mod_revision:463 > success:<request_put:<key:\"/registry/minions/multinode-768483-m02\" value_size:2833 >> failure:<request_range:<key:\"/registry/minions/multinode-768483-m02\" > >"}
	{"level":"info","ts":"2024-09-12T22:25:45.910075Z","caller":"traceutil/trace.go:171","msg":"trace[1023654244] transaction","detail":"{read_only:false; response_revision:466; number_of_response:1; }","duration":"134.941312ms","start":"2024-09-12T22:25:45.775114Z","end":"2024-09-12T22:25:45.910055Z","steps":["trace[1023654244] 'process raft request'  (duration: 134.465985ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-12T22:25:45.910485Z","caller":"traceutil/trace.go:171","msg":"trace[1478652559] transaction","detail":"{read_only:false; response_revision:467; number_of_response:1; }","duration":"128.33284ms","start":"2024-09-12T22:25:45.782142Z","end":"2024-09-12T22:25:45.910475Z","steps":["trace[1478652559] 'process raft request'  (duration: 127.610679ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-12T22:25:52.689286Z","caller":"traceutil/trace.go:171","msg":"trace[1220903943] transaction","detail":"{read_only:false; response_revision:482; number_of_response:1; }","duration":"105.000507ms","start":"2024-09-12T22:25:52.584268Z","end":"2024-09-12T22:25:52.689269Z","steps":["trace[1220903943] 'process raft request'  (duration: 104.902744ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-12T22:26:37.836533Z","caller":"traceutil/trace.go:171","msg":"trace[370385528] linearizableReadLoop","detail":"{readStateIndex:605; appliedIndex:604; }","duration":"127.60833ms","start":"2024-09-12T22:26:37.708897Z","end":"2024-09-12T22:26:37.836505Z","steps":["trace[370385528] 'read index received'  (duration: 44.20338ms)","trace[370385528] 'applied index is now lower than readState.Index'  (duration: 83.404334ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-12T22:26:37.836723Z","caller":"traceutil/trace.go:171","msg":"trace[1425116809] transaction","detail":"{read_only:false; response_revision:572; number_of_response:1; }","duration":"161.672594ms","start":"2024-09-12T22:26:37.675038Z","end":"2024-09-12T22:26:37.836711Z","steps":["trace[1425116809] 'process raft request'  (duration: 78.131529ms)","trace[1425116809] 'compare'  (duration: 83.220543ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-12T22:26:37.836932Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"127.999222ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-768483-m03\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-12T22:26:37.837006Z","caller":"traceutil/trace.go:171","msg":"trace[1703510972] range","detail":"{range_begin:/registry/minions/multinode-768483-m03; range_end:; response_count:0; response_revision:572; }","duration":"128.106787ms","start":"2024-09-12T22:26:37.708893Z","end":"2024-09-12T22:26:37.836999Z","steps":["trace[1703510972] 'agreement among raft nodes before linearized reading'  (duration: 127.954145ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-12T22:27:34.672142Z","caller":"traceutil/trace.go:171","msg":"trace[1773283161] transaction","detail":"{read_only:false; response_revision:706; number_of_response:1; }","duration":"200.403014ms","start":"2024-09-12T22:27:34.471712Z","end":"2024-09-12T22:27:34.672115Z","steps":["trace[1773283161] 'process raft request'  (duration: 199.977275ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-12T22:29:51.439017Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-12T22:29:51.439121Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"multinode-768483","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.28:2380"],"advertise-client-urls":["https://192.168.39.28:2379"]}
	{"level":"warn","ts":"2024-09-12T22:29:51.439205Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-12T22:29:51.439333Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-12T22:29:51.523565Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.28:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-12T22:29:51.523639Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.28:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-12T22:29:51.523798Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"2fa11d851b98b853","current-leader-member-id":"2fa11d851b98b853"}
	{"level":"info","ts":"2024-09-12T22:29:51.526265Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.28:2380"}
	{"level":"info","ts":"2024-09-12T22:29:51.526365Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.28:2380"}
	{"level":"info","ts":"2024-09-12T22:29:51.526389Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"multinode-768483","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.28:2380"],"advertise-client-urls":["https://192.168.39.28:2379"]}
	
	
	==> etcd [f3473f46e4274f525159872fdc01fb5c1a5b9503ad68c9a35390e3220e05ca47] <==
	{"level":"info","ts":"2024-09-12T22:31:35.199039Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-12T22:31:35.199693Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-12T22:31:35.185922Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2fa11d851b98b853 switched to configuration voters=(3432056848563877971)"}
	{"level":"info","ts":"2024-09-12T22:31:35.185091Z","caller":"etcdserver/server.go:767","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-09-12T22:31:35.217072Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"8fc02aca6c76ee1e","local-member-id":"2fa11d851b98b853","added-peer-id":"2fa11d851b98b853","added-peer-peer-urls":["https://192.168.39.28:2380"]}
	{"level":"info","ts":"2024-09-12T22:31:35.217486Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"8fc02aca6c76ee1e","local-member-id":"2fa11d851b98b853","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-12T22:31:35.226922Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-12T22:31:36.245711Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2fa11d851b98b853 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-12T22:31:36.245808Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2fa11d851b98b853 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-12T22:31:36.245868Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2fa11d851b98b853 received MsgPreVoteResp from 2fa11d851b98b853 at term 2"}
	{"level":"info","ts":"2024-09-12T22:31:36.245913Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2fa11d851b98b853 became candidate at term 3"}
	{"level":"info","ts":"2024-09-12T22:31:36.245938Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2fa11d851b98b853 received MsgVoteResp from 2fa11d851b98b853 at term 3"}
	{"level":"info","ts":"2024-09-12T22:31:36.245972Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2fa11d851b98b853 became leader at term 3"}
	{"level":"info","ts":"2024-09-12T22:31:36.245997Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 2fa11d851b98b853 elected leader 2fa11d851b98b853 at term 3"}
	{"level":"info","ts":"2024-09-12T22:31:36.250719Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"2fa11d851b98b853","local-member-attributes":"{Name:multinode-768483 ClientURLs:[https://192.168.39.28:2379]}","request-path":"/0/members/2fa11d851b98b853/attributes","cluster-id":"8fc02aca6c76ee1e","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-12T22:31:36.251433Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-12T22:31:36.251700Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-12T22:31:36.256391Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-12T22:31:36.259442Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-12T22:31:36.264332Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-12T22:31:36.267349Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.28:2379"}
	{"level":"info","ts":"2024-09-12T22:31:36.270689Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-12T22:31:36.270729Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-12T22:33:00.769928Z","caller":"traceutil/trace.go:171","msg":"trace[69052628] transaction","detail":"{read_only:false; response_revision:1128; number_of_response:1; }","duration":"107.661581ms","start":"2024-09-12T22:33:00.662232Z","end":"2024-09-12T22:33:00.769893Z","steps":["trace[69052628] 'process raft request'  (duration: 107.543545ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-12T22:33:04.444060Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"177.681872ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13282120153224572176 > lease_revoke:<id:385391e85d01ec5e>","response":"size:28"}
	
	
	==> kernel <==
	 22:35:42 up 11 min,  0 users,  load average: 0.41, 0.19, 0.11
	Linux multinode-768483 5.10.207 #1 SMP Thu Sep 12 19:03:33 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [843730a4cdb964ae88e322e3da7b4037f1e64f5a4948be394cefb651ceb02679] <==
	I0912 22:29:08.143424       1 main.go:322] Node multinode-768483-m03 has CIDR [10.244.3.0/24] 
	I0912 22:29:18.141184       1 main.go:295] Handling node with IPs: map[192.168.39.230:{}]
	I0912 22:29:18.141230       1 main.go:322] Node multinode-768483-m02 has CIDR [10.244.1.0/24] 
	I0912 22:29:18.141416       1 main.go:295] Handling node with IPs: map[192.168.39.92:{}]
	I0912 22:29:18.141442       1 main.go:322] Node multinode-768483-m03 has CIDR [10.244.3.0/24] 
	I0912 22:29:18.141511       1 main.go:295] Handling node with IPs: map[192.168.39.28:{}]
	I0912 22:29:18.141532       1 main.go:299] handling current node
	I0912 22:29:28.142891       1 main.go:295] Handling node with IPs: map[192.168.39.92:{}]
	I0912 22:29:28.143056       1 main.go:322] Node multinode-768483-m03 has CIDR [10.244.3.0/24] 
	I0912 22:29:28.143223       1 main.go:295] Handling node with IPs: map[192.168.39.28:{}]
	I0912 22:29:28.143247       1 main.go:299] handling current node
	I0912 22:29:28.143279       1 main.go:295] Handling node with IPs: map[192.168.39.230:{}]
	I0912 22:29:28.143296       1 main.go:322] Node multinode-768483-m02 has CIDR [10.244.1.0/24] 
	I0912 22:29:38.139432       1 main.go:295] Handling node with IPs: map[192.168.39.28:{}]
	I0912 22:29:38.139569       1 main.go:299] handling current node
	I0912 22:29:38.139601       1 main.go:295] Handling node with IPs: map[192.168.39.230:{}]
	I0912 22:29:38.139620       1 main.go:322] Node multinode-768483-m02 has CIDR [10.244.1.0/24] 
	I0912 22:29:38.139875       1 main.go:295] Handling node with IPs: map[192.168.39.92:{}]
	I0912 22:29:38.139923       1 main.go:322] Node multinode-768483-m03 has CIDR [10.244.3.0/24] 
	I0912 22:29:48.142840       1 main.go:295] Handling node with IPs: map[192.168.39.28:{}]
	I0912 22:29:48.142910       1 main.go:299] handling current node
	I0912 22:29:48.142941       1 main.go:295] Handling node with IPs: map[192.168.39.230:{}]
	I0912 22:29:48.142947       1 main.go:322] Node multinode-768483-m02 has CIDR [10.244.1.0/24] 
	I0912 22:29:48.143077       1 main.go:295] Handling node with IPs: map[192.168.39.92:{}]
	I0912 22:29:48.143082       1 main.go:322] Node multinode-768483-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [aabf50e29ecef4fe750319f2168330d8818b650a87fafdb92a07495f86e5c5ba] <==
	I0912 22:34:39.738163       1 main.go:322] Node multinode-768483-m02 has CIDR [10.244.1.0/24] 
	I0912 22:34:49.745237       1 main.go:295] Handling node with IPs: map[192.168.39.28:{}]
	I0912 22:34:49.745374       1 main.go:299] handling current node
	I0912 22:34:49.745462       1 main.go:295] Handling node with IPs: map[192.168.39.230:{}]
	I0912 22:34:49.745504       1 main.go:322] Node multinode-768483-m02 has CIDR [10.244.1.0/24] 
	I0912 22:34:59.746448       1 main.go:295] Handling node with IPs: map[192.168.39.28:{}]
	I0912 22:34:59.746495       1 main.go:299] handling current node
	I0912 22:34:59.746516       1 main.go:295] Handling node with IPs: map[192.168.39.230:{}]
	I0912 22:34:59.746523       1 main.go:322] Node multinode-768483-m02 has CIDR [10.244.1.0/24] 
	I0912 22:35:09.737824       1 main.go:295] Handling node with IPs: map[192.168.39.28:{}]
	I0912 22:35:09.737855       1 main.go:299] handling current node
	I0912 22:35:09.737868       1 main.go:295] Handling node with IPs: map[192.168.39.230:{}]
	I0912 22:35:09.737872       1 main.go:322] Node multinode-768483-m02 has CIDR [10.244.1.0/24] 
	I0912 22:35:19.745453       1 main.go:295] Handling node with IPs: map[192.168.39.28:{}]
	I0912 22:35:19.745502       1 main.go:299] handling current node
	I0912 22:35:19.745516       1 main.go:295] Handling node with IPs: map[192.168.39.230:{}]
	I0912 22:35:19.745521       1 main.go:322] Node multinode-768483-m02 has CIDR [10.244.1.0/24] 
	I0912 22:35:29.737956       1 main.go:295] Handling node with IPs: map[192.168.39.28:{}]
	I0912 22:35:29.739041       1 main.go:299] handling current node
	I0912 22:35:29.739075       1 main.go:295] Handling node with IPs: map[192.168.39.230:{}]
	I0912 22:35:29.739085       1 main.go:322] Node multinode-768483-m02 has CIDR [10.244.1.0/24] 
	I0912 22:35:39.738266       1 main.go:295] Handling node with IPs: map[192.168.39.28:{}]
	I0912 22:35:39.738443       1 main.go:299] handling current node
	I0912 22:35:39.738489       1 main.go:295] Handling node with IPs: map[192.168.39.230:{}]
	I0912 22:35:39.738509       1 main.go:322] Node multinode-768483-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [5d68e596667e0da00d1983ac59c09742c64f760660d9c346c97fbfe656dfca97] <==
	I0912 22:31:37.825526       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0912 22:31:37.831960       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0912 22:31:37.831988       1 policy_source.go:224] refreshing policies
	I0912 22:31:37.840528       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0912 22:31:37.840745       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0912 22:31:37.845040       1 shared_informer.go:320] Caches are synced for configmaps
	I0912 22:31:37.845779       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0912 22:31:37.845857       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0912 22:31:37.846443       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0912 22:31:37.846497       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0912 22:31:37.855552       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0912 22:31:37.863921       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0912 22:31:37.864815       1 aggregator.go:171] initial CRD sync complete...
	I0912 22:31:37.864844       1 autoregister_controller.go:144] Starting autoregister controller
	I0912 22:31:37.864851       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0912 22:31:37.864857       1 cache.go:39] Caches are synced for autoregister controller
	I0912 22:31:37.909215       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0912 22:31:38.754369       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0912 22:31:40.168129       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0912 22:31:40.283498       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0912 22:31:40.300972       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0912 22:31:40.381219       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0912 22:31:40.389987       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0912 22:31:41.356884       1 controller.go:615] quota admission added evaluator for: endpoints
	I0912 22:31:41.550822       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [c489f2027465c018d7eac2e25eeaae7802e0ff1176c5691d3f69ddf1bf4b947b] <==
	W0912 22:29:51.460816       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 22:29:51.460901       1 logging.go:55] [core] [Channel #6 SubChannel #7]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 22:29:51.460956       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 22:29:51.461334       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 22:29:51.461447       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 22:29:51.461797       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 22:29:51.461968       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 22:29:51.463549       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 22:29:51.463729       1 logging.go:55] [core] [Channel #10 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 22:29:51.463933       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 22:29:51.464002       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 22:29:51.464060       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 22:29:51.464118       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 22:29:51.464174       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 22:29:51.464223       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 22:29:51.464270       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 22:29:51.464324       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 22:29:51.464438       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 22:29:51.466118       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 22:29:51.466465       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 22:29:51.466870       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 22:29:51.467600       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 22:29:51.471952       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 22:29:51.472059       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 22:29:51.472131       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [b83f7247201cd05701c223ccb523fed94c6147f010245105f1f321b4519a6f58] <==
	I0912 22:32:57.435118       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-768483-m02"
	I0912 22:32:57.461434       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-768483-m03" podCIDRs=["10.244.2.0/24"]
	I0912 22:32:57.461472       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-768483-m03"
	I0912 22:32:57.461495       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-768483-m03"
	I0912 22:32:57.826157       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-768483-m03"
	I0912 22:32:58.181798       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-768483-m03"
	I0912 22:33:01.539009       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-768483-m03"
	I0912 22:33:07.819145       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-768483-m03"
	I0912 22:33:16.114723       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-768483-m03"
	I0912 22:33:16.114839       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-768483-m02"
	I0912 22:33:16.125086       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-768483-m03"
	I0912 22:33:16.451322       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-768483-m03"
	I0912 22:33:20.821257       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-768483-m03"
	I0912 22:33:20.836403       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-768483-m03"
	I0912 22:33:21.284549       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-768483-m03"
	I0912 22:33:21.284599       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-768483-m02"
	I0912 22:34:01.348713       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-zmnq6"
	I0912 22:34:01.378463       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-zmnq6"
	I0912 22:34:01.378604       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-2p9pp"
	I0912 22:34:01.406006       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-2p9pp"
	I0912 22:34:01.469319       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-768483-m02"
	I0912 22:34:01.487741       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-768483-m02"
	I0912 22:34:01.490731       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="9.652916ms"
	I0912 22:34:01.490915       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="106.006µs"
	I0912 22:34:06.559961       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-768483-m02"
	
	
	==> kube-controller-manager [f24ee99de69eefbc84e7df7bc3eea3428a8844074a499bc601e3ded4bb4e9510] <==
	I0912 22:27:25.624163       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-768483-m03"
	I0912 22:27:25.624284       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-768483-m02"
	I0912 22:27:26.658241       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-768483-m03\" does not exist"
	I0912 22:27:26.659052       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-768483-m02"
	I0912 22:27:26.674567       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-768483-m03" podCIDRs=["10.244.3.0/24"]
	I0912 22:27:26.674604       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-768483-m03"
	I0912 22:27:26.675851       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-768483-m03"
	I0912 22:27:26.682583       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-768483-m03"
	I0912 22:27:27.137102       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-768483-m03"
	I0912 22:27:27.487838       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-768483-m03"
	I0912 22:27:30.438446       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-768483-m03"
	I0912 22:27:36.906360       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-768483-m03"
	I0912 22:27:46.008915       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-768483-m03"
	I0912 22:27:46.009103       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-768483-m03"
	I0912 22:27:46.021511       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-768483-m03"
	I0912 22:27:50.374248       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-768483-m03"
	I0912 22:28:30.390516       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-768483-m03"
	I0912 22:28:30.392299       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-768483-m02"
	I0912 22:28:30.394553       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-768483-m02"
	I0912 22:28:30.416355       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-768483-m02"
	I0912 22:28:30.416538       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-768483-m03"
	I0912 22:28:30.462398       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="20.563805ms"
	I0912 22:28:30.462471       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="34.841µs"
	I0912 22:28:35.538583       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-768483-m02"
	I0912 22:28:45.615696       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-768483-m03"
	
	
	==> kube-proxy [804ba8843e87765fc62adc0cfcd7000f8c06a2c98b9c7396a913ff6a5f930a1c] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0912 22:24:57.879896       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0912 22:24:57.895620       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.28"]
	E0912 22:24:57.895785       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0912 22:24:57.926877       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0912 22:24:57.926923       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0912 22:24:57.926946       1 server_linux.go:169] "Using iptables Proxier"
	I0912 22:24:57.929564       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0912 22:24:57.929955       1 server.go:483] "Version info" version="v1.31.1"
	I0912 22:24:57.930012       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0912 22:24:57.931358       1 config.go:199] "Starting service config controller"
	I0912 22:24:57.931417       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0912 22:24:57.931460       1 config.go:105] "Starting endpoint slice config controller"
	I0912 22:24:57.931477       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0912 22:24:57.936383       1 config.go:328] "Starting node config controller"
	I0912 22:24:57.936409       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0912 22:24:58.032486       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0912 22:24:58.032622       1 shared_informer.go:320] Caches are synced for service config
	I0912 22:24:58.036936       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [fc2e313e025e87acfc620ea53bd1ce094d12d54fc15b58cebe8a8d77908b5759] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0912 22:31:38.987278       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0912 22:31:39.008402       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.28"]
	E0912 22:31:39.008488       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0912 22:31:39.072632       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0912 22:31:39.072761       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0912 22:31:39.072789       1 server_linux.go:169] "Using iptables Proxier"
	I0912 22:31:39.076346       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0912 22:31:39.076617       1 server.go:483] "Version info" version="v1.31.1"
	I0912 22:31:39.076694       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0912 22:31:39.078392       1 config.go:199] "Starting service config controller"
	I0912 22:31:39.078438       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0912 22:31:39.078479       1 config.go:105] "Starting endpoint slice config controller"
	I0912 22:31:39.078484       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0912 22:31:39.079359       1 config.go:328] "Starting node config controller"
	I0912 22:31:39.079383       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0912 22:31:39.179300       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0912 22:31:39.179364       1 shared_informer.go:320] Caches are synced for service config
	I0912 22:31:39.179616       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [2b127cdd9f72b89e5289c96eebf5d02acc071ed5ee9e73360d2757c2c3e35873] <==
	I0912 22:31:36.026564       1 serving.go:386] Generated self-signed cert in-memory
	W0912 22:31:37.806879       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0912 22:31:37.806960       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0912 22:31:37.806970       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0912 22:31:37.806985       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0912 22:31:37.853581       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0912 22:31:37.853623       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0912 22:31:37.857409       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0912 22:31:37.857570       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0912 22:31:37.857804       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0912 22:31:37.857902       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0912 22:31:37.958191       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [f0aae551b7315d864d4e52b385c6d09427fcdc78d4ec5a0b5e854363d2131943] <==
	E0912 22:24:48.620854       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0912 22:24:48.620999       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0912 22:24:48.621036       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0912 22:24:48.621094       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0912 22:24:48.621128       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0912 22:24:48.621112       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0912 22:24:48.621202       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0912 22:24:49.543091       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0912 22:24:49.543138       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0912 22:24:49.654323       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0912 22:24:49.654383       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0912 22:24:49.660413       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0912 22:24:49.660456       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0912 22:24:49.778825       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0912 22:24:49.778881       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0912 22:24:49.876869       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0912 22:24:49.876926       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0912 22:24:49.879275       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0912 22:24:49.882042       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0912 22:24:49.882417       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0912 22:24:49.882495       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0912 22:24:49.918149       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0912 22:24:49.918215       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0912 22:24:52.215351       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0912 22:29:51.435110       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 12 22:34:24 multinode-768483 kubelet[2930]: E0912 22:34:24.248404    2930 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726180464247768636,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 22:34:34 multinode-768483 kubelet[2930]: E0912 22:34:34.161565    2930 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 12 22:34:34 multinode-768483 kubelet[2930]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 12 22:34:34 multinode-768483 kubelet[2930]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 12 22:34:34 multinode-768483 kubelet[2930]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 12 22:34:34 multinode-768483 kubelet[2930]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 12 22:34:34 multinode-768483 kubelet[2930]: E0912 22:34:34.251052    2930 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726180474249955316,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 22:34:34 multinode-768483 kubelet[2930]: E0912 22:34:34.251128    2930 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726180474249955316,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 22:34:44 multinode-768483 kubelet[2930]: E0912 22:34:44.253161    2930 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726180484252076659,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 22:34:44 multinode-768483 kubelet[2930]: E0912 22:34:44.253539    2930 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726180484252076659,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 22:34:54 multinode-768483 kubelet[2930]: E0912 22:34:54.254883    2930 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726180494254556298,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 22:34:54 multinode-768483 kubelet[2930]: E0912 22:34:54.255173    2930 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726180494254556298,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 22:35:04 multinode-768483 kubelet[2930]: E0912 22:35:04.256764    2930 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726180504256303559,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 22:35:04 multinode-768483 kubelet[2930]: E0912 22:35:04.257083    2930 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726180504256303559,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 22:35:14 multinode-768483 kubelet[2930]: E0912 22:35:14.258847    2930 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726180514257995841,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 22:35:14 multinode-768483 kubelet[2930]: E0912 22:35:14.259420    2930 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726180514257995841,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 22:35:24 multinode-768483 kubelet[2930]: E0912 22:35:24.261281    2930 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726180524260514518,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 22:35:24 multinode-768483 kubelet[2930]: E0912 22:35:24.261314    2930 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726180524260514518,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 22:35:34 multinode-768483 kubelet[2930]: E0912 22:35:34.157871    2930 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 12 22:35:34 multinode-768483 kubelet[2930]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 12 22:35:34 multinode-768483 kubelet[2930]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 12 22:35:34 multinode-768483 kubelet[2930]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 12 22:35:34 multinode-768483 kubelet[2930]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 12 22:35:34 multinode-768483 kubelet[2930]: E0912 22:35:34.263703    2930 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726180534262947014,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 22:35:34 multinode-768483 kubelet[2930]: E0912 22:35:34.264182    2930 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726180534262947014,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0912 22:35:41.886119   46079 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19616-5891/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-768483 -n multinode-768483
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-768483 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.21s)
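For reference, the post-mortem collected above can be reproduced by hand against the same profile. This is a minimal sketch using the commands the harness itself logs (the logs --file flag mirrors the advice box printed elsewhere in this report); it is illustrative, not part of the test run:

	# API server status and non-running pods for the stopped profile
	out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-768483 -n multinode-768483
	kubectl --context multinode-768483 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
	# capture the same component logs shown above into a file
	out/minikube-linux-amd64 -p multinode-768483 logs --file=logs.txt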

                                                
                                    
x
+
TestPreload (272.1s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-099591 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0912 22:40:05.703693   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/functional-657409/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-099591 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m8.831766928s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-099591 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-099591 image pull gcr.io/k8s-minikube/busybox: (3.235664818s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-099591
E0912 22:42:07.199545   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:58: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p test-preload-099591: exit status 82 (2m0.464173967s)

                                                
                                                
-- stdout --
	* Stopping node "test-preload-099591"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
preload_test.go:60: out/minikube-linux-amd64 stop -p test-preload-099591 failed: exit status 82
panic.go:626: *** TestPreload FAILED at 2024-09-12 22:43:39.8150646 +0000 UTC m=+4486.663447516
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-099591 -n test-preload-099591
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-099591 -n test-preload-099591: exit status 3 (18.64622389s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0912 22:43:58.457959   48993 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.159:22: connect: no route to host
	E0912 22:43:58.457990   48993 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.159:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "test-preload-099591" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "test-preload-099591" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-099591
--- FAIL: TestPreload (272.10s)
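The failing sequence above reduces to three CLI steps. A condensed sketch of the same commands from the log (same profile name and flags as this run); the final stop is the step that returned exit status 82 in this run:

	out/minikube-linux-amd64 start -p test-preload-099591 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.24.4
	out/minikube-linux-amd64 -p test-preload-099591 image pull gcr.io/k8s-minikube/busybox
	# this stop hit GUEST_STOP_TIMEOUT (exit status 82) in this run
	out/minikube-linux-amd64 stop -p test-preload-099591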

                                                
                                    
x
+
TestKubernetesUpgrade (406.5s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-848420 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-848420 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m31.882964184s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-848420] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19616
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19616-5891/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19616-5891/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-848420" primary control-plane node in "kubernetes-upgrade-848420" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0912 22:45:49.598736   50078 out.go:345] Setting OutFile to fd 1 ...
	I0912 22:45:49.598991   50078 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 22:45:49.599000   50078 out.go:358] Setting ErrFile to fd 2...
	I0912 22:45:49.599006   50078 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 22:45:49.599240   50078 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-5891/.minikube/bin
	I0912 22:45:49.599863   50078 out.go:352] Setting JSON to false
	I0912 22:45:49.600782   50078 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5292,"bootTime":1726175858,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0912 22:45:49.600841   50078 start.go:139] virtualization: kvm guest
	I0912 22:45:49.603291   50078 out.go:177] * [kubernetes-upgrade-848420] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0912 22:45:49.604653   50078 notify.go:220] Checking for updates...
	I0912 22:45:49.604729   50078 out.go:177]   - MINIKUBE_LOCATION=19616
	I0912 22:45:49.606797   50078 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 22:45:49.608296   50078 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19616-5891/kubeconfig
	I0912 22:45:49.609746   50078 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19616-5891/.minikube
	I0912 22:45:49.611288   50078 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0912 22:45:49.613082   50078 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 22:45:49.614534   50078 driver.go:394] Setting default libvirt URI to qemu:///system
	I0912 22:45:49.649322   50078 out.go:177] * Using the kvm2 driver based on user configuration
	I0912 22:45:49.650270   50078 start.go:297] selected driver: kvm2
	I0912 22:45:49.650283   50078 start.go:901] validating driver "kvm2" against <nil>
	I0912 22:45:49.650295   50078 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 22:45:49.650996   50078 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 22:45:49.651085   50078 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19616-5891/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0912 22:45:49.666214   50078 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0912 22:45:49.666277   50078 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0912 22:45:49.666548   50078 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0912 22:45:49.666579   50078 cni.go:84] Creating CNI manager for ""
	I0912 22:45:49.666596   50078 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0912 22:45:49.666609   50078 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0912 22:45:49.666664   50078 start.go:340] cluster config:
	{Name:kubernetes-upgrade-848420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-848420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 22:45:49.666780   50078 iso.go:125] acquiring lock: {Name:mk3ec3c4afd4210b7425f6425f55e7f581d9a5a9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 22:45:49.668527   50078 out.go:177] * Starting "kubernetes-upgrade-848420" primary control-plane node in "kubernetes-upgrade-848420" cluster
	I0912 22:45:49.669508   50078 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0912 22:45:49.669538   50078 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0912 22:45:49.669546   50078 cache.go:56] Caching tarball of preloaded images
	I0912 22:45:49.669635   50078 preload.go:172] Found /home/jenkins/minikube-integration/19616-5891/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0912 22:45:49.669648   50078 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0912 22:45:49.669946   50078 profile.go:143] Saving config to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/kubernetes-upgrade-848420/config.json ...
	I0912 22:45:49.669966   50078 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/kubernetes-upgrade-848420/config.json: {Name:mkd6175a45e8bfc445f6e26b00d9b025344bcc80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 22:45:49.670089   50078 start.go:360] acquireMachinesLock for kubernetes-upgrade-848420: {Name:mkbb0a9e58b1349e86a63b6069c42d4248d92c3b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 22:45:49.670122   50078 start.go:364] duration metric: took 17.654µs to acquireMachinesLock for "kubernetes-upgrade-848420"
	I0912 22:45:49.670134   50078 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-848420 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-848420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0912 22:45:49.670181   50078 start.go:125] createHost starting for "" (driver="kvm2")
	I0912 22:45:49.671519   50078 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0912 22:45:49.671629   50078 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:45:49.671672   50078 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:45:49.686721   50078 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42633
	I0912 22:45:49.687175   50078 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:45:49.687815   50078 main.go:141] libmachine: Using API Version  1
	I0912 22:45:49.687831   50078 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:45:49.688208   50078 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:45:49.688453   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetMachineName
	I0912 22:45:49.688629   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .DriverName
	I0912 22:45:49.688820   50078 start.go:159] libmachine.API.Create for "kubernetes-upgrade-848420" (driver="kvm2")
	I0912 22:45:49.688850   50078 client.go:168] LocalClient.Create starting
	I0912 22:45:49.688878   50078 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem
	I0912 22:45:49.688908   50078 main.go:141] libmachine: Decoding PEM data...
	I0912 22:45:49.688923   50078 main.go:141] libmachine: Parsing certificate...
	I0912 22:45:49.688985   50078 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem
	I0912 22:45:49.689008   50078 main.go:141] libmachine: Decoding PEM data...
	I0912 22:45:49.689028   50078 main.go:141] libmachine: Parsing certificate...
	I0912 22:45:49.689054   50078 main.go:141] libmachine: Running pre-create checks...
	I0912 22:45:49.689070   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .PreCreateCheck
	I0912 22:45:49.689451   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetConfigRaw
	I0912 22:45:49.689835   50078 main.go:141] libmachine: Creating machine...
	I0912 22:45:49.689850   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .Create
	I0912 22:45:49.689970   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) Creating KVM machine...
	I0912 22:45:49.691112   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | found existing default KVM network
	I0912 22:45:49.692038   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | I0912 22:45:49.691871   50115 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f1c0}
	I0912 22:45:49.692076   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | created network xml: 
	I0912 22:45:49.692092   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | <network>
	I0912 22:45:49.692112   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG |   <name>mk-kubernetes-upgrade-848420</name>
	I0912 22:45:49.692128   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG |   <dns enable='no'/>
	I0912 22:45:49.692136   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG |   
	I0912 22:45:49.692144   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0912 22:45:49.692153   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG |     <dhcp>
	I0912 22:45:49.692161   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0912 22:45:49.692169   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG |     </dhcp>
	I0912 22:45:49.692178   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG |   </ip>
	I0912 22:45:49.692229   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG |   
	I0912 22:45:49.692241   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | </network>
	I0912 22:45:49.692256   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | 
	I0912 22:45:49.696967   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | trying to create private KVM network mk-kubernetes-upgrade-848420 192.168.39.0/24...
	I0912 22:45:49.764644   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | private KVM network mk-kubernetes-upgrade-848420 192.168.39.0/24 created
	I0912 22:45:49.764675   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) Setting up store path in /home/jenkins/minikube-integration/19616-5891/.minikube/machines/kubernetes-upgrade-848420 ...
	I0912 22:45:49.764692   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) Building disk image from file:///home/jenkins/minikube-integration/19616-5891/.minikube/cache/iso/amd64/minikube-v1.34.0-1726156389-19616-amd64.iso
	I0912 22:45:49.764717   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | I0912 22:45:49.764631   50115 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19616-5891/.minikube
	I0912 22:45:49.764733   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) Downloading /home/jenkins/minikube-integration/19616-5891/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19616-5891/.minikube/cache/iso/amd64/minikube-v1.34.0-1726156389-19616-amd64.iso...
	I0912 22:45:50.013683   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | I0912 22:45:50.013495   50115 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/kubernetes-upgrade-848420/id_rsa...
	I0912 22:45:50.098826   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | I0912 22:45:50.098657   50115 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/kubernetes-upgrade-848420/kubernetes-upgrade-848420.rawdisk...
	I0912 22:45:50.098855   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | Writing magic tar header
	I0912 22:45:50.098872   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | Writing SSH key tar header
	I0912 22:45:50.098888   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | I0912 22:45:50.098773   50115 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19616-5891/.minikube/machines/kubernetes-upgrade-848420 ...
	I0912 22:45:50.098902   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) Setting executable bit set on /home/jenkins/minikube-integration/19616-5891/.minikube/machines/kubernetes-upgrade-848420 (perms=drwx------)
	I0912 22:45:50.098914   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/kubernetes-upgrade-848420
	I0912 22:45:50.098930   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19616-5891/.minikube/machines
	I0912 22:45:50.098941   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19616-5891/.minikube
	I0912 22:45:50.098954   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19616-5891
	I0912 22:45:50.098964   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0912 22:45:50.098980   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | Checking permissions on dir: /home/jenkins
	I0912 22:45:50.098994   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | Checking permissions on dir: /home
	I0912 22:45:50.099008   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) Setting executable bit set on /home/jenkins/minikube-integration/19616-5891/.minikube/machines (perms=drwxr-xr-x)
	I0912 22:45:50.099026   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) Setting executable bit set on /home/jenkins/minikube-integration/19616-5891/.minikube (perms=drwxr-xr-x)
	I0912 22:45:50.099040   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) Setting executable bit set on /home/jenkins/minikube-integration/19616-5891 (perms=drwxrwxr-x)
	I0912 22:45:50.099055   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0912 22:45:50.099068   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0912 22:45:50.099078   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | Skipping /home - not owner
	I0912 22:45:50.099090   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) Creating domain...
	I0912 22:45:50.100121   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) define libvirt domain using xml: 
	I0912 22:45:50.100150   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) <domain type='kvm'>
	I0912 22:45:50.100163   50078 main.go:141] libmachine: (kubernetes-upgrade-848420)   <name>kubernetes-upgrade-848420</name>
	I0912 22:45:50.100171   50078 main.go:141] libmachine: (kubernetes-upgrade-848420)   <memory unit='MiB'>2200</memory>
	I0912 22:45:50.100181   50078 main.go:141] libmachine: (kubernetes-upgrade-848420)   <vcpu>2</vcpu>
	I0912 22:45:50.100212   50078 main.go:141] libmachine: (kubernetes-upgrade-848420)   <features>
	I0912 22:45:50.100240   50078 main.go:141] libmachine: (kubernetes-upgrade-848420)     <acpi/>
	I0912 22:45:50.100253   50078 main.go:141] libmachine: (kubernetes-upgrade-848420)     <apic/>
	I0912 22:45:50.100260   50078 main.go:141] libmachine: (kubernetes-upgrade-848420)     <pae/>
	I0912 22:45:50.100271   50078 main.go:141] libmachine: (kubernetes-upgrade-848420)     
	I0912 22:45:50.100277   50078 main.go:141] libmachine: (kubernetes-upgrade-848420)   </features>
	I0912 22:45:50.100283   50078 main.go:141] libmachine: (kubernetes-upgrade-848420)   <cpu mode='host-passthrough'>
	I0912 22:45:50.100288   50078 main.go:141] libmachine: (kubernetes-upgrade-848420)   
	I0912 22:45:50.100292   50078 main.go:141] libmachine: (kubernetes-upgrade-848420)   </cpu>
	I0912 22:45:50.100298   50078 main.go:141] libmachine: (kubernetes-upgrade-848420)   <os>
	I0912 22:45:50.100305   50078 main.go:141] libmachine: (kubernetes-upgrade-848420)     <type>hvm</type>
	I0912 22:45:50.100311   50078 main.go:141] libmachine: (kubernetes-upgrade-848420)     <boot dev='cdrom'/>
	I0912 22:45:50.100315   50078 main.go:141] libmachine: (kubernetes-upgrade-848420)     <boot dev='hd'/>
	I0912 22:45:50.100322   50078 main.go:141] libmachine: (kubernetes-upgrade-848420)     <bootmenu enable='no'/>
	I0912 22:45:50.100332   50078 main.go:141] libmachine: (kubernetes-upgrade-848420)   </os>
	I0912 22:45:50.100337   50078 main.go:141] libmachine: (kubernetes-upgrade-848420)   <devices>
	I0912 22:45:50.100345   50078 main.go:141] libmachine: (kubernetes-upgrade-848420)     <disk type='file' device='cdrom'>
	I0912 22:45:50.100376   50078 main.go:141] libmachine: (kubernetes-upgrade-848420)       <source file='/home/jenkins/minikube-integration/19616-5891/.minikube/machines/kubernetes-upgrade-848420/boot2docker.iso'/>
	I0912 22:45:50.100388   50078 main.go:141] libmachine: (kubernetes-upgrade-848420)       <target dev='hdc' bus='scsi'/>
	I0912 22:45:50.100394   50078 main.go:141] libmachine: (kubernetes-upgrade-848420)       <readonly/>
	I0912 22:45:50.100399   50078 main.go:141] libmachine: (kubernetes-upgrade-848420)     </disk>
	I0912 22:45:50.100407   50078 main.go:141] libmachine: (kubernetes-upgrade-848420)     <disk type='file' device='disk'>
	I0912 22:45:50.100413   50078 main.go:141] libmachine: (kubernetes-upgrade-848420)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0912 22:45:50.100424   50078 main.go:141] libmachine: (kubernetes-upgrade-848420)       <source file='/home/jenkins/minikube-integration/19616-5891/.minikube/machines/kubernetes-upgrade-848420/kubernetes-upgrade-848420.rawdisk'/>
	I0912 22:45:50.100431   50078 main.go:141] libmachine: (kubernetes-upgrade-848420)       <target dev='hda' bus='virtio'/>
	I0912 22:45:50.100437   50078 main.go:141] libmachine: (kubernetes-upgrade-848420)     </disk>
	I0912 22:45:50.100444   50078 main.go:141] libmachine: (kubernetes-upgrade-848420)     <interface type='network'>
	I0912 22:45:50.100450   50078 main.go:141] libmachine: (kubernetes-upgrade-848420)       <source network='mk-kubernetes-upgrade-848420'/>
	I0912 22:45:50.100457   50078 main.go:141] libmachine: (kubernetes-upgrade-848420)       <model type='virtio'/>
	I0912 22:45:50.100462   50078 main.go:141] libmachine: (kubernetes-upgrade-848420)     </interface>
	I0912 22:45:50.100469   50078 main.go:141] libmachine: (kubernetes-upgrade-848420)     <interface type='network'>
	I0912 22:45:50.100475   50078 main.go:141] libmachine: (kubernetes-upgrade-848420)       <source network='default'/>
	I0912 22:45:50.100480   50078 main.go:141] libmachine: (kubernetes-upgrade-848420)       <model type='virtio'/>
	I0912 22:45:50.100534   50078 main.go:141] libmachine: (kubernetes-upgrade-848420)     </interface>
	I0912 22:45:50.100586   50078 main.go:141] libmachine: (kubernetes-upgrade-848420)     <serial type='pty'>
	I0912 22:45:50.100608   50078 main.go:141] libmachine: (kubernetes-upgrade-848420)       <target port='0'/>
	I0912 22:45:50.100622   50078 main.go:141] libmachine: (kubernetes-upgrade-848420)     </serial>
	I0912 22:45:50.100638   50078 main.go:141] libmachine: (kubernetes-upgrade-848420)     <console type='pty'>
	I0912 22:45:50.100651   50078 main.go:141] libmachine: (kubernetes-upgrade-848420)       <target type='serial' port='0'/>
	I0912 22:45:50.100672   50078 main.go:141] libmachine: (kubernetes-upgrade-848420)     </console>
	I0912 22:45:50.100680   50078 main.go:141] libmachine: (kubernetes-upgrade-848420)     <rng model='virtio'>
	I0912 22:45:50.100687   50078 main.go:141] libmachine: (kubernetes-upgrade-848420)       <backend model='random'>/dev/random</backend>
	I0912 22:45:50.100691   50078 main.go:141] libmachine: (kubernetes-upgrade-848420)     </rng>
	I0912 22:45:50.100699   50078 main.go:141] libmachine: (kubernetes-upgrade-848420)     
	I0912 22:45:50.100704   50078 main.go:141] libmachine: (kubernetes-upgrade-848420)     
	I0912 22:45:50.100712   50078 main.go:141] libmachine: (kubernetes-upgrade-848420)   </devices>
	I0912 22:45:50.100722   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) </domain>
	I0912 22:45:50.100740   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) 
	I0912 22:45:50.104840   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | domain kubernetes-upgrade-848420 has defined MAC address 52:54:00:c5:81:65 in network default
	I0912 22:45:50.105504   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) Ensuring networks are active...
	I0912 22:45:50.105529   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | domain kubernetes-upgrade-848420 has defined MAC address 52:54:00:8c:a1:6b in network mk-kubernetes-upgrade-848420
	I0912 22:45:50.106301   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) Ensuring network default is active
	I0912 22:45:50.106654   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) Ensuring network mk-kubernetes-upgrade-848420 is active
	I0912 22:45:50.107224   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) Getting domain xml...
	I0912 22:45:50.107947   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) Creating domain...
	I0912 22:45:51.433899   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) Waiting to get IP...
	I0912 22:45:51.434775   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | domain kubernetes-upgrade-848420 has defined MAC address 52:54:00:8c:a1:6b in network mk-kubernetes-upgrade-848420
	I0912 22:45:51.435126   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | unable to find current IP address of domain kubernetes-upgrade-848420 in network mk-kubernetes-upgrade-848420
	I0912 22:45:51.435157   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | I0912 22:45:51.435104   50115 retry.go:31] will retry after 278.361301ms: waiting for machine to come up
	I0912 22:45:51.715723   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | domain kubernetes-upgrade-848420 has defined MAC address 52:54:00:8c:a1:6b in network mk-kubernetes-upgrade-848420
	I0912 22:45:51.716144   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | unable to find current IP address of domain kubernetes-upgrade-848420 in network mk-kubernetes-upgrade-848420
	I0912 22:45:51.716173   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | I0912 22:45:51.716085   50115 retry.go:31] will retry after 240.454167ms: waiting for machine to come up
	I0912 22:45:51.958422   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | domain kubernetes-upgrade-848420 has defined MAC address 52:54:00:8c:a1:6b in network mk-kubernetes-upgrade-848420
	I0912 22:45:51.958829   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | unable to find current IP address of domain kubernetes-upgrade-848420 in network mk-kubernetes-upgrade-848420
	I0912 22:45:51.958859   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | I0912 22:45:51.958783   50115 retry.go:31] will retry after 432.368144ms: waiting for machine to come up
	I0912 22:45:52.392522   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | domain kubernetes-upgrade-848420 has defined MAC address 52:54:00:8c:a1:6b in network mk-kubernetes-upgrade-848420
	I0912 22:45:52.392960   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | unable to find current IP address of domain kubernetes-upgrade-848420 in network mk-kubernetes-upgrade-848420
	I0912 22:45:52.392981   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | I0912 22:45:52.392920   50115 retry.go:31] will retry after 553.225823ms: waiting for machine to come up
	I0912 22:45:52.947357   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | domain kubernetes-upgrade-848420 has defined MAC address 52:54:00:8c:a1:6b in network mk-kubernetes-upgrade-848420
	I0912 22:45:52.947822   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | unable to find current IP address of domain kubernetes-upgrade-848420 in network mk-kubernetes-upgrade-848420
	I0912 22:45:52.947871   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | I0912 22:45:52.947783   50115 retry.go:31] will retry after 580.110859ms: waiting for machine to come up
	I0912 22:45:53.529912   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | domain kubernetes-upgrade-848420 has defined MAC address 52:54:00:8c:a1:6b in network mk-kubernetes-upgrade-848420
	I0912 22:45:53.530404   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | unable to find current IP address of domain kubernetes-upgrade-848420 in network mk-kubernetes-upgrade-848420
	I0912 22:45:53.530463   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | I0912 22:45:53.530358   50115 retry.go:31] will retry after 935.814788ms: waiting for machine to come up
	I0912 22:45:54.467451   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | domain kubernetes-upgrade-848420 has defined MAC address 52:54:00:8c:a1:6b in network mk-kubernetes-upgrade-848420
	I0912 22:45:54.467838   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | unable to find current IP address of domain kubernetes-upgrade-848420 in network mk-kubernetes-upgrade-848420
	I0912 22:45:54.467869   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | I0912 22:45:54.467777   50115 retry.go:31] will retry after 862.270402ms: waiting for machine to come up
	I0912 22:45:55.331731   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | domain kubernetes-upgrade-848420 has defined MAC address 52:54:00:8c:a1:6b in network mk-kubernetes-upgrade-848420
	I0912 22:45:55.332113   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | unable to find current IP address of domain kubernetes-upgrade-848420 in network mk-kubernetes-upgrade-848420
	I0912 22:45:55.332142   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | I0912 22:45:55.332052   50115 retry.go:31] will retry after 1.041743931s: waiting for machine to come up
	I0912 22:45:56.375761   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | domain kubernetes-upgrade-848420 has defined MAC address 52:54:00:8c:a1:6b in network mk-kubernetes-upgrade-848420
	I0912 22:45:56.376164   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | unable to find current IP address of domain kubernetes-upgrade-848420 in network mk-kubernetes-upgrade-848420
	I0912 22:45:56.376196   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | I0912 22:45:56.376082   50115 retry.go:31] will retry after 1.614090407s: waiting for machine to come up
	I0912 22:45:57.991989   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | domain kubernetes-upgrade-848420 has defined MAC address 52:54:00:8c:a1:6b in network mk-kubernetes-upgrade-848420
	I0912 22:45:57.992463   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | unable to find current IP address of domain kubernetes-upgrade-848420 in network mk-kubernetes-upgrade-848420
	I0912 22:45:57.992493   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | I0912 22:45:57.992415   50115 retry.go:31] will retry after 1.471556467s: waiting for machine to come up
	I0912 22:45:59.466063   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | domain kubernetes-upgrade-848420 has defined MAC address 52:54:00:8c:a1:6b in network mk-kubernetes-upgrade-848420
	I0912 22:45:59.466478   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | unable to find current IP address of domain kubernetes-upgrade-848420 in network mk-kubernetes-upgrade-848420
	I0912 22:45:59.466514   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | I0912 22:45:59.466434   50115 retry.go:31] will retry after 1.947667576s: waiting for machine to come up
	I0912 22:46:01.415189   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | domain kubernetes-upgrade-848420 has defined MAC address 52:54:00:8c:a1:6b in network mk-kubernetes-upgrade-848420
	I0912 22:46:01.415597   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | unable to find current IP address of domain kubernetes-upgrade-848420 in network mk-kubernetes-upgrade-848420
	I0912 22:46:01.415646   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | I0912 22:46:01.415562   50115 retry.go:31] will retry after 2.326209452s: waiting for machine to come up
	I0912 22:46:03.742900   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | domain kubernetes-upgrade-848420 has defined MAC address 52:54:00:8c:a1:6b in network mk-kubernetes-upgrade-848420
	I0912 22:46:03.743344   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | unable to find current IP address of domain kubernetes-upgrade-848420 in network mk-kubernetes-upgrade-848420
	I0912 22:46:03.743369   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | I0912 22:46:03.743298   50115 retry.go:31] will retry after 3.733252713s: waiting for machine to come up
	I0912 22:46:07.478734   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | domain kubernetes-upgrade-848420 has defined MAC address 52:54:00:8c:a1:6b in network mk-kubernetes-upgrade-848420
	I0912 22:46:07.479380   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | unable to find current IP address of domain kubernetes-upgrade-848420 in network mk-kubernetes-upgrade-848420
	I0912 22:46:07.479408   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | I0912 22:46:07.479340   50115 retry.go:31] will retry after 5.15932104s: waiting for machine to come up
	I0912 22:46:12.639752   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | domain kubernetes-upgrade-848420 has defined MAC address 52:54:00:8c:a1:6b in network mk-kubernetes-upgrade-848420
	I0912 22:46:12.640265   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) Found IP for machine: 192.168.39.110
	I0912 22:46:12.640296   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | domain kubernetes-upgrade-848420 has current primary IP address 192.168.39.110 and MAC address 52:54:00:8c:a1:6b in network mk-kubernetes-upgrade-848420
	I0912 22:46:12.640326   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) Reserving static IP address...
	I0912 22:46:12.640629   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-848420", mac: "52:54:00:8c:a1:6b", ip: "192.168.39.110"} in network mk-kubernetes-upgrade-848420
	I0912 22:46:12.716057   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | Getting to WaitForSSH function...
	I0912 22:46:12.716082   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) Reserved static IP address: 192.168.39.110
	I0912 22:46:12.716094   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) Waiting for SSH to be available...
	I0912 22:46:12.719341   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | domain kubernetes-upgrade-848420 has defined MAC address 52:54:00:8c:a1:6b in network mk-kubernetes-upgrade-848420
	I0912 22:46:12.720282   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:8c:a1:6b", ip: ""} in network mk-kubernetes-upgrade-848420
	I0912 22:46:12.720321   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | unable to find defined IP address of network mk-kubernetes-upgrade-848420 interface with MAC address 52:54:00:8c:a1:6b
	I0912 22:46:12.720433   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | Using SSH client type: external
	I0912 22:46:12.720475   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | Using SSH private key: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/kubernetes-upgrade-848420/id_rsa (-rw-------)
	I0912 22:46:12.720506   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19616-5891/.minikube/machines/kubernetes-upgrade-848420/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0912 22:46:12.720519   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | About to run SSH command:
	I0912 22:46:12.720534   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | exit 0
	I0912 22:46:12.724006   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | SSH cmd err, output: exit status 255: 
	I0912 22:46:12.724033   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0912 22:46:12.724045   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | command : exit 0
	I0912 22:46:12.724061   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | err     : exit status 255
	I0912 22:46:12.724098   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | output  : 
	I0912 22:46:15.724255   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | Getting to WaitForSSH function...
	I0912 22:46:15.726787   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | domain kubernetes-upgrade-848420 has defined MAC address 52:54:00:8c:a1:6b in network mk-kubernetes-upgrade-848420
	I0912 22:46:15.727233   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:a1:6b", ip: ""} in network mk-kubernetes-upgrade-848420: {Iface:virbr1 ExpiryTime:2024-09-12 23:46:04 +0000 UTC Type:0 Mac:52:54:00:8c:a1:6b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:kubernetes-upgrade-848420 Clientid:01:52:54:00:8c:a1:6b}
	I0912 22:46:15.727260   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | domain kubernetes-upgrade-848420 has defined IP address 192.168.39.110 and MAC address 52:54:00:8c:a1:6b in network mk-kubernetes-upgrade-848420
	I0912 22:46:15.727412   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | Using SSH client type: external
	I0912 22:46:15.727438   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | Using SSH private key: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/kubernetes-upgrade-848420/id_rsa (-rw-------)
	I0912 22:46:15.727473   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.110 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19616-5891/.minikube/machines/kubernetes-upgrade-848420/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0912 22:46:15.727498   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | About to run SSH command:
	I0912 22:46:15.727523   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | exit 0
	I0912 22:46:15.853426   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | SSH cmd err, output: <nil>: 
	I0912 22:46:15.853769   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) KVM machine creation complete!
	I0912 22:46:15.854044   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetConfigRaw
	I0912 22:46:15.854601   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .DriverName
	I0912 22:46:15.854795   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .DriverName
	I0912 22:46:15.854933   50078 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0912 22:46:15.854944   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetState
	I0912 22:46:15.856115   50078 main.go:141] libmachine: Detecting operating system of created instance...
	I0912 22:46:15.856130   50078 main.go:141] libmachine: Waiting for SSH to be available...
	I0912 22:46:15.856149   50078 main.go:141] libmachine: Getting to WaitForSSH function...
	I0912 22:46:15.856156   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetSSHHostname
	I0912 22:46:15.858860   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | domain kubernetes-upgrade-848420 has defined MAC address 52:54:00:8c:a1:6b in network mk-kubernetes-upgrade-848420
	I0912 22:46:15.859228   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:a1:6b", ip: ""} in network mk-kubernetes-upgrade-848420: {Iface:virbr1 ExpiryTime:2024-09-12 23:46:04 +0000 UTC Type:0 Mac:52:54:00:8c:a1:6b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:kubernetes-upgrade-848420 Clientid:01:52:54:00:8c:a1:6b}
	I0912 22:46:15.859257   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | domain kubernetes-upgrade-848420 has defined IP address 192.168.39.110 and MAC address 52:54:00:8c:a1:6b in network mk-kubernetes-upgrade-848420
	I0912 22:46:15.859388   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetSSHPort
	I0912 22:46:15.859547   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetSSHKeyPath
	I0912 22:46:15.859688   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetSSHKeyPath
	I0912 22:46:15.859837   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetSSHUsername
	I0912 22:46:15.859999   50078 main.go:141] libmachine: Using SSH client type: native
	I0912 22:46:15.860270   50078 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0912 22:46:15.860285   50078 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0912 22:46:15.972920   50078 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0912 22:46:15.972953   50078 main.go:141] libmachine: Detecting the provisioner...
	I0912 22:46:15.972966   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetSSHHostname
	I0912 22:46:15.975859   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | domain kubernetes-upgrade-848420 has defined MAC address 52:54:00:8c:a1:6b in network mk-kubernetes-upgrade-848420
	I0912 22:46:15.976210   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:a1:6b", ip: ""} in network mk-kubernetes-upgrade-848420: {Iface:virbr1 ExpiryTime:2024-09-12 23:46:04 +0000 UTC Type:0 Mac:52:54:00:8c:a1:6b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:kubernetes-upgrade-848420 Clientid:01:52:54:00:8c:a1:6b}
	I0912 22:46:15.976242   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | domain kubernetes-upgrade-848420 has defined IP address 192.168.39.110 and MAC address 52:54:00:8c:a1:6b in network mk-kubernetes-upgrade-848420
	I0912 22:46:15.976365   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetSSHPort
	I0912 22:46:15.976560   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetSSHKeyPath
	I0912 22:46:15.976688   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetSSHKeyPath
	I0912 22:46:15.976832   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetSSHUsername
	I0912 22:46:15.977013   50078 main.go:141] libmachine: Using SSH client type: native
	I0912 22:46:15.977220   50078 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0912 22:46:15.977233   50078 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0912 22:46:16.089977   50078 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0912 22:46:16.090085   50078 main.go:141] libmachine: found compatible host: buildroot
	I0912 22:46:16.090105   50078 main.go:141] libmachine: Provisioning with buildroot...
	I0912 22:46:16.090119   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetMachineName
	I0912 22:46:16.090373   50078 buildroot.go:166] provisioning hostname "kubernetes-upgrade-848420"
	I0912 22:46:16.090398   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetMachineName
	I0912 22:46:16.090545   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetSSHHostname
	I0912 22:46:16.093201   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | domain kubernetes-upgrade-848420 has defined MAC address 52:54:00:8c:a1:6b in network mk-kubernetes-upgrade-848420
	I0912 22:46:16.093578   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:a1:6b", ip: ""} in network mk-kubernetes-upgrade-848420: {Iface:virbr1 ExpiryTime:2024-09-12 23:46:04 +0000 UTC Type:0 Mac:52:54:00:8c:a1:6b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:kubernetes-upgrade-848420 Clientid:01:52:54:00:8c:a1:6b}
	I0912 22:46:16.093629   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | domain kubernetes-upgrade-848420 has defined IP address 192.168.39.110 and MAC address 52:54:00:8c:a1:6b in network mk-kubernetes-upgrade-848420
	I0912 22:46:16.093740   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetSSHPort
	I0912 22:46:16.093912   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetSSHKeyPath
	I0912 22:46:16.094021   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetSSHKeyPath
	I0912 22:46:16.094162   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetSSHUsername
	I0912 22:46:16.094319   50078 main.go:141] libmachine: Using SSH client type: native
	I0912 22:46:16.094557   50078 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0912 22:46:16.094582   50078 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-848420 && echo "kubernetes-upgrade-848420" | sudo tee /etc/hostname
	I0912 22:46:16.219016   50078 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-848420
	
	I0912 22:46:16.219044   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetSSHHostname
	I0912 22:46:16.221806   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | domain kubernetes-upgrade-848420 has defined MAC address 52:54:00:8c:a1:6b in network mk-kubernetes-upgrade-848420
	I0912 22:46:16.222168   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:a1:6b", ip: ""} in network mk-kubernetes-upgrade-848420: {Iface:virbr1 ExpiryTime:2024-09-12 23:46:04 +0000 UTC Type:0 Mac:52:54:00:8c:a1:6b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:kubernetes-upgrade-848420 Clientid:01:52:54:00:8c:a1:6b}
	I0912 22:46:16.222198   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | domain kubernetes-upgrade-848420 has defined IP address 192.168.39.110 and MAC address 52:54:00:8c:a1:6b in network mk-kubernetes-upgrade-848420
	I0912 22:46:16.222341   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetSSHPort
	I0912 22:46:16.222521   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetSSHKeyPath
	I0912 22:46:16.222665   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetSSHKeyPath
	I0912 22:46:16.222793   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetSSHUsername
	I0912 22:46:16.222931   50078 main.go:141] libmachine: Using SSH client type: native
	I0912 22:46:16.223116   50078 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0912 22:46:16.223139   50078 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-848420' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-848420/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-848420' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0912 22:46:16.352635   50078 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0912 22:46:16.352674   50078 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19616-5891/.minikube CaCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19616-5891/.minikube}
	I0912 22:46:16.352700   50078 buildroot.go:174] setting up certificates
	I0912 22:46:16.352716   50078 provision.go:84] configureAuth start
	I0912 22:46:16.352728   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetMachineName
	I0912 22:46:16.353019   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetIP
	I0912 22:46:16.355782   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | domain kubernetes-upgrade-848420 has defined MAC address 52:54:00:8c:a1:6b in network mk-kubernetes-upgrade-848420
	I0912 22:46:16.356149   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:a1:6b", ip: ""} in network mk-kubernetes-upgrade-848420: {Iface:virbr1 ExpiryTime:2024-09-12 23:46:04 +0000 UTC Type:0 Mac:52:54:00:8c:a1:6b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:kubernetes-upgrade-848420 Clientid:01:52:54:00:8c:a1:6b}
	I0912 22:46:16.356179   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | domain kubernetes-upgrade-848420 has defined IP address 192.168.39.110 and MAC address 52:54:00:8c:a1:6b in network mk-kubernetes-upgrade-848420
	I0912 22:46:16.356303   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetSSHHostname
	I0912 22:46:16.358323   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | domain kubernetes-upgrade-848420 has defined MAC address 52:54:00:8c:a1:6b in network mk-kubernetes-upgrade-848420
	I0912 22:46:16.358603   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:a1:6b", ip: ""} in network mk-kubernetes-upgrade-848420: {Iface:virbr1 ExpiryTime:2024-09-12 23:46:04 +0000 UTC Type:0 Mac:52:54:00:8c:a1:6b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:kubernetes-upgrade-848420 Clientid:01:52:54:00:8c:a1:6b}
	I0912 22:46:16.358636   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | domain kubernetes-upgrade-848420 has defined IP address 192.168.39.110 and MAC address 52:54:00:8c:a1:6b in network mk-kubernetes-upgrade-848420
	I0912 22:46:16.358742   50078 provision.go:143] copyHostCerts
	I0912 22:46:16.358815   50078 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem, removing ...
	I0912 22:46:16.358833   50078 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem
	I0912 22:46:16.358913   50078 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem (1082 bytes)
	I0912 22:46:16.359050   50078 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem, removing ...
	I0912 22:46:16.359064   50078 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem
	I0912 22:46:16.359104   50078 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem (1123 bytes)
	I0912 22:46:16.359201   50078 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem, removing ...
	I0912 22:46:16.359212   50078 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem
	I0912 22:46:16.359244   50078 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem (1679 bytes)
	I0912 22:46:16.359329   50078 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-848420 san=[127.0.0.1 192.168.39.110 kubernetes-upgrade-848420 localhost minikube]
	I0912 22:46:16.510491   50078 provision.go:177] copyRemoteCerts
	I0912 22:46:16.510547   50078 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0912 22:46:16.510583   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetSSHHostname
	I0912 22:46:16.513098   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | domain kubernetes-upgrade-848420 has defined MAC address 52:54:00:8c:a1:6b in network mk-kubernetes-upgrade-848420
	I0912 22:46:16.513551   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:a1:6b", ip: ""} in network mk-kubernetes-upgrade-848420: {Iface:virbr1 ExpiryTime:2024-09-12 23:46:04 +0000 UTC Type:0 Mac:52:54:00:8c:a1:6b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:kubernetes-upgrade-848420 Clientid:01:52:54:00:8c:a1:6b}
	I0912 22:46:16.513582   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | domain kubernetes-upgrade-848420 has defined IP address 192.168.39.110 and MAC address 52:54:00:8c:a1:6b in network mk-kubernetes-upgrade-848420
	I0912 22:46:16.513767   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetSSHPort
	I0912 22:46:16.513971   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetSSHKeyPath
	I0912 22:46:16.514129   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetSSHUsername
	I0912 22:46:16.514248   50078 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/kubernetes-upgrade-848420/id_rsa Username:docker}
	I0912 22:46:16.599733   50078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0912 22:46:16.622033   50078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0912 22:46:16.643933   50078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0912 22:46:16.666085   50078 provision.go:87] duration metric: took 313.354285ms to configureAuth
	I0912 22:46:16.666112   50078 buildroot.go:189] setting minikube options for container-runtime
	I0912 22:46:16.666321   50078 config.go:182] Loaded profile config "kubernetes-upgrade-848420": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0912 22:46:16.666429   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetSSHHostname
	I0912 22:46:16.669475   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | domain kubernetes-upgrade-848420 has defined MAC address 52:54:00:8c:a1:6b in network mk-kubernetes-upgrade-848420
	I0912 22:46:16.669854   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:a1:6b", ip: ""} in network mk-kubernetes-upgrade-848420: {Iface:virbr1 ExpiryTime:2024-09-12 23:46:04 +0000 UTC Type:0 Mac:52:54:00:8c:a1:6b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:kubernetes-upgrade-848420 Clientid:01:52:54:00:8c:a1:6b}
	I0912 22:46:16.669950   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | domain kubernetes-upgrade-848420 has defined IP address 192.168.39.110 and MAC address 52:54:00:8c:a1:6b in network mk-kubernetes-upgrade-848420
	I0912 22:46:16.669996   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetSSHPort
	I0912 22:46:16.670196   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetSSHKeyPath
	I0912 22:46:16.670367   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetSSHKeyPath
	I0912 22:46:16.670566   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetSSHUsername
	I0912 22:46:16.670729   50078 main.go:141] libmachine: Using SSH client type: native
	I0912 22:46:16.670894   50078 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0912 22:46:16.670910   50078 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0912 22:46:16.894456   50078 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0912 22:46:16.894488   50078 main.go:141] libmachine: Checking connection to Docker...
	I0912 22:46:16.894500   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetURL
	I0912 22:46:16.895927   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | Using libvirt version 6000000
	I0912 22:46:16.898711   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | domain kubernetes-upgrade-848420 has defined MAC address 52:54:00:8c:a1:6b in network mk-kubernetes-upgrade-848420
	I0912 22:46:16.899227   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:a1:6b", ip: ""} in network mk-kubernetes-upgrade-848420: {Iface:virbr1 ExpiryTime:2024-09-12 23:46:04 +0000 UTC Type:0 Mac:52:54:00:8c:a1:6b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:kubernetes-upgrade-848420 Clientid:01:52:54:00:8c:a1:6b}
	I0912 22:46:16.899257   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | domain kubernetes-upgrade-848420 has defined IP address 192.168.39.110 and MAC address 52:54:00:8c:a1:6b in network mk-kubernetes-upgrade-848420
	I0912 22:46:16.899486   50078 main.go:141] libmachine: Docker is up and running!
	I0912 22:46:16.899500   50078 main.go:141] libmachine: Reticulating splines...
	I0912 22:46:16.899506   50078 client.go:171] duration metric: took 27.210649561s to LocalClient.Create
	I0912 22:46:16.899528   50078 start.go:167] duration metric: took 27.210711311s to libmachine.API.Create "kubernetes-upgrade-848420"
	I0912 22:46:16.899537   50078 start.go:293] postStartSetup for "kubernetes-upgrade-848420" (driver="kvm2")
	I0912 22:46:16.899547   50078 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0912 22:46:16.899563   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .DriverName
	I0912 22:46:16.899805   50078 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0912 22:46:16.899828   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetSSHHostname
	I0912 22:46:16.902363   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | domain kubernetes-upgrade-848420 has defined MAC address 52:54:00:8c:a1:6b in network mk-kubernetes-upgrade-848420
	I0912 22:46:16.902734   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:a1:6b", ip: ""} in network mk-kubernetes-upgrade-848420: {Iface:virbr1 ExpiryTime:2024-09-12 23:46:04 +0000 UTC Type:0 Mac:52:54:00:8c:a1:6b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:kubernetes-upgrade-848420 Clientid:01:52:54:00:8c:a1:6b}
	I0912 22:46:16.902782   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | domain kubernetes-upgrade-848420 has defined IP address 192.168.39.110 and MAC address 52:54:00:8c:a1:6b in network mk-kubernetes-upgrade-848420
	I0912 22:46:16.902906   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetSSHPort
	I0912 22:46:16.903121   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetSSHKeyPath
	I0912 22:46:16.903276   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetSSHUsername
	I0912 22:46:16.903377   50078 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/kubernetes-upgrade-848420/id_rsa Username:docker}
	I0912 22:46:16.988181   50078 ssh_runner.go:195] Run: cat /etc/os-release
	I0912 22:46:16.992509   50078 info.go:137] Remote host: Buildroot 2023.02.9
	I0912 22:46:16.992555   50078 filesync.go:126] Scanning /home/jenkins/minikube-integration/19616-5891/.minikube/addons for local assets ...
	I0912 22:46:16.992635   50078 filesync.go:126] Scanning /home/jenkins/minikube-integration/19616-5891/.minikube/files for local assets ...
	I0912 22:46:16.992706   50078 filesync.go:149] local asset: /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem -> 130832.pem in /etc/ssl/certs
	I0912 22:46:16.992839   50078 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0912 22:46:17.002480   50078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem --> /etc/ssl/certs/130832.pem (1708 bytes)
	I0912 22:46:17.025926   50078 start.go:296] duration metric: took 126.375647ms for postStartSetup
	I0912 22:46:17.025982   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetConfigRaw
	I0912 22:46:17.026570   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetIP
	I0912 22:46:17.029229   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | domain kubernetes-upgrade-848420 has defined MAC address 52:54:00:8c:a1:6b in network mk-kubernetes-upgrade-848420
	I0912 22:46:17.029528   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:a1:6b", ip: ""} in network mk-kubernetes-upgrade-848420: {Iface:virbr1 ExpiryTime:2024-09-12 23:46:04 +0000 UTC Type:0 Mac:52:54:00:8c:a1:6b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:kubernetes-upgrade-848420 Clientid:01:52:54:00:8c:a1:6b}
	I0912 22:46:17.029588   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | domain kubernetes-upgrade-848420 has defined IP address 192.168.39.110 and MAC address 52:54:00:8c:a1:6b in network mk-kubernetes-upgrade-848420
	I0912 22:46:17.029784   50078 profile.go:143] Saving config to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/kubernetes-upgrade-848420/config.json ...
	I0912 22:46:17.029984   50078 start.go:128] duration metric: took 27.359794493s to createHost
	I0912 22:46:17.030008   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetSSHHostname
	I0912 22:46:17.032790   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | domain kubernetes-upgrade-848420 has defined MAC address 52:54:00:8c:a1:6b in network mk-kubernetes-upgrade-848420
	I0912 22:46:17.033270   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:a1:6b", ip: ""} in network mk-kubernetes-upgrade-848420: {Iface:virbr1 ExpiryTime:2024-09-12 23:46:04 +0000 UTC Type:0 Mac:52:54:00:8c:a1:6b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:kubernetes-upgrade-848420 Clientid:01:52:54:00:8c:a1:6b}
	I0912 22:46:17.033301   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | domain kubernetes-upgrade-848420 has defined IP address 192.168.39.110 and MAC address 52:54:00:8c:a1:6b in network mk-kubernetes-upgrade-848420
	I0912 22:46:17.033535   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetSSHPort
	I0912 22:46:17.033756   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetSSHKeyPath
	I0912 22:46:17.033928   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetSSHKeyPath
	I0912 22:46:17.034085   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetSSHUsername
	I0912 22:46:17.034243   50078 main.go:141] libmachine: Using SSH client type: native
	I0912 22:46:17.034408   50078 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0912 22:46:17.034418   50078 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0912 22:46:17.146012   50078 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726181177.124429755
	
	I0912 22:46:17.146034   50078 fix.go:216] guest clock: 1726181177.124429755
	I0912 22:46:17.146044   50078 fix.go:229] Guest: 2024-09-12 22:46:17.124429755 +0000 UTC Remote: 2024-09-12 22:46:17.029996307 +0000 UTC m=+27.468301876 (delta=94.433448ms)
	I0912 22:46:17.146069   50078 fix.go:200] guest clock delta is within tolerance: 94.433448ms
	I0912 22:46:17.146075   50078 start.go:83] releasing machines lock for "kubernetes-upgrade-848420", held for 27.475946504s
	I0912 22:46:17.146097   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .DriverName
	I0912 22:46:17.146375   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetIP
	I0912 22:46:17.149223   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | domain kubernetes-upgrade-848420 has defined MAC address 52:54:00:8c:a1:6b in network mk-kubernetes-upgrade-848420
	I0912 22:46:17.149603   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:a1:6b", ip: ""} in network mk-kubernetes-upgrade-848420: {Iface:virbr1 ExpiryTime:2024-09-12 23:46:04 +0000 UTC Type:0 Mac:52:54:00:8c:a1:6b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:kubernetes-upgrade-848420 Clientid:01:52:54:00:8c:a1:6b}
	I0912 22:46:17.149646   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | domain kubernetes-upgrade-848420 has defined IP address 192.168.39.110 and MAC address 52:54:00:8c:a1:6b in network mk-kubernetes-upgrade-848420
	I0912 22:46:17.149901   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .DriverName
	I0912 22:46:17.150395   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .DriverName
	I0912 22:46:17.150611   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .DriverName
	I0912 22:46:17.150711   50078 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0912 22:46:17.150746   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetSSHHostname
	I0912 22:46:17.150879   50078 ssh_runner.go:195] Run: cat /version.json
	I0912 22:46:17.150904   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetSSHHostname
	I0912 22:46:17.153635   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | domain kubernetes-upgrade-848420 has defined MAC address 52:54:00:8c:a1:6b in network mk-kubernetes-upgrade-848420
	I0912 22:46:17.153706   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | domain kubernetes-upgrade-848420 has defined MAC address 52:54:00:8c:a1:6b in network mk-kubernetes-upgrade-848420
	I0912 22:46:17.153976   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:a1:6b", ip: ""} in network mk-kubernetes-upgrade-848420: {Iface:virbr1 ExpiryTime:2024-09-12 23:46:04 +0000 UTC Type:0 Mac:52:54:00:8c:a1:6b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:kubernetes-upgrade-848420 Clientid:01:52:54:00:8c:a1:6b}
	I0912 22:46:17.154017   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | domain kubernetes-upgrade-848420 has defined IP address 192.168.39.110 and MAC address 52:54:00:8c:a1:6b in network mk-kubernetes-upgrade-848420
	I0912 22:46:17.154069   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:a1:6b", ip: ""} in network mk-kubernetes-upgrade-848420: {Iface:virbr1 ExpiryTime:2024-09-12 23:46:04 +0000 UTC Type:0 Mac:52:54:00:8c:a1:6b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:kubernetes-upgrade-848420 Clientid:01:52:54:00:8c:a1:6b}
	I0912 22:46:17.154088   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | domain kubernetes-upgrade-848420 has defined IP address 192.168.39.110 and MAC address 52:54:00:8c:a1:6b in network mk-kubernetes-upgrade-848420
	I0912 22:46:17.154132   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetSSHPort
	I0912 22:46:17.154293   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetSSHPort
	I0912 22:46:17.154346   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetSSHKeyPath
	I0912 22:46:17.154481   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetSSHKeyPath
	I0912 22:46:17.154536   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetSSHUsername
	I0912 22:46:17.154716   50078 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/kubernetes-upgrade-848420/id_rsa Username:docker}
	I0912 22:46:17.154798   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetSSHUsername
	I0912 22:46:17.154938   50078 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/kubernetes-upgrade-848420/id_rsa Username:docker}
	I0912 22:46:17.239749   50078 ssh_runner.go:195] Run: systemctl --version
	I0912 22:46:17.268968   50078 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0912 22:46:17.426292   50078 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0912 22:46:17.432411   50078 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0912 22:46:17.432487   50078 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0912 22:46:17.448610   50078 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0912 22:46:17.448643   50078 start.go:495] detecting cgroup driver to use...
	I0912 22:46:17.448727   50078 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0912 22:46:17.465445   50078 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0912 22:46:17.479837   50078 docker.go:217] disabling cri-docker service (if available) ...
	I0912 22:46:17.479898   50078 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0912 22:46:17.494067   50078 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0912 22:46:17.507868   50078 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0912 22:46:17.627951   50078 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0912 22:46:17.776198   50078 docker.go:233] disabling docker service ...
	I0912 22:46:17.776269   50078 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0912 22:46:17.790209   50078 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0912 22:46:17.802347   50078 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0912 22:46:17.939983   50078 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0912 22:46:18.085266   50078 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0912 22:46:18.098748   50078 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0912 22:46:18.116416   50078 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0912 22:46:18.116477   50078 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 22:46:18.126234   50078 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0912 22:46:18.126310   50078 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 22:46:18.136246   50078 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 22:46:18.145905   50078 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
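	The two sed edits above pin cri-o's cgroup_manager to "cgroupfs", which has to match the cgroupDriver the kubelet is given later in this run. A minimal sketch for checking that both sides agree on the node (paths are the ones shown in this log; /var/lib/kubelet/config.yaml only exists once kubeadm has written it further below):

	  # cri-o side, set by the sed commands above
	  grep -E '^\s*cgroup_manager' /etc/crio/crio.conf.d/02-crio.conf
	  # kubelet side, written during the kubelet-start phase of kubeadm init
	  grep -E '^\s*cgroupDriver' /var/lib/kubelet/config.yaml
	  # both should say "cgroupfs"; a mismatch is one classic cause of the
	  # "kubelet isn't running or healthy" loop seen at the end of this test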
	I0912 22:46:18.155874   50078 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0912 22:46:18.166978   50078 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0912 22:46:18.176259   50078 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0912 22:46:18.176323   50078 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0912 22:46:18.189705   50078 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
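	The sysctl probe above fails with status 255 simply because br_netfilter is not loaded yet, so minikube loads the module and turns on IPv4 forwarding. A small follow-up check using only standard kernel tooling (no minikube-specific commands assumed):

	  # module should be present after the modprobe above
	  lsmod | grep br_netfilter
	  # the key that previously gave "No such file or directory" should now exist
	  sysctl net.bridge.bridge-nf-call-iptables
	  # forwarding flag written by the echo above
	  cat /proc/sys/net/ipv4/ip_forward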
	I0912 22:46:18.199757   50078 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 22:46:18.322995   50078 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0912 22:46:18.413511   50078 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0912 22:46:18.413603   50078 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0912 22:46:18.418409   50078 start.go:563] Will wait 60s for crictl version
	I0912 22:46:18.418490   50078 ssh_runner.go:195] Run: which crictl
	I0912 22:46:18.422058   50078 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0912 22:46:18.459950   50078 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0912 22:46:18.460052   50078 ssh_runner.go:195] Run: crio --version
	I0912 22:46:18.486567   50078 ssh_runner.go:195] Run: crio --version
	I0912 22:46:18.514270   50078 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0912 22:46:18.515477   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetIP
	I0912 22:46:18.518352   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | domain kubernetes-upgrade-848420 has defined MAC address 52:54:00:8c:a1:6b in network mk-kubernetes-upgrade-848420
	I0912 22:46:18.518776   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:a1:6b", ip: ""} in network mk-kubernetes-upgrade-848420: {Iface:virbr1 ExpiryTime:2024-09-12 23:46:04 +0000 UTC Type:0 Mac:52:54:00:8c:a1:6b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:kubernetes-upgrade-848420 Clientid:01:52:54:00:8c:a1:6b}
	I0912 22:46:18.518809   50078 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | domain kubernetes-upgrade-848420 has defined IP address 192.168.39.110 and MAC address 52:54:00:8c:a1:6b in network mk-kubernetes-upgrade-848420
	I0912 22:46:18.518980   50078 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0912 22:46:18.522902   50078 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0912 22:46:18.535420   50078 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-848420 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.20.0 ClusterName:kubernetes-upgrade-848420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimi
zations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0912 22:46:18.535543   50078 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0912 22:46:18.535605   50078 ssh_runner.go:195] Run: sudo crictl images --output json
	I0912 22:46:18.571228   50078 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0912 22:46:18.571306   50078 ssh_runner.go:195] Run: which lz4
	I0912 22:46:18.575351   50078 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0912 22:46:18.579294   50078 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0912 22:46:18.579339   50078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0912 22:46:20.135699   50078 crio.go:462] duration metric: took 1.560390635s to copy over tarball
	I0912 22:46:20.135794   50078 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0912 22:46:22.719158   50078 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.583321949s)
	I0912 22:46:22.719190   50078 crio.go:469] duration metric: took 2.583453213s to extract the tarball
	I0912 22:46:22.719205   50078 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0912 22:46:22.762510   50078 ssh_runner.go:195] Run: sudo crictl images --output json
	I0912 22:46:22.813533   50078 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
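	Even after extracting the preload tarball, minikube still reports the v1.20.0 images as missing, which is what triggers the per-image LoadCachedImages path below. A quick, hedged way to see what cri-o actually has at this point (crictl only, as already used in the log above):

	  # list what the runtime knows about; the v1.20.0 control-plane images are absent here
	  sudo crictl images | grep -E 'kube-apiserver|kube-controller-manager|kube-scheduler|kube-proxy|etcd|coredns'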
	I0912 22:46:22.813555   50078 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0912 22:46:22.813646   50078 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 22:46:22.813690   50078 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0912 22:46:22.813734   50078 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0912 22:46:22.813729   50078 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0912 22:46:22.813801   50078 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0912 22:46:22.813814   50078 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0912 22:46:22.813870   50078 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0912 22:46:22.813930   50078 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0912 22:46:22.815435   50078 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0912 22:46:22.815438   50078 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0912 22:46:22.815434   50078 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0912 22:46:22.815439   50078 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0912 22:46:22.815503   50078 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0912 22:46:22.815437   50078 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0912 22:46:22.815471   50078 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0912 22:46:22.815630   50078 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 22:46:23.052325   50078 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0912 22:46:23.063067   50078 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0912 22:46:23.088148   50078 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0912 22:46:23.094597   50078 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0912 22:46:23.094647   50078 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0912 22:46:23.094698   50078 ssh_runner.go:195] Run: which crictl
	I0912 22:46:23.098134   50078 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0912 22:46:23.098975   50078 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0912 22:46:23.112493   50078 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0912 22:46:23.135526   50078 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0912 22:46:23.135586   50078 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0912 22:46:23.135635   50078 ssh_runner.go:195] Run: which crictl
	I0912 22:46:23.143572   50078 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0912 22:46:23.185357   50078 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0912 22:46:23.185405   50078 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0912 22:46:23.185460   50078 ssh_runner.go:195] Run: which crictl
	I0912 22:46:23.185608   50078 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0912 22:46:23.238650   50078 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0912 22:46:23.238702   50078 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0912 22:46:23.238736   50078 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0912 22:46:23.238753   50078 ssh_runner.go:195] Run: which crictl
	I0912 22:46:23.238772   50078 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0912 22:46:23.238818   50078 ssh_runner.go:195] Run: which crictl
	I0912 22:46:23.240813   50078 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0912 22:46:23.240878   50078 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0912 22:46:23.240842   50078 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0912 22:46:23.240923   50078 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0912 22:46:23.240950   50078 ssh_runner.go:195] Run: which crictl
	I0912 22:46:23.240957   50078 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0912 22:46:23.240975   50078 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0912 22:46:23.241000   50078 ssh_runner.go:195] Run: which crictl
	I0912 22:46:23.275390   50078 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0912 22:46:23.275422   50078 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0912 22:46:23.275462   50078 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0912 22:46:23.314623   50078 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0912 22:46:23.329918   50078 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0912 22:46:23.329943   50078 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0912 22:46:23.329972   50078 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0912 22:46:23.393726   50078 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0912 22:46:23.393738   50078 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0912 22:46:23.393816   50078 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0912 22:46:23.491318   50078 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0912 22:46:23.491409   50078 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0912 22:46:23.491433   50078 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0912 22:46:23.491487   50078 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0912 22:46:23.531744   50078 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0912 22:46:23.538781   50078 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0912 22:46:23.538800   50078 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0912 22:46:23.623313   50078 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0912 22:46:23.623347   50078 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0912 22:46:23.623416   50078 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0912 22:46:23.623424   50078 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0912 22:46:23.637893   50078 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0912 22:46:23.664031   50078 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0912 22:46:23.690066   50078 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0912 22:46:23.690113   50078 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0912 22:46:23.910948   50078 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 22:46:24.051228   50078 cache_images.go:92] duration metric: took 1.237649101s to LoadCachedImages
	W0912 22:46:24.051311   50078 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0912 22:46:24.051329   50078 kubeadm.go:934] updating node { 192.168.39.110 8443 v1.20.0 crio true true} ...
	I0912 22:46:24.051461   50078 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-848420 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.110
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-848420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
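	The [Unit]/[Service] fragment above is installed as a kubelet drop-in a few lines below; a short way to confirm systemd merged it, using nothing beyond standard systemctl calls:

	  # show the effective kubelet unit including the 10-kubeadm.conf drop-in written below
	  systemctl cat kubelet
	  # after the daemon-reload / start that follow in this log
	  systemctl status kubelet --no-pager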
	I0912 22:46:24.051551   50078 ssh_runner.go:195] Run: crio config
	I0912 22:46:24.099034   50078 cni.go:84] Creating CNI manager for ""
	I0912 22:46:24.099061   50078 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0912 22:46:24.099074   50078 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0912 22:46:24.099098   50078 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.110 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-848420 NodeName:kubernetes-upgrade-848420 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.110"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.110 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0912 22:46:24.099276   50078 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.110
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-848420"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.110
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.110"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0912 22:46:24.099377   50078 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0912 22:46:24.109808   50078 binaries.go:44] Found k8s binaries, skipping transfer
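	With the v1.20.0 binaries confirmed present above and the kubeadm config dumped a few lines earlier, the config can be exercised without bootstrapping anything. This is a sketch only, assuming the same staged binaries and the /var/tmp/minikube/kubeadm.yaml path that the real init command uses later in this log:

	  # render what kubeadm would do with the generated config, without touching the node
	  sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
	    kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run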
	I0912 22:46:24.109891   50078 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0912 22:46:24.119916   50078 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I0912 22:46:24.137576   50078 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0912 22:46:24.155512   50078 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I0912 22:46:24.173178   50078 ssh_runner.go:195] Run: grep 192.168.39.110	control-plane.minikube.internal$ /etc/hosts
	I0912 22:46:24.177299   50078 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.110	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0912 22:46:24.191520   50078 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 22:46:24.310154   50078 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0912 22:46:24.337448   50078 certs.go:68] Setting up /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/kubernetes-upgrade-848420 for IP: 192.168.39.110
	I0912 22:46:24.337470   50078 certs.go:194] generating shared ca certs ...
	I0912 22:46:24.337489   50078 certs.go:226] acquiring lock for ca certs: {Name:mk5e19f2c2757edc3fcb6b43a39efee0885b349c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 22:46:24.337699   50078 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.key
	I0912 22:46:24.337762   50078 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.key
	I0912 22:46:24.337777   50078 certs.go:256] generating profile certs ...
	I0912 22:46:24.337869   50078 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/kubernetes-upgrade-848420/client.key
	I0912 22:46:24.337888   50078 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/kubernetes-upgrade-848420/client.crt with IP's: []
	I0912 22:46:24.416851   50078 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/kubernetes-upgrade-848420/client.crt ...
	I0912 22:46:24.416879   50078 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/kubernetes-upgrade-848420/client.crt: {Name:mkd409102718a1ce8d3c4626c74a6e1ba7b39c2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 22:46:24.417075   50078 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/kubernetes-upgrade-848420/client.key ...
	I0912 22:46:24.417094   50078 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/kubernetes-upgrade-848420/client.key: {Name:mk1460808f6a28390923dbaa4f2ee6460cfa6366 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 22:46:24.417199   50078 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/kubernetes-upgrade-848420/apiserver.key.56f551ba
	I0912 22:46:24.417217   50078 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/kubernetes-upgrade-848420/apiserver.crt.56f551ba with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.110]
	I0912 22:46:24.569645   50078 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/kubernetes-upgrade-848420/apiserver.crt.56f551ba ...
	I0912 22:46:24.569675   50078 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/kubernetes-upgrade-848420/apiserver.crt.56f551ba: {Name:mkd19063d7a9ba1cad398d783e82367f1f54102f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 22:46:24.569875   50078 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/kubernetes-upgrade-848420/apiserver.key.56f551ba ...
	I0912 22:46:24.569895   50078 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/kubernetes-upgrade-848420/apiserver.key.56f551ba: {Name:mkba31f4f0ccff860fe41884821077d8d96c7270 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 22:46:24.569995   50078 certs.go:381] copying /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/kubernetes-upgrade-848420/apiserver.crt.56f551ba -> /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/kubernetes-upgrade-848420/apiserver.crt
	I0912 22:46:24.570111   50078 certs.go:385] copying /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/kubernetes-upgrade-848420/apiserver.key.56f551ba -> /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/kubernetes-upgrade-848420/apiserver.key
	I0912 22:46:24.570200   50078 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/kubernetes-upgrade-848420/proxy-client.key
	I0912 22:46:24.570231   50078 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/kubernetes-upgrade-848420/proxy-client.crt with IP's: []
	I0912 22:46:24.731840   50078 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/kubernetes-upgrade-848420/proxy-client.crt ...
	I0912 22:46:24.731876   50078 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/kubernetes-upgrade-848420/proxy-client.crt: {Name:mk396fdc068fb7e8c06ab297985eb94d0039ca75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 22:46:24.732231   50078 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/kubernetes-upgrade-848420/proxy-client.key ...
	I0912 22:46:24.732261   50078 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/kubernetes-upgrade-848420/proxy-client.key: {Name:mkbf9eaf0d9e1fdd6e9bf24f39d47cb28511e41d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 22:46:24.732521   50078 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083.pem (1338 bytes)
	W0912 22:46:24.732568   50078 certs.go:480] ignoring /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083_empty.pem, impossibly tiny 0 bytes
	I0912 22:46:24.732586   50078 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem (1679 bytes)
	I0912 22:46:24.732622   50078 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem (1082 bytes)
	I0912 22:46:24.732651   50078 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem (1123 bytes)
	I0912 22:46:24.732682   50078 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem (1679 bytes)
	I0912 22:46:24.732730   50078 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem (1708 bytes)
	I0912 22:46:24.733539   50078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0912 22:46:24.761194   50078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0912 22:46:24.786472   50078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0912 22:46:24.811733   50078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0912 22:46:24.836179   50078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/kubernetes-upgrade-848420/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0912 22:46:24.861455   50078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/kubernetes-upgrade-848420/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0912 22:46:24.887237   50078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/kubernetes-upgrade-848420/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0912 22:46:24.914156   50078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/kubernetes-upgrade-848420/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0912 22:46:24.938807   50078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem --> /usr/share/ca-certificates/130832.pem (1708 bytes)
	I0912 22:46:24.963999   50078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0912 22:46:24.988390   50078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083.pem --> /usr/share/ca-certificates/13083.pem (1338 bytes)
	I0912 22:46:25.012863   50078 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
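	The apiserver certificate copied above was generated for the IPs 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.39.110; a one-liner to confirm the SANs on the node, using only openssl and the destination path from the scp above:

	  # print the SANs of the freshly copied apiserver certificate
	  sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt | grep -A1 'Subject Alternative Name'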
	I0912 22:46:25.029386   50078 ssh_runner.go:195] Run: openssl version
	I0912 22:46:25.035148   50078 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0912 22:46:25.048319   50078 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0912 22:46:25.053268   50078 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 12 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0912 22:46:25.053336   50078 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0912 22:46:25.059479   50078 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0912 22:46:25.070840   50078 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13083.pem && ln -fs /usr/share/ca-certificates/13083.pem /etc/ssl/certs/13083.pem"
	I0912 22:46:25.082211   50078 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13083.pem
	I0912 22:46:25.086880   50078 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 12 21:47 /usr/share/ca-certificates/13083.pem
	I0912 22:46:25.086945   50078 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13083.pem
	I0912 22:46:25.092591   50078 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13083.pem /etc/ssl/certs/51391683.0"
	I0912 22:46:25.103897   50078 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130832.pem && ln -fs /usr/share/ca-certificates/130832.pem /etc/ssl/certs/130832.pem"
	I0912 22:46:25.114571   50078 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130832.pem
	I0912 22:46:25.119067   50078 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 12 21:47 /usr/share/ca-certificates/130832.pem
	I0912 22:46:25.119127   50078 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130832.pem
	I0912 22:46:25.125053   50078 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130832.pem /etc/ssl/certs/3ec20f2e.0"
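	The test -L / ln -fs pairs above create the hash-named links OpenSSL uses to locate trusted CAs: the link name is the certificate's subject hash with a .0 suffix, which is why minikubeCA.pem ends up behind b5213941.0. A minimal reproduction of that mapping:

	  # the hash printed here is the basename of the /etc/ssl/certs/<hash>.0 link created above
	  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # -> b5213941
	  ls -l /etc/ssl/certs/b5213941.0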
	I0912 22:46:25.141216   50078 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0912 22:46:25.147201   50078 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0912 22:46:25.147261   50078 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-848420 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.20.0 ClusterName:kubernetes-upgrade-848420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizat
ions:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 22:46:25.147349   50078 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0912 22:46:25.147408   50078 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0912 22:46:25.204298   50078 cri.go:89] found id: ""
	I0912 22:46:25.204382   50078 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0912 22:46:25.214661   50078 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0912 22:46:25.225769   50078 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0912 22:46:25.238084   50078 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0912 22:46:25.238108   50078 kubeadm.go:157] found existing configuration files:
	
	I0912 22:46:25.238162   50078 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0912 22:46:25.249243   50078 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0912 22:46:25.249333   50078 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0912 22:46:25.259366   50078 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0912 22:46:25.272679   50078 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0912 22:46:25.272744   50078 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0912 22:46:25.281899   50078 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0912 22:46:25.290428   50078 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0912 22:46:25.290491   50078 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0912 22:46:25.299553   50078 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0912 22:46:25.308239   50078 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0912 22:46:25.308306   50078 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0912 22:46:25.317219   50078 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0912 22:46:25.573419   50078 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0912 22:48:23.272091   50078 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0912 22:48:23.272259   50078 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0912 22:48:23.275736   50078 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0912 22:48:23.275810   50078 kubeadm.go:310] [preflight] Running pre-flight checks
	I0912 22:48:23.275898   50078 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0912 22:48:23.276015   50078 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0912 22:48:23.276135   50078 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0912 22:48:23.276213   50078 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0912 22:48:23.312535   50078 out.go:235]   - Generating certificates and keys ...
	I0912 22:48:23.312681   50078 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0912 22:48:23.312759   50078 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0912 22:48:23.312859   50078 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0912 22:48:23.312954   50078 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0912 22:48:23.313042   50078 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0912 22:48:23.313127   50078 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0912 22:48:23.313210   50078 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0912 22:48:23.313387   50078 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-848420 localhost] and IPs [192.168.39.110 127.0.0.1 ::1]
	I0912 22:48:23.313462   50078 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0912 22:48:23.313638   50078 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-848420 localhost] and IPs [192.168.39.110 127.0.0.1 ::1]
	I0912 22:48:23.313723   50078 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0912 22:48:23.313801   50078 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0912 22:48:23.313858   50078 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0912 22:48:23.313927   50078 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0912 22:48:23.313990   50078 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0912 22:48:23.314057   50078 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0912 22:48:23.314136   50078 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0912 22:48:23.314204   50078 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0912 22:48:23.314334   50078 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0912 22:48:23.314438   50078 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0912 22:48:23.314487   50078 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0912 22:48:23.314575   50078 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0912 22:48:23.751149   50078 out.go:235]   - Booting up control plane ...
	I0912 22:48:23.751290   50078 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0912 22:48:23.751412   50078 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0912 22:48:23.751549   50078 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0912 22:48:23.751668   50078 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0912 22:48:23.751885   50078 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0912 22:48:23.751957   50078 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0912 22:48:23.752035   50078 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0912 22:48:23.752291   50078 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0912 22:48:23.752381   50078 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0912 22:48:23.752663   50078 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0912 22:48:23.752788   50078 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0912 22:48:23.753056   50078 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0912 22:48:23.753171   50078 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0912 22:48:23.753416   50078 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0912 22:48:23.753511   50078 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0912 22:48:23.753776   50078 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0912 22:48:23.753787   50078 kubeadm.go:310] 
	I0912 22:48:23.753858   50078 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0912 22:48:23.753916   50078 kubeadm.go:310] 		timed out waiting for the condition
	I0912 22:48:23.753926   50078 kubeadm.go:310] 
	I0912 22:48:23.753974   50078 kubeadm.go:310] 	This error is likely caused by:
	I0912 22:48:23.754021   50078 kubeadm.go:310] 		- The kubelet is not running
	I0912 22:48:23.754138   50078 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0912 22:48:23.754154   50078 kubeadm.go:310] 
	I0912 22:48:23.754268   50078 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0912 22:48:23.754319   50078 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0912 22:48:23.754369   50078 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0912 22:48:23.754378   50078 kubeadm.go:310] 
	I0912 22:48:23.754497   50078 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0912 22:48:23.754597   50078 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0912 22:48:23.754606   50078 kubeadm.go:310] 
	I0912 22:48:23.754719   50078 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0912 22:48:23.754825   50078 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0912 22:48:23.754920   50078 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0912 22:48:23.755013   50078 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	W0912 22:48:23.755175   50078 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-848420 localhost] and IPs [192.168.39.110 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-848420 localhost] and IPs [192.168.39.110 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-848420 localhost] and IPs [192.168.39.110 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-848420 localhost] and IPs [192.168.39.110 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0912 22:48:23.755224   50078 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0912 22:48:23.755484   50078 kubeadm.go:310] 
	I0912 22:48:24.248107   50078 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 22:48:24.262279   50078 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0912 22:48:24.271959   50078 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0912 22:48:24.271989   50078 kubeadm.go:157] found existing configuration files:
	
	I0912 22:48:24.272037   50078 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0912 22:48:24.281822   50078 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0912 22:48:24.281898   50078 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0912 22:48:24.291548   50078 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0912 22:48:24.300452   50078 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0912 22:48:24.300520   50078 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0912 22:48:24.312937   50078 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0912 22:48:24.325456   50078 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0912 22:48:24.325526   50078 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0912 22:48:24.337055   50078 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0912 22:48:24.347333   50078 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0912 22:48:24.347403   50078 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0912 22:48:24.360173   50078 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0912 22:48:24.455838   50078 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0912 22:48:24.455937   50078 kubeadm.go:310] [preflight] Running pre-flight checks
	I0912 22:48:24.627657   50078 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0912 22:48:24.627830   50078 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0912 22:48:24.627973   50078 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0912 22:48:24.824327   50078 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0912 22:48:25.007011   50078 out.go:235]   - Generating certificates and keys ...
	I0912 22:48:25.007198   50078 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0912 22:48:25.007391   50078 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0912 22:48:25.007518   50078 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0912 22:48:25.007608   50078 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0912 22:48:25.007714   50078 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0912 22:48:25.007792   50078 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0912 22:48:25.007891   50078 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0912 22:48:25.007989   50078 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0912 22:48:25.008089   50078 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0912 22:48:25.008206   50078 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0912 22:48:25.008267   50078 kubeadm.go:310] [certs] Using the existing "sa" key
	I0912 22:48:25.008342   50078 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0912 22:48:25.008414   50078 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0912 22:48:25.087844   50078 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0912 22:48:25.369658   50078 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0912 22:48:25.534977   50078 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0912 22:48:25.557550   50078 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0912 22:48:25.557753   50078 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0912 22:48:25.557882   50078 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0912 22:48:25.722354   50078 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0912 22:48:25.937915   50078 out.go:235]   - Booting up control plane ...
	I0912 22:48:25.938080   50078 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0912 22:48:25.938192   50078 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0912 22:48:25.938296   50078 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0912 22:48:25.938445   50078 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0912 22:48:25.938712   50078 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0912 22:49:05.737880   50078 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0912 22:49:05.738082   50078 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0912 22:49:05.738311   50078 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0912 22:49:10.738933   50078 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0912 22:49:10.739234   50078 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0912 22:49:20.739413   50078 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0912 22:49:20.739659   50078 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0912 22:49:40.738635   50078 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0912 22:49:40.738803   50078 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0912 22:50:20.738421   50078 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0912 22:50:20.738880   50078 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0912 22:50:20.738922   50078 kubeadm.go:310] 
	I0912 22:50:20.739014   50078 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0912 22:50:20.739098   50078 kubeadm.go:310] 		timed out waiting for the condition
	I0912 22:50:20.739121   50078 kubeadm.go:310] 
	I0912 22:50:20.739167   50078 kubeadm.go:310] 	This error is likely caused by:
	I0912 22:50:20.739212   50078 kubeadm.go:310] 		- The kubelet is not running
	I0912 22:50:20.739345   50078 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0912 22:50:20.739357   50078 kubeadm.go:310] 
	I0912 22:50:20.739491   50078 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0912 22:50:20.739540   50078 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0912 22:50:20.739587   50078 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0912 22:50:20.739618   50078 kubeadm.go:310] 
	I0912 22:50:20.739752   50078 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0912 22:50:20.739855   50078 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0912 22:50:20.739865   50078 kubeadm.go:310] 
	I0912 22:50:20.740005   50078 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0912 22:50:20.740132   50078 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0912 22:50:20.740243   50078 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0912 22:50:20.740335   50078 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0912 22:50:20.740342   50078 kubeadm.go:310] 
	I0912 22:50:20.741142   50078 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0912 22:50:20.741271   50078 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0912 22:50:20.741367   50078 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0912 22:50:20.741434   50078 kubeadm.go:394] duration metric: took 3m55.594176497s to StartCluster
	I0912 22:50:20.741482   50078 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 22:50:20.741690   50078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 22:50:20.792836   50078 cri.go:89] found id: ""
	I0912 22:50:20.792862   50078 logs.go:276] 0 containers: []
	W0912 22:50:20.792871   50078 logs.go:278] No container was found matching "kube-apiserver"
	I0912 22:50:20.792880   50078 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 22:50:20.792941   50078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 22:50:20.846036   50078 cri.go:89] found id: ""
	I0912 22:50:20.846063   50078 logs.go:276] 0 containers: []
	W0912 22:50:20.846073   50078 logs.go:278] No container was found matching "etcd"
	I0912 22:50:20.846081   50078 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 22:50:20.846138   50078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 22:50:20.894047   50078 cri.go:89] found id: ""
	I0912 22:50:20.894071   50078 logs.go:276] 0 containers: []
	W0912 22:50:20.894081   50078 logs.go:278] No container was found matching "coredns"
	I0912 22:50:20.894087   50078 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 22:50:20.894145   50078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 22:50:20.936733   50078 cri.go:89] found id: ""
	I0912 22:50:20.936759   50078 logs.go:276] 0 containers: []
	W0912 22:50:20.936766   50078 logs.go:278] No container was found matching "kube-scheduler"
	I0912 22:50:20.936772   50078 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 22:50:20.936830   50078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 22:50:20.981078   50078 cri.go:89] found id: ""
	I0912 22:50:20.981105   50078 logs.go:276] 0 containers: []
	W0912 22:50:20.981115   50078 logs.go:278] No container was found matching "kube-proxy"
	I0912 22:50:20.981123   50078 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 22:50:20.981177   50078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 22:50:21.017501   50078 cri.go:89] found id: ""
	I0912 22:50:21.017525   50078 logs.go:276] 0 containers: []
	W0912 22:50:21.017532   50078 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 22:50:21.017538   50078 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 22:50:21.017578   50078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 22:50:21.051547   50078 cri.go:89] found id: ""
	I0912 22:50:21.051579   50078 logs.go:276] 0 containers: []
	W0912 22:50:21.051587   50078 logs.go:278] No container was found matching "kindnet"
	I0912 22:50:21.051597   50078 logs.go:123] Gathering logs for dmesg ...
	I0912 22:50:21.051616   50078 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 22:50:21.065589   50078 logs.go:123] Gathering logs for describe nodes ...
	I0912 22:50:21.065644   50078 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 22:50:21.177243   50078 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 22:50:21.177275   50078 logs.go:123] Gathering logs for CRI-O ...
	I0912 22:50:21.177297   50078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 22:50:21.304104   50078 logs.go:123] Gathering logs for container status ...
	I0912 22:50:21.304140   50078 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 22:50:21.367606   50078 logs.go:123] Gathering logs for kubelet ...
	I0912 22:50:21.367646   50078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0912 22:50:21.425707   50078 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0912 22:50:21.425780   50078 out.go:270] * 
	* 
	W0912 22:50:21.425850   50078 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0912 22:50:21.425867   50078 out.go:270] * 
	* 
	W0912 22:50:21.426763   50078 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0912 22:50:21.430561   50078 out.go:201] 
	W0912 22:50:21.432093   50078 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtime's CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0912 22:50:21.432166   50078 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0912 22:50:21.432199   50078 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0912 22:50:21.433677   50078 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-848420 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
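Exit status 109 here accompanies the K8S_KUBELET_NOT_RUNNING error shown in the stderr above: the kubelet never became healthy, so kubeadm timed out waiting for the control plane. A minimal troubleshooting sketch, using only the commands minikube and kubeadm themselves suggest (the profile name and binary path are taken from this run):

    # inspect the kubelet inside the minikube VM
    out/minikube-linux-amd64 -p kubernetes-upgrade-848420 ssh -- sudo systemctl status kubelet
    out/minikube-linux-amd64 -p kubernetes-upgrade-848420 ssh -- sudo journalctl -xeu kubelet
    # list control-plane containers via cri-o, as suggested by kubeadm
    out/minikube-linux-amd64 -p kubernetes-upgrade-848420 ssh -- sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a
    # retry with the cgroup-driver override suggested by minikube
    out/minikube-linux-amd64 start -p kubernetes-upgrade-848420 --memory=2200 --kubernetes-version=v1.20.0 \
      --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd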
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-848420
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-848420: (1.492939014s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-848420 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-848420 status --format={{.Host}}: exit status 7 (76.477139ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
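In this run, exit status 7 from the status command simply reflects a stopped host (the stdout above reads Stopped), which the test explicitly treats as acceptable ("may be ok"). A small sketch of how a caller might distinguish that case, assuming the same binary and profile name:

    host=$(out/minikube-linux-amd64 -p kubernetes-upgrade-848420 status --format='{{.Host}}')
    code=$?
    if [ "$code" -eq 7 ] && [ "$host" = "Stopped" ]; then
      echo "host is stopped (exit $code); safe to proceed with the upgrade start"
    fi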
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-848420 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-848420 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m26.823657327s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-848420 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-848420 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-848420 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (121.827046ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-848420] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19616
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19616-5891/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19616-5891/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-848420
	    minikube start -p kubernetes-upgrade-848420 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8484202 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-848420 --kubernetes-version=v1.31.1
	    

                                                
                                                
** /stderr **
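The K8S_DOWNGRADE_UNSUPPORTED exit (status 106) is the behaviour this step is asserting: minikube refuses to downgrade an existing v1.31.1 cluster in place. Outside the test, option 1 of the suggestion above amounts to recreating the profile at the older version; a sketch, with the driver and runtime flags mirroring the ones used elsewhere in this run:

    out/minikube-linux-amd64 delete -p kubernetes-upgrade-848420
    out/minikube-linux-amd64 start -p kubernetes-upgrade-848420 --kubernetes-version=v1.20.0 \
      --driver=kvm2 --container-runtime=crio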
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-848420 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-848420 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (42.81672717s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-09-12 22:52:32.902187353 +0000 UTC m=+5019.750570282
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-848420 -n kubernetes-upgrade-848420
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-848420 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-848420 logs -n 25: (1.573207028s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-938961 sudo crio            | cilium-938961                | jenkins | v1.34.0 | 12 Sep 24 22:49 UTC |                     |
	|         | config                                |                              |         |         |                     |                     |
	| delete  | -p cilium-938961                      | cilium-938961                | jenkins | v1.34.0 | 12 Sep 24 22:49 UTC | 12 Sep 24 22:49 UTC |
	| start   | -p force-systemd-env-633513           | force-systemd-env-633513     | jenkins | v1.34.0 | 12 Sep 24 22:49 UTC | 12 Sep 24 22:50 UTC |
	|         | --memory=2048                         |                              |         |         |                     |                     |
	|         | --alsologtostderr                     |                              |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                    |                              |         |         |                     |                     |
	|         | --container-runtime=crio              |                              |         |         |                     |                     |
	| start   | -p NoKubernetes-204793                | NoKubernetes-204793          | jenkins | v1.34.0 | 12 Sep 24 22:49 UTC | 12 Sep 24 22:50 UTC |
	|         | --no-kubernetes --driver=kvm2         |                              |         |         |                     |                     |
	|         | --container-runtime=crio              |                              |         |         |                     |                     |
	| ssh     | force-systemd-flag-042278 ssh cat     | force-systemd-flag-042278    | jenkins | v1.34.0 | 12 Sep 24 22:50 UTC | 12 Sep 24 22:50 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf    |                              |         |         |                     |                     |
	| delete  | -p force-systemd-flag-042278          | force-systemd-flag-042278    | jenkins | v1.34.0 | 12 Sep 24 22:50 UTC | 12 Sep 24 22:50 UTC |
	| start   | -p cert-expiration-408779             | cert-expiration-408779       | jenkins | v1.34.0 | 12 Sep 24 22:50 UTC | 12 Sep 24 22:51 UTC |
	|         | --memory=2048                         |                              |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                         |                              |         |         |                     |                     |
	|         | --container-runtime=crio              |                              |         |         |                     |                     |
	| delete  | -p NoKubernetes-204793                | NoKubernetes-204793          | jenkins | v1.34.0 | 12 Sep 24 22:50 UTC | 12 Sep 24 22:50 UTC |
	| stop    | -p kubernetes-upgrade-848420          | kubernetes-upgrade-848420    | jenkins | v1.34.0 | 12 Sep 24 22:50 UTC | 12 Sep 24 22:50 UTC |
	| start   | -p NoKubernetes-204793                | NoKubernetes-204793          | jenkins | v1.34.0 | 12 Sep 24 22:50 UTC | 12 Sep 24 22:51 UTC |
	|         | --no-kubernetes --driver=kvm2         |                              |         |         |                     |                     |
	|         | --container-runtime=crio              |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-848420          | kubernetes-upgrade-848420    | jenkins | v1.34.0 | 12 Sep 24 22:50 UTC | 12 Sep 24 22:51 UTC |
	|         | --memory=2200                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1          |                              |         |         |                     |                     |
	|         | --alsologtostderr                     |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                              |         |         |                     |                     |
	|         | --container-runtime=crio              |                              |         |         |                     |                     |
	| delete  | -p force-systemd-env-633513           | force-systemd-env-633513     | jenkins | v1.34.0 | 12 Sep 24 22:50 UTC | 12 Sep 24 22:50 UTC |
	| start   | -p cert-options-689966                | cert-options-689966          | jenkins | v1.34.0 | 12 Sep 24 22:50 UTC | 12 Sep 24 22:52 UTC |
	|         | --memory=2048                         |                              |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                              |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                              |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                              |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                              |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                              |         |         |                     |                     |
	|         | --driver=kvm2                         |                              |         |         |                     |                     |
	|         | --container-runtime=crio              |                              |         |         |                     |                     |
	| ssh     | -p NoKubernetes-204793 sudo           | NoKubernetes-204793          | jenkins | v1.34.0 | 12 Sep 24 22:51 UTC |                     |
	|         | systemctl is-active --quiet           |                              |         |         |                     |                     |
	|         | service kubelet                       |                              |         |         |                     |                     |
	| stop    | -p NoKubernetes-204793                | NoKubernetes-204793          | jenkins | v1.34.0 | 12 Sep 24 22:51 UTC | 12 Sep 24 22:51 UTC |
	| start   | -p NoKubernetes-204793                | NoKubernetes-204793          | jenkins | v1.34.0 | 12 Sep 24 22:51 UTC | 12 Sep 24 22:52 UTC |
	|         | --driver=kvm2                         |                              |         |         |                     |                     |
	|         | --container-runtime=crio              |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-848420          | kubernetes-upgrade-848420    | jenkins | v1.34.0 | 12 Sep 24 22:51 UTC |                     |
	|         | --memory=2200                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                              |         |         |                     |                     |
	|         | --driver=kvm2                         |                              |         |         |                     |                     |
	|         | --container-runtime=crio              |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-848420          | kubernetes-upgrade-848420    | jenkins | v1.34.0 | 12 Sep 24 22:51 UTC | 12 Sep 24 22:52 UTC |
	|         | --memory=2200                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1          |                              |         |         |                     |                     |
	|         | --alsologtostderr                     |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                              |         |         |                     |                     |
	|         | --container-runtime=crio              |                              |         |         |                     |                     |
	| ssh     | cert-options-689966 ssh               | cert-options-689966          | jenkins | v1.34.0 | 12 Sep 24 22:52 UTC | 12 Sep 24 22:52 UTC |
	|         | openssl x509 -text -noout -in         |                              |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                              |         |         |                     |                     |
	| ssh     | -p cert-options-689966 -- sudo        | cert-options-689966          | jenkins | v1.34.0 | 12 Sep 24 22:52 UTC | 12 Sep 24 22:52 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                              |         |         |                     |                     |
	| delete  | -p cert-options-689966                | cert-options-689966          | jenkins | v1.34.0 | 12 Sep 24 22:52 UTC | 12 Sep 24 22:52 UTC |
	| start   | -p old-k8s-version-642238             | old-k8s-version-642238       | jenkins | v1.34.0 | 12 Sep 24 22:52 UTC |                     |
	|         | --memory=2200                         |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true         |                              |         |         |                     |                     |
	|         | --kvm-network=default                 |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system         |                              |         |         |                     |                     |
	|         | --disable-driver-mounts               |                              |         |         |                     |                     |
	|         | --keep-context=false                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                         |                              |         |         |                     |                     |
	|         | --container-runtime=crio              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                              |         |         |                     |                     |
	| ssh     | -p NoKubernetes-204793 sudo           | NoKubernetes-204793          | jenkins | v1.34.0 | 12 Sep 24 22:52 UTC |                     |
	|         | systemctl is-active --quiet           |                              |         |         |                     |                     |
	|         | service kubelet                       |                              |         |         |                     |                     |
	| delete  | -p NoKubernetes-204793                | NoKubernetes-204793          | jenkins | v1.34.0 | 12 Sep 24 22:52 UTC | 12 Sep 24 22:52 UTC |
	| start   | -p                                    | default-k8s-diff-port-702201 | jenkins | v1.34.0 | 12 Sep 24 22:52 UTC |                     |
	|         | default-k8s-diff-port-702201          |                              |         |         |                     |                     |
	|         | --memory=2200                         |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true         |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                 |                              |         |         |                     |                     |
	|         | --driver=kvm2                         |                              |         |         |                     |                     |
	|         | --container-runtime=crio              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1          |                              |         |         |                     |                     |
	|---------|---------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/12 22:52:13
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0912 22:52:13.153749   58070 out.go:345] Setting OutFile to fd 1 ...
	I0912 22:52:13.153985   58070 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 22:52:13.153993   58070 out.go:358] Setting ErrFile to fd 2...
	I0912 22:52:13.153997   58070 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 22:52:13.154184   58070 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-5891/.minikube/bin
	I0912 22:52:13.154757   58070 out.go:352] Setting JSON to false
	I0912 22:52:13.155705   58070 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5675,"bootTime":1726175858,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0912 22:52:13.155766   58070 start.go:139] virtualization: kvm guest
	I0912 22:52:13.158004   58070 out.go:177] * [default-k8s-diff-port-702201] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0912 22:52:13.159286   58070 out.go:177]   - MINIKUBE_LOCATION=19616
	I0912 22:52:13.159335   58070 notify.go:220] Checking for updates...
	I0912 22:52:13.161806   58070 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 22:52:13.163010   58070 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19616-5891/kubeconfig
	I0912 22:52:13.164099   58070 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19616-5891/.minikube
	I0912 22:52:13.165171   58070 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0912 22:52:13.166367   58070 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 22:52:13.168031   58070 config.go:182] Loaded profile config "cert-expiration-408779": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 22:52:13.168159   58070 config.go:182] Loaded profile config "kubernetes-upgrade-848420": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 22:52:13.168253   58070 config.go:182] Loaded profile config "old-k8s-version-642238": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0912 22:52:13.168349   58070 driver.go:394] Setting default libvirt URI to qemu:///system
	I0912 22:52:13.204800   58070 out.go:177] * Using the kvm2 driver based on user configuration
	I0912 22:52:13.205947   58070 start.go:297] selected driver: kvm2
	I0912 22:52:13.205975   58070 start.go:901] validating driver "kvm2" against <nil>
	I0912 22:52:13.205985   58070 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 22:52:13.206649   58070 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 22:52:13.206728   58070 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19616-5891/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0912 22:52:13.222683   58070 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0912 22:52:13.222745   58070 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0912 22:52:13.222954   58070 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 22:52:13.222983   58070 cni.go:84] Creating CNI manager for ""
	I0912 22:52:13.222991   58070 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0912 22:52:13.222997   58070 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0912 22:52:13.223042   58070 start.go:340] cluster config:
	{Name:default-k8s-diff-port-702201 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-702201 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 22:52:13.223154   58070 iso.go:125] acquiring lock: {Name:mk3ec3c4afd4210b7425f6425f55e7f581d9a5a9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 22:52:13.225053   58070 out.go:177] * Starting "default-k8s-diff-port-702201" primary control-plane node in "default-k8s-diff-port-702201" cluster
	I0912 22:52:10.289344   57590 machine.go:93] provisionDockerMachine start ...
	I0912 22:52:10.289362   57590 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .DriverName
	I0912 22:52:10.289573   57590 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetSSHHostname
	I0912 22:52:10.292352   57590 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | domain kubernetes-upgrade-848420 has defined MAC address 52:54:00:8c:a1:6b in network mk-kubernetes-upgrade-848420
	I0912 22:52:10.292843   57590 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:a1:6b", ip: ""} in network mk-kubernetes-upgrade-848420: {Iface:virbr1 ExpiryTime:2024-09-12 23:51:14 +0000 UTC Type:0 Mac:52:54:00:8c:a1:6b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:kubernetes-upgrade-848420 Clientid:01:52:54:00:8c:a1:6b}
	I0912 22:52:10.292866   57590 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | domain kubernetes-upgrade-848420 has defined IP address 192.168.39.110 and MAC address 52:54:00:8c:a1:6b in network mk-kubernetes-upgrade-848420
	I0912 22:52:10.293060   57590 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetSSHPort
	I0912 22:52:10.293243   57590 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetSSHKeyPath
	I0912 22:52:10.293410   57590 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetSSHKeyPath
	I0912 22:52:10.293628   57590 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetSSHUsername
	I0912 22:52:10.293806   57590 main.go:141] libmachine: Using SSH client type: native
	I0912 22:52:10.294007   57590 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0912 22:52:10.294023   57590 main.go:141] libmachine: About to run SSH command:
	hostname
	I0912 22:52:10.397702   57590 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-848420
	
	I0912 22:52:10.397742   57590 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetMachineName
	I0912 22:52:10.398037   57590 buildroot.go:166] provisioning hostname "kubernetes-upgrade-848420"
	I0912 22:52:10.398068   57590 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetMachineName
	I0912 22:52:10.398281   57590 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetSSHHostname
	I0912 22:52:10.401299   57590 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | domain kubernetes-upgrade-848420 has defined MAC address 52:54:00:8c:a1:6b in network mk-kubernetes-upgrade-848420
	I0912 22:52:10.401609   57590 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:a1:6b", ip: ""} in network mk-kubernetes-upgrade-848420: {Iface:virbr1 ExpiryTime:2024-09-12 23:51:14 +0000 UTC Type:0 Mac:52:54:00:8c:a1:6b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:kubernetes-upgrade-848420 Clientid:01:52:54:00:8c:a1:6b}
	I0912 22:52:10.401650   57590 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | domain kubernetes-upgrade-848420 has defined IP address 192.168.39.110 and MAC address 52:54:00:8c:a1:6b in network mk-kubernetes-upgrade-848420
	I0912 22:52:10.401863   57590 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetSSHPort
	I0912 22:52:10.402040   57590 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetSSHKeyPath
	I0912 22:52:10.402215   57590 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetSSHKeyPath
	I0912 22:52:10.402339   57590 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetSSHUsername
	I0912 22:52:10.402503   57590 main.go:141] libmachine: Using SSH client type: native
	I0912 22:52:10.402736   57590 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0912 22:52:10.402757   57590 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-848420 && echo "kubernetes-upgrade-848420" | sudo tee /etc/hostname
	I0912 22:52:10.520258   57590 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-848420
	
	I0912 22:52:10.520286   57590 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetSSHHostname
	I0912 22:52:10.523296   57590 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | domain kubernetes-upgrade-848420 has defined MAC address 52:54:00:8c:a1:6b in network mk-kubernetes-upgrade-848420
	I0912 22:52:10.523669   57590 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:a1:6b", ip: ""} in network mk-kubernetes-upgrade-848420: {Iface:virbr1 ExpiryTime:2024-09-12 23:51:14 +0000 UTC Type:0 Mac:52:54:00:8c:a1:6b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:kubernetes-upgrade-848420 Clientid:01:52:54:00:8c:a1:6b}
	I0912 22:52:10.523699   57590 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | domain kubernetes-upgrade-848420 has defined IP address 192.168.39.110 and MAC address 52:54:00:8c:a1:6b in network mk-kubernetes-upgrade-848420
	I0912 22:52:10.523870   57590 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetSSHPort
	I0912 22:52:10.524035   57590 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetSSHKeyPath
	I0912 22:52:10.524181   57590 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetSSHKeyPath
	I0912 22:52:10.524336   57590 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetSSHUsername
	I0912 22:52:10.524504   57590 main.go:141] libmachine: Using SSH client type: native
	I0912 22:52:10.524714   57590 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0912 22:52:10.524741   57590 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-848420' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-848420/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-848420' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0912 22:52:10.626084   57590 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0912 22:52:10.626114   57590 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19616-5891/.minikube CaCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19616-5891/.minikube}
	I0912 22:52:10.626135   57590 buildroot.go:174] setting up certificates
	I0912 22:52:10.626147   57590 provision.go:84] configureAuth start
	I0912 22:52:10.626159   57590 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetMachineName
	I0912 22:52:10.626424   57590 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetIP
	I0912 22:52:10.628776   57590 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | domain kubernetes-upgrade-848420 has defined MAC address 52:54:00:8c:a1:6b in network mk-kubernetes-upgrade-848420
	I0912 22:52:10.629176   57590 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:a1:6b", ip: ""} in network mk-kubernetes-upgrade-848420: {Iface:virbr1 ExpiryTime:2024-09-12 23:51:14 +0000 UTC Type:0 Mac:52:54:00:8c:a1:6b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:kubernetes-upgrade-848420 Clientid:01:52:54:00:8c:a1:6b}
	I0912 22:52:10.629202   57590 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | domain kubernetes-upgrade-848420 has defined IP address 192.168.39.110 and MAC address 52:54:00:8c:a1:6b in network mk-kubernetes-upgrade-848420
	I0912 22:52:10.629336   57590 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetSSHHostname
	I0912 22:52:10.631737   57590 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | domain kubernetes-upgrade-848420 has defined MAC address 52:54:00:8c:a1:6b in network mk-kubernetes-upgrade-848420
	I0912 22:52:10.632069   57590 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:a1:6b", ip: ""} in network mk-kubernetes-upgrade-848420: {Iface:virbr1 ExpiryTime:2024-09-12 23:51:14 +0000 UTC Type:0 Mac:52:54:00:8c:a1:6b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:kubernetes-upgrade-848420 Clientid:01:52:54:00:8c:a1:6b}
	I0912 22:52:10.632093   57590 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | domain kubernetes-upgrade-848420 has defined IP address 192.168.39.110 and MAC address 52:54:00:8c:a1:6b in network mk-kubernetes-upgrade-848420
	I0912 22:52:10.632249   57590 provision.go:143] copyHostCerts
	I0912 22:52:10.632301   57590 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem, removing ...
	I0912 22:52:10.632323   57590 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem
	I0912 22:52:10.632390   57590 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem (1082 bytes)
	I0912 22:52:10.632508   57590 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem, removing ...
	I0912 22:52:10.632521   57590 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem
	I0912 22:52:10.632562   57590 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem (1123 bytes)
	I0912 22:52:10.632630   57590 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem, removing ...
	I0912 22:52:10.632648   57590 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem
	I0912 22:52:10.632673   57590 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem (1679 bytes)
	I0912 22:52:10.632721   57590 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-848420 san=[127.0.0.1 192.168.39.110 kubernetes-upgrade-848420 localhost minikube]
	I0912 22:52:11.129568   57590 provision.go:177] copyRemoteCerts
	I0912 22:52:11.129666   57590 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0912 22:52:11.129694   57590 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetSSHHostname
	I0912 22:52:11.132906   57590 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | domain kubernetes-upgrade-848420 has defined MAC address 52:54:00:8c:a1:6b in network mk-kubernetes-upgrade-848420
	I0912 22:52:11.133330   57590 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:a1:6b", ip: ""} in network mk-kubernetes-upgrade-848420: {Iface:virbr1 ExpiryTime:2024-09-12 23:51:14 +0000 UTC Type:0 Mac:52:54:00:8c:a1:6b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:kubernetes-upgrade-848420 Clientid:01:52:54:00:8c:a1:6b}
	I0912 22:52:11.133362   57590 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | domain kubernetes-upgrade-848420 has defined IP address 192.168.39.110 and MAC address 52:54:00:8c:a1:6b in network mk-kubernetes-upgrade-848420
	I0912 22:52:11.133726   57590 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetSSHPort
	I0912 22:52:11.133899   57590 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetSSHKeyPath
	I0912 22:52:11.134016   57590 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetSSHUsername
	I0912 22:52:11.134186   57590 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/kubernetes-upgrade-848420/id_rsa Username:docker}
	I0912 22:52:11.221636   57590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0912 22:52:11.247256   57590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0912 22:52:11.275137   57590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0912 22:52:11.311538   57590 provision.go:87] duration metric: took 685.379573ms to configureAuth
	I0912 22:52:11.311580   57590 buildroot.go:189] setting minikube options for container-runtime
	I0912 22:52:11.311720   57590 config.go:182] Loaded profile config "kubernetes-upgrade-848420": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 22:52:11.311783   57590 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetSSHHostname
	I0912 22:52:11.314341   57590 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | domain kubernetes-upgrade-848420 has defined MAC address 52:54:00:8c:a1:6b in network mk-kubernetes-upgrade-848420
	I0912 22:52:11.314867   57590 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:a1:6b", ip: ""} in network mk-kubernetes-upgrade-848420: {Iface:virbr1 ExpiryTime:2024-09-12 23:51:14 +0000 UTC Type:0 Mac:52:54:00:8c:a1:6b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:kubernetes-upgrade-848420 Clientid:01:52:54:00:8c:a1:6b}
	I0912 22:52:11.314890   57590 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | domain kubernetes-upgrade-848420 has defined IP address 192.168.39.110 and MAC address 52:54:00:8c:a1:6b in network mk-kubernetes-upgrade-848420
	I0912 22:52:11.315070   57590 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetSSHPort
	I0912 22:52:11.315261   57590 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetSSHKeyPath
	I0912 22:52:11.315403   57590 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetSSHKeyPath
	I0912 22:52:11.315573   57590 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetSSHUsername
	I0912 22:52:11.315741   57590 main.go:141] libmachine: Using SSH client type: native
	I0912 22:52:11.315940   57590 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0912 22:52:11.315962   57590 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0912 22:52:17.678344   57925 start.go:364] duration metric: took 11.309381465s to acquireMachinesLock for "old-k8s-version-642238"
	I0912 22:52:17.678417   57925 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-642238 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-642238 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0912 22:52:17.678539   57925 start.go:125] createHost starting for "" (driver="kvm2")
	I0912 22:52:13.226434   58070 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0912 22:52:13.226468   58070 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0912 22:52:13.226476   58070 cache.go:56] Caching tarball of preloaded images
	I0912 22:52:13.226554   58070 preload.go:172] Found /home/jenkins/minikube-integration/19616-5891/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0912 22:52:13.226564   58070 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0912 22:52:13.226647   58070 profile.go:143] Saving config to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/default-k8s-diff-port-702201/config.json ...
	I0912 22:52:13.226663   58070 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/default-k8s-diff-port-702201/config.json: {Name:mk02c2a9fb160a6f7d634e30a068e479fa4ad398 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 22:52:13.226811   58070 start.go:360] acquireMachinesLock for default-k8s-diff-port-702201: {Name:mkbb0a9e58b1349e86a63b6069c42d4248d92c3b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 22:52:17.446061   57590 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0912 22:52:17.446091   57590 machine.go:96] duration metric: took 7.156733684s to provisionDockerMachine
	I0912 22:52:17.446104   57590 start.go:293] postStartSetup for "kubernetes-upgrade-848420" (driver="kvm2")
	I0912 22:52:17.446118   57590 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0912 22:52:17.446138   57590 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .DriverName
	I0912 22:52:17.446440   57590 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0912 22:52:17.446469   57590 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetSSHHostname
	I0912 22:52:17.449741   57590 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | domain kubernetes-upgrade-848420 has defined MAC address 52:54:00:8c:a1:6b in network mk-kubernetes-upgrade-848420
	I0912 22:52:17.450153   57590 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:a1:6b", ip: ""} in network mk-kubernetes-upgrade-848420: {Iface:virbr1 ExpiryTime:2024-09-12 23:51:14 +0000 UTC Type:0 Mac:52:54:00:8c:a1:6b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:kubernetes-upgrade-848420 Clientid:01:52:54:00:8c:a1:6b}
	I0912 22:52:17.450178   57590 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | domain kubernetes-upgrade-848420 has defined IP address 192.168.39.110 and MAC address 52:54:00:8c:a1:6b in network mk-kubernetes-upgrade-848420
	I0912 22:52:17.450397   57590 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetSSHPort
	I0912 22:52:17.450602   57590 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetSSHKeyPath
	I0912 22:52:17.450762   57590 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetSSHUsername
	I0912 22:52:17.450892   57590 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/kubernetes-upgrade-848420/id_rsa Username:docker}
	I0912 22:52:17.532132   57590 ssh_runner.go:195] Run: cat /etc/os-release
	I0912 22:52:17.536863   57590 info.go:137] Remote host: Buildroot 2023.02.9
	I0912 22:52:17.536890   57590 filesync.go:126] Scanning /home/jenkins/minikube-integration/19616-5891/.minikube/addons for local assets ...
	I0912 22:52:17.536956   57590 filesync.go:126] Scanning /home/jenkins/minikube-integration/19616-5891/.minikube/files for local assets ...
	I0912 22:52:17.537041   57590 filesync.go:149] local asset: /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem -> 130832.pem in /etc/ssl/certs
	I0912 22:52:17.537132   57590 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0912 22:52:17.546950   57590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem --> /etc/ssl/certs/130832.pem (1708 bytes)
	I0912 22:52:17.572598   57590 start.go:296] duration metric: took 126.478572ms for postStartSetup
	I0912 22:52:17.572641   57590 fix.go:56] duration metric: took 7.306302943s for fixHost
	I0912 22:52:17.572664   57590 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetSSHHostname
	I0912 22:52:17.575517   57590 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | domain kubernetes-upgrade-848420 has defined MAC address 52:54:00:8c:a1:6b in network mk-kubernetes-upgrade-848420
	I0912 22:52:17.575864   57590 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:a1:6b", ip: ""} in network mk-kubernetes-upgrade-848420: {Iface:virbr1 ExpiryTime:2024-09-12 23:51:14 +0000 UTC Type:0 Mac:52:54:00:8c:a1:6b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:kubernetes-upgrade-848420 Clientid:01:52:54:00:8c:a1:6b}
	I0912 22:52:17.575888   57590 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | domain kubernetes-upgrade-848420 has defined IP address 192.168.39.110 and MAC address 52:54:00:8c:a1:6b in network mk-kubernetes-upgrade-848420
	I0912 22:52:17.576073   57590 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetSSHPort
	I0912 22:52:17.576296   57590 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetSSHKeyPath
	I0912 22:52:17.576471   57590 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetSSHKeyPath
	I0912 22:52:17.576632   57590 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetSSHUsername
	I0912 22:52:17.576843   57590 main.go:141] libmachine: Using SSH client type: native
	I0912 22:52:17.577039   57590 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0912 22:52:17.577052   57590 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0912 22:52:17.678141   57590 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726181537.669062556
	
	I0912 22:52:17.678162   57590 fix.go:216] guest clock: 1726181537.669062556
	I0912 22:52:17.678172   57590 fix.go:229] Guest: 2024-09-12 22:52:17.669062556 +0000 UTC Remote: 2024-09-12 22:52:17.572645741 +0000 UTC m=+27.483046553 (delta=96.416815ms)
	I0912 22:52:17.678221   57590 fix.go:200] guest clock delta is within tolerance: 96.416815ms
	I0912 22:52:17.678231   57590 start.go:83] releasing machines lock for "kubernetes-upgrade-848420", held for 7.411926781s
	I0912 22:52:17.678256   57590 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .DriverName
	I0912 22:52:17.678607   57590 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetIP
	I0912 22:52:17.682682   57590 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | domain kubernetes-upgrade-848420 has defined MAC address 52:54:00:8c:a1:6b in network mk-kubernetes-upgrade-848420
	I0912 22:52:17.683149   57590 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:a1:6b", ip: ""} in network mk-kubernetes-upgrade-848420: {Iface:virbr1 ExpiryTime:2024-09-12 23:51:14 +0000 UTC Type:0 Mac:52:54:00:8c:a1:6b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:kubernetes-upgrade-848420 Clientid:01:52:54:00:8c:a1:6b}
	I0912 22:52:17.683180   57590 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | domain kubernetes-upgrade-848420 has defined IP address 192.168.39.110 and MAC address 52:54:00:8c:a1:6b in network mk-kubernetes-upgrade-848420
	I0912 22:52:17.683509   57590 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .DriverName
	I0912 22:52:17.684879   57590 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .DriverName
	I0912 22:52:17.685249   57590 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .DriverName
	I0912 22:52:17.685375   57590 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0912 22:52:17.685450   57590 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetSSHHostname
	I0912 22:52:17.685695   57590 ssh_runner.go:195] Run: cat /version.json
	I0912 22:52:17.685790   57590 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetSSHHostname
	I0912 22:52:17.690084   57590 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | domain kubernetes-upgrade-848420 has defined MAC address 52:54:00:8c:a1:6b in network mk-kubernetes-upgrade-848420
	I0912 22:52:17.690550   57590 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | domain kubernetes-upgrade-848420 has defined MAC address 52:54:00:8c:a1:6b in network mk-kubernetes-upgrade-848420
	I0912 22:52:17.690895   57590 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:a1:6b", ip: ""} in network mk-kubernetes-upgrade-848420: {Iface:virbr1 ExpiryTime:2024-09-12 23:51:14 +0000 UTC Type:0 Mac:52:54:00:8c:a1:6b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:kubernetes-upgrade-848420 Clientid:01:52:54:00:8c:a1:6b}
	I0912 22:52:17.690938   57590 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | domain kubernetes-upgrade-848420 has defined IP address 192.168.39.110 and MAC address 52:54:00:8c:a1:6b in network mk-kubernetes-upgrade-848420
	I0912 22:52:17.691265   57590 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetSSHPort
	I0912 22:52:17.691318   57590 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:a1:6b", ip: ""} in network mk-kubernetes-upgrade-848420: {Iface:virbr1 ExpiryTime:2024-09-12 23:51:14 +0000 UTC Type:0 Mac:52:54:00:8c:a1:6b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:kubernetes-upgrade-848420 Clientid:01:52:54:00:8c:a1:6b}
	I0912 22:52:17.691346   57590 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | domain kubernetes-upgrade-848420 has defined IP address 192.168.39.110 and MAC address 52:54:00:8c:a1:6b in network mk-kubernetes-upgrade-848420
	I0912 22:52:17.691600   57590 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetSSHKeyPath
	I0912 22:52:17.691630   57590 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetSSHPort
	I0912 22:52:17.691875   57590 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetSSHUsername
	I0912 22:52:17.691879   57590 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetSSHKeyPath
	I0912 22:52:17.692122   57590 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetSSHUsername
	I0912 22:52:17.692115   57590 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/kubernetes-upgrade-848420/id_rsa Username:docker}
	I0912 22:52:17.692268   57590 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/kubernetes-upgrade-848420/id_rsa Username:docker}
	I0912 22:52:17.802119   57590 ssh_runner.go:195] Run: systemctl --version
	I0912 22:52:17.809175   57590 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0912 22:52:17.966928   57590 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0912 22:52:17.973177   57590 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0912 22:52:17.973257   57590 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0912 22:52:17.983455   57590 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0912 22:52:17.983487   57590 start.go:495] detecting cgroup driver to use...
	I0912 22:52:17.983568   57590 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0912 22:52:18.001747   57590 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0912 22:52:18.016481   57590 docker.go:217] disabling cri-docker service (if available) ...
	I0912 22:52:18.016550   57590 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0912 22:52:18.031773   57590 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0912 22:52:18.046831   57590 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0912 22:52:18.195188   57590 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0912 22:52:18.345902   57590 docker.go:233] disabling docker service ...
	I0912 22:52:18.346036   57590 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0912 22:52:18.362602   57590 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0912 22:52:18.377781   57590 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0912 22:52:18.530808   57590 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0912 22:52:18.677218   57590 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0912 22:52:18.692030   57590 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0912 22:52:18.713542   57590 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0912 22:52:18.713641   57590 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 22:52:18.723862   57590 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0912 22:52:18.723934   57590 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 22:52:18.734019   57590 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 22:52:18.744507   57590 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 22:52:18.757509   57590 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0912 22:52:18.767983   57590 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 22:52:18.778203   57590 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 22:52:18.790823   57590 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 22:52:18.801233   57590 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0912 22:52:18.810408   57590 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0912 22:52:18.819492   57590 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 22:52:18.966148   57590 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0912 22:52:17.680621   57925 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0912 22:52:17.680857   57925 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:52:17.680940   57925 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:52:17.704989   57925 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34061
	I0912 22:52:17.705558   57925 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:52:17.706413   57925 main.go:141] libmachine: Using API Version  1
	I0912 22:52:17.706435   57925 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:52:17.706853   57925 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:52:17.707247   57925 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetMachineName
	I0912 22:52:17.707528   57925 main.go:141] libmachine: (old-k8s-version-642238) Calling .DriverName
	I0912 22:52:17.707828   57925 start.go:159] libmachine.API.Create for "old-k8s-version-642238" (driver="kvm2")
	I0912 22:52:17.707869   57925 client.go:168] LocalClient.Create starting
	I0912 22:52:17.707907   57925 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem
	I0912 22:52:17.707964   57925 main.go:141] libmachine: Decoding PEM data...
	I0912 22:52:17.707993   57925 main.go:141] libmachine: Parsing certificate...
	I0912 22:52:17.708070   57925 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem
	I0912 22:52:17.708104   57925 main.go:141] libmachine: Decoding PEM data...
	I0912 22:52:17.708125   57925 main.go:141] libmachine: Parsing certificate...
	I0912 22:52:17.708151   57925 main.go:141] libmachine: Running pre-create checks...
	I0912 22:52:17.708163   57925 main.go:141] libmachine: (old-k8s-version-642238) Calling .PreCreateCheck
	I0912 22:52:17.708667   57925 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetConfigRaw
	I0912 22:52:17.709402   57925 main.go:141] libmachine: Creating machine...
	I0912 22:52:17.709422   57925 main.go:141] libmachine: (old-k8s-version-642238) Calling .Create
	I0912 22:52:17.709632   57925 main.go:141] libmachine: (old-k8s-version-642238) Creating KVM machine...
	I0912 22:52:17.711251   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | found existing default KVM network
	I0912 22:52:17.712567   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 22:52:17.712362   58109 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:0b:ee:fc} reservation:<nil>}
	I0912 22:52:17.713490   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 22:52:17.713383   58109 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:cd:00:cf} reservation:<nil>}
	I0912 22:52:17.714677   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 22:52:17.714576   58109 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002dc400}
	I0912 22:52:17.714705   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | created network xml: 
	I0912 22:52:17.714719   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | <network>
	I0912 22:52:17.714728   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG |   <name>mk-old-k8s-version-642238</name>
	I0912 22:52:17.714746   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG |   <dns enable='no'/>
	I0912 22:52:17.714753   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG |   
	I0912 22:52:17.714763   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0912 22:52:17.714771   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG |     <dhcp>
	I0912 22:52:17.714800   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0912 22:52:17.714822   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG |     </dhcp>
	I0912 22:52:17.714835   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG |   </ip>
	I0912 22:52:17.714841   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG |   
	I0912 22:52:17.714850   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | </network>
	I0912 22:52:17.714860   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | 
	I0912 22:52:17.721969   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | trying to create private KVM network mk-old-k8s-version-642238 192.168.61.0/24...
	I0912 22:52:17.803623   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | private KVM network mk-old-k8s-version-642238 192.168.61.0/24 created
	I0912 22:52:17.803661   57925 main.go:141] libmachine: (old-k8s-version-642238) Setting up store path in /home/jenkins/minikube-integration/19616-5891/.minikube/machines/old-k8s-version-642238 ...
	I0912 22:52:17.803680   57925 main.go:141] libmachine: (old-k8s-version-642238) Building disk image from file:///home/jenkins/minikube-integration/19616-5891/.minikube/cache/iso/amd64/minikube-v1.34.0-1726156389-19616-amd64.iso
	I0912 22:52:17.803694   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 22:52:17.803634   58109 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19616-5891/.minikube
	I0912 22:52:17.803856   57925 main.go:141] libmachine: (old-k8s-version-642238) Downloading /home/jenkins/minikube-integration/19616-5891/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19616-5891/.minikube/cache/iso/amd64/minikube-v1.34.0-1726156389-19616-amd64.iso...
	I0912 22:52:18.049345   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 22:52:18.049213   58109 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/old-k8s-version-642238/id_rsa...
	I0912 22:52:18.168144   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 22:52:18.167982   58109 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/old-k8s-version-642238/old-k8s-version-642238.rawdisk...
	I0912 22:52:18.168214   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | Writing magic tar header
	I0912 22:52:18.168232   57925 main.go:141] libmachine: (old-k8s-version-642238) Setting executable bit set on /home/jenkins/minikube-integration/19616-5891/.minikube/machines/old-k8s-version-642238 (perms=drwx------)
	I0912 22:52:18.168241   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | Writing SSH key tar header
	I0912 22:52:18.168258   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 22:52:18.168107   58109 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19616-5891/.minikube/machines/old-k8s-version-642238 ...
	I0912 22:52:18.168271   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/old-k8s-version-642238
	I0912 22:52:18.168285   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19616-5891/.minikube/machines
	I0912 22:52:18.168302   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19616-5891/.minikube
	I0912 22:52:18.168334   57925 main.go:141] libmachine: (old-k8s-version-642238) Setting executable bit set on /home/jenkins/minikube-integration/19616-5891/.minikube/machines (perms=drwxr-xr-x)
	I0912 22:52:18.168352   57925 main.go:141] libmachine: (old-k8s-version-642238) Setting executable bit set on /home/jenkins/minikube-integration/19616-5891/.minikube (perms=drwxr-xr-x)
	I0912 22:52:18.168366   57925 main.go:141] libmachine: (old-k8s-version-642238) Setting executable bit set on /home/jenkins/minikube-integration/19616-5891 (perms=drwxrwxr-x)
	I0912 22:52:18.168379   57925 main.go:141] libmachine: (old-k8s-version-642238) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0912 22:52:18.168392   57925 main.go:141] libmachine: (old-k8s-version-642238) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0912 22:52:18.168400   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19616-5891
	I0912 22:52:18.168413   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0912 22:52:18.168420   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | Checking permissions on dir: /home/jenkins
	I0912 22:52:18.168429   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | Checking permissions on dir: /home
	I0912 22:52:18.168437   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | Skipping /home - not owner
	I0912 22:52:18.168452   57925 main.go:141] libmachine: (old-k8s-version-642238) Creating domain...
	I0912 22:52:18.169653   57925 main.go:141] libmachine: (old-k8s-version-642238) define libvirt domain using xml: 
	I0912 22:52:18.169703   57925 main.go:141] libmachine: (old-k8s-version-642238) <domain type='kvm'>
	I0912 22:52:18.169718   57925 main.go:141] libmachine: (old-k8s-version-642238)   <name>old-k8s-version-642238</name>
	I0912 22:52:18.169730   57925 main.go:141] libmachine: (old-k8s-version-642238)   <memory unit='MiB'>2200</memory>
	I0912 22:52:18.169742   57925 main.go:141] libmachine: (old-k8s-version-642238)   <vcpu>2</vcpu>
	I0912 22:52:18.169755   57925 main.go:141] libmachine: (old-k8s-version-642238)   <features>
	I0912 22:52:18.169791   57925 main.go:141] libmachine: (old-k8s-version-642238)     <acpi/>
	I0912 22:52:18.169813   57925 main.go:141] libmachine: (old-k8s-version-642238)     <apic/>
	I0912 22:52:18.169836   57925 main.go:141] libmachine: (old-k8s-version-642238)     <pae/>
	I0912 22:52:18.169848   57925 main.go:141] libmachine: (old-k8s-version-642238)     
	I0912 22:52:18.169876   57925 main.go:141] libmachine: (old-k8s-version-642238)   </features>
	I0912 22:52:18.169889   57925 main.go:141] libmachine: (old-k8s-version-642238)   <cpu mode='host-passthrough'>
	I0912 22:52:18.169901   57925 main.go:141] libmachine: (old-k8s-version-642238)   
	I0912 22:52:18.169909   57925 main.go:141] libmachine: (old-k8s-version-642238)   </cpu>
	I0912 22:52:18.169919   57925 main.go:141] libmachine: (old-k8s-version-642238)   <os>
	I0912 22:52:18.169936   57925 main.go:141] libmachine: (old-k8s-version-642238)     <type>hvm</type>
	I0912 22:52:18.169949   57925 main.go:141] libmachine: (old-k8s-version-642238)     <boot dev='cdrom'/>
	I0912 22:52:18.169960   57925 main.go:141] libmachine: (old-k8s-version-642238)     <boot dev='hd'/>
	I0912 22:52:18.169971   57925 main.go:141] libmachine: (old-k8s-version-642238)     <bootmenu enable='no'/>
	I0912 22:52:18.169981   57925 main.go:141] libmachine: (old-k8s-version-642238)   </os>
	I0912 22:52:18.169991   57925 main.go:141] libmachine: (old-k8s-version-642238)   <devices>
	I0912 22:52:18.170018   57925 main.go:141] libmachine: (old-k8s-version-642238)     <disk type='file' device='cdrom'>
	I0912 22:52:18.170035   57925 main.go:141] libmachine: (old-k8s-version-642238)       <source file='/home/jenkins/minikube-integration/19616-5891/.minikube/machines/old-k8s-version-642238/boot2docker.iso'/>
	I0912 22:52:18.170055   57925 main.go:141] libmachine: (old-k8s-version-642238)       <target dev='hdc' bus='scsi'/>
	I0912 22:52:18.170071   57925 main.go:141] libmachine: (old-k8s-version-642238)       <readonly/>
	I0912 22:52:18.170089   57925 main.go:141] libmachine: (old-k8s-version-642238)     </disk>
	I0912 22:52:18.170099   57925 main.go:141] libmachine: (old-k8s-version-642238)     <disk type='file' device='disk'>
	I0912 22:52:18.170126   57925 main.go:141] libmachine: (old-k8s-version-642238)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0912 22:52:18.170150   57925 main.go:141] libmachine: (old-k8s-version-642238)       <source file='/home/jenkins/minikube-integration/19616-5891/.minikube/machines/old-k8s-version-642238/old-k8s-version-642238.rawdisk'/>
	I0912 22:52:18.170164   57925 main.go:141] libmachine: (old-k8s-version-642238)       <target dev='hda' bus='virtio'/>
	I0912 22:52:18.170175   57925 main.go:141] libmachine: (old-k8s-version-642238)     </disk>
	I0912 22:52:18.170189   57925 main.go:141] libmachine: (old-k8s-version-642238)     <interface type='network'>
	I0912 22:52:18.170201   57925 main.go:141] libmachine: (old-k8s-version-642238)       <source network='mk-old-k8s-version-642238'/>
	I0912 22:52:18.170215   57925 main.go:141] libmachine: (old-k8s-version-642238)       <model type='virtio'/>
	I0912 22:52:18.170226   57925 main.go:141] libmachine: (old-k8s-version-642238)     </interface>
	I0912 22:52:18.170240   57925 main.go:141] libmachine: (old-k8s-version-642238)     <interface type='network'>
	I0912 22:52:18.170251   57925 main.go:141] libmachine: (old-k8s-version-642238)       <source network='default'/>
	I0912 22:52:18.170263   57925 main.go:141] libmachine: (old-k8s-version-642238)       <model type='virtio'/>
	I0912 22:52:18.170274   57925 main.go:141] libmachine: (old-k8s-version-642238)     </interface>
	I0912 22:52:18.170285   57925 main.go:141] libmachine: (old-k8s-version-642238)     <serial type='pty'>
	I0912 22:52:18.170294   57925 main.go:141] libmachine: (old-k8s-version-642238)       <target port='0'/>
	I0912 22:52:18.170300   57925 main.go:141] libmachine: (old-k8s-version-642238)     </serial>
	I0912 22:52:18.170318   57925 main.go:141] libmachine: (old-k8s-version-642238)     <console type='pty'>
	I0912 22:52:18.170330   57925 main.go:141] libmachine: (old-k8s-version-642238)       <target type='serial' port='0'/>
	I0912 22:52:18.170369   57925 main.go:141] libmachine: (old-k8s-version-642238)     </console>
	I0912 22:52:18.170387   57925 main.go:141] libmachine: (old-k8s-version-642238)     <rng model='virtio'>
	I0912 22:52:18.170398   57925 main.go:141] libmachine: (old-k8s-version-642238)       <backend model='random'>/dev/random</backend>
	I0912 22:52:18.170409   57925 main.go:141] libmachine: (old-k8s-version-642238)     </rng>
	I0912 22:52:18.170421   57925 main.go:141] libmachine: (old-k8s-version-642238)     
	I0912 22:52:18.170434   57925 main.go:141] libmachine: (old-k8s-version-642238)     
	I0912 22:52:18.170447   57925 main.go:141] libmachine: (old-k8s-version-642238)   </devices>
	I0912 22:52:18.170457   57925 main.go:141] libmachine: (old-k8s-version-642238) </domain>
	I0912 22:52:18.170471   57925 main.go:141] libmachine: (old-k8s-version-642238) 
	I0912 22:52:18.174858   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:53:a7:4a in network default
	I0912 22:52:18.175729   57925 main.go:141] libmachine: (old-k8s-version-642238) Ensuring networks are active...
	I0912 22:52:18.175751   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 22:52:18.176532   57925 main.go:141] libmachine: (old-k8s-version-642238) Ensuring network default is active
	I0912 22:52:18.176945   57925 main.go:141] libmachine: (old-k8s-version-642238) Ensuring network mk-old-k8s-version-642238 is active
	I0912 22:52:18.177643   57925 main.go:141] libmachine: (old-k8s-version-642238) Getting domain xml...
	I0912 22:52:18.178456   57925 main.go:141] libmachine: (old-k8s-version-642238) Creating domain...
	I0912 22:52:19.473797   57925 main.go:141] libmachine: (old-k8s-version-642238) Waiting to get IP...
	I0912 22:52:19.474480   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 22:52:19.474872   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 22:52:19.474892   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 22:52:19.474863   58109 retry.go:31] will retry after 286.330243ms: waiting for machine to come up
	I0912 22:52:19.762384   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 22:52:19.763004   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 22:52:19.763033   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 22:52:19.762958   58109 retry.go:31] will retry after 248.581952ms: waiting for machine to come up
	I0912 22:52:20.013442   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 22:52:20.013919   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 22:52:20.013943   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 22:52:20.013874   58109 retry.go:31] will retry after 423.642047ms: waiting for machine to come up
	I0912 22:52:20.439556   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 22:52:20.440066   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 22:52:20.440095   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 22:52:20.439978   58109 retry.go:31] will retry after 424.276527ms: waiting for machine to come up
	I0912 22:52:20.865441   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 22:52:20.865947   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 22:52:20.865981   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 22:52:20.865899   58109 retry.go:31] will retry after 731.460727ms: waiting for machine to come up
	I0912 22:52:22.800002   57590 ssh_runner.go:235] Completed: sudo systemctl restart crio: (3.833816386s)
	I0912 22:52:22.800033   57590 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0912 22:52:22.800076   57590 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0912 22:52:22.804714   57590 start.go:563] Will wait 60s for crictl version
	I0912 22:52:22.804769   57590 ssh_runner.go:195] Run: which crictl
	I0912 22:52:22.808724   57590 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0912 22:52:22.844024   57590 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0912 22:52:22.844103   57590 ssh_runner.go:195] Run: crio --version
	I0912 22:52:22.873660   57590 ssh_runner.go:195] Run: crio --version
	I0912 22:52:22.914342   57590 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0912 22:52:22.915584   57590 main.go:141] libmachine: (kubernetes-upgrade-848420) Calling .GetIP
	I0912 22:52:22.918845   57590 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | domain kubernetes-upgrade-848420 has defined MAC address 52:54:00:8c:a1:6b in network mk-kubernetes-upgrade-848420
	I0912 22:52:22.919234   57590 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:a1:6b", ip: ""} in network mk-kubernetes-upgrade-848420: {Iface:virbr1 ExpiryTime:2024-09-12 23:51:14 +0000 UTC Type:0 Mac:52:54:00:8c:a1:6b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:kubernetes-upgrade-848420 Clientid:01:52:54:00:8c:a1:6b}
	I0912 22:52:22.919262   57590 main.go:141] libmachine: (kubernetes-upgrade-848420) DBG | domain kubernetes-upgrade-848420 has defined IP address 192.168.39.110 and MAC address 52:54:00:8c:a1:6b in network mk-kubernetes-upgrade-848420
	I0912 22:52:22.919587   57590 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0912 22:52:22.923938   57590 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-848420 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.1 ClusterName:kubernetes-upgrade-848420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0912 22:52:22.924051   57590 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0912 22:52:22.924113   57590 ssh_runner.go:195] Run: sudo crictl images --output json
	I0912 22:52:22.967293   57590 crio.go:514] all images are preloaded for cri-o runtime.
	I0912 22:52:22.967334   57590 crio.go:433] Images already preloaded, skipping extraction
	I0912 22:52:22.967401   57590 ssh_runner.go:195] Run: sudo crictl images --output json
	I0912 22:52:23.003532   57590 crio.go:514] all images are preloaded for cri-o runtime.
	I0912 22:52:23.003556   57590 cache_images.go:84] Images are preloaded, skipping loading
	I0912 22:52:23.003564   57590 kubeadm.go:934] updating node { 192.168.39.110 8443 v1.31.1 crio true true} ...
	I0912 22:52:23.003657   57590 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-848420 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.110
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-848420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0912 22:52:23.003719   57590 ssh_runner.go:195] Run: crio config
	I0912 22:52:23.054309   57590 cni.go:84] Creating CNI manager for ""
	I0912 22:52:23.054337   57590 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0912 22:52:23.054351   57590 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0912 22:52:23.054372   57590 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.110 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-848420 NodeName:kubernetes-upgrade-848420 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.110"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.110 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0912 22:52:23.054520   57590 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.110
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-848420"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.110
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.110"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0912 22:52:23.054580   57590 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0912 22:52:23.065309   57590 binaries.go:44] Found k8s binaries, skipping transfer
	I0912 22:52:23.065392   57590 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0912 22:52:23.075801   57590 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (325 bytes)
	I0912 22:52:23.093239   57590 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0912 22:52:23.110540   57590 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0912 22:52:23.126571   57590 ssh_runner.go:195] Run: grep 192.168.39.110	control-plane.minikube.internal$ /etc/hosts
	I0912 22:52:23.130309   57590 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 22:52:23.269490   57590 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0912 22:52:23.282906   57590 certs.go:68] Setting up /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/kubernetes-upgrade-848420 for IP: 192.168.39.110
	I0912 22:52:23.282931   57590 certs.go:194] generating shared ca certs ...
	I0912 22:52:23.282953   57590 certs.go:226] acquiring lock for ca certs: {Name:mk5e19f2c2757edc3fcb6b43a39efee0885b349c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 22:52:23.283127   57590 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.key
	I0912 22:52:23.283198   57590 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.key
	I0912 22:52:23.283211   57590 certs.go:256] generating profile certs ...
	I0912 22:52:23.283324   57590 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/kubernetes-upgrade-848420/client.key
	I0912 22:52:23.283404   57590 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/kubernetes-upgrade-848420/apiserver.key.56f551ba
	I0912 22:52:23.283454   57590 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/kubernetes-upgrade-848420/proxy-client.key
	I0912 22:52:23.283602   57590 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083.pem (1338 bytes)
	W0912 22:52:23.283654   57590 certs.go:480] ignoring /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083_empty.pem, impossibly tiny 0 bytes
	I0912 22:52:23.283667   57590 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem (1679 bytes)
	I0912 22:52:23.283703   57590 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem (1082 bytes)
	I0912 22:52:23.283734   57590 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem (1123 bytes)
	I0912 22:52:23.283764   57590 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem (1679 bytes)
	I0912 22:52:23.283819   57590 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem (1708 bytes)
	I0912 22:52:23.284609   57590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0912 22:52:23.310136   57590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0912 22:52:23.334003   57590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0912 22:52:23.358872   57590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0912 22:52:23.385758   57590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/kubernetes-upgrade-848420/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0912 22:52:23.414711   57590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/kubernetes-upgrade-848420/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0912 22:52:23.442286   57590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/kubernetes-upgrade-848420/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0912 22:52:23.466131   57590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/kubernetes-upgrade-848420/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0912 22:52:23.495065   57590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem --> /usr/share/ca-certificates/130832.pem (1708 bytes)
	I0912 22:52:23.521166   57590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0912 22:52:23.549955   57590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083.pem --> /usr/share/ca-certificates/13083.pem (1338 bytes)
	I0912 22:52:23.576764   57590 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0912 22:52:23.598207   57590 ssh_runner.go:195] Run: openssl version
	I0912 22:52:23.605745   57590 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13083.pem && ln -fs /usr/share/ca-certificates/13083.pem /etc/ssl/certs/13083.pem"
	I0912 22:52:23.618307   57590 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13083.pem
	I0912 22:52:23.622820   57590 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 12 21:47 /usr/share/ca-certificates/13083.pem
	I0912 22:52:23.622892   57590 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13083.pem
	I0912 22:52:23.628954   57590 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13083.pem /etc/ssl/certs/51391683.0"
	I0912 22:52:23.638819   57590 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130832.pem && ln -fs /usr/share/ca-certificates/130832.pem /etc/ssl/certs/130832.pem"
	I0912 22:52:23.651190   57590 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130832.pem
	I0912 22:52:23.655837   57590 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 12 21:47 /usr/share/ca-certificates/130832.pem
	I0912 22:52:23.655899   57590 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130832.pem
	I0912 22:52:23.661580   57590 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130832.pem /etc/ssl/certs/3ec20f2e.0"
	I0912 22:52:23.672148   57590 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0912 22:52:23.685166   57590 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0912 22:52:23.690030   57590 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 12 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0912 22:52:23.690086   57590 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0912 22:52:23.695884   57590 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0912 22:52:23.706451   57590 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0912 22:52:23.711194   57590 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0912 22:52:23.717263   57590 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0912 22:52:23.723381   57590 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0912 22:52:23.729122   57590 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0912 22:52:23.735047   57590 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0912 22:52:23.741069   57590 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0912 22:52:23.746626   57590 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-848420 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.31.1 ClusterName:kubernetes-upgrade-848420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 22:52:23.746722   57590 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0912 22:52:23.746791   57590 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0912 22:52:23.786702   57590 cri.go:89] found id: "14227a6604a60066486ffec4e25c57c4917e1629d20b25ac6257c75c763e7676"
	I0912 22:52:23.786725   57590 cri.go:89] found id: "bcc2de3020f3087d4fe759a3da05be8d513812925c77d693bc1094edbd099b81"
	I0912 22:52:23.786729   57590 cri.go:89] found id: "64ed18f0c67bb60d08430dd44e0b5f9adeb1bf24342a62203ce0036e152ecff0"
	I0912 22:52:23.786732   57590 cri.go:89] found id: "b2baa4067538d6d199fedd40def97917db42164cd5a730cdd1ed2222591a40a9"
	I0912 22:52:23.786735   57590 cri.go:89] found id: "28979b57b18ce2b18220bdda064b24bc7556a55714187d190d646e22009b9fc6"
	I0912 22:52:23.786738   57590 cri.go:89] found id: "50de479dde2151fb6746f94327c8ea7d74f2f1b69006bd7c4fe31936a724d9de"
	I0912 22:52:23.786741   57590 cri.go:89] found id: "9a619795fc2a28a5def7ca251d6f2ada108f49f4cebc4e243470fa468a67a209"
	I0912 22:52:23.786743   57590 cri.go:89] found id: "57fceda398d0f6969f50710641a0b86a27d686cc34969fea838cbceb79909105"
	I0912 22:52:23.786746   57590 cri.go:89] found id: ""
	I0912 22:52:23.786792   57590 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Sep 12 22:52:33 kubernetes-upgrade-848420 crio[2233]: time="2024-09-12 22:52:33.693444849Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0522004b-e82d-4720-bcbe-68e311a5bb58 name=/runtime.v1.RuntimeService/Version
	Sep 12 22:52:33 kubernetes-upgrade-848420 crio[2233]: time="2024-09-12 22:52:33.694547651Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=79e84942-de33-4ba1-a675-26b4aff111ee name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 22:52:33 kubernetes-upgrade-848420 crio[2233]: time="2024-09-12 22:52:33.694927463Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726181553694904427,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=79e84942-de33-4ba1-a675-26b4aff111ee name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 22:52:33 kubernetes-upgrade-848420 crio[2233]: time="2024-09-12 22:52:33.695450035Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bb4ea8fb-b53f-41ee-86d1-e5062f3b3277 name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 22:52:33 kubernetes-upgrade-848420 crio[2233]: time="2024-09-12 22:52:33.695542237Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bb4ea8fb-b53f-41ee-86d1-e5062f3b3277 name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 22:52:33 kubernetes-upgrade-848420 crio[2233]: time="2024-09-12 22:52:33.695910627Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:45657f99c3836615e125d5c0b158a147dd4b96f02dcd80337e12331f36b43ce6,PodSandboxId:59066a476575c56175158d93a54ab2619f9439647ed6f8ff2b6328be047b3066,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726181550774695639,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dhjpt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60d3d2ac-f60b-4bc3-8388-bb7031a11302,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46a09699ee6f6644bab3f4c0a0d97a9a9b4b8a29a9dfe21cd99933fdd7266263,PodSandboxId:330d08f0808247d6b22c49cccfc11d20cde17923ebe2ed05387f205de256b7a3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726181550712718377,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rzjkg,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 500aa720-9d1b-4d40-a575-8e0fe9b97252,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ee02f22960aa6fd3d39d7a79a25565b6a40a50b964afe7a0b3d848971402fb7,PodSandboxId:981ed4f2247a4ab5b6d4d8e1c4ceb2db1bd2b9c4dabf03866a85f393052ef2f9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAIN
ER_RUNNING,CreatedAt:1726181550323010519,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5kch4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8920bf37-f050-4a2f-aa8f-3a9d856fde36,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb5cc2fc1eb420d84fdb7155822fd06cd9f6418492f1b3f261a08a80897b74be,PodSandboxId:6a70e18a006eb3ae5cbe4111ead8b2c730a1407a17b89ca3255f43cc6613752d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:172
6181550298367486,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b04e1e14-c54d-4f17-bd35-a92ac9d321f0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf95dd12eab6e8b6f2c9a7e68c3b421414f4ef7ed80414f9f3dbecc18eaff165,PodSandboxId:0a1978da085e425be847af356cf0e8505d8f91530772ddf2b5b9769a0175f331,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726181546512092947,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-848420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fa0b026cf85e0a56b86ae8b4d1b24d7,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:594808221f8d13b0b5753056d0106688dca09793c39e53823b1379ce16dbe712,PodSandboxId:e9b39850366cdbb911af5fd20c8531eebdbab223c4cb6a0b85dfea9958264dea,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726181546463729995,Labels:map[string]string{io.kuberne
tes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-848420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d2aa889ac851ad814db4dfa91238d92,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdfa0db3b057e699276d54b2b3f5d44221f518346fbf2dca56dd8c98322f266d,PodSandboxId:2611b402a6004328244aa9c2c095daf6a3c05ee6c310e8c72ed03a2bf9b3b934,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726181546426085165,Labels:map[string]string{io.kubernetes.c
ontainer.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-848420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45fec635f096111e5ff0c5c065521359,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88f95c6dbfc59087141c6ef533959a9f0f9366f3a828779d409be5c2f6186dc6,PodSandboxId:9de2f677d172b1f8b3e72a5b13081592bae132145b88a91bc1d62a8f75234125,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726181546398034078,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-848420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a08e83ed6b2f9157f17cdf9b9641068,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14227a6604a60066486ffec4e25c57c4917e1629d20b25ac6257c75c763e7676,PodSandboxId:89baac321618621861fa5bc9a5b1e86aed0088cd574aa6e77e40ab8bf601cb87,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726181510230616474,Labels:map[string]s
tring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b04e1e14-c54d-4f17-bd35-a92ac9d321f0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcc2de3020f3087d4fe759a3da05be8d513812925c77d693bc1094edbd099b81,PodSandboxId:c6793a68644a9ec24011a83db384f8e91e3510f0213ccfce3448003fb88b85d5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726181509754134185,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5kch4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8920bf37-f050-4a2f-aa8f-3a9d856fde36,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64ed18f0c67bb60d08430dd44e0b5f9adeb1bf24342a62203ce0036e152ecff0,PodSandboxId:eb683af7b5809f9482ef2afcb5ed3a69ddc54776b2fa4d704e1e65a2d3998472,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726181509201974056,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.p
od.name: coredns-7c65d6cfc9-dhjpt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60d3d2ac-f60b-4bc3-8388-bb7031a11302,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2baa4067538d6d199fedd40def97917db42164cd5a730cdd1ed2222591a40a9,PodSandboxId:1acc362c60e7da39c6d51a6c7a01f4a7c95d1e828b2109ab7e01035b07f77e9b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6
9fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726181509110762410,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rzjkg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 500aa720-9d1b-4d40-a575-8e0fe9b97252,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28979b57b18ce2b18220bdda064b24bc7556a55714187d190d646e22009b9fc6,PodSandboxId:cd13546beedac2e156df7201c16f7fa94f299148a8abf174189aa99cec6c62f2,Metadata:&ContainerMetadata{
Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726181494588185196,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-848420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d2aa889ac851ad814db4dfa91238d92,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50de479dde2151fb6746f94327c8ea7d74f2f1b69006bd7c4fe31936a724d9de,PodSandboxId:2dede6246aff5efc09e6ef9bae58da8808e49a59164b42cca4ff1c800538687f,Metadata:&ContainerMetadata{Name:e
tcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726181494570896787,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-848420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fa0b026cf85e0a56b86ae8b4d1b24d7,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a619795fc2a28a5def7ca251d6f2ada108f49f4cebc4e243470fa468a67a209,PodSandboxId:a79cbe904a0612a0519190b7e1c26b0ed84658eaa0eeed0a2d395f0422bd0c60,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},I
mage:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726181494540061054,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-848420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a08e83ed6b2f9157f17cdf9b9641068,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57fceda398d0f6969f50710641a0b86a27d686cc34969fea838cbceb79909105,PodSandboxId:caad545b19c92b0e468a5d816dce32b675685a78cedab78700e55b51dadfab81,Metadata:&ContainerMetadata{Name:kube-apiserver,A
ttempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726181494460266179,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-848420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45fec635f096111e5ff0c5c065521359,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bb4ea8fb-b53f-41ee-86d1-e5062f3b3277 name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 22:52:33 kubernetes-upgrade-848420 crio[2233]: time="2024-09-12 22:52:33.741713529Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fa2b3c6e-c2d0-437e-8a3f-98dc53e5980b name=/runtime.v1.RuntimeService/Version
	Sep 12 22:52:33 kubernetes-upgrade-848420 crio[2233]: time="2024-09-12 22:52:33.741800956Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fa2b3c6e-c2d0-437e-8a3f-98dc53e5980b name=/runtime.v1.RuntimeService/Version
	Sep 12 22:52:33 kubernetes-upgrade-848420 crio[2233]: time="2024-09-12 22:52:33.742996661Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=98e0a6cc-6641-43c2-89fc-b651f640e69a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 22:52:33 kubernetes-upgrade-848420 crio[2233]: time="2024-09-12 22:52:33.743380384Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726181553743357576,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=98e0a6cc-6641-43c2-89fc-b651f640e69a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 22:52:33 kubernetes-upgrade-848420 crio[2233]: time="2024-09-12 22:52:33.743994869Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8bed2bf6-b8e7-4289-a9c9-50a5a625f211 name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 22:52:33 kubernetes-upgrade-848420 crio[2233]: time="2024-09-12 22:52:33.744192730Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8bed2bf6-b8e7-4289-a9c9-50a5a625f211 name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 22:52:33 kubernetes-upgrade-848420 crio[2233]: time="2024-09-12 22:52:33.744650547Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:45657f99c3836615e125d5c0b158a147dd4b96f02dcd80337e12331f36b43ce6,PodSandboxId:59066a476575c56175158d93a54ab2619f9439647ed6f8ff2b6328be047b3066,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726181550774695639,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dhjpt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60d3d2ac-f60b-4bc3-8388-bb7031a11302,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46a09699ee6f6644bab3f4c0a0d97a9a9b4b8a29a9dfe21cd99933fdd7266263,PodSandboxId:330d08f0808247d6b22c49cccfc11d20cde17923ebe2ed05387f205de256b7a3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726181550712718377,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rzjkg,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 500aa720-9d1b-4d40-a575-8e0fe9b97252,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ee02f22960aa6fd3d39d7a79a25565b6a40a50b964afe7a0b3d848971402fb7,PodSandboxId:981ed4f2247a4ab5b6d4d8e1c4ceb2db1bd2b9c4dabf03866a85f393052ef2f9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAIN
ER_RUNNING,CreatedAt:1726181550323010519,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5kch4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8920bf37-f050-4a2f-aa8f-3a9d856fde36,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb5cc2fc1eb420d84fdb7155822fd06cd9f6418492f1b3f261a08a80897b74be,PodSandboxId:6a70e18a006eb3ae5cbe4111ead8b2c730a1407a17b89ca3255f43cc6613752d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:172
6181550298367486,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b04e1e14-c54d-4f17-bd35-a92ac9d321f0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf95dd12eab6e8b6f2c9a7e68c3b421414f4ef7ed80414f9f3dbecc18eaff165,PodSandboxId:0a1978da085e425be847af356cf0e8505d8f91530772ddf2b5b9769a0175f331,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726181546512092947,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-848420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fa0b026cf85e0a56b86ae8b4d1b24d7,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:594808221f8d13b0b5753056d0106688dca09793c39e53823b1379ce16dbe712,PodSandboxId:e9b39850366cdbb911af5fd20c8531eebdbab223c4cb6a0b85dfea9958264dea,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726181546463729995,Labels:map[string]string{io.kuberne
tes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-848420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d2aa889ac851ad814db4dfa91238d92,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdfa0db3b057e699276d54b2b3f5d44221f518346fbf2dca56dd8c98322f266d,PodSandboxId:2611b402a6004328244aa9c2c095daf6a3c05ee6c310e8c72ed03a2bf9b3b934,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726181546426085165,Labels:map[string]string{io.kubernetes.c
ontainer.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-848420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45fec635f096111e5ff0c5c065521359,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88f95c6dbfc59087141c6ef533959a9f0f9366f3a828779d409be5c2f6186dc6,PodSandboxId:9de2f677d172b1f8b3e72a5b13081592bae132145b88a91bc1d62a8f75234125,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726181546398034078,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-848420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a08e83ed6b2f9157f17cdf9b9641068,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14227a6604a60066486ffec4e25c57c4917e1629d20b25ac6257c75c763e7676,PodSandboxId:89baac321618621861fa5bc9a5b1e86aed0088cd574aa6e77e40ab8bf601cb87,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726181510230616474,Labels:map[string]s
tring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b04e1e14-c54d-4f17-bd35-a92ac9d321f0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcc2de3020f3087d4fe759a3da05be8d513812925c77d693bc1094edbd099b81,PodSandboxId:c6793a68644a9ec24011a83db384f8e91e3510f0213ccfce3448003fb88b85d5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726181509754134185,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5kch4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8920bf37-f050-4a2f-aa8f-3a9d856fde36,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64ed18f0c67bb60d08430dd44e0b5f9adeb1bf24342a62203ce0036e152ecff0,PodSandboxId:eb683af7b5809f9482ef2afcb5ed3a69ddc54776b2fa4d704e1e65a2d3998472,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726181509201974056,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.p
od.name: coredns-7c65d6cfc9-dhjpt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60d3d2ac-f60b-4bc3-8388-bb7031a11302,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2baa4067538d6d199fedd40def97917db42164cd5a730cdd1ed2222591a40a9,PodSandboxId:1acc362c60e7da39c6d51a6c7a01f4a7c95d1e828b2109ab7e01035b07f77e9b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6
9fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726181509110762410,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rzjkg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 500aa720-9d1b-4d40-a575-8e0fe9b97252,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28979b57b18ce2b18220bdda064b24bc7556a55714187d190d646e22009b9fc6,PodSandboxId:cd13546beedac2e156df7201c16f7fa94f299148a8abf174189aa99cec6c62f2,Metadata:&ContainerMetadata{
Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726181494588185196,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-848420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d2aa889ac851ad814db4dfa91238d92,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50de479dde2151fb6746f94327c8ea7d74f2f1b69006bd7c4fe31936a724d9de,PodSandboxId:2dede6246aff5efc09e6ef9bae58da8808e49a59164b42cca4ff1c800538687f,Metadata:&ContainerMetadata{Name:e
tcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726181494570896787,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-848420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fa0b026cf85e0a56b86ae8b4d1b24d7,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a619795fc2a28a5def7ca251d6f2ada108f49f4cebc4e243470fa468a67a209,PodSandboxId:a79cbe904a0612a0519190b7e1c26b0ed84658eaa0eeed0a2d395f0422bd0c60,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},I
mage:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726181494540061054,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-848420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a08e83ed6b2f9157f17cdf9b9641068,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57fceda398d0f6969f50710641a0b86a27d686cc34969fea838cbceb79909105,PodSandboxId:caad545b19c92b0e468a5d816dce32b675685a78cedab78700e55b51dadfab81,Metadata:&ContainerMetadata{Name:kube-apiserver,A
ttempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726181494460266179,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-848420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45fec635f096111e5ff0c5c065521359,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8bed2bf6-b8e7-4289-a9c9-50a5a625f211 name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 22:52:33 kubernetes-upgrade-848420 crio[2233]: time="2024-09-12 22:52:33.781149690Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=442f37be-2cec-43a5-8587-650701fb484c name=/runtime.v1.RuntimeService/Version
	Sep 12 22:52:33 kubernetes-upgrade-848420 crio[2233]: time="2024-09-12 22:52:33.781238651Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=442f37be-2cec-43a5-8587-650701fb484c name=/runtime.v1.RuntimeService/Version
	Sep 12 22:52:33 kubernetes-upgrade-848420 crio[2233]: time="2024-09-12 22:52:33.782659176Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b8e9d5ee-3f54-44b7-99c4-63bce176572a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 22:52:33 kubernetes-upgrade-848420 crio[2233]: time="2024-09-12 22:52:33.783131534Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726181553783104039,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b8e9d5ee-3f54-44b7-99c4-63bce176572a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 22:52:33 kubernetes-upgrade-848420 crio[2233]: time="2024-09-12 22:52:33.784465460Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fc176079-64be-48f9-a860-cd7564fcc9d2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 22:52:33 kubernetes-upgrade-848420 crio[2233]: time="2024-09-12 22:52:33.784583763Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fc176079-64be-48f9-a860-cd7564fcc9d2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 22:52:33 kubernetes-upgrade-848420 crio[2233]: time="2024-09-12 22:52:33.784955812Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:45657f99c3836615e125d5c0b158a147dd4b96f02dcd80337e12331f36b43ce6,PodSandboxId:59066a476575c56175158d93a54ab2619f9439647ed6f8ff2b6328be047b3066,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726181550774695639,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dhjpt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60d3d2ac-f60b-4bc3-8388-bb7031a11302,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46a09699ee6f6644bab3f4c0a0d97a9a9b4b8a29a9dfe21cd99933fdd7266263,PodSandboxId:330d08f0808247d6b22c49cccfc11d20cde17923ebe2ed05387f205de256b7a3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726181550712718377,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rzjkg,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 500aa720-9d1b-4d40-a575-8e0fe9b97252,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ee02f22960aa6fd3d39d7a79a25565b6a40a50b964afe7a0b3d848971402fb7,PodSandboxId:981ed4f2247a4ab5b6d4d8e1c4ceb2db1bd2b9c4dabf03866a85f393052ef2f9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAIN
ER_RUNNING,CreatedAt:1726181550323010519,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5kch4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8920bf37-f050-4a2f-aa8f-3a9d856fde36,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb5cc2fc1eb420d84fdb7155822fd06cd9f6418492f1b3f261a08a80897b74be,PodSandboxId:6a70e18a006eb3ae5cbe4111ead8b2c730a1407a17b89ca3255f43cc6613752d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:172
6181550298367486,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b04e1e14-c54d-4f17-bd35-a92ac9d321f0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf95dd12eab6e8b6f2c9a7e68c3b421414f4ef7ed80414f9f3dbecc18eaff165,PodSandboxId:0a1978da085e425be847af356cf0e8505d8f91530772ddf2b5b9769a0175f331,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726181546512092947,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-848420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fa0b026cf85e0a56b86ae8b4d1b24d7,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:594808221f8d13b0b5753056d0106688dca09793c39e53823b1379ce16dbe712,PodSandboxId:e9b39850366cdbb911af5fd20c8531eebdbab223c4cb6a0b85dfea9958264dea,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726181546463729995,Labels:map[string]string{io.kuberne
tes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-848420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d2aa889ac851ad814db4dfa91238d92,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdfa0db3b057e699276d54b2b3f5d44221f518346fbf2dca56dd8c98322f266d,PodSandboxId:2611b402a6004328244aa9c2c095daf6a3c05ee6c310e8c72ed03a2bf9b3b934,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726181546426085165,Labels:map[string]string{io.kubernetes.c
ontainer.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-848420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45fec635f096111e5ff0c5c065521359,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88f95c6dbfc59087141c6ef533959a9f0f9366f3a828779d409be5c2f6186dc6,PodSandboxId:9de2f677d172b1f8b3e72a5b13081592bae132145b88a91bc1d62a8f75234125,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726181546398034078,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-848420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a08e83ed6b2f9157f17cdf9b9641068,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14227a6604a60066486ffec4e25c57c4917e1629d20b25ac6257c75c763e7676,PodSandboxId:89baac321618621861fa5bc9a5b1e86aed0088cd574aa6e77e40ab8bf601cb87,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726181510230616474,Labels:map[string]s
tring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b04e1e14-c54d-4f17-bd35-a92ac9d321f0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcc2de3020f3087d4fe759a3da05be8d513812925c77d693bc1094edbd099b81,PodSandboxId:c6793a68644a9ec24011a83db384f8e91e3510f0213ccfce3448003fb88b85d5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726181509754134185,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5kch4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8920bf37-f050-4a2f-aa8f-3a9d856fde36,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64ed18f0c67bb60d08430dd44e0b5f9adeb1bf24342a62203ce0036e152ecff0,PodSandboxId:eb683af7b5809f9482ef2afcb5ed3a69ddc54776b2fa4d704e1e65a2d3998472,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726181509201974056,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.p
od.name: coredns-7c65d6cfc9-dhjpt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60d3d2ac-f60b-4bc3-8388-bb7031a11302,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2baa4067538d6d199fedd40def97917db42164cd5a730cdd1ed2222591a40a9,PodSandboxId:1acc362c60e7da39c6d51a6c7a01f4a7c95d1e828b2109ab7e01035b07f77e9b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6
9fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726181509110762410,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rzjkg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 500aa720-9d1b-4d40-a575-8e0fe9b97252,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28979b57b18ce2b18220bdda064b24bc7556a55714187d190d646e22009b9fc6,PodSandboxId:cd13546beedac2e156df7201c16f7fa94f299148a8abf174189aa99cec6c62f2,Metadata:&ContainerMetadata{
Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726181494588185196,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-848420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d2aa889ac851ad814db4dfa91238d92,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50de479dde2151fb6746f94327c8ea7d74f2f1b69006bd7c4fe31936a724d9de,PodSandboxId:2dede6246aff5efc09e6ef9bae58da8808e49a59164b42cca4ff1c800538687f,Metadata:&ContainerMetadata{Name:e
tcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726181494570896787,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-848420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fa0b026cf85e0a56b86ae8b4d1b24d7,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a619795fc2a28a5def7ca251d6f2ada108f49f4cebc4e243470fa468a67a209,PodSandboxId:a79cbe904a0612a0519190b7e1c26b0ed84658eaa0eeed0a2d395f0422bd0c60,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},I
mage:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726181494540061054,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-848420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a08e83ed6b2f9157f17cdf9b9641068,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57fceda398d0f6969f50710641a0b86a27d686cc34969fea838cbceb79909105,PodSandboxId:caad545b19c92b0e468a5d816dce32b675685a78cedab78700e55b51dadfab81,Metadata:&ContainerMetadata{Name:kube-apiserver,A
ttempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726181494460266179,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-848420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45fec635f096111e5ff0c5c065521359,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fc176079-64be-48f9-a860-cd7564fcc9d2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 22:52:33 kubernetes-upgrade-848420 crio[2233]: time="2024-09-12 22:52:33.796091101Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f8631ae2-1f31-4d8f-a6bc-747df20a3c40 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 12 22:52:33 kubernetes-upgrade-848420 crio[2233]: time="2024-09-12 22:52:33.796385464Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:59066a476575c56175158d93a54ab2619f9439647ed6f8ff2b6328be047b3066,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-dhjpt,Uid:60d3d2ac-f60b-4bc3-8388-bb7031a11302,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726181550239271161,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-dhjpt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60d3d2ac-f60b-4bc3-8388-bb7031a11302,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-12T22:52:29.781744687Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:330d08f0808247d6b22c49cccfc11d20cde17923ebe2ed05387f205de256b7a3,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-rzjkg,Uid:500aa720-9d1b-4d40-a575-8e0fe9b97252,Namespac
e:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726181550236177681,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-rzjkg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 500aa720-9d1b-4d40-a575-8e0fe9b97252,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-12T22:52:29.781745696Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:981ed4f2247a4ab5b6d4d8e1c4ceb2db1bd2b9c4dabf03866a85f393052ef2f9,Metadata:&PodSandboxMetadata{Name:kube-proxy-5kch4,Uid:8920bf37-f050-4a2f-aa8f-3a9d856fde36,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726181550120108173,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-5kch4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8920bf37-f050-4a2f-aa8f-3a9d856fde36,k8s-app: kube-proxy,pod-template-generation: 1,},Annot
ations:map[string]string{kubernetes.io/config.seen: 2024-09-12T22:52:29.781740550Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6a70e18a006eb3ae5cbe4111ead8b2c730a1407a17b89ca3255f43cc6613752d,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:b04e1e14-c54d-4f17-bd35-a92ac9d321f0,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726181550100863049,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b04e1e14-c54d-4f17-bd35-a92ac9d321f0,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"conta
iners\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-12T22:52:29.781743420Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0a1978da085e425be847af356cf0e8505d8f91530772ddf2b5b9769a0175f331,Metadata:&PodSandboxMetadata{Name:etcd-kubernetes-upgrade-848420,Uid:7fa0b026cf85e0a56b86ae8b4d1b24d7,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726181546242099527,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-kubernetes-upgrade-848420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fa0b026cf85e0a56b86ae8b4d1b24d7,tier: control-plane,
},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.110:2379,kubernetes.io/config.hash: 7fa0b026cf85e0a56b86ae8b4d1b24d7,kubernetes.io/config.seen: 2024-09-12T22:52:25.773652989Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9de2f677d172b1f8b3e72a5b13081592bae132145b88a91bc1d62a8f75234125,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-kubernetes-upgrade-848420,Uid:5a08e83ed6b2f9157f17cdf9b9641068,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726181546241073206,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-848420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a08e83ed6b2f9157f17cdf9b9641068,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 5a08e83ed6b2f9157f17cdf9b9641068,kubernetes.io/config.seen: 2024-09-12T22:52:25.773660757Z,kubernetes.io/config.s
ource: file,},RuntimeHandler:,},&PodSandbox{Id:e9b39850366cdbb911af5fd20c8531eebdbab223c4cb6a0b85dfea9958264dea,Metadata:&PodSandboxMetadata{Name:kube-scheduler-kubernetes-upgrade-848420,Uid:9d2aa889ac851ad814db4dfa91238d92,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726181546238108176,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-848420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d2aa889ac851ad814db4dfa91238d92,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 9d2aa889ac851ad814db4dfa91238d92,kubernetes.io/config.seen: 2024-09-12T22:52:25.773662054Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2611b402a6004328244aa9c2c095daf6a3c05ee6c310e8c72ed03a2bf9b3b934,Metadata:&PodSandboxMetadata{Name:kube-apiserver-kubernetes-upgrade-848420,Uid:45fec635f096111e5ff0c5c065521359,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY
,CreatedAt:1726181546217183588,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-848420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45fec635f096111e5ff0c5c065521359,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.110:8443,kubernetes.io/config.hash: 45fec635f096111e5ff0c5c065521359,kubernetes.io/config.seen: 2024-09-12T22:52:25.773658999Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=f8631ae2-1f31-4d8f-a6bc-747df20a3c40 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 12 22:52:33 kubernetes-upgrade-848420 crio[2233]: time="2024-09-12 22:52:33.797171319Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4afdcb80-e64d-4fb2-bd8c-35e064588932 name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 22:52:33 kubernetes-upgrade-848420 crio[2233]: time="2024-09-12 22:52:33.797243236Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4afdcb80-e64d-4fb2-bd8c-35e064588932 name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 22:52:33 kubernetes-upgrade-848420 crio[2233]: time="2024-09-12 22:52:33.797417861Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:45657f99c3836615e125d5c0b158a147dd4b96f02dcd80337e12331f36b43ce6,PodSandboxId:59066a476575c56175158d93a54ab2619f9439647ed6f8ff2b6328be047b3066,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726181550774695639,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dhjpt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60d3d2ac-f60b-4bc3-8388-bb7031a11302,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46a09699ee6f6644bab3f4c0a0d97a9a9b4b8a29a9dfe21cd99933fdd7266263,PodSandboxId:330d08f0808247d6b22c49cccfc11d20cde17923ebe2ed05387f205de256b7a3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726181550712718377,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rzjkg,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 500aa720-9d1b-4d40-a575-8e0fe9b97252,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ee02f22960aa6fd3d39d7a79a25565b6a40a50b964afe7a0b3d848971402fb7,PodSandboxId:981ed4f2247a4ab5b6d4d8e1c4ceb2db1bd2b9c4dabf03866a85f393052ef2f9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAIN
ER_RUNNING,CreatedAt:1726181550323010519,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5kch4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8920bf37-f050-4a2f-aa8f-3a9d856fde36,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb5cc2fc1eb420d84fdb7155822fd06cd9f6418492f1b3f261a08a80897b74be,PodSandboxId:6a70e18a006eb3ae5cbe4111ead8b2c730a1407a17b89ca3255f43cc6613752d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:172
6181550298367486,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b04e1e14-c54d-4f17-bd35-a92ac9d321f0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf95dd12eab6e8b6f2c9a7e68c3b421414f4ef7ed80414f9f3dbecc18eaff165,PodSandboxId:0a1978da085e425be847af356cf0e8505d8f91530772ddf2b5b9769a0175f331,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726181546512092947,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-848420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fa0b026cf85e0a56b86ae8b4d1b24d7,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:594808221f8d13b0b5753056d0106688dca09793c39e53823b1379ce16dbe712,PodSandboxId:e9b39850366cdbb911af5fd20c8531eebdbab223c4cb6a0b85dfea9958264dea,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726181546463729995,Labels:map[string]string{io.kuberne
tes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-848420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d2aa889ac851ad814db4dfa91238d92,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdfa0db3b057e699276d54b2b3f5d44221f518346fbf2dca56dd8c98322f266d,PodSandboxId:2611b402a6004328244aa9c2c095daf6a3c05ee6c310e8c72ed03a2bf9b3b934,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726181546426085165,Labels:map[string]string{io.kubernetes.c
ontainer.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-848420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45fec635f096111e5ff0c5c065521359,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88f95c6dbfc59087141c6ef533959a9f0f9366f3a828779d409be5c2f6186dc6,PodSandboxId:9de2f677d172b1f8b3e72a5b13081592bae132145b88a91bc1d62a8f75234125,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726181546398034078,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-848420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a08e83ed6b2f9157f17cdf9b9641068,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4afdcb80-e64d-4fb2-bd8c-35e064588932 name=/runtime.v1.RuntimeService/ListContainers
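
	(For reference: the ListContainersRequest/ListContainersResponse entries above are CRI-O's debug-level gRPC interceptor logging, as the otel-collector/interceptors.go file hints. Assuming the standard minikube systemd setup for this profile, the same stream can be pulled from the node with something like:

	    minikube -p kubernetes-upgrade-848420 ssh -- sudo journalctl -u crio --no-pager -n 50
	)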
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	45657f99c3836       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   3 seconds ago       Running             coredns                   1                   59066a476575c       coredns-7c65d6cfc9-dhjpt
	46a09699ee6f6       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   3 seconds ago       Running             coredns                   1                   330d08f080824       coredns-7c65d6cfc9-rzjkg
	3ee02f22960aa       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   3 seconds ago       Running             kube-proxy                1                   981ed4f2247a4       kube-proxy-5kch4
	bb5cc2fc1eb42       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   3 seconds ago       Running             storage-provisioner       1                   6a70e18a006eb       storage-provisioner
	cf95dd12eab6e       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   7 seconds ago       Running             etcd                      1                   0a1978da085e4       etcd-kubernetes-upgrade-848420
	594808221f8d1       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   7 seconds ago       Running             kube-scheduler            1                   e9b39850366cd       kube-scheduler-kubernetes-upgrade-848420
	fdfa0db3b057e       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   7 seconds ago       Running             kube-apiserver            1                   2611b402a6004       kube-apiserver-kubernetes-upgrade-848420
	88f95c6dbfc59       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   7 seconds ago       Running             kube-controller-manager   1                   9de2f677d172b       kube-controller-manager-kubernetes-upgrade-848420
	14227a6604a60       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   43 seconds ago      Exited              storage-provisioner       0                   89baac3216186       storage-provisioner
	bcc2de3020f30       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   44 seconds ago      Exited              kube-proxy                0                   c6793a68644a9       kube-proxy-5kch4
	64ed18f0c67bb       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   44 seconds ago      Exited              coredns                   0                   eb683af7b5809       coredns-7c65d6cfc9-dhjpt
	b2baa4067538d       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   44 seconds ago      Exited              coredns                   0                   1acc362c60e7d       coredns-7c65d6cfc9-rzjkg
	28979b57b18ce       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   59 seconds ago      Exited              kube-scheduler            0                   cd13546beedac       kube-scheduler-kubernetes-upgrade-848420
	50de479dde215       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   59 seconds ago      Exited              etcd                      0                   2dede6246aff5       etcd-kubernetes-upgrade-848420
	9a619795fc2a2       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   59 seconds ago      Exited              kube-controller-manager   0                   a79cbe904a061       kube-controller-manager-kubernetes-upgrade-848420
	57fceda398d0f       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   59 seconds ago      Exited              kube-apiserver            0                   caad545b19c92       kube-apiserver-kubernetes-upgrade-848420
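
	(The container status table above follows crictl's listing format. Assuming the profile name used by this test, it can be reproduced on the node with:

	    minikube -p kubernetes-upgrade-848420 ssh -- sudo crictl ps -a
	)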
	
	
	==> coredns [45657f99c3836615e125d5c0b158a147dd4b96f02dcd80337e12331f36b43ce6] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [46a09699ee6f6644bab3f4c0a0d97a9a9b4b8a29a9dfe21cd99933fdd7266263] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [64ed18f0c67bb60d08430dd44e0b5f9adeb1bf24342a62203ce0036e152ecff0] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/kubernetes: Trace[1727013221]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (12-Sep-2024 22:51:49.652) (total time: 21792ms):
	Trace[1727013221]: [21.792468635s] [21.792468635s] END
	[INFO] plugin/kubernetes: Trace[105437076]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (12-Sep-2024 22:51:49.652) (total time: 21794ms):
	Trace[105437076]: [21.794817527s] [21.794817527s] END
	[INFO] plugin/kubernetes: Trace[707622496]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (12-Sep-2024 22:51:49.652) (total time: 21793ms):
	Trace[707622496]: [21.793343581s] [21.793343581s] END
	
	
	==> coredns [b2baa4067538d6d199fedd40def97917db42164cd5a730cdd1ed2222591a40a9] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/kubernetes: Trace[382334676]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (12-Sep-2024 22:51:49.613) (total time: 21831ms):
	Trace[382334676]: [21.831956211s] [21.831956211s] END
	[INFO] plugin/kubernetes: Trace[1941474933]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (12-Sep-2024 22:51:49.613) (total time: 21831ms):
	Trace[1941474933]: [21.831791803s] [21.831791803s] END
	[INFO] plugin/kubernetes: Trace[1791427525]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (12-Sep-2024 22:51:49.619) (total time: 21825ms):
	Trace[1791427525]: [21.82547759s] [21.82547759s] END
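
	(Both exited coredns containers logged "waiting for Kubernetes API" until the apiserver came up, then received SIGTERM when the upgrade restarted the control plane. To inspect the coredns pods directly, assuming the default k8s-app=kube-dns label on the coredns deployment, something like:

	    kubectl --context kubernetes-upgrade-848420 -n kube-system logs -l k8s-app=kube-dns --tail=20

	with --previous added to read the exited instances.)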
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-848420
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-848420
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 12 Sep 2024 22:51:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-848420
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 12 Sep 2024 22:52:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 12 Sep 2024 22:52:29 +0000   Thu, 12 Sep 2024 22:51:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 12 Sep 2024 22:52:29 +0000   Thu, 12 Sep 2024 22:51:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 12 Sep 2024 22:52:29 +0000   Thu, 12 Sep 2024 22:51:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 12 Sep 2024 22:52:29 +0000   Thu, 12 Sep 2024 22:51:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.110
	  Hostname:    kubernetes-upgrade-848420
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 cef8385cf39f47968acfbaefae8bfc02
	  System UUID:                cef8385c-f39f-4796-8acf-baefae8bfc02
	  Boot ID:                    165100d5-8590-44c8-98e1-9e96ac99e33f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-dhjpt                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     46s
	  kube-system                 coredns-7c65d6cfc9-rzjkg                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     46s
	  kube-system                 etcd-kubernetes-upgrade-848420                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         48s
	  kube-system                 kube-apiserver-kubernetes-upgrade-848420             250m (12%)    0 (0%)      0 (0%)           0 (0%)         51s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-848420    200m (10%)    0 (0%)      0 (0%)           0 (0%)         48s
	  kube-system                 kube-proxy-5kch4                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	  kube-system                 kube-scheduler-kubernetes-upgrade-848420             100m (5%)     0 (0%)      0 (0%)           0 (0%)         48s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 3s                 kube-proxy       
	  Normal  Starting                 43s                kube-proxy       
	  Normal  NodeAllocatableEnforced  62s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    61s (x8 over 64s)  kubelet          Node kubernetes-upgrade-848420 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     61s (x7 over 64s)  kubelet          Node kubernetes-upgrade-848420 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  61s (x8 over 64s)  kubelet          Node kubernetes-upgrade-848420 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           50s                node-controller  Node kubernetes-upgrade-848420 event: Registered Node kubernetes-upgrade-848420 in Controller
	  Normal  Starting                 9s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9s (x8 over 9s)    kubelet          Node kubernetes-upgrade-848420 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9s (x8 over 9s)    kubelet          Node kubernetes-upgrade-848420 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9s (x7 over 9s)    kubelet          Node kubernetes-upgrade-848420 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2s                 node-controller  Node kubernetes-upgrade-848420 event: Registered Node kubernetes-upgrade-848420 in Controller
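
	(The node description above can be regenerated with kubectl, assuming the kubeconfig context minikube creates under the profile name:

	    kubectl --context kubernetes-upgrade-848420 describe node kubernetes-upgrade-848420
	)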
	
	
	==> dmesg <==
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.944095] systemd-fstab-generator[551]: Ignoring "noauto" option for root device
	[  +0.063097] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.053559] systemd-fstab-generator[563]: Ignoring "noauto" option for root device
	[  +0.191143] systemd-fstab-generator[577]: Ignoring "noauto" option for root device
	[  +0.144634] systemd-fstab-generator[589]: Ignoring "noauto" option for root device
	[  +0.277750] systemd-fstab-generator[619]: Ignoring "noauto" option for root device
	[  +4.115558] systemd-fstab-generator[711]: Ignoring "noauto" option for root device
	[  +1.870906] systemd-fstab-generator[832]: Ignoring "noauto" option for root device
	[  +0.059443] kauditd_printk_skb: 158 callbacks suppressed
	[ +14.616293] kauditd_printk_skb: 69 callbacks suppressed
	[  +2.851487] systemd-fstab-generator[1217]: Ignoring "noauto" option for root device
	[  +2.239605] kauditd_printk_skb: 73 callbacks suppressed
	[Sep12 22:52] systemd-fstab-generator[2158]: Ignoring "noauto" option for root device
	[  +0.084437] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.057850] systemd-fstab-generator[2170]: Ignoring "noauto" option for root device
	[  +0.200405] systemd-fstab-generator[2184]: Ignoring "noauto" option for root device
	[  +0.138835] systemd-fstab-generator[2196]: Ignoring "noauto" option for root device
	[  +0.288647] systemd-fstab-generator[2224]: Ignoring "noauto" option for root device
	[  +4.294312] systemd-fstab-generator[2369]: Ignoring "noauto" option for root device
	[  +0.091128] kauditd_printk_skb: 100 callbacks suppressed
	[  +2.322960] systemd-fstab-generator[2492]: Ignoring "noauto" option for root device
	[  +4.537084] kauditd_printk_skb: 74 callbacks suppressed
	[  +1.752215] systemd-fstab-generator[3399]: Ignoring "noauto" option for root device
	
	
	==> etcd [50de479dde2151fb6746f94327c8ea7d74f2f1b69006bd7c4fe31936a724d9de] <==
	{"level":"info","ts":"2024-09-12T22:51:35.286833Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.110:2379"}
	{"level":"info","ts":"2024-09-12T22:51:35.288157Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-12T22:51:35.290636Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-12T22:51:35.292675Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-12T22:51:35.292802Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-12T22:51:35.292852Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-12T22:51:53.556612Z","caller":"traceutil/trace.go:171","msg":"trace[1627227310] linearizableReadLoop","detail":"{readStateIndex:384; appliedIndex:383; }","duration":"261.813357ms","start":"2024-09-12T22:51:53.294776Z","end":"2024-09-12T22:51:53.556589Z","steps":["trace[1627227310] 'read index received'  (duration: 261.473434ms)","trace[1627227310] 'applied index is now lower than readState.Index'  (duration: 339.176µs)"],"step_count":2}
	{"level":"info","ts":"2024-09-12T22:51:53.556874Z","caller":"traceutil/trace.go:171","msg":"trace[1984055551] transaction","detail":"{read_only:false; response_revision:374; number_of_response:1; }","duration":"335.637964ms","start":"2024-09-12T22:51:53.221225Z","end":"2024-09-12T22:51:53.556863Z","steps":["trace[1984055551] 'process raft request'  (duration: 335.164372ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-12T22:51:53.556972Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"262.091356ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-12T22:51:53.557080Z","caller":"traceutil/trace.go:171","msg":"trace[1283573454] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:374; }","duration":"262.297414ms","start":"2024-09-12T22:51:53.294770Z","end":"2024-09-12T22:51:53.557068Z","steps":["trace[1283573454] 'agreement among raft nodes before linearized reading'  (duration: 262.014958ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-12T22:51:53.560066Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-12T22:51:53.221206Z","time spent":"335.718027ms","remote":"127.0.0.1:42400","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":7086,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-apiserver-kubernetes-upgrade-848420\" mod_revision:288 > success:<request_put:<key:\"/registry/pods/kube-system/kube-apiserver-kubernetes-upgrade-848420\" value_size:7011 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-apiserver-kubernetes-upgrade-848420\" > >"}
	{"level":"warn","ts":"2024-09-12T22:51:54.259320Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"410.093984ms","expected-duration":"100ms","prefix":"","request":"header:<ID:11932447640204130944 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-apiserver-kubernetes-upgrade-848420\" mod_revision:374 > success:<request_put:<key:\"/registry/pods/kube-system/kube-apiserver-kubernetes-upgrade-848420\" value_size:6819 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-apiserver-kubernetes-upgrade-848420\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-09-12T22:51:54.259447Z","caller":"traceutil/trace.go:171","msg":"trace[1615793015] transaction","detail":"{read_only:false; response_revision:375; number_of_response:1; }","duration":"688.224723ms","start":"2024-09-12T22:51:53.571185Z","end":"2024-09-12T22:51:54.259409Z","steps":["trace[1615793015] 'process raft request'  (duration: 277.79664ms)","trace[1615793015] 'compare'  (duration: 409.945735ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-12T22:51:54.259567Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-12T22:51:53.571161Z","time spent":"688.314907ms","remote":"127.0.0.1:42400","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":6894,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-apiserver-kubernetes-upgrade-848420\" mod_revision:374 > success:<request_put:<key:\"/registry/pods/kube-system/kube-apiserver-kubernetes-upgrade-848420\" value_size:6819 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-apiserver-kubernetes-upgrade-848420\" > >"}
	{"level":"warn","ts":"2024-09-12T22:51:54.792805Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"159.731131ms","expected-duration":"100ms","prefix":"","request":"header:<ID:11932447640204130949 > lease_revoke:<id:259891e86f50b926>","response":"size:29"}
	{"level":"info","ts":"2024-09-12T22:52:11.446780Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-12T22:52:11.446820Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"kubernetes-upgrade-848420","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.110:2380"],"advertise-client-urls":["https://192.168.39.110:2379"]}
	{"level":"warn","ts":"2024-09-12T22:52:11.446958Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.110:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-12T22:52:11.446996Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.110:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-12T22:52:11.447057Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-12T22:52:11.447139Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-12T22:52:11.512002Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"fbb007bab925a598","current-leader-member-id":"fbb007bab925a598"}
	{"level":"info","ts":"2024-09-12T22:52:11.514483Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.110:2380"}
	{"level":"info","ts":"2024-09-12T22:52:11.514647Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.110:2380"}
	{"level":"info","ts":"2024-09-12T22:52:11.514663Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"kubernetes-upgrade-848420","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.110:2380"],"advertise-client-urls":["https://192.168.39.110:2379"]}
	
	
	==> etcd [cf95dd12eab6e8b6f2c9a7e68c3b421414f4ef7ed80414f9f3dbecc18eaff165] <==
	{"level":"info","ts":"2024-09-12T22:52:27.044439Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-12T22:52:27.044552Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-12T22:52:27.044620Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-12T22:52:27.045042Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.110:2380"}
	{"level":"info","ts":"2024-09-12T22:52:27.045113Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.110:2380"}
	{"level":"info","ts":"2024-09-12T22:52:27.049273Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbb007bab925a598 switched to configuration voters=(18136004197972551064)"}
	{"level":"info","ts":"2024-09-12T22:52:27.049349Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"a3dbfa6decfc8853","local-member-id":"fbb007bab925a598","added-peer-id":"fbb007bab925a598","added-peer-peer-urls":["https://192.168.39.110:2380"]}
	{"level":"info","ts":"2024-09-12T22:52:27.049451Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"a3dbfa6decfc8853","local-member-id":"fbb007bab925a598","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-12T22:52:27.049660Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-12T22:52:27.996647Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbb007bab925a598 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-12T22:52:27.996687Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbb007bab925a598 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-12T22:52:27.996731Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbb007bab925a598 received MsgPreVoteResp from fbb007bab925a598 at term 2"}
	{"level":"info","ts":"2024-09-12T22:52:27.996766Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbb007bab925a598 became candidate at term 3"}
	{"level":"info","ts":"2024-09-12T22:52:27.996772Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbb007bab925a598 received MsgVoteResp from fbb007bab925a598 at term 3"}
	{"level":"info","ts":"2024-09-12T22:52:27.996780Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbb007bab925a598 became leader at term 3"}
	{"level":"info","ts":"2024-09-12T22:52:27.996787Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: fbb007bab925a598 elected leader fbb007bab925a598 at term 3"}
	{"level":"info","ts":"2024-09-12T22:52:28.003330Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"fbb007bab925a598","local-member-attributes":"{Name:kubernetes-upgrade-848420 ClientURLs:[https://192.168.39.110:2379]}","request-path":"/0/members/fbb007bab925a598/attributes","cluster-id":"a3dbfa6decfc8853","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-12T22:52:28.003616Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-12T22:52:28.004572Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-12T22:52:28.004903Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-12T22:52:28.004943Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-12T22:52:28.005422Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-12T22:52:28.006232Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-12T22:52:28.007678Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-12T22:52:28.008559Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.110:2379"}
	
	
	==> kernel <==
	 22:52:34 up 1 min,  0 users,  load average: 1.71, 0.59, 0.21
	Linux kubernetes-upgrade-848420 5.10.207 #1 SMP Thu Sep 12 19:03:33 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [57fceda398d0f6969f50710641a0b86a27d686cc34969fea838cbceb79909105] <==
	I0912 22:52:11.457090       1 local_available_controller.go:172] Shutting down LocalAvailability controller
	I0912 22:52:11.457103       1 controller.go:132] Ending legacy_token_tracking_controller
	I0912 22:52:11.457108       1 controller.go:133] Shutting down legacy_token_tracking_controller
	I0912 22:52:11.457121       1 crdregistration_controller.go:145] Shutting down crd-autoregister controller
	I0912 22:52:11.457141       1 remote_available_controller.go:427] Shutting down RemoteAvailability controller
	I0912 22:52:11.457180       1 apiservice_controller.go:134] Shutting down APIServiceRegistrationController
	I0912 22:52:11.457197       1 system_namespaces_controller.go:76] Shutting down system namespaces controller
	I0912 22:52:11.457208       1 gc_controller.go:91] Shutting down apiserver lease garbage collector
	I0912 22:52:11.457634       1 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0912 22:52:11.457663       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I0912 22:52:11.458614       1 dynamic_serving_content.go:149] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0912 22:52:11.458707       1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"
	I0912 22:52:11.458725       1 controller.go:84] Shutting down OpenAPI AggregationController
	I0912 22:52:11.459034       1 secure_serving.go:258] Stopped listening on [::]:8443
	I0912 22:52:11.459059       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0912 22:52:11.459255       1 controller.go:157] Shutting down quota evaluator
	I0912 22:52:11.459288       1 controller.go:176] quota evaluator worker shutdown
	I0912 22:52:11.459383       1 dynamic_serving_content.go:149] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0912 22:52:11.459869       1 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0912 22:52:11.464890       1 controller.go:176] quota evaluator worker shutdown
	I0912 22:52:11.465015       1 controller.go:176] quota evaluator worker shutdown
	I0912 22:52:11.465049       1 controller.go:176] quota evaluator worker shutdown
	I0912 22:52:11.465132       1 controller.go:176] quota evaluator worker shutdown
	W0912 22:52:11.472815       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 22:52:11.472815       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [fdfa0db3b057e699276d54b2b3f5d44221f518346fbf2dca56dd8c98322f266d] <==
	I0912 22:52:29.466336       1 policy_source.go:224] refreshing policies
	I0912 22:52:29.466472       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0912 22:52:29.466624       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0912 22:52:29.466689       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0912 22:52:29.466922       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0912 22:52:29.467547       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0912 22:52:29.472656       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0912 22:52:29.473062       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	E0912 22:52:29.490756       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0912 22:52:29.521488       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0912 22:52:29.521559       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0912 22:52:29.525389       1 shared_informer.go:320] Caches are synced for configmaps
	I0912 22:52:29.529207       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0912 22:52:29.529348       1 aggregator.go:171] initial CRD sync complete...
	I0912 22:52:29.529379       1 autoregister_controller.go:144] Starting autoregister controller
	I0912 22:52:29.529401       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0912 22:52:29.529423       1 cache.go:39] Caches are synced for autoregister controller
	I0912 22:52:30.327463       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0912 22:52:30.576938       1 controller.go:615] quota admission added evaluator for: endpoints
	I0912 22:52:31.578554       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0912 22:52:31.594141       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0912 22:52:31.652188       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0912 22:52:31.750227       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0912 22:52:31.757193       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0912 22:52:32.784945       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [88f95c6dbfc59087141c6ef533959a9f0f9366f3a828779d409be5c2f6186dc6] <==
	I0912 22:52:32.783545       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0912 22:52:32.785546       1 shared_informer.go:320] Caches are synced for node
	I0912 22:52:32.785671       1 range_allocator.go:171] "Sending events to api server" logger="node-ipam-controller"
	I0912 22:52:32.785713       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0912 22:52:32.785737       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0912 22:52:32.785762       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0912 22:52:32.785887       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0912 22:52:32.786054       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="kubernetes-upgrade-848420"
	I0912 22:52:32.789219       1 shared_informer.go:320] Caches are synced for daemon sets
	I0912 22:52:32.796586       1 shared_informer.go:320] Caches are synced for job
	I0912 22:52:32.796702       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0912 22:52:32.802305       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0912 22:52:32.805459       1 shared_informer.go:320] Caches are synced for endpoint
	I0912 22:52:32.810144       1 shared_informer.go:320] Caches are synced for GC
	I0912 22:52:32.892332       1 shared_informer.go:320] Caches are synced for resource quota
	I0912 22:52:32.933767       1 shared_informer.go:320] Caches are synced for disruption
	I0912 22:52:32.950581       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0912 22:52:32.958802       1 shared_informer.go:320] Caches are synced for resource quota
	I0912 22:52:33.100762       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="303.95583ms"
	I0912 22:52:33.101072       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="66.725µs"
	I0912 22:52:33.417052       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="61.315068ms"
	I0912 22:52:33.419095       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="1.979499ms"
	I0912 22:52:33.430788       1 shared_informer.go:320] Caches are synced for garbage collector
	I0912 22:52:33.430868       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0912 22:52:33.442801       1 shared_informer.go:320] Caches are synced for garbage collector
	
	
	==> kube-controller-manager [9a619795fc2a28a5def7ca251d6f2ada108f49f4cebc4e243470fa468a67a209] <==
	I0912 22:51:44.268664       1 shared_informer.go:320] Caches are synced for node
	I0912 22:51:44.268770       1 range_allocator.go:171] "Sending events to api server" logger="node-ipam-controller"
	I0912 22:51:44.268812       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0912 22:51:44.268835       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0912 22:51:44.268858       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0912 22:51:44.275311       1 shared_informer.go:320] Caches are synced for persistent volume
	I0912 22:51:44.297915       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="kubernetes-upgrade-848420" podCIDRs=["10.244.0.0/24"]
	I0912 22:51:44.297964       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="kubernetes-upgrade-848420"
	I0912 22:51:44.298063       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="kubernetes-upgrade-848420"
	I0912 22:51:44.315101       1 shared_informer.go:320] Caches are synced for attach detach
	I0912 22:51:44.454732       1 shared_informer.go:320] Caches are synced for disruption
	I0912 22:51:44.459677       1 shared_informer.go:320] Caches are synced for stateful set
	I0912 22:51:44.461914       1 shared_informer.go:320] Caches are synced for resource quota
	I0912 22:51:44.462736       1 shared_informer.go:320] Caches are synced for resource quota
	I0912 22:51:44.567440       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="kubernetes-upgrade-848420"
	I0912 22:51:44.894853       1 shared_informer.go:320] Caches are synced for garbage collector
	I0912 22:51:44.954739       1 shared_informer.go:320] Caches are synced for garbage collector
	I0912 22:51:44.954777       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0912 22:51:47.909046       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="kubernetes-upgrade-848420"
	I0912 22:51:48.196847       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="227.869788ms"
	I0912 22:51:48.283638       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="86.531633ms"
	I0912 22:51:48.284154       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="462.128µs"
	I0912 22:51:48.364989       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="75.756µs"
	I0912 22:51:50.018182       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="112.877µs"
	I0912 22:51:50.053727       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="91.437µs"
	
	
	==> kube-proxy [3ee02f22960aa6fd3d39d7a79a25565b6a40a50b964afe7a0b3d848971402fb7] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0912 22:52:30.901647       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0912 22:52:30.917379       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.110"]
	E0912 22:52:30.917478       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0912 22:52:30.996610       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0912 22:52:30.996660       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0912 22:52:30.996686       1 server_linux.go:169] "Using iptables Proxier"
	I0912 22:52:31.002871       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0912 22:52:31.004600       1 server.go:483] "Version info" version="v1.31.1"
	I0912 22:52:31.004659       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0912 22:52:31.014380       1 config.go:199] "Starting service config controller"
	I0912 22:52:31.014426       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0912 22:52:31.014460       1 config.go:105] "Starting endpoint slice config controller"
	I0912 22:52:31.014476       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0912 22:52:31.017345       1 config.go:328] "Starting node config controller"
	I0912 22:52:31.017376       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0912 22:52:31.116175       1 shared_informer.go:320] Caches are synced for service config
	I0912 22:52:31.116528       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0912 22:52:31.119026       1 shared_informer.go:320] Caches are synced for node config
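
	(The nftables cleanup errors at the top of this log appear non-fatal here: kube-proxy continues with "Using iptables Proxier" and its caches sync. A rough check that the proxier programmed its NAT chains on the node, assuming the usual KUBE-SVC-* chain naming, would be:

	    minikube -p kubernetes-upgrade-848420 ssh -- "sudo iptables-save -t nat | grep KUBE-SVC"
	)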
	
	
	==> kube-proxy [bcc2de3020f3087d4fe759a3da05be8d513812925c77d693bc1094edbd099b81] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0912 22:51:50.277316       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0912 22:51:50.339303       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.110"]
	E0912 22:51:50.339420       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0912 22:51:50.393908       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0912 22:51:50.394004       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0912 22:51:50.394042       1 server_linux.go:169] "Using iptables Proxier"
	I0912 22:51:50.398062       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0912 22:51:50.399347       1 server.go:483] "Version info" version="v1.31.1"
	I0912 22:51:50.399405       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0912 22:51:50.402914       1 config.go:199] "Starting service config controller"
	I0912 22:51:50.403591       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0912 22:51:50.403868       1 config.go:105] "Starting endpoint slice config controller"
	I0912 22:51:50.403896       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0912 22:51:50.405578       1 config.go:328] "Starting node config controller"
	I0912 22:51:50.405601       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0912 22:51:50.504672       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0912 22:51:50.504678       1 shared_informer.go:320] Caches are synced for service config
	I0912 22:51:50.506255       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [28979b57b18ce2b18220bdda064b24bc7556a55714187d190d646e22009b9fc6] <==
	E0912 22:51:37.592207       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0912 22:51:37.592323       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0912 22:51:37.592346       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0912 22:51:38.512673       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0912 22:51:38.512724       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0912 22:51:38.520609       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0912 22:51:38.520725       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0912 22:51:38.549610       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0912 22:51:38.549659       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0912 22:51:38.657251       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0912 22:51:38.657344       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0912 22:51:38.663286       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0912 22:51:38.665171       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0912 22:51:38.680126       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0912 22:51:38.680185       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0912 22:51:38.686899       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0912 22:51:38.686958       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0912 22:51:38.717560       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0912 22:51:38.717621       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0912 22:51:38.784861       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0912 22:51:38.784910       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0912 22:51:38.809815       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0912 22:51:38.809862       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0912 22:51:41.371805       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0912 22:52:11.442925       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [594808221f8d13b0b5753056d0106688dca09793c39e53823b1379ce16dbe712] <==
	I0912 22:52:27.545976       1 serving.go:386] Generated self-signed cert in-memory
	I0912 22:52:29.491405       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0912 22:52:29.491536       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0912 22:52:29.504992       1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController
	I0912 22:52:29.505189       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0912 22:52:29.505109       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0912 22:52:29.505395       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0912 22:52:29.505058       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0912 22:52:29.505082       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0912 22:52:29.505118       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0912 22:52:29.505680       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0912 22:52:29.606007       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I0912 22:52:29.606128       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0912 22:52:29.606150       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 12 22:52:25 kubernetes-upgrade-848420 kubelet[2499]: I0912 22:52:25.996667    2499 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5a08e83ed6b2f9157f17cdf9b9641068-kubeconfig\") pod \"kube-controller-manager-kubernetes-upgrade-848420\" (UID: \"5a08e83ed6b2f9157f17cdf9b9641068\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-848420"
	Sep 12 22:52:25 kubernetes-upgrade-848420 kubelet[2499]: I0912 22:52:25.996684    2499 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/7fa0b026cf85e0a56b86ae8b4d1b24d7-etcd-data\") pod \"etcd-kubernetes-upgrade-848420\" (UID: \"7fa0b026cf85e0a56b86ae8b4d1b24d7\") " pod="kube-system/etcd-kubernetes-upgrade-848420"
	Sep 12 22:52:25 kubernetes-upgrade-848420 kubelet[2499]: I0912 22:52:25.996702    2499 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/45fec635f096111e5ff0c5c065521359-usr-share-ca-certificates\") pod \"kube-apiserver-kubernetes-upgrade-848420\" (UID: \"45fec635f096111e5ff0c5c065521359\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-848420"
	Sep 12 22:52:25 kubernetes-upgrade-848420 kubelet[2499]: I0912 22:52:25.996723    2499 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/45fec635f096111e5ff0c5c065521359-k8s-certs\") pod \"kube-apiserver-kubernetes-upgrade-848420\" (UID: \"45fec635f096111e5ff0c5c065521359\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-848420"
	Sep 12 22:52:25 kubernetes-upgrade-848420 kubelet[2499]: I0912 22:52:25.996750    2499 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5a08e83ed6b2f9157f17cdf9b9641068-ca-certs\") pod \"kube-controller-manager-kubernetes-upgrade-848420\" (UID: \"5a08e83ed6b2f9157f17cdf9b9641068\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-848420"
	Sep 12 22:52:25 kubernetes-upgrade-848420 kubelet[2499]: I0912 22:52:25.996809    2499 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5a08e83ed6b2f9157f17cdf9b9641068-usr-share-ca-certificates\") pod \"kube-controller-manager-kubernetes-upgrade-848420\" (UID: \"5a08e83ed6b2f9157f17cdf9b9641068\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-848420"
	Sep 12 22:52:25 kubernetes-upgrade-848420 kubelet[2499]: I0912 22:52:25.996849    2499 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9d2aa889ac851ad814db4dfa91238d92-kubeconfig\") pod \"kube-scheduler-kubernetes-upgrade-848420\" (UID: \"9d2aa889ac851ad814db4dfa91238d92\") " pod="kube-system/kube-scheduler-kubernetes-upgrade-848420"
	Sep 12 22:52:25 kubernetes-upgrade-848420 kubelet[2499]: I0912 22:52:25.996874    2499 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/7fa0b026cf85e0a56b86ae8b4d1b24d7-etcd-certs\") pod \"etcd-kubernetes-upgrade-848420\" (UID: \"7fa0b026cf85e0a56b86ae8b4d1b24d7\") " pod="kube-system/etcd-kubernetes-upgrade-848420"
	Sep 12 22:52:25 kubernetes-upgrade-848420 kubelet[2499]: I0912 22:52:25.996890    2499 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/45fec635f096111e5ff0c5c065521359-ca-certs\") pod \"kube-apiserver-kubernetes-upgrade-848420\" (UID: \"45fec635f096111e5ff0c5c065521359\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-848420"
	Sep 12 22:52:26 kubernetes-upgrade-848420 kubelet[2499]: I0912 22:52:26.165312    2499 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-848420"
	Sep 12 22:52:26 kubernetes-upgrade-848420 kubelet[2499]: E0912 22:52:26.166536    2499 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.110:8443: connect: connection refused" node="kubernetes-upgrade-848420"
	Sep 12 22:52:26 kubernetes-upgrade-848420 kubelet[2499]: E0912 22:52:26.398381    2499 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-848420?timeout=10s\": dial tcp 192.168.39.110:8443: connect: connection refused" interval="800ms"
	Sep 12 22:52:26 kubernetes-upgrade-848420 kubelet[2499]: I0912 22:52:26.568187    2499 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-848420"
	Sep 12 22:52:26 kubernetes-upgrade-848420 kubelet[2499]: E0912 22:52:26.570005    2499 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.110:8443: connect: connection refused" node="kubernetes-upgrade-848420"
	Sep 12 22:52:27 kubernetes-upgrade-848420 kubelet[2499]: I0912 22:52:27.371375    2499 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-848420"
	Sep 12 22:52:29 kubernetes-upgrade-848420 kubelet[2499]: I0912 22:52:29.508451    2499 kubelet_node_status.go:111] "Node was previously registered" node="kubernetes-upgrade-848420"
	Sep 12 22:52:29 kubernetes-upgrade-848420 kubelet[2499]: I0912 22:52:29.508900    2499 kubelet_node_status.go:75] "Successfully registered node" node="kubernetes-upgrade-848420"
	Sep 12 22:52:29 kubernetes-upgrade-848420 kubelet[2499]: I0912 22:52:29.508968    2499 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 12 22:52:29 kubernetes-upgrade-848420 kubelet[2499]: I0912 22:52:29.510051    2499 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 12 22:52:29 kubernetes-upgrade-848420 kubelet[2499]: I0912 22:52:29.777038    2499 apiserver.go:52] "Watching apiserver"
	Sep 12 22:52:29 kubernetes-upgrade-848420 kubelet[2499]: I0912 22:52:29.785623    2499 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 12 22:52:29 kubernetes-upgrade-848420 kubelet[2499]: I0912 22:52:29.847471    2499 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/b04e1e14-c54d-4f17-bd35-a92ac9d321f0-tmp\") pod \"storage-provisioner\" (UID: \"b04e1e14-c54d-4f17-bd35-a92ac9d321f0\") " pod="kube-system/storage-provisioner"
	Sep 12 22:52:29 kubernetes-upgrade-848420 kubelet[2499]: I0912 22:52:29.847554    2499 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8920bf37-f050-4a2f-aa8f-3a9d856fde36-lib-modules\") pod \"kube-proxy-5kch4\" (UID: \"8920bf37-f050-4a2f-aa8f-3a9d856fde36\") " pod="kube-system/kube-proxy-5kch4"
	Sep 12 22:52:29 kubernetes-upgrade-848420 kubelet[2499]: I0912 22:52:29.847596    2499 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8920bf37-f050-4a2f-aa8f-3a9d856fde36-xtables-lock\") pod \"kube-proxy-5kch4\" (UID: \"8920bf37-f050-4a2f-aa8f-3a9d856fde36\") " pod="kube-system/kube-proxy-5kch4"
	Sep 12 22:52:33 kubernetes-upgrade-848420 kubelet[2499]: I0912 22:52:33.336996    2499 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	
	
	==> storage-provisioner [14227a6604a60066486ffec4e25c57c4917e1629d20b25ac6257c75c763e7676] <==
	I0912 22:51:50.373376       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	
	
	==> storage-provisioner [bb5cc2fc1eb420d84fdb7155822fd06cd9f6418492f1b3f261a08a80897b74be] <==
	I0912 22:52:30.506579       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0912 22:52:30.548270       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0912 22:52:30.548374       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0912 22:52:30.589121       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0912 22:52:30.589588       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-848420_873d70f3-1820-4739-9ab0-13052a3968f3!
	I0912 22:52:30.590292       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b34dbfc0-65c5-42ba-9684-be399c5686e9", APIVersion:"v1", ResourceVersion:"398", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kubernetes-upgrade-848420_873d70f3-1820-4739-9ab0-13052a3968f3 became leader
	I0912 22:52:30.693622       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-848420_873d70f3-1820-4739-9ab0-13052a3968f3!
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0912 22:52:33.251844   58335 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19616-5891/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
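Note on the stderr block above: the "failed to output last start logs ... bufio.Scanner: token too long" message is Go's bufio.Scanner hitting its default 64 KiB per-token limit, so a single very long line in lastStart.txt aborts the read. The sketch below is illustrative only (it is not minikube's actual logs.go code; only the file path is copied from the error message) and shows how a Scanner can be given a larger buffer so such lines scan cleanly:

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	// Scan a log file line by line with a 1 MiB buffer so that very long lines
	// (such as those in minikube's lastStart.txt) do not fail with
	// "bufio.Scanner: token too long"; the Scanner's default cap is 64 KiB.
	func main() {
		f, err := os.Open("/home/jenkins/minikube-integration/19616-5891/.minikube/logs/lastStart.txt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// Start from the default-sized buffer but allow tokens to grow to 1 MiB per line.
		sc.Buffer(make([]byte, 0, bufio.MaxScanTokenSize), 1024*1024)
		for sc.Scan() {
			fmt.Println(sc.Text())
		}
		if err := sc.Err(); err != nil {
			fmt.Fprintln(os.Stderr, "scan error:", err)
		}
	}

Any Scanner-based reader of that file will hit the same error until its buffer limit is raised, so the failure is about line length in lastStart.txt rather than the file being unreadable.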
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-848420 -n kubernetes-upgrade-848420
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-848420 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-848420" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-848420
--- FAIL: TestKubernetesUpgrade (406.50s)
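A side note on the repeated kube-scheduler "forbidden" warnings in the post-mortem above: they were emitted while the restarted apiserver was still initializing, and the second scheduler instance subsequently synced its caches and came up cleanly, so they read as a transient startup race rather than a real RBAC regression. If one wanted to re-run the same authorization check after the upgrade settles, a small client-go sketch like the following would do it; the kubeconfig path and its current context are assumptions taken from this test environment, and the program is purely illustrative, not part of the test suite:

	package main

	import (
		"context"
		"fmt"

		authorizationv1 "k8s.io/api/authorization/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumed kubeconfig written by minikube for this job; the current context
		// must point at the upgraded cluster for the check to be meaningful.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19616-5891/kubeconfig")
		if err != nil {
			panic(err)
		}
		// Impersonate the scheduler's identity to reproduce the denied request from the log.
		cfg.Impersonate = rest.ImpersonationConfig{UserName: "system:kube-scheduler"}

		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// Ask the apiserver whether this identity may list poddisruptionbudgets.policy,
		// the first resource the scheduler was denied in the log above.
		sar := &authorizationv1.SelfSubjectAccessReview{
			Spec: authorizationv1.SelfSubjectAccessReviewSpec{
				ResourceAttributes: &authorizationv1.ResourceAttributes{
					Verb:     "list",
					Group:    "policy",
					Resource: "poddisruptionbudgets",
				},
			},
		}
		res, err := cs.AuthorizationV1().SelfSubjectAccessReviews().Create(context.TODO(), sar, metav1.CreateOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("system:kube-scheduler may list poddisruptionbudgets.policy: %v\n", res.Status.Allowed)
	}

Allowed: true here would confirm that the bootstrap RBAC for system:kube-scheduler survived the upgrade and that the denials in the log were only a startup-ordering artifact.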

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (280.44s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-642238 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0912 22:52:07.200084   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-642238 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m40.159066687s)

                                                
                                                
-- stdout --
	* [old-k8s-version-642238] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19616
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19616-5891/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19616-5891/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-642238" primary control-plane node in "old-k8s-version-642238" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0912 22:52:06.297222   57925 out.go:345] Setting OutFile to fd 1 ...
	I0912 22:52:06.297317   57925 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 22:52:06.297322   57925 out.go:358] Setting ErrFile to fd 2...
	I0912 22:52:06.297326   57925 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 22:52:06.297533   57925 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-5891/.minikube/bin
	I0912 22:52:06.298209   57925 out.go:352] Setting JSON to false
	I0912 22:52:06.299145   57925 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5668,"bootTime":1726175858,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0912 22:52:06.299207   57925 start.go:139] virtualization: kvm guest
	I0912 22:52:06.301320   57925 out.go:177] * [old-k8s-version-642238] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0912 22:52:06.302427   57925 out.go:177]   - MINIKUBE_LOCATION=19616
	I0912 22:52:06.302437   57925 notify.go:220] Checking for updates...
	I0912 22:52:06.304296   57925 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 22:52:06.305335   57925 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19616-5891/kubeconfig
	I0912 22:52:06.306310   57925 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19616-5891/.minikube
	I0912 22:52:06.307399   57925 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0912 22:52:06.308289   57925 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 22:52:06.309916   57925 config.go:182] Loaded profile config "NoKubernetes-204793": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0912 22:52:06.310050   57925 config.go:182] Loaded profile config "cert-expiration-408779": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 22:52:06.310172   57925 config.go:182] Loaded profile config "kubernetes-upgrade-848420": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 22:52:06.310287   57925 driver.go:394] Setting default libvirt URI to qemu:///system
	I0912 22:52:06.348089   57925 out.go:177] * Using the kvm2 driver based on user configuration
	I0912 22:52:06.349163   57925 start.go:297] selected driver: kvm2
	I0912 22:52:06.349187   57925 start.go:901] validating driver "kvm2" against <nil>
	I0912 22:52:06.349198   57925 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 22:52:06.349924   57925 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 22:52:06.350003   57925 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19616-5891/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0912 22:52:06.365544   57925 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0912 22:52:06.365596   57925 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0912 22:52:06.365842   57925 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 22:52:06.365900   57925 cni.go:84] Creating CNI manager for ""
	I0912 22:52:06.365913   57925 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0912 22:52:06.365920   57925 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0912 22:52:06.365971   57925 start.go:340] cluster config:
	{Name:old-k8s-version-642238 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-642238 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: S
SHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 22:52:06.366069   57925 iso.go:125] acquiring lock: {Name:mk3ec3c4afd4210b7425f6425f55e7f581d9a5a9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 22:52:06.367696   57925 out.go:177] * Starting "old-k8s-version-642238" primary control-plane node in "old-k8s-version-642238" cluster
	I0912 22:52:06.368558   57925 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0912 22:52:06.368585   57925 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0912 22:52:06.368596   57925 cache.go:56] Caching tarball of preloaded images
	I0912 22:52:06.368682   57925 preload.go:172] Found /home/jenkins/minikube-integration/19616-5891/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0912 22:52:06.368692   57925 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0912 22:52:06.368771   57925 profile.go:143] Saving config to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238/config.json ...
	I0912 22:52:06.368788   57925 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238/config.json: {Name:mka1e1328f9dc2cedc0fa574fbaea819fe56b836 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 22:52:06.368905   57925 start.go:360] acquireMachinesLock for old-k8s-version-642238: {Name:mkbb0a9e58b1349e86a63b6069c42d4248d92c3b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 22:52:17.678344   57925 start.go:364] duration metric: took 11.309381465s to acquireMachinesLock for "old-k8s-version-642238"
	I0912 22:52:17.678417   57925 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-642238 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-642238 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0912 22:52:17.678539   57925 start.go:125] createHost starting for "" (driver="kvm2")
	I0912 22:52:17.680621   57925 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0912 22:52:17.680857   57925 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:52:17.680940   57925 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:52:17.704989   57925 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34061
	I0912 22:52:17.705558   57925 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:52:17.706413   57925 main.go:141] libmachine: Using API Version  1
	I0912 22:52:17.706435   57925 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:52:17.706853   57925 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:52:17.707247   57925 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetMachineName
	I0912 22:52:17.707528   57925 main.go:141] libmachine: (old-k8s-version-642238) Calling .DriverName
	I0912 22:52:17.707828   57925 start.go:159] libmachine.API.Create for "old-k8s-version-642238" (driver="kvm2")
	I0912 22:52:17.707869   57925 client.go:168] LocalClient.Create starting
	I0912 22:52:17.707907   57925 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem
	I0912 22:52:17.707964   57925 main.go:141] libmachine: Decoding PEM data...
	I0912 22:52:17.707993   57925 main.go:141] libmachine: Parsing certificate...
	I0912 22:52:17.708070   57925 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem
	I0912 22:52:17.708104   57925 main.go:141] libmachine: Decoding PEM data...
	I0912 22:52:17.708125   57925 main.go:141] libmachine: Parsing certificate...
	I0912 22:52:17.708151   57925 main.go:141] libmachine: Running pre-create checks...
	I0912 22:52:17.708163   57925 main.go:141] libmachine: (old-k8s-version-642238) Calling .PreCreateCheck
	I0912 22:52:17.708667   57925 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetConfigRaw
	I0912 22:52:17.709402   57925 main.go:141] libmachine: Creating machine...
	I0912 22:52:17.709422   57925 main.go:141] libmachine: (old-k8s-version-642238) Calling .Create
	I0912 22:52:17.709632   57925 main.go:141] libmachine: (old-k8s-version-642238) Creating KVM machine...
	I0912 22:52:17.711251   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | found existing default KVM network
	I0912 22:52:17.712567   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 22:52:17.712362   58109 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:0b:ee:fc} reservation:<nil>}
	I0912 22:52:17.713490   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 22:52:17.713383   58109 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:cd:00:cf} reservation:<nil>}
	I0912 22:52:17.714677   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 22:52:17.714576   58109 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002dc400}
	I0912 22:52:17.714705   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | created network xml: 
	I0912 22:52:17.714719   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | <network>
	I0912 22:52:17.714728   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG |   <name>mk-old-k8s-version-642238</name>
	I0912 22:52:17.714746   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG |   <dns enable='no'/>
	I0912 22:52:17.714753   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG |   
	I0912 22:52:17.714763   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0912 22:52:17.714771   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG |     <dhcp>
	I0912 22:52:17.714800   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0912 22:52:17.714822   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG |     </dhcp>
	I0912 22:52:17.714835   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG |   </ip>
	I0912 22:52:17.714841   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG |   
	I0912 22:52:17.714850   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | </network>
	I0912 22:52:17.714860   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | 
	I0912 22:52:17.721969   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | trying to create private KVM network mk-old-k8s-version-642238 192.168.61.0/24...
	I0912 22:52:17.803623   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | private KVM network mk-old-k8s-version-642238 192.168.61.0/24 created
	I0912 22:52:17.803661   57925 main.go:141] libmachine: (old-k8s-version-642238) Setting up store path in /home/jenkins/minikube-integration/19616-5891/.minikube/machines/old-k8s-version-642238 ...
	I0912 22:52:17.803680   57925 main.go:141] libmachine: (old-k8s-version-642238) Building disk image from file:///home/jenkins/minikube-integration/19616-5891/.minikube/cache/iso/amd64/minikube-v1.34.0-1726156389-19616-amd64.iso
	I0912 22:52:17.803694   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 22:52:17.803634   58109 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19616-5891/.minikube
	I0912 22:52:17.803856   57925 main.go:141] libmachine: (old-k8s-version-642238) Downloading /home/jenkins/minikube-integration/19616-5891/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19616-5891/.minikube/cache/iso/amd64/minikube-v1.34.0-1726156389-19616-amd64.iso...
	I0912 22:52:18.049345   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 22:52:18.049213   58109 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/old-k8s-version-642238/id_rsa...
	I0912 22:52:18.168144   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 22:52:18.167982   58109 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/old-k8s-version-642238/old-k8s-version-642238.rawdisk...
	I0912 22:52:18.168214   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | Writing magic tar header
	I0912 22:52:18.168232   57925 main.go:141] libmachine: (old-k8s-version-642238) Setting executable bit set on /home/jenkins/minikube-integration/19616-5891/.minikube/machines/old-k8s-version-642238 (perms=drwx------)
	I0912 22:52:18.168241   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | Writing SSH key tar header
	I0912 22:52:18.168258   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 22:52:18.168107   58109 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19616-5891/.minikube/machines/old-k8s-version-642238 ...
	I0912 22:52:18.168271   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/old-k8s-version-642238
	I0912 22:52:18.168285   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19616-5891/.minikube/machines
	I0912 22:52:18.168302   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19616-5891/.minikube
	I0912 22:52:18.168334   57925 main.go:141] libmachine: (old-k8s-version-642238) Setting executable bit set on /home/jenkins/minikube-integration/19616-5891/.minikube/machines (perms=drwxr-xr-x)
	I0912 22:52:18.168352   57925 main.go:141] libmachine: (old-k8s-version-642238) Setting executable bit set on /home/jenkins/minikube-integration/19616-5891/.minikube (perms=drwxr-xr-x)
	I0912 22:52:18.168366   57925 main.go:141] libmachine: (old-k8s-version-642238) Setting executable bit set on /home/jenkins/minikube-integration/19616-5891 (perms=drwxrwxr-x)
	I0912 22:52:18.168379   57925 main.go:141] libmachine: (old-k8s-version-642238) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0912 22:52:18.168392   57925 main.go:141] libmachine: (old-k8s-version-642238) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0912 22:52:18.168400   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19616-5891
	I0912 22:52:18.168413   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0912 22:52:18.168420   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | Checking permissions on dir: /home/jenkins
	I0912 22:52:18.168429   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | Checking permissions on dir: /home
	I0912 22:52:18.168437   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | Skipping /home - not owner
	I0912 22:52:18.168452   57925 main.go:141] libmachine: (old-k8s-version-642238) Creating domain...
	I0912 22:52:18.169653   57925 main.go:141] libmachine: (old-k8s-version-642238) define libvirt domain using xml: 
	I0912 22:52:18.169703   57925 main.go:141] libmachine: (old-k8s-version-642238) <domain type='kvm'>
	I0912 22:52:18.169718   57925 main.go:141] libmachine: (old-k8s-version-642238)   <name>old-k8s-version-642238</name>
	I0912 22:52:18.169730   57925 main.go:141] libmachine: (old-k8s-version-642238)   <memory unit='MiB'>2200</memory>
	I0912 22:52:18.169742   57925 main.go:141] libmachine: (old-k8s-version-642238)   <vcpu>2</vcpu>
	I0912 22:52:18.169755   57925 main.go:141] libmachine: (old-k8s-version-642238)   <features>
	I0912 22:52:18.169791   57925 main.go:141] libmachine: (old-k8s-version-642238)     <acpi/>
	I0912 22:52:18.169813   57925 main.go:141] libmachine: (old-k8s-version-642238)     <apic/>
	I0912 22:52:18.169836   57925 main.go:141] libmachine: (old-k8s-version-642238)     <pae/>
	I0912 22:52:18.169848   57925 main.go:141] libmachine: (old-k8s-version-642238)     
	I0912 22:52:18.169876   57925 main.go:141] libmachine: (old-k8s-version-642238)   </features>
	I0912 22:52:18.169889   57925 main.go:141] libmachine: (old-k8s-version-642238)   <cpu mode='host-passthrough'>
	I0912 22:52:18.169901   57925 main.go:141] libmachine: (old-k8s-version-642238)   
	I0912 22:52:18.169909   57925 main.go:141] libmachine: (old-k8s-version-642238)   </cpu>
	I0912 22:52:18.169919   57925 main.go:141] libmachine: (old-k8s-version-642238)   <os>
	I0912 22:52:18.169936   57925 main.go:141] libmachine: (old-k8s-version-642238)     <type>hvm</type>
	I0912 22:52:18.169949   57925 main.go:141] libmachine: (old-k8s-version-642238)     <boot dev='cdrom'/>
	I0912 22:52:18.169960   57925 main.go:141] libmachine: (old-k8s-version-642238)     <boot dev='hd'/>
	I0912 22:52:18.169971   57925 main.go:141] libmachine: (old-k8s-version-642238)     <bootmenu enable='no'/>
	I0912 22:52:18.169981   57925 main.go:141] libmachine: (old-k8s-version-642238)   </os>
	I0912 22:52:18.169991   57925 main.go:141] libmachine: (old-k8s-version-642238)   <devices>
	I0912 22:52:18.170018   57925 main.go:141] libmachine: (old-k8s-version-642238)     <disk type='file' device='cdrom'>
	I0912 22:52:18.170035   57925 main.go:141] libmachine: (old-k8s-version-642238)       <source file='/home/jenkins/minikube-integration/19616-5891/.minikube/machines/old-k8s-version-642238/boot2docker.iso'/>
	I0912 22:52:18.170055   57925 main.go:141] libmachine: (old-k8s-version-642238)       <target dev='hdc' bus='scsi'/>
	I0912 22:52:18.170071   57925 main.go:141] libmachine: (old-k8s-version-642238)       <readonly/>
	I0912 22:52:18.170089   57925 main.go:141] libmachine: (old-k8s-version-642238)     </disk>
	I0912 22:52:18.170099   57925 main.go:141] libmachine: (old-k8s-version-642238)     <disk type='file' device='disk'>
	I0912 22:52:18.170126   57925 main.go:141] libmachine: (old-k8s-version-642238)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0912 22:52:18.170150   57925 main.go:141] libmachine: (old-k8s-version-642238)       <source file='/home/jenkins/minikube-integration/19616-5891/.minikube/machines/old-k8s-version-642238/old-k8s-version-642238.rawdisk'/>
	I0912 22:52:18.170164   57925 main.go:141] libmachine: (old-k8s-version-642238)       <target dev='hda' bus='virtio'/>
	I0912 22:52:18.170175   57925 main.go:141] libmachine: (old-k8s-version-642238)     </disk>
	I0912 22:52:18.170189   57925 main.go:141] libmachine: (old-k8s-version-642238)     <interface type='network'>
	I0912 22:52:18.170201   57925 main.go:141] libmachine: (old-k8s-version-642238)       <source network='mk-old-k8s-version-642238'/>
	I0912 22:52:18.170215   57925 main.go:141] libmachine: (old-k8s-version-642238)       <model type='virtio'/>
	I0912 22:52:18.170226   57925 main.go:141] libmachine: (old-k8s-version-642238)     </interface>
	I0912 22:52:18.170240   57925 main.go:141] libmachine: (old-k8s-version-642238)     <interface type='network'>
	I0912 22:52:18.170251   57925 main.go:141] libmachine: (old-k8s-version-642238)       <source network='default'/>
	I0912 22:52:18.170263   57925 main.go:141] libmachine: (old-k8s-version-642238)       <model type='virtio'/>
	I0912 22:52:18.170274   57925 main.go:141] libmachine: (old-k8s-version-642238)     </interface>
	I0912 22:52:18.170285   57925 main.go:141] libmachine: (old-k8s-version-642238)     <serial type='pty'>
	I0912 22:52:18.170294   57925 main.go:141] libmachine: (old-k8s-version-642238)       <target port='0'/>
	I0912 22:52:18.170300   57925 main.go:141] libmachine: (old-k8s-version-642238)     </serial>
	I0912 22:52:18.170318   57925 main.go:141] libmachine: (old-k8s-version-642238)     <console type='pty'>
	I0912 22:52:18.170330   57925 main.go:141] libmachine: (old-k8s-version-642238)       <target type='serial' port='0'/>
	I0912 22:52:18.170369   57925 main.go:141] libmachine: (old-k8s-version-642238)     </console>
	I0912 22:52:18.170387   57925 main.go:141] libmachine: (old-k8s-version-642238)     <rng model='virtio'>
	I0912 22:52:18.170398   57925 main.go:141] libmachine: (old-k8s-version-642238)       <backend model='random'>/dev/random</backend>
	I0912 22:52:18.170409   57925 main.go:141] libmachine: (old-k8s-version-642238)     </rng>
	I0912 22:52:18.170421   57925 main.go:141] libmachine: (old-k8s-version-642238)     
	I0912 22:52:18.170434   57925 main.go:141] libmachine: (old-k8s-version-642238)     
	I0912 22:52:18.170447   57925 main.go:141] libmachine: (old-k8s-version-642238)   </devices>
	I0912 22:52:18.170457   57925 main.go:141] libmachine: (old-k8s-version-642238) </domain>
	I0912 22:52:18.170471   57925 main.go:141] libmachine: (old-k8s-version-642238) 
	I0912 22:52:18.174858   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:53:a7:4a in network default
	I0912 22:52:18.175729   57925 main.go:141] libmachine: (old-k8s-version-642238) Ensuring networks are active...
	I0912 22:52:18.175751   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 22:52:18.176532   57925 main.go:141] libmachine: (old-k8s-version-642238) Ensuring network default is active
	I0912 22:52:18.176945   57925 main.go:141] libmachine: (old-k8s-version-642238) Ensuring network mk-old-k8s-version-642238 is active
	I0912 22:52:18.177643   57925 main.go:141] libmachine: (old-k8s-version-642238) Getting domain xml...
	I0912 22:52:18.178456   57925 main.go:141] libmachine: (old-k8s-version-642238) Creating domain...
	I0912 22:52:19.473797   57925 main.go:141] libmachine: (old-k8s-version-642238) Waiting to get IP...
	I0912 22:52:19.474480   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 22:52:19.474872   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 22:52:19.474892   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 22:52:19.474863   58109 retry.go:31] will retry after 286.330243ms: waiting for machine to come up
	I0912 22:52:19.762384   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 22:52:19.763004   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 22:52:19.763033   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 22:52:19.762958   58109 retry.go:31] will retry after 248.581952ms: waiting for machine to come up
	I0912 22:52:20.013442   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 22:52:20.013919   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 22:52:20.013943   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 22:52:20.013874   58109 retry.go:31] will retry after 423.642047ms: waiting for machine to come up
	I0912 22:52:20.439556   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 22:52:20.440066   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 22:52:20.440095   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 22:52:20.439978   58109 retry.go:31] will retry after 424.276527ms: waiting for machine to come up
	I0912 22:52:20.865441   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 22:52:20.865947   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 22:52:20.865981   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 22:52:20.865899   58109 retry.go:31] will retry after 731.460727ms: waiting for machine to come up
	I0912 22:52:21.598729   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 22:52:21.599316   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 22:52:21.599345   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 22:52:21.599251   58109 retry.go:31] will retry after 627.251312ms: waiting for machine to come up
	I0912 22:52:22.228066   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 22:52:22.228710   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 22:52:22.228741   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 22:52:22.228654   58109 retry.go:31] will retry after 742.568865ms: waiting for machine to come up
	I0912 22:52:22.972970   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 22:52:22.973481   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 22:52:22.973510   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 22:52:22.973432   58109 retry.go:31] will retry after 1.239439432s: waiting for machine to come up
	I0912 22:52:24.214810   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 22:52:24.215291   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 22:52:24.215350   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 22:52:24.215279   58109 retry.go:31] will retry after 1.485467947s: waiting for machine to come up
	I0912 22:52:25.702925   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 22:52:25.703370   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 22:52:25.703405   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 22:52:25.703316   58109 retry.go:31] will retry after 2.147552553s: waiting for machine to come up
	I0912 22:52:27.852616   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 22:52:27.853179   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 22:52:27.853228   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 22:52:27.853153   58109 retry.go:31] will retry after 2.593428158s: waiting for machine to come up
	I0912 22:52:30.447820   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 22:52:30.448391   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 22:52:30.448430   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 22:52:30.448337   58109 retry.go:31] will retry after 3.337034208s: waiting for machine to come up
	I0912 22:52:33.787486   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 22:52:33.787953   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 22:52:33.787986   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 22:52:33.787904   58109 retry.go:31] will retry after 2.976452066s: waiting for machine to come up
	I0912 22:52:36.767192   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 22:52:36.767796   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 22:52:36.767826   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 22:52:36.767747   58109 retry.go:31] will retry after 4.491525175s: waiting for machine to come up
	I0912 22:52:41.263611   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 22:52:41.264228   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has current primary IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 22:52:41.264257   57925 main.go:141] libmachine: (old-k8s-version-642238) Found IP for machine: 192.168.61.69
	I0912 22:52:41.264271   57925 main.go:141] libmachine: (old-k8s-version-642238) Reserving static IP address...
	I0912 22:52:41.264726   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-642238", mac: "52:54:00:75:cb:57", ip: "192.168.61.69"} in network mk-old-k8s-version-642238
	I0912 22:52:41.339270   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | Getting to WaitForSSH function...
	I0912 22:52:41.339303   57925 main.go:141] libmachine: (old-k8s-version-642238) Reserved static IP address: 192.168.61.69
	I0912 22:52:41.339320   57925 main.go:141] libmachine: (old-k8s-version-642238) Waiting for SSH to be available...
	I0912 22:52:41.342293   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 22:52:41.342661   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-12 23:52:31 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:minikube Clientid:01:52:54:00:75:cb:57}
	I0912 22:52:41.342693   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 22:52:41.342867   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | Using SSH client type: external
	I0912 22:52:41.342902   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | Using SSH private key: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/old-k8s-version-642238/id_rsa (-rw-------)
	I0912 22:52:41.342939   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.69 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19616-5891/.minikube/machines/old-k8s-version-642238/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0912 22:52:41.342960   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | About to run SSH command:
	I0912 22:52:41.342989   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | exit 0
	I0912 22:52:41.469537   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | SSH cmd err, output: <nil>: 
	I0912 22:52:41.469798   57925 main.go:141] libmachine: (old-k8s-version-642238) KVM machine creation complete!
	I0912 22:52:41.470098   57925 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetConfigRaw
	I0912 22:52:41.470662   57925 main.go:141] libmachine: (old-k8s-version-642238) Calling .DriverName
	I0912 22:52:41.470867   57925 main.go:141] libmachine: (old-k8s-version-642238) Calling .DriverName
	I0912 22:52:41.471041   57925 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0912 22:52:41.471057   57925 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetState
	I0912 22:52:41.472315   57925 main.go:141] libmachine: Detecting operating system of created instance...
	I0912 22:52:41.472328   57925 main.go:141] libmachine: Waiting for SSH to be available...
	I0912 22:52:41.472334   57925 main.go:141] libmachine: Getting to WaitForSSH function...
	I0912 22:52:41.472339   57925 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHHostname
	I0912 22:52:41.474453   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 22:52:41.474790   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-12 23:52:31 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 22:52:41.474813   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 22:52:41.475018   57925 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHPort
	I0912 22:52:41.475184   57925 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 22:52:41.475347   57925 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 22:52:41.475485   57925 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHUsername
	I0912 22:52:41.475668   57925 main.go:141] libmachine: Using SSH client type: native
	I0912 22:52:41.475844   57925 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.61.69 22 <nil> <nil>}
	I0912 22:52:41.475854   57925 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0912 22:52:41.585116   57925 main.go:141] libmachine: SSH cmd err, output: <nil>: 
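The availability probe above is just `exit 0` run over SSH until it succeeds. Reproducing it by hand against this guest, with the key path and options logged earlier for the external client, would look something like this (a sketch; key path and IP are the ones from this run):

    $ ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
          -i /home/jenkins/minikube-integration/19616-5891/.minikube/machines/old-k8s-version-642238/id_rsa \
          docker@192.168.61.69 'exit 0' && echo "ssh is up"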
	I0912 22:52:41.585139   57925 main.go:141] libmachine: Detecting the provisioner...
	I0912 22:52:41.585146   57925 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHHostname
	I0912 22:52:41.587770   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 22:52:41.588088   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-12 23:52:31 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 22:52:41.588113   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 22:52:41.588293   57925 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHPort
	I0912 22:52:41.588479   57925 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 22:52:41.588636   57925 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 22:52:41.588737   57925 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHUsername
	I0912 22:52:41.588879   57925 main.go:141] libmachine: Using SSH client type: native
	I0912 22:52:41.589061   57925 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.61.69 22 <nil> <nil>}
	I0912 22:52:41.589072   57925 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0912 22:52:41.698156   57925 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0912 22:52:41.698217   57925 main.go:141] libmachine: found compatible host: buildroot
	I0912 22:52:41.698229   57925 main.go:141] libmachine: Provisioning with buildroot...
	I0912 22:52:41.698244   57925 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetMachineName
	I0912 22:52:41.698472   57925 buildroot.go:166] provisioning hostname "old-k8s-version-642238"
	I0912 22:52:41.698497   57925 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetMachineName
	I0912 22:52:41.698709   57925 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHHostname
	I0912 22:52:41.701394   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 22:52:41.701875   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-12 23:52:31 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 22:52:41.701902   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 22:52:41.702110   57925 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHPort
	I0912 22:52:41.702324   57925 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 22:52:41.702499   57925 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 22:52:41.702663   57925 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHUsername
	I0912 22:52:41.702845   57925 main.go:141] libmachine: Using SSH client type: native
	I0912 22:52:41.703008   57925 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.61.69 22 <nil> <nil>}
	I0912 22:52:41.703019   57925 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-642238 && echo "old-k8s-version-642238" | sudo tee /etc/hostname
	I0912 22:52:41.823243   57925 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-642238
	
	I0912 22:52:41.823279   57925 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHHostname
	I0912 22:52:41.825983   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 22:52:41.826401   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-12 23:52:31 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 22:52:41.826433   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 22:52:41.826598   57925 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHPort
	I0912 22:52:41.826783   57925 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 22:52:41.826949   57925 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 22:52:41.827125   57925 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHUsername
	I0912 22:52:41.827301   57925 main.go:141] libmachine: Using SSH client type: native
	I0912 22:52:41.827513   57925 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.61.69 22 <nil> <nil>}
	I0912 22:52:41.827532   57925 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-642238' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-642238/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-642238' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0912 22:52:41.946923   57925 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0912 22:52:41.946956   57925 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19616-5891/.minikube CaCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19616-5891/.minikube}
	I0912 22:52:41.947018   57925 buildroot.go:174] setting up certificates
	I0912 22:52:41.947031   57925 provision.go:84] configureAuth start
	I0912 22:52:41.947049   57925 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetMachineName
	I0912 22:52:41.947368   57925 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetIP
	I0912 22:52:41.950472   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 22:52:41.950874   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-12 23:52:31 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 22:52:41.950904   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 22:52:41.951015   57925 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHHostname
	I0912 22:52:41.953142   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 22:52:41.953522   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-12 23:52:31 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 22:52:41.953549   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 22:52:41.953763   57925 provision.go:143] copyHostCerts
	I0912 22:52:41.953829   57925 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem, removing ...
	I0912 22:52:41.953841   57925 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem
	I0912 22:52:41.953899   57925 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem (1082 bytes)
	I0912 22:52:41.953985   57925 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem, removing ...
	I0912 22:52:41.953992   57925 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem
	I0912 22:52:41.954011   57925 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem (1123 bytes)
	I0912 22:52:41.954057   57925 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem, removing ...
	I0912 22:52:41.954064   57925 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem
	I0912 22:52:41.954080   57925 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem (1679 bytes)
	I0912 22:52:41.954121   57925 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-642238 san=[127.0.0.1 192.168.61.69 localhost minikube old-k8s-version-642238]
	I0912 22:52:42.074469   57925 provision.go:177] copyRemoteCerts
	I0912 22:52:42.074525   57925 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0912 22:52:42.074550   57925 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHHostname
	I0912 22:52:42.077329   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 22:52:42.077789   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-12 23:52:31 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 22:52:42.077811   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 22:52:42.077995   57925 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHPort
	I0912 22:52:42.078204   57925 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 22:52:42.078391   57925 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHUsername
	I0912 22:52:42.078614   57925 sshutil.go:53] new ssh client: &{IP:192.168.61.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/old-k8s-version-642238/id_rsa Username:docker}
	I0912 22:52:42.164306   57925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0912 22:52:42.187713   57925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0912 22:52:42.210253   57925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0912 22:52:42.233998   57925 provision.go:87] duration metric: took 286.949458ms to configureAuth
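The server certificate generated above carries the SANs [127.0.0.1 192.168.61.69 localhost minikube old-k8s-version-642238] and has just been copied to /etc/docker/server.pem on the guest. A manual way to confirm the SANs on the machine (not something the test runs) would be:

    $ sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'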
	I0912 22:52:42.234033   57925 buildroot.go:189] setting minikube options for container-runtime
	I0912 22:52:42.234215   57925 config.go:182] Loaded profile config "old-k8s-version-642238": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0912 22:52:42.234278   57925 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHHostname
	I0912 22:52:42.236627   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 22:52:42.236899   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-12 23:52:31 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 22:52:42.236933   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 22:52:42.237214   57925 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHPort
	I0912 22:52:42.237422   57925 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 22:52:42.237595   57925 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 22:52:42.237744   57925 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHUsername
	I0912 22:52:42.237929   57925 main.go:141] libmachine: Using SSH client type: native
	I0912 22:52:42.238143   57925 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.61.69 22 <nil> <nil>}
	I0912 22:52:42.238162   57925 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0912 22:52:42.462050   57925 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
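The step above drops CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 ' into /etc/sysconfig/crio.minikube and restarts CRI-O, so registries exposed on the service CIDR can be used without TLS. Assuming the ISO's crio.service sources that file and expands the variable on its command line, a quick check on the guest would be:

    $ cat /etc/sysconfig/crio.minikube
    $ ps -o args= -C crio | tr ' ' '\n' | grep insecure-registry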
	
	I0912 22:52:42.462076   57925 main.go:141] libmachine: Checking connection to Docker...
	I0912 22:52:42.462087   57925 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetURL
	I0912 22:52:42.463358   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | Using libvirt version 6000000
	I0912 22:52:42.465912   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 22:52:42.466315   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-12 23:52:31 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 22:52:42.466345   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 22:52:42.466546   57925 main.go:141] libmachine: Docker is up and running!
	I0912 22:52:42.466559   57925 main.go:141] libmachine: Reticulating splines...
	I0912 22:52:42.466565   57925 client.go:171] duration metric: took 24.758689755s to LocalClient.Create
	I0912 22:52:42.466586   57925 start.go:167] duration metric: took 24.758760221s to libmachine.API.Create "old-k8s-version-642238"
	I0912 22:52:42.466598   57925 start.go:293] postStartSetup for "old-k8s-version-642238" (driver="kvm2")
	I0912 22:52:42.466623   57925 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0912 22:52:42.466641   57925 main.go:141] libmachine: (old-k8s-version-642238) Calling .DriverName
	I0912 22:52:42.466875   57925 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0912 22:52:42.466899   57925 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHHostname
	I0912 22:52:42.468858   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 22:52:42.469200   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-12 23:52:31 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 22:52:42.469244   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 22:52:42.469327   57925 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHPort
	I0912 22:52:42.469512   57925 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 22:52:42.469706   57925 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHUsername
	I0912 22:52:42.469840   57925 sshutil.go:53] new ssh client: &{IP:192.168.61.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/old-k8s-version-642238/id_rsa Username:docker}
	I0912 22:52:42.559595   57925 ssh_runner.go:195] Run: cat /etc/os-release
	I0912 22:52:42.565483   57925 info.go:137] Remote host: Buildroot 2023.02.9
	I0912 22:52:42.565508   57925 filesync.go:126] Scanning /home/jenkins/minikube-integration/19616-5891/.minikube/addons for local assets ...
	I0912 22:52:42.565590   57925 filesync.go:126] Scanning /home/jenkins/minikube-integration/19616-5891/.minikube/files for local assets ...
	I0912 22:52:42.565695   57925 filesync.go:149] local asset: /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem -> 130832.pem in /etc/ssl/certs
	I0912 22:52:42.565801   57925 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0912 22:52:42.575169   57925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem --> /etc/ssl/certs/130832.pem (1708 bytes)
	I0912 22:52:42.598163   57925 start.go:296] duration metric: took 131.55045ms for postStartSetup
	I0912 22:52:42.598206   57925 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetConfigRaw
	I0912 22:52:42.598763   57925 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetIP
	I0912 22:52:42.601072   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 22:52:42.601372   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-12 23:52:31 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 22:52:42.601400   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 22:52:42.601646   57925 profile.go:143] Saving config to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238/config.json ...
	I0912 22:52:42.601837   57925 start.go:128] duration metric: took 24.923286821s to createHost
	I0912 22:52:42.601858   57925 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHHostname
	I0912 22:52:42.604172   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 22:52:42.604496   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-12 23:52:31 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 22:52:42.604530   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 22:52:42.604728   57925 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHPort
	I0912 22:52:42.604917   57925 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 22:52:42.605100   57925 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 22:52:42.605273   57925 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHUsername
	I0912 22:52:42.605459   57925 main.go:141] libmachine: Using SSH client type: native
	I0912 22:52:42.605658   57925 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.61.69 22 <nil> <nil>}
	I0912 22:52:42.605669   57925 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0912 22:52:42.714199   57925 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726181562.692289590
	
	I0912 22:52:42.714230   57925 fix.go:216] guest clock: 1726181562.692289590
	I0912 22:52:42.714247   57925 fix.go:229] Guest: 2024-09-12 22:52:42.69228959 +0000 UTC Remote: 2024-09-12 22:52:42.601847989 +0000 UTC m=+36.339825032 (delta=90.441601ms)
	I0912 22:52:42.714299   57925 fix.go:200] guest clock delta is within tolerance: 90.441601ms
	I0912 22:52:42.714308   57925 start.go:83] releasing machines lock for "old-k8s-version-642238", held for 25.035923878s
	I0912 22:52:42.714347   57925 main.go:141] libmachine: (old-k8s-version-642238) Calling .DriverName
	I0912 22:52:42.714611   57925 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetIP
	I0912 22:52:42.717869   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 22:52:42.718359   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-12 23:52:31 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 22:52:42.718403   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 22:52:42.718557   57925 main.go:141] libmachine: (old-k8s-version-642238) Calling .DriverName
	I0912 22:52:42.719010   57925 main.go:141] libmachine: (old-k8s-version-642238) Calling .DriverName
	I0912 22:52:42.719200   57925 main.go:141] libmachine: (old-k8s-version-642238) Calling .DriverName
	I0912 22:52:42.719334   57925 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0912 22:52:42.719382   57925 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHHostname
	I0912 22:52:42.719511   57925 ssh_runner.go:195] Run: cat /version.json
	I0912 22:52:42.719542   57925 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHHostname
	I0912 22:52:42.722384   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 22:52:42.722750   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-12 23:52:31 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 22:52:42.722780   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 22:52:42.722817   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 22:52:42.722925   57925 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHPort
	I0912 22:52:42.723131   57925 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 22:52:42.723251   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-12 23:52:31 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 22:52:42.723274   57925 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHUsername
	I0912 22:52:42.723275   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 22:52:42.723439   57925 sshutil.go:53] new ssh client: &{IP:192.168.61.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/old-k8s-version-642238/id_rsa Username:docker}
	I0912 22:52:42.723481   57925 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHPort
	I0912 22:52:42.723626   57925 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 22:52:42.723775   57925 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHUsername
	I0912 22:52:42.723941   57925 sshutil.go:53] new ssh client: &{IP:192.168.61.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/old-k8s-version-642238/id_rsa Username:docker}
	I0912 22:52:42.802884   57925 ssh_runner.go:195] Run: systemctl --version
	I0912 22:52:42.834504   57925 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0912 22:52:43.001104   57925 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0912 22:52:43.007658   57925 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0912 22:52:43.007755   57925 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0912 22:52:43.024995   57925 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0912 22:52:43.025023   57925 start.go:495] detecting cgroup driver to use...
	I0912 22:52:43.025093   57925 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0912 22:52:43.040884   57925 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0912 22:52:43.054630   57925 docker.go:217] disabling cri-docker service (if available) ...
	I0912 22:52:43.054700   57925 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0912 22:52:43.068462   57925 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0912 22:52:43.082436   57925 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0912 22:52:43.195163   57925 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0912 22:52:43.353255   57925 docker.go:233] disabling docker service ...
	I0912 22:52:43.353330   57925 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0912 22:52:43.367709   57925 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0912 22:52:43.380773   57925 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0912 22:52:43.507325   57925 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0912 22:52:43.632893   57925 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0912 22:52:43.647145   57925 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0912 22:52:43.668829   57925 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0912 22:52:43.668883   57925 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 22:52:43.680339   57925 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0912 22:52:43.680417   57925 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 22:52:43.690346   57925 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 22:52:43.700558   57925 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
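Taken together, the sed edits above should leave the drop-in at /etc/crio/crio.conf.d/02-crio.conf with roughly the following values (a sketch: the TOML section headers and any other keys already present in the file are assumptions, they are not shown in the log):

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.2"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"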
	I0912 22:52:43.710662   57925 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0912 22:52:43.721063   57925 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0912 22:52:43.731299   57925 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0912 22:52:43.731360   57925 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0912 22:52:43.745285   57925 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0912 22:52:43.755169   57925 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 22:52:43.881840   57925 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0912 22:52:43.984142   57925 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0912 22:52:43.984211   57925 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0912 22:52:43.988832   57925 start.go:563] Will wait 60s for crictl version
	I0912 22:52:43.988891   57925 ssh_runner.go:195] Run: which crictl
	I0912 22:52:43.992662   57925 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0912 22:52:44.032984   57925 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0912 22:52:44.033089   57925 ssh_runner.go:195] Run: crio --version
	I0912 22:52:44.061151   57925 ssh_runner.go:195] Run: crio --version
	I0912 22:52:44.095536   57925 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0912 22:52:44.097047   57925 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetIP
	I0912 22:52:44.100829   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 22:52:44.101363   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-12 23:52:31 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 22:52:44.101397   57925 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 22:52:44.101747   57925 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0912 22:52:44.105914   57925 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0912 22:52:44.118699   57925 kubeadm.go:883] updating cluster {Name:old-k8s-version-642238 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-642238 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.69 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0912 22:52:44.118852   57925 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0912 22:52:44.118922   57925 ssh_runner.go:195] Run: sudo crictl images --output json
	I0912 22:52:44.154174   57925 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0912 22:52:44.154238   57925 ssh_runner.go:195] Run: which lz4
	I0912 22:52:44.158324   57925 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0912 22:52:44.162751   57925 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0912 22:52:44.162789   57925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0912 22:52:45.762998   57925 crio.go:462] duration metric: took 1.604700507s to copy over tarball
	I0912 22:52:45.763107   57925 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0912 22:52:48.390767   57925 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.627623654s)
	I0912 22:52:48.390799   57925 crio.go:469] duration metric: took 2.627754027s to extract the tarball
	I0912 22:52:48.390809   57925 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0912 22:52:48.432225   57925 ssh_runner.go:195] Run: sudo crictl images --output json
	I0912 22:52:48.484732   57925 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0912 22:52:48.484763   57925 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0912 22:52:48.484843   57925 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 22:52:48.484866   57925 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0912 22:52:48.484880   57925 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0912 22:52:48.484874   57925 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0912 22:52:48.484894   57925 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0912 22:52:48.484897   57925 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0912 22:52:48.484914   57925 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0912 22:52:48.484846   57925 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0912 22:52:48.486497   57925 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0912 22:52:48.486539   57925 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0912 22:52:48.486503   57925 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0912 22:52:48.486515   57925 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 22:52:48.486515   57925 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0912 22:52:48.486583   57925 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0912 22:52:48.486508   57925 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0912 22:52:48.486502   57925 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0912 22:52:48.727736   57925 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0912 22:52:48.733408   57925 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0912 22:52:48.748176   57925 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0912 22:52:48.760674   57925 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0912 22:52:48.767426   57925 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0912 22:52:48.783597   57925 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0912 22:52:48.803128   57925 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0912 22:52:48.803209   57925 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0912 22:52:48.803212   57925 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0912 22:52:48.803244   57925 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0912 22:52:48.803268   57925 ssh_runner.go:195] Run: which crictl
	I0912 22:52:48.803302   57925 ssh_runner.go:195] Run: which crictl
	I0912 22:52:48.857696   57925 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0912 22:52:48.857745   57925 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0912 22:52:48.857770   57925 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0912 22:52:48.857793   57925 ssh_runner.go:195] Run: which crictl
	I0912 22:52:48.857799   57925 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0912 22:52:48.857837   57925 ssh_runner.go:195] Run: which crictl
	I0912 22:52:48.865027   57925 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0912 22:52:48.901741   57925 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0912 22:52:48.901795   57925 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0912 22:52:48.901816   57925 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0912 22:52:48.901851   57925 ssh_runner.go:195] Run: which crictl
	I0912 22:52:48.901859   57925 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0912 22:52:48.901889   57925 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0912 22:52:48.901934   57925 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0912 22:52:48.901891   57925 ssh_runner.go:195] Run: which crictl
	I0912 22:52:48.901946   57925 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0912 22:52:48.901976   57925 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0912 22:52:48.934007   57925 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0912 22:52:48.934056   57925 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0912 22:52:48.934102   57925 ssh_runner.go:195] Run: which crictl
	I0912 22:52:49.006500   57925 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0912 22:52:49.016679   57925 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0912 22:52:49.016735   57925 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0912 22:52:49.016797   57925 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0912 22:52:49.016824   57925 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0912 22:52:49.016846   57925 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0912 22:52:49.016886   57925 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0912 22:52:49.133314   57925 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0912 22:52:49.162065   57925 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0912 22:52:49.162163   57925 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0912 22:52:49.162213   57925 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0912 22:52:49.162163   57925 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0912 22:52:49.162277   57925 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0912 22:52:49.170178   57925 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0912 22:52:49.261688   57925 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0912 22:52:49.317713   57925 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0912 22:52:49.317713   57925 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0912 22:52:49.317828   57925 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0912 22:52:49.317975   57925 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0912 22:52:49.318029   57925 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0912 22:52:49.318064   57925 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0912 22:52:49.381042   57925 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0912 22:52:49.381133   57925 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0912 22:52:49.386668   57925 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0912 22:52:49.550806   57925 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 22:52:49.694417   57925 cache_images.go:92] duration metric: took 1.209639188s to LoadCachedImages
	W0912 22:52:49.694512   57925 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
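The warning above is non-fatal: the per-image cache files under .minikube/cache/images/amd64/ were never populated for the v1.20.0 images on this host, so the images end up being pulled from the registries later instead, and the run continues. Checking the cache directory by hand (path as logged) would be:

    $ ls /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/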
	I0912 22:52:49.694530   57925 kubeadm.go:934] updating node { 192.168.61.69 8443 v1.20.0 crio true true} ...
	I0912 22:52:49.694647   57925 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-642238 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.69
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-642238 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
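The [Unit]/[Service]/[Install] fragment above corresponds to the kubelet systemd drop-in copied a few lines below to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, alongside /lib/systemd/system/kubelet.service. On the guest, the merged unit can be inspected with:

    $ systemctl cat kubelet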
	I0912 22:52:49.694720   57925 ssh_runner.go:195] Run: crio config
	I0912 22:52:49.759672   57925 cni.go:84] Creating CNI manager for ""
	I0912 22:52:49.759699   57925 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0912 22:52:49.759711   57925 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0912 22:52:49.759727   57925 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.69 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-642238 NodeName:old-k8s-version-642238 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.69"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.69 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0912 22:52:49.759843   57925 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.69
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-642238"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.69
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.69"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0912 22:52:49.759901   57925 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0912 22:52:49.770518   57925 binaries.go:44] Found k8s binaries, skipping transfer
	I0912 22:52:49.770592   57925 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0912 22:52:49.780437   57925 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0912 22:52:49.801132   57925 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0912 22:52:49.819109   57925 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
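The 2120-byte file just copied to /var/tmp/minikube/kubeadm.yaml.new is the kubeadm config rendered above. To sanity-check such a file by hand with the pinned binary (the test itself never does this), one hypothetical invocation on the guest would be:

    $ sudo /var/lib/minikube/binaries/v1.20.0/kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml.new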
	I0912 22:52:49.838382   57925 ssh_runner.go:195] Run: grep 192.168.61.69	control-plane.minikube.internal$ /etc/hosts
	I0912 22:52:49.842687   57925 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.69	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0912 22:52:49.855169   57925 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 22:52:49.984253   57925 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0912 22:52:50.001705   57925 certs.go:68] Setting up /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238 for IP: 192.168.61.69
	I0912 22:52:50.001732   57925 certs.go:194] generating shared ca certs ...
	I0912 22:52:50.001753   57925 certs.go:226] acquiring lock for ca certs: {Name:mk5e19f2c2757edc3fcb6b43a39efee0885b349c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 22:52:50.001919   57925 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.key
	I0912 22:52:50.001989   57925 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.key
	I0912 22:52:50.002004   57925 certs.go:256] generating profile certs ...
	I0912 22:52:50.002066   57925 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238/client.key
	I0912 22:52:50.002092   57925 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238/client.crt with IP's: []
	I0912 22:52:50.223450   57925 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238/client.crt ...
	I0912 22:52:50.223489   57925 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238/client.crt: {Name:mk31c734b731787e02b2f625a25fa30bc15752e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 22:52:50.223699   57925 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238/client.key ...
	I0912 22:52:50.223717   57925 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238/client.key: {Name:mkd5733bf65606e06bfb6a398f55d243a90257ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 22:52:50.223826   57925 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238/apiserver.key.fcb0a37b
	I0912 22:52:50.223844   57925 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238/apiserver.crt.fcb0a37b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.69]
	I0912 22:52:50.612862   57925 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238/apiserver.crt.fcb0a37b ...
	I0912 22:52:50.612900   57925 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238/apiserver.crt.fcb0a37b: {Name:mk63ffb3c9cb311cad012560539dfd8f371bd937 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 22:52:50.613066   57925 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238/apiserver.key.fcb0a37b ...
	I0912 22:52:50.613079   57925 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238/apiserver.key.fcb0a37b: {Name:mk61aef47f3b68d26b7a7a5383b93bc5b73f7dd2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 22:52:50.613147   57925 certs.go:381] copying /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238/apiserver.crt.fcb0a37b -> /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238/apiserver.crt
	I0912 22:52:50.613215   57925 certs.go:385] copying /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238/apiserver.key.fcb0a37b -> /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238/apiserver.key
	I0912 22:52:50.613313   57925 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238/proxy-client.key
	I0912 22:52:50.613329   57925 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238/proxy-client.crt with IP's: []
	I0912 22:52:50.864145   57925 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238/proxy-client.crt ...
	I0912 22:52:50.864177   57925 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238/proxy-client.crt: {Name:mk122833def54c6ef7e81fff7a1c3f02122d6d83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 22:52:50.864382   57925 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238/proxy-client.key ...
	I0912 22:52:50.864400   57925 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238/proxy-client.key: {Name:mkca325919049bdf85bb55f233979a3db32d858b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 22:52:50.864602   57925 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083.pem (1338 bytes)
	W0912 22:52:50.864641   57925 certs.go:480] ignoring /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083_empty.pem, impossibly tiny 0 bytes
	I0912 22:52:50.864651   57925 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem (1679 bytes)
	I0912 22:52:50.864673   57925 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem (1082 bytes)
	I0912 22:52:50.864698   57925 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem (1123 bytes)
	I0912 22:52:50.864722   57925 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem (1679 bytes)
	I0912 22:52:50.864762   57925 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem (1708 bytes)
	I0912 22:52:50.865360   57925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0912 22:52:50.893877   57925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0912 22:52:50.922378   57925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0912 22:52:50.952003   57925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0912 22:52:50.979936   57925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0912 22:52:51.007097   57925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0912 22:52:51.035161   57925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0912 22:52:51.058619   57925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0912 22:52:51.086262   57925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083.pem --> /usr/share/ca-certificates/13083.pem (1338 bytes)
	I0912 22:52:51.110793   57925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem --> /usr/share/ca-certificates/130832.pem (1708 bytes)
	I0912 22:52:51.135491   57925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0912 22:52:51.157386   57925 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0912 22:52:51.174015   57925 ssh_runner.go:195] Run: openssl version
	I0912 22:52:51.179807   57925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130832.pem && ln -fs /usr/share/ca-certificates/130832.pem /etc/ssl/certs/130832.pem"
	I0912 22:52:51.190310   57925 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130832.pem
	I0912 22:52:51.194595   57925 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 12 21:47 /usr/share/ca-certificates/130832.pem
	I0912 22:52:51.194658   57925 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130832.pem
	I0912 22:52:51.200321   57925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130832.pem /etc/ssl/certs/3ec20f2e.0"
	I0912 22:52:51.212591   57925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0912 22:52:51.223315   57925 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0912 22:52:51.228023   57925 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 12 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0912 22:52:51.228081   57925 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0912 22:52:51.233787   57925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0912 22:52:51.245469   57925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13083.pem && ln -fs /usr/share/ca-certificates/13083.pem /etc/ssl/certs/13083.pem"
	I0912 22:52:51.257163   57925 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13083.pem
	I0912 22:52:51.262432   57925 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 12 21:47 /usr/share/ca-certificates/13083.pem
	I0912 22:52:51.262508   57925 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13083.pem
	I0912 22:52:51.270333   57925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13083.pem /etc/ssl/certs/51391683.0"
	I0912 22:52:51.281802   57925 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0912 22:52:51.286063   57925 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0912 22:52:51.286133   57925 kubeadm.go:392] StartCluster: {Name:old-k8s-version-642238 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-642238 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.69 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 22:52:51.286222   57925 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0912 22:52:51.286282   57925 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0912 22:52:51.324066   57925 cri.go:89] found id: ""
	I0912 22:52:51.324138   57925 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0912 22:52:51.334865   57925 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0912 22:52:51.345742   57925 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0912 22:52:51.355408   57925 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0912 22:52:51.355439   57925 kubeadm.go:157] found existing configuration files:
	
	I0912 22:52:51.355495   57925 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0912 22:52:51.364825   57925 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0912 22:52:51.364888   57925 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0912 22:52:51.374812   57925 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0912 22:52:51.386940   57925 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0912 22:52:51.387011   57925 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0912 22:52:51.398427   57925 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0912 22:52:51.409015   57925 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0912 22:52:51.409081   57925 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0912 22:52:51.419665   57925 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0912 22:52:51.429587   57925 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0912 22:52:51.429663   57925 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0912 22:52:51.439203   57925 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0912 22:52:51.723437   57925 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0912 22:54:49.156343   57925 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0912 22:54:49.156472   57925 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0912 22:54:49.157955   57925 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0912 22:54:49.157998   57925 kubeadm.go:310] [preflight] Running pre-flight checks
	I0912 22:54:49.158075   57925 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0912 22:54:49.158179   57925 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0912 22:54:49.158293   57925 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0912 22:54:49.158404   57925 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0912 22:54:49.160510   57925 out.go:235]   - Generating certificates and keys ...
	I0912 22:54:49.160609   57925 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0912 22:54:49.160669   57925 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0912 22:54:49.160726   57925 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0912 22:54:49.160783   57925 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0912 22:54:49.160834   57925 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0912 22:54:49.160880   57925 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0912 22:54:49.160936   57925 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0912 22:54:49.161065   57925 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-642238] and IPs [192.168.61.69 127.0.0.1 ::1]
	I0912 22:54:49.161147   57925 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0912 22:54:49.161342   57925 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-642238] and IPs [192.168.61.69 127.0.0.1 ::1]
	I0912 22:54:49.161410   57925 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0912 22:54:49.161464   57925 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0912 22:54:49.161505   57925 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0912 22:54:49.161552   57925 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0912 22:54:49.161598   57925 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0912 22:54:49.161671   57925 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0912 22:54:49.161726   57925 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0912 22:54:49.161786   57925 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0912 22:54:49.161895   57925 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0912 22:54:49.161972   57925 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0912 22:54:49.162011   57925 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0912 22:54:49.162082   57925 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0912 22:54:49.163775   57925 out.go:235]   - Booting up control plane ...
	I0912 22:54:49.163870   57925 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0912 22:54:49.163935   57925 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0912 22:54:49.163997   57925 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0912 22:54:49.164068   57925 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0912 22:54:49.164196   57925 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0912 22:54:49.164257   57925 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0912 22:54:49.164325   57925 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0912 22:54:49.164487   57925 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0912 22:54:49.164555   57925 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0912 22:54:49.164710   57925 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0912 22:54:49.164767   57925 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0912 22:54:49.164936   57925 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0912 22:54:49.165003   57925 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0912 22:54:49.165164   57925 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0912 22:54:49.165227   57925 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0912 22:54:49.165403   57925 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0912 22:54:49.165414   57925 kubeadm.go:310] 
	I0912 22:54:49.165447   57925 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0912 22:54:49.165485   57925 kubeadm.go:310] 		timed out waiting for the condition
	I0912 22:54:49.165492   57925 kubeadm.go:310] 
	I0912 22:54:49.165521   57925 kubeadm.go:310] 	This error is likely caused by:
	I0912 22:54:49.165553   57925 kubeadm.go:310] 		- The kubelet is not running
	I0912 22:54:49.165673   57925 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0912 22:54:49.165685   57925 kubeadm.go:310] 
	I0912 22:54:49.165777   57925 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0912 22:54:49.165817   57925 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0912 22:54:49.165845   57925 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0912 22:54:49.165851   57925 kubeadm.go:310] 
	I0912 22:54:49.165944   57925 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0912 22:54:49.166020   57925 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0912 22:54:49.166030   57925 kubeadm.go:310] 
	I0912 22:54:49.166118   57925 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0912 22:54:49.166208   57925 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0912 22:54:49.166310   57925 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0912 22:54:49.166374   57925 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0912 22:54:49.166387   57925 kubeadm.go:310] 
	W0912 22:54:49.166478   57925 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-642238] and IPs [192.168.61.69 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-642238] and IPs [192.168.61.69 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
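	(The kubeadm output above already names the checks to run on the node; collected here as a single sequence, to be run inside the VM, e.g. over `minikube ssh`. The crio socket path and the grep pipeline are exactly the ones quoted in the failure message; this is a troubleshooting sketch, not part of the captured run:
	
	  # inspect the kubelet, which never became healthy on 127.0.0.1:10248
	  sudo systemctl status kubelet
	  sudo journalctl -xeu kubelet
	  # list any control-plane containers cri-o managed to start
	  sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	  # then, for a failing container id:
	  sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID
	)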
	
	I0912 22:54:49.166520   57925 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0912 22:54:49.627770   57925 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 22:54:49.641878   57925 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0912 22:54:49.651201   57925 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0912 22:54:49.651223   57925 kubeadm.go:157] found existing configuration files:
	
	I0912 22:54:49.651267   57925 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0912 22:54:49.660718   57925 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0912 22:54:49.660771   57925 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0912 22:54:49.670023   57925 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0912 22:54:49.678763   57925 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0912 22:54:49.678832   57925 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0912 22:54:49.687795   57925 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0912 22:54:49.696630   57925 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0912 22:54:49.696694   57925 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0912 22:54:49.705711   57925 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0912 22:54:49.714433   57925 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0912 22:54:49.714498   57925 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0912 22:54:49.723733   57925 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0912 22:54:49.787655   57925 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0912 22:54:49.787706   57925 kubeadm.go:310] [preflight] Running pre-flight checks
	I0912 22:54:49.940389   57925 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0912 22:54:49.940538   57925 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0912 22:54:49.940629   57925 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0912 22:54:50.114662   57925 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0912 22:54:50.116668   57925 out.go:235]   - Generating certificates and keys ...
	I0912 22:54:50.116770   57925 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0912 22:54:50.116854   57925 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0912 22:54:50.116984   57925 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0912 22:54:50.117096   57925 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0912 22:54:50.117211   57925 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0912 22:54:50.117281   57925 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0912 22:54:50.117370   57925 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0912 22:54:50.117462   57925 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0912 22:54:50.117861   57925 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0912 22:54:50.118231   57925 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0912 22:54:50.118362   57925 kubeadm.go:310] [certs] Using the existing "sa" key
	I0912 22:54:50.118435   57925 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0912 22:54:50.205266   57925 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0912 22:54:50.322921   57925 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0912 22:54:50.467411   57925 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0912 22:54:50.607679   57925 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0912 22:54:50.627191   57925 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0912 22:54:50.628251   57925 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0912 22:54:50.628314   57925 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0912 22:54:50.755988   57925 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0912 22:54:50.758452   57925 out.go:235]   - Booting up control plane ...
	I0912 22:54:50.758596   57925 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0912 22:54:50.762456   57925 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0912 22:54:50.764606   57925 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0912 22:54:50.768205   57925 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0912 22:54:50.769853   57925 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0912 22:55:30.772407   57925 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0912 22:55:30.772750   57925 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0912 22:55:30.772969   57925 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0912 22:55:35.773457   57925 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0912 22:55:35.773736   57925 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0912 22:55:45.774052   57925 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0912 22:55:45.774277   57925 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0912 22:56:05.773158   57925 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0912 22:56:05.773388   57925 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0912 22:56:45.772772   57925 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0912 22:56:45.773049   57925 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0912 22:56:45.773076   57925 kubeadm.go:310] 
	I0912 22:56:45.773147   57925 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0912 22:56:45.773202   57925 kubeadm.go:310] 		timed out waiting for the condition
	I0912 22:56:45.773213   57925 kubeadm.go:310] 
	I0912 22:56:45.773260   57925 kubeadm.go:310] 	This error is likely caused by:
	I0912 22:56:45.773303   57925 kubeadm.go:310] 		- The kubelet is not running
	I0912 22:56:45.773456   57925 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0912 22:56:45.773466   57925 kubeadm.go:310] 
	I0912 22:56:45.773650   57925 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0912 22:56:45.773703   57925 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0912 22:56:45.773752   57925 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0912 22:56:45.773762   57925 kubeadm.go:310] 
	I0912 22:56:45.773907   57925 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0912 22:56:45.774026   57925 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0912 22:56:45.774036   57925 kubeadm.go:310] 
	I0912 22:56:45.774201   57925 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0912 22:56:45.774335   57925 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0912 22:56:45.774462   57925 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0912 22:56:45.774583   57925 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0912 22:56:45.774599   57925 kubeadm.go:310] 
	I0912 22:56:45.774971   57925 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0912 22:56:45.775085   57925 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0912 22:56:45.775170   57925 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0912 22:56:45.775236   57925 kubeadm.go:394] duration metric: took 3m54.489105968s to StartCluster
	I0912 22:56:45.775308   57925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 22:56:45.775376   57925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 22:56:45.818524   57925 cri.go:89] found id: ""
	I0912 22:56:45.818550   57925 logs.go:276] 0 containers: []
	W0912 22:56:45.818559   57925 logs.go:278] No container was found matching "kube-apiserver"
	I0912 22:56:45.818570   57925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 22:56:45.818623   57925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 22:56:45.855101   57925 cri.go:89] found id: ""
	I0912 22:56:45.855130   57925 logs.go:276] 0 containers: []
	W0912 22:56:45.855139   57925 logs.go:278] No container was found matching "etcd"
	I0912 22:56:45.855144   57925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 22:56:45.855197   57925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 22:56:45.892831   57925 cri.go:89] found id: ""
	I0912 22:56:45.892856   57925 logs.go:276] 0 containers: []
	W0912 22:56:45.892863   57925 logs.go:278] No container was found matching "coredns"
	I0912 22:56:45.892868   57925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 22:56:45.892914   57925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 22:56:45.932581   57925 cri.go:89] found id: ""
	I0912 22:56:45.932610   57925 logs.go:276] 0 containers: []
	W0912 22:56:45.932624   57925 logs.go:278] No container was found matching "kube-scheduler"
	I0912 22:56:45.932633   57925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 22:56:45.932698   57925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 22:56:45.971246   57925 cri.go:89] found id: ""
	I0912 22:56:45.971276   57925 logs.go:276] 0 containers: []
	W0912 22:56:45.971287   57925 logs.go:278] No container was found matching "kube-proxy"
	I0912 22:56:45.971295   57925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 22:56:45.971356   57925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 22:56:46.006597   57925 cri.go:89] found id: ""
	I0912 22:56:46.006627   57925 logs.go:276] 0 containers: []
	W0912 22:56:46.006635   57925 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 22:56:46.006641   57925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 22:56:46.006698   57925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 22:56:46.043794   57925 cri.go:89] found id: ""
	I0912 22:56:46.043825   57925 logs.go:276] 0 containers: []
	W0912 22:56:46.043832   57925 logs.go:278] No container was found matching "kindnet"
	I0912 22:56:46.043841   57925 logs.go:123] Gathering logs for kubelet ...
	I0912 22:56:46.043852   57925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 22:56:46.093930   57925 logs.go:123] Gathering logs for dmesg ...
	I0912 22:56:46.093970   57925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 22:56:46.107765   57925 logs.go:123] Gathering logs for describe nodes ...
	I0912 22:56:46.107801   57925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 22:56:46.256864   57925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 22:56:46.256891   57925 logs.go:123] Gathering logs for CRI-O ...
	I0912 22:56:46.256906   57925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 22:56:46.368035   57925 logs.go:123] Gathering logs for container status ...
	I0912 22:56:46.368078   57925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0912 22:56:46.405168   57925 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0912 22:56:46.405229   57925 out.go:270] * 
	* 
	W0912 22:56:46.405282   57925 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	
	W0912 22:56:46.405299   57925 out.go:270] * 
	* 
	W0912 22:56:46.406526   57925 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0912 22:56:46.409926   57925 out.go:201] 
	W0912 22:56:46.410997   57925 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	
	W0912 22:56:46.411040   57925 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0912 22:56:46.411059   57925 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0912 22:56:46.412336   57925 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-642238 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-642238 -n old-k8s-version-642238
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-642238 -n old-k8s-version-642238: exit status 6 (222.656952ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0912 22:56:46.685541   61555 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-642238" does not appear in /home/jenkins/minikube-integration/19616-5891/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-642238" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (280.44s)
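The kubeadm output above already names the most useful diagnostics (kubelet status, the kubelet journal, crictl), so a minimal triage sequence for a local reproduction might look like the following; the profile name and the --extra-config suggestion are taken from this run's log, while the rest is an assumption about how one would inspect the VM:

	minikube ssh -p old-k8s-version-642238        # open a shell in the VM (hypothetical local repro)
	sudo systemctl status kubelet                 # is the kubelet service running at all?
	sudo journalctl -xeu kubelet | tail -n 50     # recent kubelet errors; cgroup-driver mismatches show up here
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	exit
	minikube start -p old-k8s-version-642238 --extra-config=kubelet.cgroup-driver=systemd

This is a sketch of the steps the error text itself recommends, not a verified fix for this failure.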

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.3s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-702201 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-702201 --alsologtostderr -v=3: exit status 82 (2m0.858059722s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-702201"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0912 22:53:50.592936   59282 out.go:345] Setting OutFile to fd 1 ...
	I0912 22:53:50.593069   59282 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 22:53:50.593080   59282 out.go:358] Setting ErrFile to fd 2...
	I0912 22:53:50.593086   59282 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 22:53:50.593286   59282 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-5891/.minikube/bin
	I0912 22:53:50.593521   59282 out.go:352] Setting JSON to false
	I0912 22:53:50.593648   59282 mustload.go:65] Loading cluster: default-k8s-diff-port-702201
	I0912 22:53:50.593965   59282 config.go:182] Loaded profile config "default-k8s-diff-port-702201": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 22:53:50.594050   59282 profile.go:143] Saving config to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/default-k8s-diff-port-702201/config.json ...
	I0912 22:53:50.594235   59282 mustload.go:65] Loading cluster: default-k8s-diff-port-702201
	I0912 22:53:50.594364   59282 config.go:182] Loaded profile config "default-k8s-diff-port-702201": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 22:53:50.594405   59282 stop.go:39] StopHost: default-k8s-diff-port-702201
	I0912 22:53:50.594873   59282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:53:50.594924   59282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:53:50.609560   59282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40637
	I0912 22:53:50.610067   59282 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:53:50.610616   59282 main.go:141] libmachine: Using API Version  1
	I0912 22:53:50.610639   59282 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:53:50.610962   59282 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:53:50.613123   59282 out.go:177] * Stopping node "default-k8s-diff-port-702201"  ...
	I0912 22:53:50.614502   59282 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0912 22:53:50.614560   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .DriverName
	I0912 22:53:50.614812   59282 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0912 22:53:50.614843   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHHostname
	I0912 22:53:50.617905   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 22:53:50.618296   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-12 23:52:57 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 22:53:50.618327   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 22:53:50.618501   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHPort
	I0912 22:53:50.618693   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHKeyPath
	I0912 22:53:50.618864   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHUsername
	I0912 22:53:50.619085   59282 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/default-k8s-diff-port-702201/id_rsa Username:docker}
	I0912 22:53:50.708690   59282 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0912 22:53:50.763220   59282 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0912 22:53:50.818079   59282 main.go:141] libmachine: Stopping "default-k8s-diff-port-702201"...
	I0912 22:53:50.818108   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetState
	I0912 22:53:50.819941   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .Stop
	I0912 22:53:50.824408   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 0/120
	I0912 22:53:51.825984   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 1/120
	I0912 22:53:52.828218   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 2/120
	I0912 22:53:53.829704   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 3/120
	I0912 22:53:54.831128   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 4/120
	I0912 22:53:55.833406   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 5/120
	I0912 22:53:56.835112   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 6/120
	I0912 22:53:57.836554   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 7/120
	I0912 22:53:58.838047   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 8/120
	I0912 22:53:59.840522   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 9/120
	I0912 22:54:00.841890   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 10/120
	I0912 22:54:01.843511   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 11/120
	I0912 22:54:02.844975   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 12/120
	I0912 22:54:03.846184   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 13/120
	I0912 22:54:04.847600   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 14/120
	I0912 22:54:05.850248   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 15/120
	I0912 22:54:06.853080   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 16/120
	I0912 22:54:07.854434   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 17/120
	I0912 22:54:08.855937   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 18/120
	I0912 22:54:09.857425   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 19/120
	I0912 22:54:10.859104   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 20/120
	I0912 22:54:11.860500   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 21/120
	I0912 22:54:12.861802   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 22/120
	I0912 22:54:13.863906   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 23/120
	I0912 22:54:14.865484   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 24/120
	I0912 22:54:15.867430   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 25/120
	I0912 22:54:16.868810   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 26/120
	I0912 22:54:17.870253   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 27/120
	I0912 22:54:18.872036   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 28/120
	I0912 22:54:19.873603   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 29/120
	I0912 22:54:20.875817   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 30/120
	I0912 22:54:21.877304   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 31/120
	I0912 22:54:22.878963   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 32/120
	I0912 22:54:23.880253   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 33/120
	I0912 22:54:24.882342   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 34/120
	I0912 22:54:25.884401   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 35/120
	I0912 22:54:26.885716   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 36/120
	I0912 22:54:27.887509   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 37/120
	I0912 22:54:28.888772   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 38/120
	I0912 22:54:30.264710   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 39/120
	I0912 22:54:31.266378   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 40/120
	I0912 22:54:32.268202   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 41/120
	I0912 22:54:33.269635   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 42/120
	I0912 22:54:34.271117   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 43/120
	I0912 22:54:35.272519   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 44/120
	I0912 22:54:36.274577   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 45/120
	I0912 22:54:37.275982   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 46/120
	I0912 22:54:38.277471   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 47/120
	I0912 22:54:39.278865   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 48/120
	I0912 22:54:40.280377   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 49/120
	I0912 22:54:41.282426   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 50/120
	I0912 22:54:42.284046   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 51/120
	I0912 22:54:43.285575   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 52/120
	I0912 22:54:44.287135   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 53/120
	I0912 22:54:45.289087   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 54/120
	I0912 22:54:46.291315   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 55/120
	I0912 22:54:47.292788   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 56/120
	I0912 22:54:48.294324   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 57/120
	I0912 22:54:49.295769   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 58/120
	I0912 22:54:50.297072   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 59/120
	I0912 22:54:51.299677   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 60/120
	I0912 22:54:52.300902   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 61/120
	I0912 22:54:53.302578   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 62/120
	I0912 22:54:54.303910   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 63/120
	I0912 22:54:55.306177   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 64/120
	I0912 22:54:56.308267   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 65/120
	I0912 22:54:57.310081   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 66/120
	I0912 22:54:58.311664   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 67/120
	I0912 22:54:59.313292   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 68/120
	I0912 22:55:00.314643   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 69/120
	I0912 22:55:01.317152   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 70/120
	I0912 22:55:02.318721   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 71/120
	I0912 22:55:03.320070   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 72/120
	I0912 22:55:04.321605   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 73/120
	I0912 22:55:05.323131   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 74/120
	I0912 22:55:06.325201   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 75/120
	I0912 22:55:07.326584   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 76/120
	I0912 22:55:08.328134   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 77/120
	I0912 22:55:09.329314   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 78/120
	I0912 22:55:10.331049   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 79/120
	I0912 22:55:11.333324   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 80/120
	I0912 22:55:12.335073   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 81/120
	I0912 22:55:13.337389   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 82/120
	I0912 22:55:14.338796   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 83/120
	I0912 22:55:15.340388   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 84/120
	I0912 22:55:16.342532   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 85/120
	I0912 22:55:17.343898   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 86/120
	I0912 22:55:18.345652   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 87/120
	I0912 22:55:19.347179   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 88/120
	I0912 22:55:20.348556   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 89/120
	I0912 22:55:21.350851   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 90/120
	I0912 22:55:22.352389   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 91/120
	I0912 22:55:23.353894   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 92/120
	I0912 22:55:24.355396   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 93/120
	I0912 22:55:25.356811   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 94/120
	I0912 22:55:26.358795   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 95/120
	I0912 22:55:27.360757   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 96/120
	I0912 22:55:28.362331   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 97/120
	I0912 22:55:29.363921   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 98/120
	I0912 22:55:30.366115   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 99/120
	I0912 22:55:31.368350   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 100/120
	I0912 22:55:32.369911   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 101/120
	I0912 22:55:33.371461   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 102/120
	I0912 22:55:34.372871   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 103/120
	I0912 22:55:35.374391   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 104/120
	I0912 22:55:36.376377   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 105/120
	I0912 22:55:37.377651   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 106/120
	I0912 22:55:38.378921   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 107/120
	I0912 22:55:39.380586   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 108/120
	I0912 22:55:40.381842   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 109/120
	I0912 22:55:41.384267   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 110/120
	I0912 22:55:42.386278   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 111/120
	I0912 22:55:43.388227   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 112/120
	I0912 22:55:44.389829   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 113/120
	I0912 22:55:45.391306   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 114/120
	I0912 22:55:46.393411   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 115/120
	I0912 22:55:47.394765   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 116/120
	I0912 22:55:48.396048   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 117/120
	I0912 22:55:49.397770   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 118/120
	I0912 22:55:50.399259   59282 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for machine to stop 119/120
	I0912 22:55:51.399835   59282 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0912 22:55:51.399884   59282 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0912 22:55:51.401510   59282 out.go:201] 
	W0912 22:55:51.402602   59282 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0912 22:55:51.402618   59282 out.go:270] * 
	* 
	W0912 22:55:51.405130   59282 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0912 22:55:51.407010   59282 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-702201 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-702201 -n default-k8s-diff-port-702201
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-702201 -n default-k8s-diff-port-702201: exit status 3 (18.442899652s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0912 22:56:09.850035   60488 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.214:22: connect: no route to host
	E0912 22:56:09.850055   60488 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.214:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-702201" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.30s)
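The stop path here backed up /etc/cni and /etc/kubernetes, asked the kvm2 driver to stop the domain, and then polled 120 times over two minutes without the VM ever leaving the "Running" state. A possible manual follow-up on the CI host, assuming virsh is installed and the libvirt domain name matches the profile (the DHCP-lease lines above suggest it does), might be:

	minikube stop -p default-k8s-diff-port-702201                   # retry the graceful stop
	virsh -c qemu:///system list --all                              # check whether the domain is still running
	virsh -c qemu:///system destroy default-k8s-diff-port-702201    # hard power-off as a last resort
	minikube logs -p default-k8s-diff-port-702201 --file=logs.txt   # collect logs, as the message box above asks

These commands are illustrative only; nothing in the log confirms why the guest ignored the stop request.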

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (139.23s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-378112 --alsologtostderr -v=3
E0912 22:55:05.703588   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/functional-657409/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-378112 --alsologtostderr -v=3: exit status 82 (2m0.596694211s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-378112"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0912 22:54:44.168614   59889 out.go:345] Setting OutFile to fd 1 ...
	I0912 22:54:44.168755   59889 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 22:54:44.168766   59889 out.go:358] Setting ErrFile to fd 2...
	I0912 22:54:44.168772   59889 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 22:54:44.168982   59889 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-5891/.minikube/bin
	I0912 22:54:44.169239   59889 out.go:352] Setting JSON to false
	I0912 22:54:44.169338   59889 mustload.go:65] Loading cluster: embed-certs-378112
	I0912 22:54:44.169707   59889 config.go:182] Loaded profile config "embed-certs-378112": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 22:54:44.169802   59889 profile.go:143] Saving config to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/embed-certs-378112/config.json ...
	I0912 22:54:44.169990   59889 mustload.go:65] Loading cluster: embed-certs-378112
	I0912 22:54:44.170188   59889 config.go:182] Loaded profile config "embed-certs-378112": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 22:54:44.170249   59889 stop.go:39] StopHost: embed-certs-378112
	I0912 22:54:44.170707   59889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:54:44.170765   59889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:54:44.186636   59889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33497
	I0912 22:54:44.187214   59889 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:54:44.187908   59889 main.go:141] libmachine: Using API Version  1
	I0912 22:54:44.187944   59889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:54:44.188547   59889 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:54:44.191572   59889 out.go:177] * Stopping node "embed-certs-378112"  ...
	I0912 22:54:44.193050   59889 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0912 22:54:44.193111   59889 main.go:141] libmachine: (embed-certs-378112) Calling .DriverName
	I0912 22:54:44.193488   59889 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0912 22:54:44.193526   59889 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHHostname
	I0912 22:54:44.197292   59889 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 22:54:44.197784   59889 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 22:54:44.197826   59889 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 22:54:44.198118   59889 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHPort
	I0912 22:54:44.198435   59889 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHKeyPath
	I0912 22:54:44.198652   59889 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHUsername
	I0912 22:54:44.198809   59889 sshutil.go:53] new ssh client: &{IP:192.168.72.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/embed-certs-378112/id_rsa Username:docker}
	I0912 22:54:44.319843   59889 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0912 22:54:44.378540   59889 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0912 22:54:44.439938   59889 main.go:141] libmachine: Stopping "embed-certs-378112"...
	I0912 22:54:44.439977   59889 main.go:141] libmachine: (embed-certs-378112) Calling .GetState
	I0912 22:54:44.441950   59889 main.go:141] libmachine: (embed-certs-378112) Calling .Stop
	I0912 22:54:44.445472   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 0/120
	I0912 22:54:45.446970   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 1/120
	I0912 22:54:46.448427   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 2/120
	I0912 22:54:47.449947   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 3/120
	I0912 22:54:48.451461   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 4/120
	I0912 22:54:49.453807   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 5/120
	I0912 22:54:50.456320   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 6/120
	I0912 22:54:51.457950   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 7/120
	I0912 22:54:52.459194   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 8/120
	I0912 22:54:53.460581   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 9/120
	I0912 22:54:54.462043   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 10/120
	I0912 22:54:55.464329   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 11/120
	I0912 22:54:56.466187   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 12/120
	I0912 22:54:57.467744   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 13/120
	I0912 22:54:58.468934   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 14/120
	I0912 22:54:59.471112   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 15/120
	I0912 22:55:00.472738   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 16/120
	I0912 22:55:01.474196   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 17/120
	I0912 22:55:02.476460   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 18/120
	I0912 22:55:03.478676   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 19/120
	I0912 22:55:04.480946   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 20/120
	I0912 22:55:05.483015   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 21/120
	I0912 22:55:06.485208   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 22/120
	I0912 22:55:07.486663   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 23/120
	I0912 22:55:08.488267   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 24/120
	I0912 22:55:09.489691   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 25/120
	I0912 22:55:10.491199   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 26/120
	I0912 22:55:11.492665   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 27/120
	I0912 22:55:12.494091   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 28/120
	I0912 22:55:13.496178   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 29/120
	I0912 22:55:14.498273   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 30/120
	I0912 22:55:15.500078   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 31/120
	I0912 22:55:16.503020   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 32/120
	I0912 22:55:17.504526   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 33/120
	I0912 22:55:18.506103   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 34/120
	I0912 22:55:19.508339   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 35/120
	I0912 22:55:20.509892   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 36/120
	I0912 22:55:21.512344   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 37/120
	I0912 22:55:22.513893   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 38/120
	I0912 22:55:23.516465   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 39/120
	I0912 22:55:24.519113   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 40/120
	I0912 22:55:25.521520   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 41/120
	I0912 22:55:26.522887   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 42/120
	I0912 22:55:27.524584   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 43/120
	I0912 22:55:28.526100   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 44/120
	I0912 22:55:29.528208   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 45/120
	I0912 22:55:30.529731   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 46/120
	I0912 22:55:31.531159   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 47/120
	I0912 22:55:32.532649   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 48/120
	I0912 22:55:33.534074   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 49/120
	I0912 22:55:34.535946   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 50/120
	I0912 22:55:35.537471   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 51/120
	I0912 22:55:36.538811   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 52/120
	I0912 22:55:37.540356   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 53/120
	I0912 22:55:38.541506   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 54/120
	I0912 22:55:39.543593   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 55/120
	I0912 22:55:40.545245   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 56/120
	I0912 22:55:41.547032   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 57/120
	I0912 22:55:42.549033   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 58/120
	I0912 22:55:43.550750   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 59/120
	I0912 22:55:44.552802   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 60/120
	I0912 22:55:45.554796   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 61/120
	I0912 22:55:46.556469   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 62/120
	I0912 22:55:47.557893   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 63/120
	I0912 22:55:48.560238   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 64/120
	I0912 22:55:49.561959   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 65/120
	I0912 22:55:50.564544   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 66/120
	I0912 22:55:51.565866   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 67/120
	I0912 22:55:52.568352   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 68/120
	I0912 22:55:53.569760   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 69/120
	I0912 22:55:54.572139   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 70/120
	I0912 22:55:55.573514   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 71/120
	I0912 22:55:56.574805   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 72/120
	I0912 22:55:57.576602   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 73/120
	I0912 22:55:58.635511   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 74/120
	I0912 22:55:59.637878   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 75/120
	I0912 22:56:00.640359   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 76/120
	I0912 22:56:01.641942   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 77/120
	I0912 22:56:02.644130   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 78/120
	I0912 22:56:03.645769   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 79/120
	I0912 22:56:04.648186   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 80/120
	I0912 22:56:05.649731   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 81/120
	I0912 22:56:06.651242   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 82/120
	I0912 22:56:07.652928   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 83/120
	I0912 22:56:08.654498   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 84/120
	I0912 22:56:09.656550   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 85/120
	I0912 22:56:10.658245   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 86/120
	I0912 22:56:11.659555   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 87/120
	I0912 22:56:12.661032   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 88/120
	I0912 22:56:13.662954   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 89/120
	I0912 22:56:14.665063   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 90/120
	I0912 22:56:15.666768   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 91/120
	I0912 22:56:16.668928   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 92/120
	I0912 22:56:17.670303   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 93/120
	I0912 22:56:18.671733   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 94/120
	I0912 22:56:19.673102   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 95/120
	I0912 22:56:20.674517   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 96/120
	I0912 22:56:21.675979   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 97/120
	I0912 22:56:22.677423   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 98/120
	I0912 22:56:23.679243   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 99/120
	I0912 22:56:24.681207   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 100/120
	I0912 22:56:25.682789   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 101/120
	I0912 22:56:26.684595   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 102/120
	I0912 22:56:27.686865   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 103/120
	I0912 22:56:28.688381   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 104/120
	I0912 22:56:29.690194   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 105/120
	I0912 22:56:30.691626   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 106/120
	I0912 22:56:31.693667   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 107/120
	I0912 22:56:32.695062   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 108/120
	I0912 22:56:33.696644   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 109/120
	I0912 22:56:34.698698   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 110/120
	I0912 22:56:35.700221   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 111/120
	I0912 22:56:36.701805   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 112/120
	I0912 22:56:37.703474   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 113/120
	I0912 22:56:38.704714   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 114/120
	I0912 22:56:39.706584   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 115/120
	I0912 22:56:40.707821   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 116/120
	I0912 22:56:41.709310   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 117/120
	I0912 22:56:42.710842   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 118/120
	I0912 22:56:43.712327   59889 main.go:141] libmachine: (embed-certs-378112) Waiting for machine to stop 119/120
	I0912 22:56:44.712792   59889 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0912 22:56:44.712855   59889 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0912 22:56:44.714548   59889 out.go:201] 
	W0912 22:56:44.715914   59889 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0912 22:56:44.715931   59889 out.go:270] * 
	* 
	W0912 22:56:44.718583   59889 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0912 22:56:44.719909   59889 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-378112 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-378112 -n embed-certs-378112
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-378112 -n embed-certs-378112: exit status 3 (18.631452806s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0912 22:57:03.353900   61523 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.96:22: connect: no route to host
	E0912 22:57:03.353931   61523 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.96:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-378112" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.23s)
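For reference, the wait pattern in the captured log above is a fixed one-second poll: the driver asks the VM to stop, then checks its state up to 120 times ("Waiting for machine to stop i/120") before giving up and exiting with GUEST_STOP_TIMEOUT. The Go sketch below reproduces only that polling pattern under stated assumptions; stopWithTimeout, requestStop and getState are hypothetical stand-ins, not minikube's actual driver code.

	package main
	
	import (
		"fmt"
		"time"
	)
	
	// stopWithTimeout sketches the pattern visible in the log: request a stop,
	// then poll the VM state once per second for at most 120 attempts before
	// giving up with the same "unable to stop vm" error this test hit.
	func stopWithTimeout(requestStop func() error, getState func() string) error {
		if err := requestStop(); err != nil {
			return err
		}
		for i := 0; i < 120; i++ {
			if getState() == "Stopped" {
				return nil
			}
			fmt.Printf("Waiting for machine to stop %d/120\n", i)
			time.Sleep(time.Second)
		}
		return fmt.Errorf("unable to stop vm, current state %q", getState())
	}
	
	func main() {
		// Stand-in driver that never reaches "Stopped", reproducing the
		// two-minute timeout path seen in this run.
		err := stopWithTimeout(func() error { return nil }, func() string { return "Running" })
		fmt.Println("stop err:", err)
	}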

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-702201 -n default-k8s-diff-port-702201
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-702201 -n default-k8s-diff-port-702201: exit status 3 (3.166321245s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0912 22:56:13.018028   61223 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.214:22: connect: no route to host
	E0912 22:56:13.018051   61223 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.214:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-702201 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-702201 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152667136s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.214:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-702201 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-702201 -n default-k8s-diff-port-702201
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-702201 -n default-k8s-diff-port-702201: exit status 3 (3.063226914s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0912 22:56:22.233929   61324 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.214:22: connect: no route to host
	E0912 22:56:22.233951   61324 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.214:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-702201" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)
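The assertion that fails here compares the host state reported by the status command against "Stopped"; in this run the command exits 3 and prints "Error" because SSH to 192.168.39.214:22 returns "no route to host". Below is a minimal sketch of that check, shelling out to the same binary, flags, and profile shown in the log; the helper name and error handling are illustrative, not the test suite's own code.

	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	// hostIsStopped runs the same status command the test uses and reports
	// whether the host state matches the expected post-stop value.
	func hostIsStopped(profile string) (bool, string) {
		out, _ := exec.Command("out/minikube-linux-amd64", "status",
			"--format={{.Host}}", "-p", profile, "-n", profile).Output()
		state := strings.TrimSpace(string(out))
		return state == "Stopped", state
	}
	
	func main() {
		ok, state := hostIsStopped("default-k8s-diff-port-702201")
		if !ok {
			// In this run the state comes back as "Error" rather than "Stopped".
			fmt.Printf("expected post-stop host status %q, got %q\n", "Stopped", state)
		}
	}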

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (0.54s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-642238 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-642238 create -f testdata/busybox.yaml: exit status 1 (44.14686ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-642238" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-642238 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-642238 -n old-k8s-version-642238
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-642238 -n old-k8s-version-642238: exit status 6 (266.868113ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0912 22:56:46.993532   61594 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-642238" does not appear in /home/jenkins/minikube-integration/19616-5891/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-642238" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-642238 -n old-k8s-version-642238
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-642238 -n old-k8s-version-642238: exit status 6 (225.371061ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0912 22:56:47.221550   61624 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-642238" does not appear in /home/jenkins/minikube-integration/19616-5891/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-642238" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.54s)
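DeployApp fails in well under a second because the kubeconfig no longer contains the "old-k8s-version-642238" context (the status.go:417 error above). The sketch below makes that precondition explicit before attempting kubectl create; only the kubectl and minikube commands come from the log, the helper and its message are assumptions added for illustration.

	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	// contextExists lists kubeconfig contexts and checks for the profile's
	// context before any kubectl --context invocation is attempted.
	func contextExists(name string) bool {
		out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
		if err != nil {
			return false
		}
		for _, ctx := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if ctx == name {
				return true
			}
		}
		return false
	}
	
	func main() {
		if !contextExists("old-k8s-version-642238") {
			// The report itself suggests `minikube update-context` for a stale context.
			fmt.Println("context missing; running `minikube update-context` may repair the kubeconfig entry")
		}
	}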

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (88.59s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-642238 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-642238 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m28.330341681s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-642238 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-642238 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-642238 describe deploy/metrics-server -n kube-system: exit status 1 (43.315887ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-642238" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-642238 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-642238 -n old-k8s-version-642238
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-642238 -n old-k8s-version-642238: exit status 6 (214.938156ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0912 22:58:15.812827   62256 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-642238" does not appear in /home/jenkins/minikube-integration/19616-5891/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-642238" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (88.59s)
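The underlying error in this failure is not the addon manifests themselves: kubectl inside the VM cannot reach the API server at localhost:8443 ("connection refused"), so every apply in the callback fails. A hedged diagnostic sketch that separates an unreachable apiserver from a genuine addon problem follows; the address is taken from the kubectl error above, the probe function is illustrative.

	package main
	
	import (
		"fmt"
		"net"
		"time"
	)
	
	// apiserverReachable probes the API server's TCP port with a short timeout.
	func apiserverReachable(addr string) bool {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err != nil {
			return false
		}
		conn.Close()
		return true
	}
	
	func main() {
		if !apiserverReachable("localhost:8443") {
			fmt.Println("apiserver not reachable; addon apply will fail with 'connection refused'")
		}
	}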

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-378112 -n embed-certs-378112
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-378112 -n embed-certs-378112: exit status 3 (3.167973195s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0912 22:57:06.521919   61779 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.96:22: connect: no route to host
	E0912 22:57:06.521937   61779 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.96:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-378112 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0912 22:57:07.199127   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-378112 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.156282165s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.96:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-378112 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-378112 -n embed-certs-378112
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-378112 -n embed-certs-378112: exit status 3 (3.059413503s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0912 22:57:15.738028   61858 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.96:22: connect: no route to host
	E0912 22:57:15.738053   61858 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.96:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-378112" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (139.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-380092 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-380092 --alsologtostderr -v=3: exit status 82 (2m0.524673257s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-380092"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0912 22:57:50.436365   62141 out.go:345] Setting OutFile to fd 1 ...
	I0912 22:57:50.436630   62141 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 22:57:50.436639   62141 out.go:358] Setting ErrFile to fd 2...
	I0912 22:57:50.436644   62141 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 22:57:50.436829   62141 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-5891/.minikube/bin
	I0912 22:57:50.437040   62141 out.go:352] Setting JSON to false
	I0912 22:57:50.437110   62141 mustload.go:65] Loading cluster: no-preload-380092
	I0912 22:57:50.437418   62141 config.go:182] Loaded profile config "no-preload-380092": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 22:57:50.437484   62141 profile.go:143] Saving config to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/no-preload-380092/config.json ...
	I0912 22:57:50.437679   62141 mustload.go:65] Loading cluster: no-preload-380092
	I0912 22:57:50.437790   62141 config.go:182] Loaded profile config "no-preload-380092": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 22:57:50.437814   62141 stop.go:39] StopHost: no-preload-380092
	I0912 22:57:50.438192   62141 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:57:50.438240   62141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:57:50.452834   62141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45167
	I0912 22:57:50.453275   62141 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:57:50.454069   62141 main.go:141] libmachine: Using API Version  1
	I0912 22:57:50.454119   62141 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:57:50.454523   62141 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:57:50.456744   62141 out.go:177] * Stopping node "no-preload-380092"  ...
	I0912 22:57:50.458089   62141 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0912 22:57:50.458115   62141 main.go:141] libmachine: (no-preload-380092) Calling .DriverName
	I0912 22:57:50.458355   62141 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0912 22:57:50.458382   62141 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHHostname
	I0912 22:57:50.461184   62141 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 22:57:50.461589   62141 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-12 23:56:13 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 22:57:50.461646   62141 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 22:57:50.461814   62141 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHPort
	I0912 22:57:50.461988   62141 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHKeyPath
	I0912 22:57:50.462170   62141 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHUsername
	I0912 22:57:50.462324   62141 sshutil.go:53] new ssh client: &{IP:192.168.50.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/no-preload-380092/id_rsa Username:docker}
	I0912 22:57:50.551290   62141 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0912 22:57:50.625144   62141 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0912 22:57:50.705034   62141 main.go:141] libmachine: Stopping "no-preload-380092"...
	I0912 22:57:50.705058   62141 main.go:141] libmachine: (no-preload-380092) Calling .GetState
	I0912 22:57:50.706625   62141 main.go:141] libmachine: (no-preload-380092) Calling .Stop
	I0912 22:57:50.710556   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 0/120
	I0912 22:57:51.711863   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 1/120
	I0912 22:57:52.713360   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 2/120
	I0912 22:57:53.714929   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 3/120
	I0912 22:57:54.716483   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 4/120
	I0912 22:57:55.719076   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 5/120
	I0912 22:57:56.720335   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 6/120
	I0912 22:57:57.721728   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 7/120
	I0912 22:57:58.723107   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 8/120
	I0912 22:57:59.724748   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 9/120
	I0912 22:58:00.726462   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 10/120
	I0912 22:58:01.727887   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 11/120
	I0912 22:58:02.729697   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 12/120
	I0912 22:58:03.731399   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 13/120
	I0912 22:58:04.732897   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 14/120
	I0912 22:58:05.735207   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 15/120
	I0912 22:58:06.736575   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 16/120
	I0912 22:58:07.738226   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 17/120
	I0912 22:58:08.739617   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 18/120
	I0912 22:58:09.741073   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 19/120
	I0912 22:58:10.743396   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 20/120
	I0912 22:58:11.744805   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 21/120
	I0912 22:58:12.746328   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 22/120
	I0912 22:58:13.747825   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 23/120
	I0912 22:58:14.749452   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 24/120
	I0912 22:58:15.751425   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 25/120
	I0912 22:58:16.752805   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 26/120
	I0912 22:58:17.754034   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 27/120
	I0912 22:58:18.756329   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 28/120
	I0912 22:58:19.758006   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 29/120
	I0912 22:58:20.760469   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 30/120
	I0912 22:58:21.762070   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 31/120
	I0912 22:58:22.763552   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 32/120
	I0912 22:58:23.765288   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 33/120
	I0912 22:58:24.766711   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 34/120
	I0912 22:58:25.768783   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 35/120
	I0912 22:58:26.770110   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 36/120
	I0912 22:58:27.771673   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 37/120
	I0912 22:58:28.773124   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 38/120
	I0912 22:58:29.774674   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 39/120
	I0912 22:58:30.776977   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 40/120
	I0912 22:58:31.778474   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 41/120
	I0912 22:58:32.780828   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 42/120
	I0912 22:58:33.782525   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 43/120
	I0912 22:58:34.784025   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 44/120
	I0912 22:58:35.786247   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 45/120
	I0912 22:58:36.787844   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 46/120
	I0912 22:58:37.789194   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 47/120
	I0912 22:58:38.790477   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 48/120
	I0912 22:58:39.792276   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 49/120
	I0912 22:58:40.793531   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 50/120
	I0912 22:58:41.795171   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 51/120
	I0912 22:58:42.796625   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 52/120
	I0912 22:58:43.798178   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 53/120
	I0912 22:58:44.799560   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 54/120
	I0912 22:58:45.801965   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 55/120
	I0912 22:58:46.803362   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 56/120
	I0912 22:58:47.805013   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 57/120
	I0912 22:58:48.806427   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 58/120
	I0912 22:58:49.807892   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 59/120
	I0912 22:58:50.810303   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 60/120
	I0912 22:58:51.811733   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 61/120
	I0912 22:58:52.813191   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 62/120
	I0912 22:58:53.814866   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 63/120
	I0912 22:58:54.816529   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 64/120
	I0912 22:58:55.818806   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 65/120
	I0912 22:58:56.820357   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 66/120
	I0912 22:58:57.821752   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 67/120
	I0912 22:58:58.823143   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 68/120
	I0912 22:58:59.824579   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 69/120
	I0912 22:59:00.826905   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 70/120
	I0912 22:59:01.828381   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 71/120
	I0912 22:59:02.830205   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 72/120
	I0912 22:59:03.831939   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 73/120
	I0912 22:59:04.833493   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 74/120
	I0912 22:59:05.835881   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 75/120
	I0912 22:59:06.837360   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 76/120
	I0912 22:59:07.838912   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 77/120
	I0912 22:59:08.840488   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 78/120
	I0912 22:59:09.841989   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 79/120
	I0912 22:59:10.844263   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 80/120
	I0912 22:59:11.845666   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 81/120
	I0912 22:59:12.847045   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 82/120
	I0912 22:59:13.848471   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 83/120
	I0912 22:59:14.850030   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 84/120
	I0912 22:59:15.852022   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 85/120
	I0912 22:59:16.853463   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 86/120
	I0912 22:59:17.855204   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 87/120
	I0912 22:59:18.856553   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 88/120
	I0912 22:59:19.857978   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 89/120
	I0912 22:59:20.859214   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 90/120
	I0912 22:59:21.860642   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 91/120
	I0912 22:59:22.862278   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 92/120
	I0912 22:59:23.863683   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 93/120
	I0912 22:59:24.865266   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 94/120
	I0912 22:59:25.867675   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 95/120
	I0912 22:59:26.869095   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 96/120
	I0912 22:59:27.870465   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 97/120
	I0912 22:59:28.871863   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 98/120
	I0912 22:59:29.873466   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 99/120
	I0912 22:59:30.875839   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 100/120
	I0912 22:59:31.877681   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 101/120
	I0912 22:59:32.879161   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 102/120
	I0912 22:59:33.880777   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 103/120
	I0912 22:59:34.882292   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 104/120
	I0912 22:59:35.884388   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 105/120
	I0912 22:59:36.885811   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 106/120
	I0912 22:59:37.887479   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 107/120
	I0912 22:59:38.889070   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 108/120
	I0912 22:59:39.890611   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 109/120
	I0912 22:59:40.893108   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 110/120
	I0912 22:59:41.894603   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 111/120
	I0912 22:59:42.896071   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 112/120
	I0912 22:59:43.897388   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 113/120
	I0912 22:59:44.899011   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 114/120
	I0912 22:59:45.901235   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 115/120
	I0912 22:59:46.902759   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 116/120
	I0912 22:59:47.904180   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 117/120
	I0912 22:59:48.905855   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 118/120
	I0912 22:59:49.907194   62141 main.go:141] libmachine: (no-preload-380092) Waiting for machine to stop 119/120
	I0912 22:59:50.908713   62141 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0912 22:59:50.908763   62141 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0912 22:59:50.911545   62141 out.go:201] 
	W0912 22:59:50.913016   62141 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0912 22:59:50.913039   62141 out.go:270] * 
	* 
	W0912 22:59:50.915641   62141 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0912 22:59:50.917001   62141 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-380092 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-380092 -n no-preload-380092
E0912 23:00:05.704302   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/functional-657409/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-380092 -n no-preload-380092: exit status 3 (18.547125829s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0912 23:00:09.465922   62740 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.253:22: connect: no route to host
	E0912 23:00:09.465958   62740 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.253:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-380092" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.07s)
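Note on the failure above: the stop timed out with the VM still reporting "Running" after 120 wait attempts, and the post-mortem status check then failed with "no route to host". The sketch below is a minimal, hypothetical Go helper (not part of minikube or of this test suite) for checking the libvirt side of such a GUEST_STOP_TIMEOUT locally; it assumes the kvm2 driver names the libvirt domain after the profile, and the domain name "no-preload-380092" is taken from the log above.

// stopprobe.go - hypothetical local diagnostic for a GUEST_STOP_TIMEOUT.
// Not part of minikube or of this test suite; it only shells out to virsh
// (the libvirt CLI) to report the domain state and force it off if needed.
// Assumption: the kvm2 driver names the libvirt domain after the profile,
// so "no-preload-380092" below is taken from the failing log above.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	domain := "no-preload-380092"

	// "virsh domstate" prints the current state, e.g. "running" or "shut off".
	out, err := exec.Command("virsh", "domstate", domain).CombinedOutput()
	if err != nil {
		log.Fatalf("virsh domstate %s: %v (%s)", domain, err, out)
	}
	state := strings.TrimSpace(string(out))
	fmt.Printf("domain %s state: %s\n", domain, state)

	// If the guest ignored the graceful stop (as in the log above),
	// force it off so the next run starts from a known state.
	if state == "running" {
		if out, err := exec.Command("virsh", "destroy", domain).CombinedOutput(); err != nil {
			log.Fatalf("virsh destroy %s: %v (%s)", domain, err, out)
		}
		fmt.Printf("forced domain %s off\n", domain)
	}
}

Run on the CI host where virsh is available; virsh domstate and virsh destroy are standard libvirt commands.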

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (690.2s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-642238 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-642238 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (11m26.636056637s)

                                                
                                                
-- stdout --
	* [old-k8s-version-642238] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19616
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19616-5891/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19616-5891/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-642238" primary control-plane node in "old-k8s-version-642238" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-642238" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0912 22:58:19.321041   62386 out.go:345] Setting OutFile to fd 1 ...
	I0912 22:58:19.321313   62386 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 22:58:19.321324   62386 out.go:358] Setting ErrFile to fd 2...
	I0912 22:58:19.321330   62386 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 22:58:19.321513   62386 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-5891/.minikube/bin
	I0912 22:58:19.322087   62386 out.go:352] Setting JSON to false
	I0912 22:58:19.323004   62386 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6041,"bootTime":1726175858,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0912 22:58:19.323064   62386 start.go:139] virtualization: kvm guest
	I0912 22:58:19.325075   62386 out.go:177] * [old-k8s-version-642238] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0912 22:58:19.326287   62386 notify.go:220] Checking for updates...
	I0912 22:58:19.326301   62386 out.go:177]   - MINIKUBE_LOCATION=19616
	I0912 22:58:19.327547   62386 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 22:58:19.328935   62386 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19616-5891/kubeconfig
	I0912 22:58:19.330218   62386 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19616-5891/.minikube
	I0912 22:58:19.331479   62386 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0912 22:58:19.332462   62386 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 22:58:19.333963   62386 config.go:182] Loaded profile config "old-k8s-version-642238": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0912 22:58:19.334385   62386 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:58:19.334442   62386 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:58:19.349193   62386 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35965
	I0912 22:58:19.349584   62386 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:58:19.350086   62386 main.go:141] libmachine: Using API Version  1
	I0912 22:58:19.350116   62386 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:58:19.350469   62386 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:58:19.350769   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .DriverName
	I0912 22:58:19.352494   62386 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0912 22:58:19.353721   62386 driver.go:394] Setting default libvirt URI to qemu:///system
	I0912 22:58:19.354002   62386 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:58:19.354043   62386 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:58:19.368460   62386 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33207
	I0912 22:58:19.368917   62386 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:58:19.369424   62386 main.go:141] libmachine: Using API Version  1
	I0912 22:58:19.369445   62386 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:58:19.369793   62386 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:58:19.369949   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .DriverName
	I0912 22:58:19.405284   62386 out.go:177] * Using the kvm2 driver based on existing profile
	I0912 22:58:19.406226   62386 start.go:297] selected driver: kvm2
	I0912 22:58:19.406245   62386 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-642238 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-642238 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.69 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 22:58:19.406357   62386 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 22:58:19.407037   62386 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 22:58:19.407095   62386 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19616-5891/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0912 22:58:19.422378   62386 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0912 22:58:19.422754   62386 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 22:58:19.422820   62386 cni.go:84] Creating CNI manager for ""
	I0912 22:58:19.422833   62386 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0912 22:58:19.422872   62386 start.go:340] cluster config:
	{Name:old-k8s-version-642238 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-642238 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.69 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 22:58:19.422979   62386 iso.go:125] acquiring lock: {Name:mk3ec3c4afd4210b7425f6425f55e7f581d9a5a9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 22:58:19.425039   62386 out.go:177] * Starting "old-k8s-version-642238" primary control-plane node in "old-k8s-version-642238" cluster
	I0912 22:58:19.426016   62386 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0912 22:58:19.426046   62386 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0912 22:58:19.426053   62386 cache.go:56] Caching tarball of preloaded images
	I0912 22:58:19.426135   62386 preload.go:172] Found /home/jenkins/minikube-integration/19616-5891/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0912 22:58:19.426156   62386 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0912 22:58:19.426239   62386 profile.go:143] Saving config to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238/config.json ...
	I0912 22:58:19.426408   62386 start.go:360] acquireMachinesLock for old-k8s-version-642238: {Name:mkbb0a9e58b1349e86a63b6069c42d4248d92c3b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 23:01:19.958932   62386 start.go:364] duration metric: took 3m0.532494588s to acquireMachinesLock for "old-k8s-version-642238"
	I0912 23:01:19.958994   62386 start.go:96] Skipping create...Using existing machine configuration
	I0912 23:01:19.959005   62386 fix.go:54] fixHost starting: 
	I0912 23:01:19.959383   62386 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:01:19.959418   62386 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:01:19.976721   62386 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46263
	I0912 23:01:19.977134   62386 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:01:19.977648   62386 main.go:141] libmachine: Using API Version  1
	I0912 23:01:19.977673   62386 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:01:19.977988   62386 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:01:19.978166   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .DriverName
	I0912 23:01:19.978325   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetState
	I0912 23:01:19.979909   62386 fix.go:112] recreateIfNeeded on old-k8s-version-642238: state=Stopped err=<nil>
	I0912 23:01:19.979934   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .DriverName
	W0912 23:01:19.980079   62386 fix.go:138] unexpected machine state, will restart: <nil>
	I0912 23:01:19.982289   62386 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-642238" ...
	I0912 23:01:19.983746   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .Start
	I0912 23:01:19.983971   62386 main.go:141] libmachine: (old-k8s-version-642238) Ensuring networks are active...
	I0912 23:01:19.984890   62386 main.go:141] libmachine: (old-k8s-version-642238) Ensuring network default is active
	I0912 23:01:19.985345   62386 main.go:141] libmachine: (old-k8s-version-642238) Ensuring network mk-old-k8s-version-642238 is active
	I0912 23:01:19.985788   62386 main.go:141] libmachine: (old-k8s-version-642238) Getting domain xml...
	I0912 23:01:19.986827   62386 main.go:141] libmachine: (old-k8s-version-642238) Creating domain...
	I0912 23:01:21.258792   62386 main.go:141] libmachine: (old-k8s-version-642238) Waiting to get IP...
	I0912 23:01:21.259838   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:21.260300   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 23:01:21.260434   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 23:01:21.260300   63267 retry.go:31] will retry after 272.429869ms: waiting for machine to come up
	I0912 23:01:21.534713   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:21.535102   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 23:01:21.535131   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 23:01:21.535060   63267 retry.go:31] will retry after 352.031053ms: waiting for machine to come up
	I0912 23:01:21.888724   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:21.889235   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 23:01:21.889260   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 23:01:21.889212   63267 retry.go:31] will retry after 405.51409ms: waiting for machine to come up
	I0912 23:01:22.296746   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:22.297242   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 23:01:22.297286   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 23:01:22.297190   63267 retry.go:31] will retry after 607.76308ms: waiting for machine to come up
	I0912 23:01:22.907030   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:22.907784   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 23:01:22.907824   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 23:01:22.907659   63267 retry.go:31] will retry after 692.773261ms: waiting for machine to come up
	I0912 23:01:23.602242   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:23.602679   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 23:01:23.602701   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 23:01:23.602642   63267 retry.go:31] will retry after 591.018151ms: waiting for machine to come up
	I0912 23:01:24.195571   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:24.196100   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 23:01:24.196130   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 23:01:24.196046   63267 retry.go:31] will retry after 1.185264475s: waiting for machine to come up
	I0912 23:01:25.383446   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:25.383892   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 23:01:25.383912   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 23:01:25.383847   63267 retry.go:31] will retry after 1.399744787s: waiting for machine to come up
	I0912 23:01:26.785939   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:26.786489   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 23:01:26.786520   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 23:01:26.786425   63267 retry.go:31] will retry after 1.336566382s: waiting for machine to come up
	I0912 23:01:28.124647   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:28.125141   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 23:01:28.125172   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 23:01:28.125087   63267 retry.go:31] will retry after 1.527292388s: waiting for machine to come up
	I0912 23:01:29.654841   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:29.655236   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 23:01:29.655264   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 23:01:29.655183   63267 retry.go:31] will retry after 2.34568858s: waiting for machine to come up
	I0912 23:01:32.002617   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:32.003211   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 23:01:32.003242   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 23:01:32.003150   63267 retry.go:31] will retry after 2.273120763s: waiting for machine to come up
	I0912 23:01:34.279665   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:34.280098   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 23:01:34.280122   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 23:01:34.280064   63267 retry.go:31] will retry after 3.937702941s: waiting for machine to come up
	I0912 23:01:38.221947   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.222408   62386 main.go:141] libmachine: (old-k8s-version-642238) Found IP for machine: 192.168.61.69
	I0912 23:01:38.222437   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has current primary IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.222447   62386 main.go:141] libmachine: (old-k8s-version-642238) Reserving static IP address...
	I0912 23:01:38.222943   62386 main.go:141] libmachine: (old-k8s-version-642238) Reserved static IP address: 192.168.61.69
	I0912 23:01:38.222983   62386 main.go:141] libmachine: (old-k8s-version-642238) Waiting for SSH to be available...
	I0912 23:01:38.223007   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "old-k8s-version-642238", mac: "52:54:00:75:cb:57", ip: "192.168.61.69"} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:38.223057   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | skip adding static IP to network mk-old-k8s-version-642238 - found existing host DHCP lease matching {name: "old-k8s-version-642238", mac: "52:54:00:75:cb:57", ip: "192.168.61.69"}
	I0912 23:01:38.223079   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | Getting to WaitForSSH function...
	I0912 23:01:38.225720   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.226121   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:38.226155   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.226286   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | Using SSH client type: external
	I0912 23:01:38.226308   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | Using SSH private key: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/old-k8s-version-642238/id_rsa (-rw-------)
	I0912 23:01:38.226341   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.69 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19616-5891/.minikube/machines/old-k8s-version-642238/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0912 23:01:38.226357   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | About to run SSH command:
	I0912 23:01:38.226368   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | exit 0
	I0912 23:01:38.357945   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | SSH cmd err, output: <nil>: 
	I0912 23:01:38.358320   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetConfigRaw
	I0912 23:01:38.358887   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetIP
	I0912 23:01:38.361728   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.362098   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:38.362133   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.362372   62386 profile.go:143] Saving config to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238/config.json ...
	I0912 23:01:38.362640   62386 machine.go:93] provisionDockerMachine start ...
	I0912 23:01:38.362663   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .DriverName
	I0912 23:01:38.362897   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHHostname
	I0912 23:01:38.365251   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.365627   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:38.365656   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.365798   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHPort
	I0912 23:01:38.365969   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 23:01:38.366123   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 23:01:38.366251   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHUsername
	I0912 23:01:38.366468   62386 main.go:141] libmachine: Using SSH client type: native
	I0912 23:01:38.366691   62386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.61.69 22 <nil> <nil>}
	I0912 23:01:38.366707   62386 main.go:141] libmachine: About to run SSH command:
	hostname
	I0912 23:01:38.477548   62386 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0912 23:01:38.477575   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetMachineName
	I0912 23:01:38.477818   62386 buildroot.go:166] provisioning hostname "old-k8s-version-642238"
	I0912 23:01:38.477843   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetMachineName
	I0912 23:01:38.478029   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHHostname
	I0912 23:01:38.480368   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.480660   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:38.480683   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.480802   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHPort
	I0912 23:01:38.480981   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 23:01:38.481142   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 23:01:38.481287   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHUsername
	I0912 23:01:38.481630   62386 main.go:141] libmachine: Using SSH client type: native
	I0912 23:01:38.481846   62386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.61.69 22 <nil> <nil>}
	I0912 23:01:38.481864   62386 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-642238 && echo "old-k8s-version-642238" | sudo tee /etc/hostname
	I0912 23:01:38.606686   62386 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-642238
	
	I0912 23:01:38.606721   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHHostname
	I0912 23:01:38.609331   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.609682   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:38.609705   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.609867   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHPort
	I0912 23:01:38.610071   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 23:01:38.610297   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 23:01:38.610463   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHUsername
	I0912 23:01:38.610792   62386 main.go:141] libmachine: Using SSH client type: native
	I0912 23:01:38.610974   62386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.61.69 22 <nil> <nil>}
	I0912 23:01:38.610991   62386 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-642238' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-642238/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-642238' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0912 23:01:38.729561   62386 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0912 23:01:38.729588   62386 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19616-5891/.minikube CaCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19616-5891/.minikube}
	I0912 23:01:38.729664   62386 buildroot.go:174] setting up certificates
	I0912 23:01:38.729674   62386 provision.go:84] configureAuth start
	I0912 23:01:38.729686   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetMachineName
	I0912 23:01:38.729945   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetIP
	I0912 23:01:38.732718   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.733269   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:38.733302   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.733481   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHHostname
	I0912 23:01:38.735610   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.735925   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:38.735950   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.736074   62386 provision.go:143] copyHostCerts
	I0912 23:01:38.736129   62386 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem, removing ...
	I0912 23:01:38.736142   62386 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem
	I0912 23:01:38.736197   62386 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem (1082 bytes)
	I0912 23:01:38.736293   62386 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem, removing ...
	I0912 23:01:38.736306   62386 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem
	I0912 23:01:38.736330   62386 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem (1123 bytes)
	I0912 23:01:38.736390   62386 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem, removing ...
	I0912 23:01:38.736397   62386 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem
	I0912 23:01:38.736413   62386 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem (1679 bytes)
	I0912 23:01:38.736460   62386 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-642238 san=[127.0.0.1 192.168.61.69 localhost minikube old-k8s-version-642238]
	I0912 23:01:38.940760   62386 provision.go:177] copyRemoteCerts
	I0912 23:01:38.940819   62386 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0912 23:01:38.940846   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHHostname
	I0912 23:01:38.943954   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.944274   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:38.944304   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.944479   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHPort
	I0912 23:01:38.944688   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 23:01:38.944884   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHUsername
	I0912 23:01:38.945023   62386 sshutil.go:53] new ssh client: &{IP:192.168.61.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/old-k8s-version-642238/id_rsa Username:docker}
	I0912 23:01:39.032396   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0912 23:01:39.055559   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0912 23:01:39.081979   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0912 23:01:39.108245   62386 provision.go:87] duration metric: took 378.558125ms to configureAuth
	I0912 23:01:39.108276   62386 buildroot.go:189] setting minikube options for container-runtime
	I0912 23:01:39.108456   62386 config.go:182] Loaded profile config "old-k8s-version-642238": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0912 23:01:39.108515   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHHostname
	I0912 23:01:39.111321   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:39.111737   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:39.111759   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:39.111956   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHPort
	I0912 23:01:39.112175   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 23:01:39.112399   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 23:01:39.112552   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHUsername
	I0912 23:01:39.112721   62386 main.go:141] libmachine: Using SSH client type: native
	I0912 23:01:39.112939   62386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.61.69 22 <nil> <nil>}
	I0912 23:01:39.112955   62386 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0912 23:01:39.333662   62386 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0912 23:01:39.333695   62386 machine.go:96] duration metric: took 971.039233ms to provisionDockerMachine
	I0912 23:01:39.333712   62386 start.go:293] postStartSetup for "old-k8s-version-642238" (driver="kvm2")
	I0912 23:01:39.333728   62386 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0912 23:01:39.333755   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .DriverName
	I0912 23:01:39.334078   62386 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0912 23:01:39.334110   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHHostname
	I0912 23:01:39.336759   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:39.337144   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:39.337185   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:39.337326   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHPort
	I0912 23:01:39.337492   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 23:01:39.337649   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHUsername
	I0912 23:01:39.337757   62386 sshutil.go:53] new ssh client: &{IP:192.168.61.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/old-k8s-version-642238/id_rsa Username:docker}
	I0912 23:01:39.424344   62386 ssh_runner.go:195] Run: cat /etc/os-release
	I0912 23:01:39.428560   62386 info.go:137] Remote host: Buildroot 2023.02.9
	I0912 23:01:39.428586   62386 filesync.go:126] Scanning /home/jenkins/minikube-integration/19616-5891/.minikube/addons for local assets ...
	I0912 23:01:39.428651   62386 filesync.go:126] Scanning /home/jenkins/minikube-integration/19616-5891/.minikube/files for local assets ...
	I0912 23:01:39.428720   62386 filesync.go:149] local asset: /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem -> 130832.pem in /etc/ssl/certs
	I0912 23:01:39.428822   62386 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0912 23:01:39.438578   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem --> /etc/ssl/certs/130832.pem (1708 bytes)
	I0912 23:01:39.466955   62386 start.go:296] duration metric: took 133.228748ms for postStartSetup
	I0912 23:01:39.466993   62386 fix.go:56] duration metric: took 19.507989112s for fixHost
	I0912 23:01:39.467011   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHHostname
	I0912 23:01:39.469732   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:39.470141   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:39.470177   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:39.470446   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHPort
	I0912 23:01:39.470662   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 23:01:39.470820   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 23:01:39.470952   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHUsername
	I0912 23:01:39.471079   62386 main.go:141] libmachine: Using SSH client type: native
	I0912 23:01:39.471234   62386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.61.69 22 <nil> <nil>}
	I0912 23:01:39.471243   62386 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0912 23:01:39.582078   62386 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726182099.559242358
	
	I0912 23:01:39.582101   62386 fix.go:216] guest clock: 1726182099.559242358
	I0912 23:01:39.582108   62386 fix.go:229] Guest: 2024-09-12 23:01:39.559242358 +0000 UTC Remote: 2024-09-12 23:01:39.466996536 +0000 UTC m=+200.180679357 (delta=92.245822ms)
	I0912 23:01:39.582148   62386 fix.go:200] guest clock delta is within tolerance: 92.245822ms
	I0912 23:01:39.582153   62386 start.go:83] releasing machines lock for "old-k8s-version-642238", held for 19.623187273s
	I0912 23:01:39.582177   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .DriverName
	I0912 23:01:39.582449   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetIP
	I0912 23:01:39.585170   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:39.585556   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:39.585595   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:39.585770   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .DriverName
	I0912 23:01:39.586282   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .DriverName
	I0912 23:01:39.586471   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .DriverName
	I0912 23:01:39.586548   62386 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0912 23:01:39.586590   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHHostname
	I0912 23:01:39.586706   62386 ssh_runner.go:195] Run: cat /version.json
	I0912 23:01:39.586734   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHHostname
	I0912 23:01:39.589355   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:39.589769   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:39.589802   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:39.589824   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:39.589990   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHPort
	I0912 23:01:39.590163   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 23:01:39.590229   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:39.590258   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:39.590331   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHUsername
	I0912 23:01:39.590413   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHPort
	I0912 23:01:39.590491   62386 sshutil.go:53] new ssh client: &{IP:192.168.61.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/old-k8s-version-642238/id_rsa Username:docker}
	I0912 23:01:39.590525   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 23:01:39.590621   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHUsername
	I0912 23:01:39.590717   62386 sshutil.go:53] new ssh client: &{IP:192.168.61.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/old-k8s-version-642238/id_rsa Username:docker}
	I0912 23:01:39.709188   62386 ssh_runner.go:195] Run: systemctl --version
	I0912 23:01:39.714703   62386 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0912 23:01:39.867112   62386 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0912 23:01:39.874818   62386 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0912 23:01:39.874897   62386 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0912 23:01:39.894532   62386 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0912 23:01:39.894558   62386 start.go:495] detecting cgroup driver to use...
	I0912 23:01:39.894611   62386 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0912 23:01:39.911715   62386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0912 23:01:39.927113   62386 docker.go:217] disabling cri-docker service (if available) ...
	I0912 23:01:39.927181   62386 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0912 23:01:39.946720   62386 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0912 23:01:39.966602   62386 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0912 23:01:40.132813   62386 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0912 23:01:40.318613   62386 docker.go:233] disabling docker service ...
	I0912 23:01:40.318764   62386 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0912 23:01:40.337557   62386 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0912 23:01:40.355312   62386 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0912 23:01:40.507081   62386 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0912 23:01:40.623129   62386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0912 23:01:40.637980   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0912 23:01:40.658137   62386 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0912 23:01:40.658197   62386 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:40.672985   62386 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0912 23:01:40.673041   62386 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:40.687684   62386 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:40.699586   62386 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
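The three sed edits above (pause image, cgroup manager, conmon cgroup) all land in the same CRI-O drop-in. A minimal sketch for confirming the result on the node, assuming the file is root-readable as usual:

    # inspect the drop-in minikube just rewrote
    sudo grep -E '^(pause_image|cgroup_manager|conmon_cgroup)' /etc/crio/crio.conf.d/02-crio.conf
    # expected after the edits above:
    #   pause_image = "registry.k8s.io/pause:3.2"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"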
	I0912 23:01:40.711468   62386 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0912 23:01:40.722380   62386 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0912 23:01:40.733057   62386 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0912 23:01:40.733126   62386 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0912 23:01:40.748577   62386 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0912 23:01:40.758735   62386 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 23:01:40.883686   62386 ssh_runner.go:195] Run: sudo systemctl restart crio
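Before the restart above can succeed, the bridge-netfilter and IP-forwarding prerequisites set a few lines earlier have to hold. A small sketch for re-checking them by hand on the node (same module, sysctl and socket path as in the log; the expected values are assumptions based on what minikube configures):

    # bridge traffic must be visible to iptables; the sysctl only exists once br_netfilter is loaded
    sudo modprobe br_netfilter
    sudo sysctl net.bridge.bridge-nf-call-iptables    # typically "= 1" once the module is loaded
    cat /proc/sys/net/ipv4/ip_forward                 # the log writes 1 here explicitly
    # CRI-O must answer on the socket minikube waits 60s for
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version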
	I0912 23:01:40.977996   62386 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0912 23:01:40.978065   62386 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0912 23:01:40.984192   62386 start.go:563] Will wait 60s for crictl version
	I0912 23:01:40.984257   62386 ssh_runner.go:195] Run: which crictl
	I0912 23:01:40.988379   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0912 23:01:41.027758   62386 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0912 23:01:41.027855   62386 ssh_runner.go:195] Run: crio --version
	I0912 23:01:41.057198   62386 ssh_runner.go:195] Run: crio --version
	I0912 23:01:41.091414   62386 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0912 23:01:41.092686   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetIP
	I0912 23:01:41.096196   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:41.096806   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:41.096843   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:41.097167   62386 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0912 23:01:41.101509   62386 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0912 23:01:41.115914   62386 kubeadm.go:883] updating cluster {Name:old-k8s-version-642238 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-642238 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.69 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0912 23:01:41.116230   62386 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0912 23:01:41.116327   62386 ssh_runner.go:195] Run: sudo crictl images --output json
	I0912 23:01:41.164309   62386 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0912 23:01:41.164389   62386 ssh_runner.go:195] Run: which lz4
	I0912 23:01:41.168669   62386 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0912 23:01:41.172973   62386 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0912 23:01:41.173008   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0912 23:01:42.662843   62386 crio.go:462] duration metric: took 1.494204864s to copy over tarball
	I0912 23:01:42.662921   62386 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0912 23:01:45.728604   62386 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.065648968s)
	I0912 23:01:45.728636   62386 crio.go:469] duration metric: took 3.065759694s to extract the tarball
	I0912 23:01:45.728646   62386 ssh_runner.go:146] rm: /preloaded.tar.lz4
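The copy-and-extract step above can be reproduced by hand; this is only a sketch of an equivalent, reusing the SSH key, user and IP from the sshutil line earlier and assuming passwordless sudo on the node (which minikube's VM normally provides):

    # stream the preload tarball to the node, then unpack it with the same tar flags the log uses
    ssh -i /home/jenkins/minikube-integration/19616-5891/.minikube/machines/old-k8s-version-642238/id_rsa \
        docker@192.168.61.69 "sudo tee /preloaded.tar.lz4 >/dev/null" \
        < /home/jenkins/minikube-integration/19616-5891/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
    # then, on the node:
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm /preloaded.tar.lz4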
	I0912 23:01:45.770020   62386 ssh_runner.go:195] Run: sudo crictl images --output json
	I0912 23:01:45.803238   62386 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0912 23:01:45.803263   62386 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0912 23:01:45.803356   62386 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0912 23:01:45.803393   62386 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0912 23:01:45.803411   62386 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0912 23:01:45.803433   62386 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0912 23:01:45.803482   62386 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0912 23:01:45.803487   62386 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0912 23:01:45.803358   62386 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 23:01:45.803456   62386 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0912 23:01:45.805495   62386 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0912 23:01:45.805522   62386 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0912 23:01:45.805549   62386 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0912 23:01:45.805538   62386 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0912 23:01:45.805583   62386 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0912 23:01:45.805500   62386 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0912 23:01:45.805498   62386 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 23:01:45.805503   62386 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0912 23:01:46.036001   62386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0912 23:01:46.053248   62386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0912 23:01:46.053339   62386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0912 23:01:46.055973   62386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0912 23:01:46.070206   62386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0912 23:01:46.079999   62386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0912 23:01:46.109937   62386 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0912 23:01:46.109989   62386 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0912 23:01:46.110039   62386 ssh_runner.go:195] Run: which crictl
	I0912 23:01:46.162798   62386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0912 23:01:46.224302   62386 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0912 23:01:46.224345   62386 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0912 23:01:46.224375   62386 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0912 23:01:46.224392   62386 ssh_runner.go:195] Run: which crictl
	I0912 23:01:46.224413   62386 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0912 23:01:46.224418   62386 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0912 23:01:46.224452   62386 ssh_runner.go:195] Run: which crictl
	I0912 23:01:46.224451   62386 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0912 23:01:46.224495   62386 ssh_runner.go:195] Run: which crictl
	I0912 23:01:46.224510   62386 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0912 23:01:46.224529   62386 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0912 23:01:46.224551   62386 ssh_runner.go:195] Run: which crictl
	I0912 23:01:46.243459   62386 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0912 23:01:46.243561   62386 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0912 23:01:46.243584   62386 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0912 23:01:46.243596   62386 ssh_runner.go:195] Run: which crictl
	I0912 23:01:46.243619   62386 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0912 23:01:46.243648   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0912 23:01:46.243658   62386 ssh_runner.go:195] Run: which crictl
	I0912 23:01:46.243619   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0912 23:01:46.243504   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0912 23:01:46.243737   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0912 23:01:46.243786   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0912 23:01:46.347085   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0912 23:01:46.347138   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0912 23:01:46.347184   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0912 23:01:46.354548   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0912 23:01:46.354548   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0912 23:01:46.354623   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0912 23:01:46.354658   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0912 23:01:46.490548   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0912 23:01:46.490655   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0912 23:01:46.490664   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0912 23:01:46.519541   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0912 23:01:46.519572   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0912 23:01:46.519583   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0912 23:01:46.519631   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0912 23:01:46.650941   62386 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0912 23:01:46.651102   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0912 23:01:46.651115   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0912 23:01:46.665864   62386 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0912 23:01:46.669346   62386 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0912 23:01:46.669393   62386 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0912 23:01:46.669433   62386 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0912 23:01:46.713909   62386 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0912 23:01:46.713928   62386 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0912 23:01:46.947952   62386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 23:01:47.093308   62386 cache_images.go:92] duration metric: took 1.29002863s to LoadCachedImages
	W0912 23:01:47.093414   62386 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
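The failure above is about the image cache on the CI host, not about the node: LoadCachedImages stats files under the cache directory named in the error and gives up when they are missing. A quick hedged check of what is actually cached there:

    # list whatever image archives exist where LoadCachedImages looked
    ls -l /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/ 2>/dev/null \
      || echo "no cached images under registry.k8s.io/"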
	I0912 23:01:47.093432   62386 kubeadm.go:934] updating node { 192.168.61.69 8443 v1.20.0 crio true true} ...
	I0912 23:01:47.093567   62386 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-642238 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.69
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-642238 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0912 23:01:47.093677   62386 ssh_runner.go:195] Run: crio config
	I0912 23:01:47.140625   62386 cni.go:84] Creating CNI manager for ""
	I0912 23:01:47.140651   62386 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0912 23:01:47.140665   62386 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0912 23:01:47.140683   62386 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.69 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-642238 NodeName:old-k8s-version-642238 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.69"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.69 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0912 23:01:47.140848   62386 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.69
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-642238"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.69
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.69"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0912 23:01:47.140918   62386 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0912 23:01:47.151096   62386 binaries.go:44] Found k8s binaries, skipping transfer
	I0912 23:01:47.151174   62386 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0912 23:01:47.161100   62386 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0912 23:01:47.178267   62386 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0912 23:01:47.196468   62386 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0912 23:01:47.215215   62386 ssh_runner.go:195] Run: grep 192.168.61.69	control-plane.minikube.internal$ /etc/hosts
	I0912 23:01:47.219835   62386 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.69	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0912 23:01:47.234386   62386 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 23:01:47.374152   62386 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0912 23:01:47.394130   62386 certs.go:68] Setting up /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238 for IP: 192.168.61.69
	I0912 23:01:47.394155   62386 certs.go:194] generating shared ca certs ...
	I0912 23:01:47.394174   62386 certs.go:226] acquiring lock for ca certs: {Name:mk5e19f2c2757edc3fcb6b43a39efee0885b349c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 23:01:47.394399   62386 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.key
	I0912 23:01:47.394459   62386 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.key
	I0912 23:01:47.394474   62386 certs.go:256] generating profile certs ...
	I0912 23:01:47.394591   62386 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238/client.key
	I0912 23:01:47.394663   62386 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238/apiserver.key.fcb0a37b
	I0912 23:01:47.394713   62386 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238/proxy-client.key
	I0912 23:01:47.394881   62386 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083.pem (1338 bytes)
	W0912 23:01:47.394922   62386 certs.go:480] ignoring /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083_empty.pem, impossibly tiny 0 bytes
	I0912 23:01:47.394936   62386 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem (1679 bytes)
	I0912 23:01:47.394980   62386 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem (1082 bytes)
	I0912 23:01:47.395016   62386 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem (1123 bytes)
	I0912 23:01:47.395050   62386 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem (1679 bytes)
	I0912 23:01:47.395103   62386 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem (1708 bytes)
	I0912 23:01:47.396058   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0912 23:01:47.436356   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0912 23:01:47.470442   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0912 23:01:47.496440   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0912 23:01:47.522541   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0912 23:01:47.547406   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0912 23:01:47.575687   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0912 23:01:47.602110   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0912 23:01:47.628233   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0912 23:01:47.659161   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083.pem --> /usr/share/ca-certificates/13083.pem (1338 bytes)
	I0912 23:01:47.698813   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem --> /usr/share/ca-certificates/130832.pem (1708 bytes)
	I0912 23:01:47.722494   62386 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0912 23:01:47.739479   62386 ssh_runner.go:195] Run: openssl version
	I0912 23:01:47.745476   62386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130832.pem && ln -fs /usr/share/ca-certificates/130832.pem /etc/ssl/certs/130832.pem"
	I0912 23:01:47.756396   62386 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130832.pem
	I0912 23:01:47.760904   62386 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 12 21:47 /usr/share/ca-certificates/130832.pem
	I0912 23:01:47.760983   62386 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130832.pem
	I0912 23:01:47.767122   62386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130832.pem /etc/ssl/certs/3ec20f2e.0"
	I0912 23:01:47.778372   62386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0912 23:01:47.789359   62386 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0912 23:01:47.794138   62386 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 12 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0912 23:01:47.794205   62386 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0912 23:01:47.799780   62386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0912 23:01:47.810735   62386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13083.pem && ln -fs /usr/share/ca-certificates/13083.pem /etc/ssl/certs/13083.pem"
	I0912 23:01:47.821361   62386 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13083.pem
	I0912 23:01:47.825785   62386 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 12 21:47 /usr/share/ca-certificates/13083.pem
	I0912 23:01:47.825848   62386 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13083.pem
	I0912 23:01:47.832591   62386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13083.pem /etc/ssl/certs/51391683.0"
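The hash-and-symlink passes above follow the standard OpenSSL trust-store layout (/etc/ssl/certs/<subject-hash>.0 pointing at the PEM). The same work as one small loop over the three bundles copied earlier, purely as a sketch:

    # recreate the <hash>.0 links for every CA bundle minikube copied to the node
    for pem in /usr/share/ca-certificates/minikubeCA.pem \
               /usr/share/ca-certificates/13083.pem \
               /usr/share/ca-certificates/130832.pem; do
      hash=$(openssl x509 -hash -noout -in "$pem")
      sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"
    done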
	I0912 23:01:47.844637   62386 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0912 23:01:47.849313   62386 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0912 23:01:47.855337   62386 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0912 23:01:47.861492   62386 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0912 23:01:47.868028   62386 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0912 23:01:47.874215   62386 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0912 23:01:47.880279   62386 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
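Each -checkend 86400 call above exits non-zero if the certificate expires within the next 24 hours; the log shows them all passing silently. The same probe over all the certs checked here, as a sketch:

    # flag any control-plane cert that will expire within 86400 seconds
    for crt in /var/lib/minikube/certs/apiserver-etcd-client.crt \
               /var/lib/minikube/certs/apiserver-kubelet-client.crt \
               /var/lib/minikube/certs/etcd/server.crt \
               /var/lib/minikube/certs/etcd/healthcheck-client.crt \
               /var/lib/minikube/certs/etcd/peer.crt \
               /var/lib/minikube/certs/front-proxy-client.crt; do
      sudo openssl x509 -noout -in "$crt" -checkend 86400 || echo "$crt expires within 24h"
    done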
	I0912 23:01:47.886478   62386 kubeadm.go:392] StartCluster: {Name:old-k8s-version-642238 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-642238 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.69 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 23:01:47.886579   62386 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0912 23:01:47.886665   62386 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0912 23:01:47.929887   62386 cri.go:89] found id: ""
	I0912 23:01:47.929965   62386 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0912 23:01:47.940988   62386 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0912 23:01:47.941014   62386 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0912 23:01:47.941071   62386 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0912 23:01:47.951357   62386 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0912 23:01:47.952314   62386 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-642238" does not appear in /home/jenkins/minikube-integration/19616-5891/kubeconfig
	I0912 23:01:47.952929   62386 kubeconfig.go:62] /home/jenkins/minikube-integration/19616-5891/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-642238" cluster setting kubeconfig missing "old-k8s-version-642238" context setting]
	I0912 23:01:47.953869   62386 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/kubeconfig: {Name:mkffb46c3e9d2b8baebc7237b48bf41bccf1a52c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 23:01:47.961244   62386 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0912 23:01:47.973427   62386 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.69
	I0912 23:01:47.973462   62386 kubeadm.go:1160] stopping kube-system containers ...
	I0912 23:01:47.973476   62386 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0912 23:01:47.973530   62386 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0912 23:01:48.008401   62386 cri.go:89] found id: ""
	I0912 23:01:48.008479   62386 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0912 23:01:48.024605   62386 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0912 23:01:48.034256   62386 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0912 23:01:48.034282   62386 kubeadm.go:157] found existing configuration files:
	
	I0912 23:01:48.034341   62386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0912 23:01:48.043468   62386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0912 23:01:48.043533   62386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0912 23:01:48.053241   62386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0912 23:01:48.062653   62386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0912 23:01:48.062728   62386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0912 23:01:48.073213   62386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0912 23:01:48.085060   62386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0912 23:01:48.085136   62386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0912 23:01:48.095722   62386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0912 23:01:48.105099   62386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0912 23:01:48.105169   62386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
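The four grep-then-rm exchanges above are the stale-kubeconfig sweep: any file under /etc/kubernetes that does not mention the expected control-plane endpoint is removed before kubeadm regenerates it. Condensed into one sketch using the same endpoint string:

    # drop any leftover kubeconfig that does not point at the current control-plane endpoint
    for conf in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$conf" 2>/dev/null \
        || sudo rm -f "/etc/kubernetes/$conf"
    done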
	I0912 23:01:48.114362   62386 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0912 23:01:48.123856   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:01:48.250258   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:01:48.824441   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:01:49.045340   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:01:49.151009   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:01:49.245161   62386 api_server.go:52] waiting for apiserver process to appear ...
	I0912 23:01:49.245239   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:49.745632   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:50.245841   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:50.746368   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:51.245741   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:51.745708   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:52.246143   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:52.745402   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:53.245790   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:53.745965   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:54.246368   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:54.745915   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:55.245740   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:55.745435   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:56.245679   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:56.745309   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:57.246032   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:57.745362   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:58.245409   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:58.745470   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:59.245307   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:59.746112   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:00.246227   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:00.745742   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:01.245741   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:01.746355   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:02.245345   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:02.745752   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:03.246089   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:03.745811   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:04.245382   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:04.745649   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:05.245909   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:05.745777   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:06.245432   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:06.745472   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:07.245763   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:07.745416   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:08.245886   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:08.745493   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:09.246056   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:09.746171   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:10.246283   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:10.745675   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:11.245560   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:11.745384   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:12.245631   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:12.745749   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:13.245487   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:13.745849   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:14.245391   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:14.745599   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:15.245719   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:15.745787   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:16.245959   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:16.746271   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:17.245414   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:17.745343   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:18.246080   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:18.746025   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:19.245751   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:19.745707   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:20.246273   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:20.746109   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:21.246160   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:21.745863   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:22.245390   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:22.745716   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:23.245475   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:23.746069   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:24.245487   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:24.746085   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:25.245836   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:25.745805   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:26.246312   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:26.745772   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:27.245309   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:27.745530   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:28.245792   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:28.745917   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:29.245542   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:29.746186   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:30.245501   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:30.745636   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:31.245440   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:31.745457   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:32.246318   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:32.745369   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:33.246152   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:33.746183   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:34.245452   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:34.746241   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:35.246108   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:35.746087   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:36.245732   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:36.745659   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:37.245760   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:37.746137   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:38.245355   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:38.745905   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:39.246196   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:39.745643   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:40.245485   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:40.745582   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:41.245599   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:41.746339   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:42.246155   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:42.746334   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:43.245368   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:43.745371   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:44.246050   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:44.746354   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:45.245964   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:45.745631   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:46.246314   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:46.745483   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:47.245554   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:47.746311   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:48.246160   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:48.745999   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:49.246000   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:02:49.246093   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:02:49.286022   62386 cri.go:89] found id: ""
	I0912 23:02:49.286052   62386 logs.go:276] 0 containers: []
	W0912 23:02:49.286063   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:02:49.286070   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:02:49.286121   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:02:49.320469   62386 cri.go:89] found id: ""
	I0912 23:02:49.320508   62386 logs.go:276] 0 containers: []
	W0912 23:02:49.320527   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:02:49.320535   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:02:49.320635   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:02:49.355651   62386 cri.go:89] found id: ""
	I0912 23:02:49.355682   62386 logs.go:276] 0 containers: []
	W0912 23:02:49.355694   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:02:49.355702   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:02:49.355757   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:02:49.387928   62386 cri.go:89] found id: ""
	I0912 23:02:49.387956   62386 logs.go:276] 0 containers: []
	W0912 23:02:49.387966   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:02:49.387980   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:02:49.388042   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:02:49.421154   62386 cri.go:89] found id: ""
	I0912 23:02:49.421184   62386 logs.go:276] 0 containers: []
	W0912 23:02:49.421192   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:02:49.421198   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:02:49.421258   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:02:49.460122   62386 cri.go:89] found id: ""
	I0912 23:02:49.460147   62386 logs.go:276] 0 containers: []
	W0912 23:02:49.460154   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:02:49.460159   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:02:49.460204   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:02:49.493113   62386 cri.go:89] found id: ""
	I0912 23:02:49.493136   62386 logs.go:276] 0 containers: []
	W0912 23:02:49.493144   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:02:49.493150   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:02:49.493196   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:02:49.525750   62386 cri.go:89] found id: ""
	I0912 23:02:49.525773   62386 logs.go:276] 0 containers: []
	W0912 23:02:49.525780   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:02:49.525790   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:02:49.525800   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:02:49.578720   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:02:49.578757   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:02:49.591483   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:02:49.591510   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:02:49.711769   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:02:49.711836   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:02:49.711854   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:02:49.792569   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:02:49.792620   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:02:52.333723   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:52.346359   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:02:52.346428   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:02:52.379990   62386 cri.go:89] found id: ""
	I0912 23:02:52.380017   62386 logs.go:276] 0 containers: []
	W0912 23:02:52.380025   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:02:52.380032   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:02:52.380089   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:02:52.413963   62386 cri.go:89] found id: ""
	I0912 23:02:52.413994   62386 logs.go:276] 0 containers: []
	W0912 23:02:52.414002   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:02:52.414007   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:02:52.414064   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:02:52.463982   62386 cri.go:89] found id: ""
	I0912 23:02:52.464012   62386 logs.go:276] 0 containers: []
	W0912 23:02:52.464024   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:02:52.464031   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:02:52.464119   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:02:52.497797   62386 cri.go:89] found id: ""
	I0912 23:02:52.497830   62386 logs.go:276] 0 containers: []
	W0912 23:02:52.497840   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:02:52.497848   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:02:52.497914   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:02:52.531946   62386 cri.go:89] found id: ""
	I0912 23:02:52.531974   62386 logs.go:276] 0 containers: []
	W0912 23:02:52.531982   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:02:52.531987   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:02:52.532036   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:02:52.563802   62386 cri.go:89] found id: ""
	I0912 23:02:52.563837   62386 logs.go:276] 0 containers: []
	W0912 23:02:52.563846   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:02:52.563859   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:02:52.563914   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:02:52.597408   62386 cri.go:89] found id: ""
	I0912 23:02:52.597437   62386 logs.go:276] 0 containers: []
	W0912 23:02:52.597447   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:02:52.597457   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:02:52.597529   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:02:52.634991   62386 cri.go:89] found id: ""
	I0912 23:02:52.635026   62386 logs.go:276] 0 containers: []
	W0912 23:02:52.635037   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:02:52.635049   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:02:52.635061   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:02:52.711072   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:02:52.711112   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:02:52.755335   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:02:52.755359   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:02:52.806660   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:02:52.806694   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:02:52.819718   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:02:52.819751   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:02:52.897247   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:02:55.398028   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:55.411839   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:02:55.411920   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:02:55.446367   62386 cri.go:89] found id: ""
	I0912 23:02:55.446402   62386 logs.go:276] 0 containers: []
	W0912 23:02:55.446414   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:02:55.446421   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:02:55.446489   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:02:55.481672   62386 cri.go:89] found id: ""
	I0912 23:02:55.481696   62386 logs.go:276] 0 containers: []
	W0912 23:02:55.481704   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:02:55.481709   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:02:55.481766   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:02:55.517577   62386 cri.go:89] found id: ""
	I0912 23:02:55.517628   62386 logs.go:276] 0 containers: []
	W0912 23:02:55.517640   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:02:55.517651   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:02:55.517724   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:02:55.553526   62386 cri.go:89] found id: ""
	I0912 23:02:55.553554   62386 logs.go:276] 0 containers: []
	W0912 23:02:55.553565   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:02:55.553572   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:02:55.553659   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:02:55.585628   62386 cri.go:89] found id: ""
	I0912 23:02:55.585658   62386 logs.go:276] 0 containers: []
	W0912 23:02:55.585666   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:02:55.585673   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:02:55.585729   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:02:55.619504   62386 cri.go:89] found id: ""
	I0912 23:02:55.619529   62386 logs.go:276] 0 containers: []
	W0912 23:02:55.619537   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:02:55.619543   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:02:55.619612   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:02:55.652478   62386 cri.go:89] found id: ""
	I0912 23:02:55.652505   62386 logs.go:276] 0 containers: []
	W0912 23:02:55.652513   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:02:55.652519   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:02:55.652571   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:02:55.685336   62386 cri.go:89] found id: ""
	I0912 23:02:55.685367   62386 logs.go:276] 0 containers: []
	W0912 23:02:55.685378   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:02:55.685389   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:02:55.685405   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:02:55.766786   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:02:55.766820   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:02:55.805897   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:02:55.805921   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:02:55.858536   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:02:55.858578   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:02:55.872300   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:02:55.872330   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:02:55.940023   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:02:58.440335   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:58.454063   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:02:58.454146   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:02:58.495390   62386 cri.go:89] found id: ""
	I0912 23:02:58.495418   62386 logs.go:276] 0 containers: []
	W0912 23:02:58.495429   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:02:58.495436   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:02:58.495491   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:02:58.533323   62386 cri.go:89] found id: ""
	I0912 23:02:58.533361   62386 logs.go:276] 0 containers: []
	W0912 23:02:58.533369   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:02:58.533374   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:02:58.533426   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:02:58.570749   62386 cri.go:89] found id: ""
	I0912 23:02:58.570772   62386 logs.go:276] 0 containers: []
	W0912 23:02:58.570779   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:02:58.570785   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:02:58.570838   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:02:58.602812   62386 cri.go:89] found id: ""
	I0912 23:02:58.602841   62386 logs.go:276] 0 containers: []
	W0912 23:02:58.602852   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:02:58.602861   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:02:58.602920   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:02:58.641837   62386 cri.go:89] found id: ""
	I0912 23:02:58.641868   62386 logs.go:276] 0 containers: []
	W0912 23:02:58.641875   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:02:58.641881   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:02:58.641951   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:02:58.679411   62386 cri.go:89] found id: ""
	I0912 23:02:58.679437   62386 logs.go:276] 0 containers: []
	W0912 23:02:58.679444   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:02:58.679449   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:02:58.679495   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:02:58.715666   62386 cri.go:89] found id: ""
	I0912 23:02:58.715693   62386 logs.go:276] 0 containers: []
	W0912 23:02:58.715701   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:02:58.715707   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:02:58.715765   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:02:58.750345   62386 cri.go:89] found id: ""
	I0912 23:02:58.750367   62386 logs.go:276] 0 containers: []
	W0912 23:02:58.750375   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:02:58.750383   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:02:58.750395   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:02:58.803683   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:02:58.803722   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:02:58.819479   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:02:58.819512   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:02:58.939708   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:02:58.939733   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:02:58.939752   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:02:59.031209   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:02:59.031241   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:01.578409   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:01.591929   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:01.592004   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:01.626295   62386 cri.go:89] found id: ""
	I0912 23:03:01.626327   62386 logs.go:276] 0 containers: []
	W0912 23:03:01.626339   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:01.626346   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:01.626406   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:01.660489   62386 cri.go:89] found id: ""
	I0912 23:03:01.660520   62386 logs.go:276] 0 containers: []
	W0912 23:03:01.660543   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:01.660563   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:01.660618   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:01.694378   62386 cri.go:89] found id: ""
	I0912 23:03:01.694401   62386 logs.go:276] 0 containers: []
	W0912 23:03:01.694408   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:01.694414   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:01.694467   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:01.733170   62386 cri.go:89] found id: ""
	I0912 23:03:01.733202   62386 logs.go:276] 0 containers: []
	W0912 23:03:01.733211   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:01.733237   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:01.733307   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:01.766419   62386 cri.go:89] found id: ""
	I0912 23:03:01.766449   62386 logs.go:276] 0 containers: []
	W0912 23:03:01.766457   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:01.766467   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:01.766530   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:01.802964   62386 cri.go:89] found id: ""
	I0912 23:03:01.802988   62386 logs.go:276] 0 containers: []
	W0912 23:03:01.802995   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:01.803001   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:01.803047   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:01.846231   62386 cri.go:89] found id: ""
	I0912 23:03:01.846257   62386 logs.go:276] 0 containers: []
	W0912 23:03:01.846268   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:01.846276   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:01.846340   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:01.889353   62386 cri.go:89] found id: ""
	I0912 23:03:01.889379   62386 logs.go:276] 0 containers: []
	W0912 23:03:01.889387   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:01.889396   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:01.889407   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:01.904850   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:01.904876   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:01.986288   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:01.986311   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:01.986328   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:02.070616   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:02.070646   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:02.111931   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:02.111959   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:04.676429   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:04.689177   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:04.689240   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:04.721393   62386 cri.go:89] found id: ""
	I0912 23:03:04.721420   62386 logs.go:276] 0 containers: []
	W0912 23:03:04.721431   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:04.721437   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:04.721494   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:04.754239   62386 cri.go:89] found id: ""
	I0912 23:03:04.754270   62386 logs.go:276] 0 containers: []
	W0912 23:03:04.754281   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:04.754288   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:04.754340   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:04.787546   62386 cri.go:89] found id: ""
	I0912 23:03:04.787576   62386 logs.go:276] 0 containers: []
	W0912 23:03:04.787590   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:04.787597   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:04.787657   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:04.821051   62386 cri.go:89] found id: ""
	I0912 23:03:04.821141   62386 logs.go:276] 0 containers: []
	W0912 23:03:04.821151   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:04.821157   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:04.821210   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:04.853893   62386 cri.go:89] found id: ""
	I0912 23:03:04.853918   62386 logs.go:276] 0 containers: []
	W0912 23:03:04.853928   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:04.853935   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:04.854013   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:04.887798   62386 cri.go:89] found id: ""
	I0912 23:03:04.887832   62386 logs.go:276] 0 containers: []
	W0912 23:03:04.887843   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:04.887850   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:04.887911   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:04.921562   62386 cri.go:89] found id: ""
	I0912 23:03:04.921587   62386 logs.go:276] 0 containers: []
	W0912 23:03:04.921595   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:04.921600   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:04.921667   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:04.956794   62386 cri.go:89] found id: ""
	I0912 23:03:04.956828   62386 logs.go:276] 0 containers: []
	W0912 23:03:04.956836   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:04.956845   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:04.956856   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:04.993926   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:04.993956   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:05.045381   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:05.045425   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:05.058626   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:05.058665   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:05.128158   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:05.128187   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:05.128205   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:07.707336   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:07.720573   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:07.720646   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:07.756694   62386 cri.go:89] found id: ""
	I0912 23:03:07.756716   62386 logs.go:276] 0 containers: []
	W0912 23:03:07.756724   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:07.756730   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:07.756777   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:07.789255   62386 cri.go:89] found id: ""
	I0912 23:03:07.789286   62386 logs.go:276] 0 containers: []
	W0912 23:03:07.789295   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:07.789318   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:07.789405   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:07.822472   62386 cri.go:89] found id: ""
	I0912 23:03:07.822510   62386 logs.go:276] 0 containers: []
	W0912 23:03:07.822525   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:07.822534   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:07.822594   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:07.859070   62386 cri.go:89] found id: ""
	I0912 23:03:07.859102   62386 logs.go:276] 0 containers: []
	W0912 23:03:07.859114   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:07.859122   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:07.859190   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:07.895128   62386 cri.go:89] found id: ""
	I0912 23:03:07.895155   62386 logs.go:276] 0 containers: []
	W0912 23:03:07.895163   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:07.895169   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:07.895225   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:07.927397   62386 cri.go:89] found id: ""
	I0912 23:03:07.927425   62386 logs.go:276] 0 containers: []
	W0912 23:03:07.927435   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:07.927442   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:07.927506   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:07.965500   62386 cri.go:89] found id: ""
	I0912 23:03:07.965534   62386 logs.go:276] 0 containers: []
	W0912 23:03:07.965546   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:07.965555   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:07.965635   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:08.002921   62386 cri.go:89] found id: ""
	I0912 23:03:08.002952   62386 logs.go:276] 0 containers: []
	W0912 23:03:08.002964   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:08.002974   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:08.002989   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:08.054610   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:08.054646   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:08.071096   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:08.071127   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:08.145573   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:08.145603   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:08.145641   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:08.232606   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:08.232639   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:10.770737   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:10.783728   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:10.783803   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:10.818792   62386 cri.go:89] found id: ""
	I0912 23:03:10.818827   62386 logs.go:276] 0 containers: []
	W0912 23:03:10.818839   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:10.818847   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:10.818913   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:10.851711   62386 cri.go:89] found id: ""
	I0912 23:03:10.851738   62386 logs.go:276] 0 containers: []
	W0912 23:03:10.851750   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:10.851757   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:10.851817   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:10.886935   62386 cri.go:89] found id: ""
	I0912 23:03:10.886963   62386 logs.go:276] 0 containers: []
	W0912 23:03:10.886973   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:10.886979   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:10.887033   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:10.923175   62386 cri.go:89] found id: ""
	I0912 23:03:10.923201   62386 logs.go:276] 0 containers: []
	W0912 23:03:10.923208   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:10.923214   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:10.923261   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:10.959865   62386 cri.go:89] found id: ""
	I0912 23:03:10.959890   62386 logs.go:276] 0 containers: []
	W0912 23:03:10.959897   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:10.959902   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:10.959952   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:10.995049   62386 cri.go:89] found id: ""
	I0912 23:03:10.995079   62386 logs.go:276] 0 containers: []
	W0912 23:03:10.995090   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:10.995097   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:10.995156   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:11.030132   62386 cri.go:89] found id: ""
	I0912 23:03:11.030157   62386 logs.go:276] 0 containers: []
	W0912 23:03:11.030166   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:11.030173   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:11.030242   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:11.062899   62386 cri.go:89] found id: ""
	I0912 23:03:11.062928   62386 logs.go:276] 0 containers: []
	W0912 23:03:11.062936   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:11.062945   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:11.062956   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:11.116511   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:11.116546   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:11.131472   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:11.131504   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:11.202744   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:11.202765   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:11.202781   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:11.293973   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:11.294011   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:13.833125   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:13.846624   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:13.846737   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:13.881744   62386 cri.go:89] found id: ""
	I0912 23:03:13.881784   62386 logs.go:276] 0 containers: []
	W0912 23:03:13.881794   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:13.881802   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:13.881861   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:13.921678   62386 cri.go:89] found id: ""
	I0912 23:03:13.921703   62386 logs.go:276] 0 containers: []
	W0912 23:03:13.921713   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:13.921719   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:13.921778   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:13.960039   62386 cri.go:89] found id: ""
	I0912 23:03:13.960067   62386 logs.go:276] 0 containers: []
	W0912 23:03:13.960077   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:13.960084   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:13.960150   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:14.001255   62386 cri.go:89] found id: ""
	I0912 23:03:14.001281   62386 logs.go:276] 0 containers: []
	W0912 23:03:14.001293   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:14.001318   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:14.001374   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:14.037212   62386 cri.go:89] found id: ""
	I0912 23:03:14.037241   62386 logs.go:276] 0 containers: []
	W0912 23:03:14.037252   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:14.037259   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:14.037319   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:14.071538   62386 cri.go:89] found id: ""
	I0912 23:03:14.071574   62386 logs.go:276] 0 containers: []
	W0912 23:03:14.071582   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:14.071588   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:14.071639   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:14.105561   62386 cri.go:89] found id: ""
	I0912 23:03:14.105590   62386 logs.go:276] 0 containers: []
	W0912 23:03:14.105598   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:14.105604   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:14.105682   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:14.139407   62386 cri.go:89] found id: ""
	I0912 23:03:14.139432   62386 logs.go:276] 0 containers: []
	W0912 23:03:14.139440   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:14.139449   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:14.139463   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:14.195367   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:14.195402   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:14.208632   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:14.208656   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:14.283274   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:14.283292   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:14.283306   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:14.361800   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:14.361839   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:16.900725   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:16.913987   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:16.914047   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:16.950481   62386 cri.go:89] found id: ""
	I0912 23:03:16.950505   62386 logs.go:276] 0 containers: []
	W0912 23:03:16.950513   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:16.950518   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:16.950574   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:16.985928   62386 cri.go:89] found id: ""
	I0912 23:03:16.985955   62386 logs.go:276] 0 containers: []
	W0912 23:03:16.985964   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:16.985969   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:16.986019   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:17.022383   62386 cri.go:89] found id: ""
	I0912 23:03:17.022408   62386 logs.go:276] 0 containers: []
	W0912 23:03:17.022419   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:17.022425   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:17.022483   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:17.060621   62386 cri.go:89] found id: ""
	I0912 23:03:17.060646   62386 logs.go:276] 0 containers: []
	W0912 23:03:17.060655   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:17.060661   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:17.060714   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:17.093465   62386 cri.go:89] found id: ""
	I0912 23:03:17.093496   62386 logs.go:276] 0 containers: []
	W0912 23:03:17.093507   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:17.093513   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:17.093562   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:17.127750   62386 cri.go:89] found id: ""
	I0912 23:03:17.127780   62386 logs.go:276] 0 containers: []
	W0912 23:03:17.127790   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:17.127796   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:17.127850   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:17.167000   62386 cri.go:89] found id: ""
	I0912 23:03:17.167033   62386 logs.go:276] 0 containers: []
	W0912 23:03:17.167042   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:17.167051   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:17.167114   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:17.201116   62386 cri.go:89] found id: ""
	I0912 23:03:17.201140   62386 logs.go:276] 0 containers: []
	W0912 23:03:17.201149   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:17.201160   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:17.201175   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:17.279890   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:17.279917   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:17.279930   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:17.362638   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:17.362682   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:17.402507   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:17.402538   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:17.456039   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:17.456072   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:19.970539   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:19.984338   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:19.984442   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:20.019006   62386 cri.go:89] found id: ""
	I0912 23:03:20.019036   62386 logs.go:276] 0 containers: []
	W0912 23:03:20.019047   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:20.019055   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:20.019115   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:20.051600   62386 cri.go:89] found id: ""
	I0912 23:03:20.051626   62386 logs.go:276] 0 containers: []
	W0912 23:03:20.051634   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:20.051640   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:20.051691   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:20.085770   62386 cri.go:89] found id: ""
	I0912 23:03:20.085792   62386 logs.go:276] 0 containers: []
	W0912 23:03:20.085799   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:20.085804   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:20.085852   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:20.118453   62386 cri.go:89] found id: ""
	I0912 23:03:20.118482   62386 logs.go:276] 0 containers: []
	W0912 23:03:20.118493   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:20.118501   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:20.118570   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:20.149794   62386 cri.go:89] found id: ""
	I0912 23:03:20.149824   62386 logs.go:276] 0 containers: []
	W0912 23:03:20.149835   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:20.149842   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:20.149889   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:20.187189   62386 cri.go:89] found id: ""
	I0912 23:03:20.187222   62386 logs.go:276] 0 containers: []
	W0912 23:03:20.187233   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:20.187239   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:20.187308   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:20.225488   62386 cri.go:89] found id: ""
	I0912 23:03:20.225517   62386 logs.go:276] 0 containers: []
	W0912 23:03:20.225525   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:20.225531   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:20.225593   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:20.263430   62386 cri.go:89] found id: ""
	I0912 23:03:20.263599   62386 logs.go:276] 0 containers: []
	W0912 23:03:20.263618   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:20.263633   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:20.263651   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:20.317633   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:20.317669   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:20.331121   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:20.331146   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:20.409078   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:20.409102   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:20.409114   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:20.485192   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:20.485226   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:23.024366   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:23.036837   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:23.036919   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:23.072034   62386 cri.go:89] found id: ""
	I0912 23:03:23.072068   62386 logs.go:276] 0 containers: []
	W0912 23:03:23.072080   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:23.072087   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:23.072151   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:23.105917   62386 cri.go:89] found id: ""
	I0912 23:03:23.105942   62386 logs.go:276] 0 containers: []
	W0912 23:03:23.105950   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:23.105956   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:23.106001   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:23.138601   62386 cri.go:89] found id: ""
	I0912 23:03:23.138631   62386 logs.go:276] 0 containers: []
	W0912 23:03:23.138643   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:23.138650   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:23.138700   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:23.173543   62386 cri.go:89] found id: ""
	I0912 23:03:23.173584   62386 logs.go:276] 0 containers: []
	W0912 23:03:23.173596   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:23.173606   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:23.173686   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:23.206143   62386 cri.go:89] found id: ""
	I0912 23:03:23.206171   62386 logs.go:276] 0 containers: []
	W0912 23:03:23.206182   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:23.206189   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:23.206258   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:23.241893   62386 cri.go:89] found id: ""
	I0912 23:03:23.241914   62386 logs.go:276] 0 containers: []
	W0912 23:03:23.241921   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:23.241927   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:23.241985   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:23.276885   62386 cri.go:89] found id: ""
	I0912 23:03:23.276937   62386 logs.go:276] 0 containers: []
	W0912 23:03:23.276946   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:23.276953   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:23.277004   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:23.311719   62386 cri.go:89] found id: ""
	I0912 23:03:23.311744   62386 logs.go:276] 0 containers: []
	W0912 23:03:23.311752   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:23.311759   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:23.311772   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:23.351581   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:23.351614   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:23.406831   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:23.406868   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:23.420716   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:23.420748   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:23.491298   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:23.491332   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:23.491347   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:26.075754   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:26.088671   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:26.088746   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:26.123263   62386 cri.go:89] found id: ""
	I0912 23:03:26.123289   62386 logs.go:276] 0 containers: []
	W0912 23:03:26.123298   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:26.123320   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:26.123380   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:26.156957   62386 cri.go:89] found id: ""
	I0912 23:03:26.156986   62386 logs.go:276] 0 containers: []
	W0912 23:03:26.156997   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:26.157004   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:26.157063   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:26.191697   62386 cri.go:89] found id: ""
	I0912 23:03:26.191749   62386 logs.go:276] 0 containers: []
	W0912 23:03:26.191774   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:26.191782   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:26.191841   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:26.223915   62386 cri.go:89] found id: ""
	I0912 23:03:26.223938   62386 logs.go:276] 0 containers: []
	W0912 23:03:26.223945   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:26.223951   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:26.224011   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:26.256467   62386 cri.go:89] found id: ""
	I0912 23:03:26.256494   62386 logs.go:276] 0 containers: []
	W0912 23:03:26.256505   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:26.256511   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:26.256587   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:26.288778   62386 cri.go:89] found id: ""
	I0912 23:03:26.288803   62386 logs.go:276] 0 containers: []
	W0912 23:03:26.288811   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:26.288816   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:26.288889   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:26.325717   62386 cri.go:89] found id: ""
	I0912 23:03:26.325745   62386 logs.go:276] 0 containers: []
	W0912 23:03:26.325755   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:26.325762   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:26.325829   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:26.359729   62386 cri.go:89] found id: ""
	I0912 23:03:26.359758   62386 logs.go:276] 0 containers: []
	W0912 23:03:26.359767   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:26.359780   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:26.359799   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:26.416414   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:26.416455   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:26.430440   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:26.430478   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:26.506980   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:26.507012   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:26.507043   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:26.583797   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:26.583846   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:29.122222   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:29.135287   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:29.135367   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:29.169020   62386 cri.go:89] found id: ""
	I0912 23:03:29.169043   62386 logs.go:276] 0 containers: []
	W0912 23:03:29.169051   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:29.169061   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:29.169114   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:29.201789   62386 cri.go:89] found id: ""
	I0912 23:03:29.201816   62386 logs.go:276] 0 containers: []
	W0912 23:03:29.201825   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:29.201831   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:29.201886   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:29.237011   62386 cri.go:89] found id: ""
	I0912 23:03:29.237031   62386 logs.go:276] 0 containers: []
	W0912 23:03:29.237038   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:29.237044   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:29.237100   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:29.275292   62386 cri.go:89] found id: ""
	I0912 23:03:29.275315   62386 logs.go:276] 0 containers: []
	W0912 23:03:29.275322   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:29.275328   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:29.275391   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:29.311927   62386 cri.go:89] found id: ""
	I0912 23:03:29.311954   62386 logs.go:276] 0 containers: []
	W0912 23:03:29.311961   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:29.311967   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:29.312020   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:29.351411   62386 cri.go:89] found id: ""
	I0912 23:03:29.351441   62386 logs.go:276] 0 containers: []
	W0912 23:03:29.351452   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:29.351460   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:29.351520   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:29.386655   62386 cri.go:89] found id: ""
	I0912 23:03:29.386683   62386 logs.go:276] 0 containers: []
	W0912 23:03:29.386693   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:29.386700   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:29.386753   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:29.419722   62386 cri.go:89] found id: ""
	I0912 23:03:29.419752   62386 logs.go:276] 0 containers: []
	W0912 23:03:29.419762   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:29.419775   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:29.419789   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:29.474358   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:29.474396   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:29.488410   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:29.488437   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:29.554675   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:29.554701   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:29.554715   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:29.630647   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:29.630681   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:32.167614   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:32.180592   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:32.180669   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:32.213596   62386 cri.go:89] found id: ""
	I0912 23:03:32.213643   62386 logs.go:276] 0 containers: []
	W0912 23:03:32.213655   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:32.213663   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:32.213723   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:32.246790   62386 cri.go:89] found id: ""
	I0912 23:03:32.246824   62386 logs.go:276] 0 containers: []
	W0912 23:03:32.246836   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:32.246846   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:32.246910   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:32.289423   62386 cri.go:89] found id: ""
	I0912 23:03:32.289446   62386 logs.go:276] 0 containers: []
	W0912 23:03:32.289454   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:32.289459   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:32.289515   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:32.321515   62386 cri.go:89] found id: ""
	I0912 23:03:32.321542   62386 logs.go:276] 0 containers: []
	W0912 23:03:32.321555   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:32.321561   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:32.321637   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:32.354633   62386 cri.go:89] found id: ""
	I0912 23:03:32.354660   62386 logs.go:276] 0 containers: []
	W0912 23:03:32.354670   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:32.354675   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:32.354734   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:32.389692   62386 cri.go:89] found id: ""
	I0912 23:03:32.389717   62386 logs.go:276] 0 containers: []
	W0912 23:03:32.389725   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:32.389730   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:32.389782   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:32.423086   62386 cri.go:89] found id: ""
	I0912 23:03:32.423109   62386 logs.go:276] 0 containers: []
	W0912 23:03:32.423115   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:32.423121   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:32.423167   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:32.456145   62386 cri.go:89] found id: ""
	I0912 23:03:32.456173   62386 logs.go:276] 0 containers: []
	W0912 23:03:32.456184   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:32.456194   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:32.456213   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:32.468329   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:32.468354   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:32.535454   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:32.535480   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:32.535495   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:32.615219   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:32.615256   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:32.655380   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:32.655407   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:35.209155   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:35.223993   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:35.224074   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:35.260226   62386 cri.go:89] found id: ""
	I0912 23:03:35.260257   62386 logs.go:276] 0 containers: []
	W0912 23:03:35.260268   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:35.260275   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:35.260346   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:35.295762   62386 cri.go:89] found id: ""
	I0912 23:03:35.295790   62386 logs.go:276] 0 containers: []
	W0912 23:03:35.295801   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:35.295808   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:35.295873   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:35.329749   62386 cri.go:89] found id: ""
	I0912 23:03:35.329778   62386 logs.go:276] 0 containers: []
	W0912 23:03:35.329789   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:35.329796   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:35.329855   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:35.363051   62386 cri.go:89] found id: ""
	I0912 23:03:35.363082   62386 logs.go:276] 0 containers: []
	W0912 23:03:35.363091   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:35.363098   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:35.363156   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:35.399777   62386 cri.go:89] found id: ""
	I0912 23:03:35.399805   62386 logs.go:276] 0 containers: []
	W0912 23:03:35.399816   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:35.399823   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:35.399882   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:35.436380   62386 cri.go:89] found id: ""
	I0912 23:03:35.436409   62386 logs.go:276] 0 containers: []
	W0912 23:03:35.436419   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:35.436427   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:35.436489   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:35.474014   62386 cri.go:89] found id: ""
	I0912 23:03:35.474040   62386 logs.go:276] 0 containers: []
	W0912 23:03:35.474050   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:35.474057   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:35.474115   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:35.514579   62386 cri.go:89] found id: ""
	I0912 23:03:35.514606   62386 logs.go:276] 0 containers: []
	W0912 23:03:35.514615   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:35.514625   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:35.514636   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:35.566626   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:35.566665   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:35.581394   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:35.581421   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:35.653434   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:35.653465   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:35.653477   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:35.732486   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:35.732525   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:38.268409   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:38.281766   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:38.281833   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:38.315951   62386 cri.go:89] found id: ""
	I0912 23:03:38.315977   62386 logs.go:276] 0 containers: []
	W0912 23:03:38.315987   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:38.315994   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:38.316053   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:38.355249   62386 cri.go:89] found id: ""
	I0912 23:03:38.355279   62386 logs.go:276] 0 containers: []
	W0912 23:03:38.355289   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:38.355296   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:38.355365   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:38.392754   62386 cri.go:89] found id: ""
	I0912 23:03:38.392777   62386 logs.go:276] 0 containers: []
	W0912 23:03:38.392784   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:38.392790   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:38.392836   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:38.427406   62386 cri.go:89] found id: ""
	I0912 23:03:38.427434   62386 logs.go:276] 0 containers: []
	W0912 23:03:38.427442   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:38.427447   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:38.427497   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:38.473523   62386 cri.go:89] found id: ""
	I0912 23:03:38.473551   62386 logs.go:276] 0 containers: []
	W0912 23:03:38.473567   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:38.473575   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:38.473660   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:38.507184   62386 cri.go:89] found id: ""
	I0912 23:03:38.507217   62386 logs.go:276] 0 containers: []
	W0912 23:03:38.507228   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:38.507235   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:38.507297   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:38.541325   62386 cri.go:89] found id: ""
	I0912 23:03:38.541357   62386 logs.go:276] 0 containers: []
	W0912 23:03:38.541367   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:38.541374   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:38.541435   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:38.576839   62386 cri.go:89] found id: ""
	I0912 23:03:38.576866   62386 logs.go:276] 0 containers: []
	W0912 23:03:38.576877   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:38.576889   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:38.576906   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:38.613107   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:38.613138   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:38.667256   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:38.667300   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:38.681179   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:38.681210   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:38.750560   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:38.750584   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:38.750600   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:41.327862   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:41.340904   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:41.340967   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:41.379282   62386 cri.go:89] found id: ""
	I0912 23:03:41.379301   62386 logs.go:276] 0 containers: []
	W0912 23:03:41.379309   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:41.379316   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:41.379366   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:41.412915   62386 cri.go:89] found id: ""
	I0912 23:03:41.412940   62386 logs.go:276] 0 containers: []
	W0912 23:03:41.412947   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:41.412954   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:41.413003   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:41.446824   62386 cri.go:89] found id: ""
	I0912 23:03:41.446851   62386 logs.go:276] 0 containers: []
	W0912 23:03:41.446861   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:41.446868   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:41.446929   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:41.483157   62386 cri.go:89] found id: ""
	I0912 23:03:41.483186   62386 logs.go:276] 0 containers: []
	W0912 23:03:41.483194   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:41.483200   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:41.483258   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:41.517751   62386 cri.go:89] found id: ""
	I0912 23:03:41.517783   62386 logs.go:276] 0 containers: []
	W0912 23:03:41.517794   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:41.517801   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:41.517865   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:41.551665   62386 cri.go:89] found id: ""
	I0912 23:03:41.551692   62386 logs.go:276] 0 containers: []
	W0912 23:03:41.551700   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:41.551706   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:41.551756   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:41.586401   62386 cri.go:89] found id: ""
	I0912 23:03:41.586437   62386 logs.go:276] 0 containers: []
	W0912 23:03:41.586447   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:41.586455   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:41.586518   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:41.621764   62386 cri.go:89] found id: ""
	I0912 23:03:41.621788   62386 logs.go:276] 0 containers: []
	W0912 23:03:41.621796   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:41.621806   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:41.621821   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:41.703663   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:41.703708   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:41.741813   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:41.741838   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:41.794237   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:41.794276   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:41.807194   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:41.807219   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:41.874328   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:44.374745   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:44.389334   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:44.389414   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:44.427163   62386 cri.go:89] found id: ""
	I0912 23:03:44.427193   62386 logs.go:276] 0 containers: []
	W0912 23:03:44.427204   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:44.427214   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:44.427261   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:44.461483   62386 cri.go:89] found id: ""
	I0912 23:03:44.461516   62386 logs.go:276] 0 containers: []
	W0912 23:03:44.461526   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:44.461539   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:44.461603   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:44.499529   62386 cri.go:89] found id: ""
	I0912 23:03:44.499557   62386 logs.go:276] 0 containers: []
	W0912 23:03:44.499569   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:44.499576   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:44.499640   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:44.536827   62386 cri.go:89] found id: ""
	I0912 23:03:44.536859   62386 logs.go:276] 0 containers: []
	W0912 23:03:44.536871   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:44.536878   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:44.536927   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:44.574764   62386 cri.go:89] found id: ""
	I0912 23:03:44.574794   62386 logs.go:276] 0 containers: []
	W0912 23:03:44.574802   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:44.574808   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:44.574866   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:44.612491   62386 cri.go:89] found id: ""
	I0912 23:03:44.612524   62386 logs.go:276] 0 containers: []
	W0912 23:03:44.612537   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:44.612545   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:44.612618   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:44.651419   62386 cri.go:89] found id: ""
	I0912 23:03:44.651449   62386 logs.go:276] 0 containers: []
	W0912 23:03:44.651459   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:44.651466   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:44.651516   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:44.686635   62386 cri.go:89] found id: ""
	I0912 23:03:44.686665   62386 logs.go:276] 0 containers: []
	W0912 23:03:44.686674   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:44.686681   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:44.686693   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:44.738906   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:44.738938   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:44.752485   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:44.752512   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:44.831175   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:44.831205   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:44.831222   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:44.917405   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:44.917442   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:47.466262   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:47.479701   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:47.479758   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:47.514737   62386 cri.go:89] found id: ""
	I0912 23:03:47.514763   62386 logs.go:276] 0 containers: []
	W0912 23:03:47.514770   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:47.514776   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:47.514828   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:47.551163   62386 cri.go:89] found id: ""
	I0912 23:03:47.551195   62386 logs.go:276] 0 containers: []
	W0912 23:03:47.551207   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:47.551215   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:47.551276   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:47.585189   62386 cri.go:89] found id: ""
	I0912 23:03:47.585213   62386 logs.go:276] 0 containers: []
	W0912 23:03:47.585221   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:47.585226   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:47.585284   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:47.619831   62386 cri.go:89] found id: ""
	I0912 23:03:47.619855   62386 logs.go:276] 0 containers: []
	W0912 23:03:47.619863   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:47.619869   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:47.619914   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:47.652364   62386 cri.go:89] found id: ""
	I0912 23:03:47.652398   62386 logs.go:276] 0 containers: []
	W0912 23:03:47.652409   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:47.652417   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:47.652478   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:47.686796   62386 cri.go:89] found id: ""
	I0912 23:03:47.686828   62386 logs.go:276] 0 containers: []
	W0912 23:03:47.686837   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:47.686844   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:47.686902   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:47.718735   62386 cri.go:89] found id: ""
	I0912 23:03:47.718758   62386 logs.go:276] 0 containers: []
	W0912 23:03:47.718768   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:47.718776   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:47.718838   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:47.751880   62386 cri.go:89] found id: ""
	I0912 23:03:47.751917   62386 logs.go:276] 0 containers: []
	W0912 23:03:47.751929   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:47.751940   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:47.751972   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:47.821972   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:47.821995   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:47.822011   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:47.914569   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:47.914606   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:47.952931   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:47.952959   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:48.006294   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:48.006336   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:50.521664   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:50.535244   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:50.535319   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:50.572459   62386 cri.go:89] found id: ""
	I0912 23:03:50.572489   62386 logs.go:276] 0 containers: []
	W0912 23:03:50.572497   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:50.572504   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:50.572560   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:50.613752   62386 cri.go:89] found id: ""
	I0912 23:03:50.613784   62386 logs.go:276] 0 containers: []
	W0912 23:03:50.613793   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:50.613800   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:50.613859   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:50.669798   62386 cri.go:89] found id: ""
	I0912 23:03:50.669829   62386 logs.go:276] 0 containers: []
	W0912 23:03:50.669840   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:50.669845   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:50.669970   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:50.703629   62386 cri.go:89] found id: ""
	I0912 23:03:50.703669   62386 logs.go:276] 0 containers: []
	W0912 23:03:50.703682   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:50.703691   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:50.703752   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:50.743683   62386 cri.go:89] found id: ""
	I0912 23:03:50.743710   62386 logs.go:276] 0 containers: []
	W0912 23:03:50.743720   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:50.743728   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:50.743784   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:50.776387   62386 cri.go:89] found id: ""
	I0912 23:03:50.776416   62386 logs.go:276] 0 containers: []
	W0912 23:03:50.776428   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:50.776437   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:50.776494   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:50.810778   62386 cri.go:89] found id: ""
	I0912 23:03:50.810805   62386 logs.go:276] 0 containers: []
	W0912 23:03:50.810817   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:50.810825   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:50.810892   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:50.842488   62386 cri.go:89] found id: ""
	I0912 23:03:50.842510   62386 logs.go:276] 0 containers: []
	W0912 23:03:50.842518   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:50.842526   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:50.842542   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:50.895086   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:50.895124   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:50.908540   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:50.908586   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:50.976108   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:50.976138   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:50.976153   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:51.052291   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:51.052327   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:53.594005   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:53.606622   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:53.606706   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:53.641109   62386 cri.go:89] found id: ""
	I0912 23:03:53.641140   62386 logs.go:276] 0 containers: []
	W0912 23:03:53.641151   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:53.641159   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:53.641214   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:53.673336   62386 cri.go:89] found id: ""
	I0912 23:03:53.673358   62386 logs.go:276] 0 containers: []
	W0912 23:03:53.673366   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:53.673371   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:53.673417   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:53.707931   62386 cri.go:89] found id: ""
	I0912 23:03:53.707965   62386 logs.go:276] 0 containers: []
	W0912 23:03:53.707975   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:53.707982   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:53.708032   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:53.741801   62386 cri.go:89] found id: ""
	I0912 23:03:53.741832   62386 logs.go:276] 0 containers: []
	W0912 23:03:53.741840   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:53.741847   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:53.741898   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:53.775491   62386 cri.go:89] found id: ""
	I0912 23:03:53.775517   62386 logs.go:276] 0 containers: []
	W0912 23:03:53.775526   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:53.775533   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:53.775596   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:53.811802   62386 cri.go:89] found id: ""
	I0912 23:03:53.811832   62386 logs.go:276] 0 containers: []
	W0912 23:03:53.811843   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:53.811851   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:53.811916   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:53.844901   62386 cri.go:89] found id: ""
	I0912 23:03:53.844926   62386 logs.go:276] 0 containers: []
	W0912 23:03:53.844934   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:53.844939   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:53.844989   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:53.878342   62386 cri.go:89] found id: ""
	I0912 23:03:53.878363   62386 logs.go:276] 0 containers: []
	W0912 23:03:53.878370   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:53.878377   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:53.878387   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:53.935010   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:53.935053   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:53.948443   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:53.948474   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:54.020155   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:54.020178   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:54.020192   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:54.097113   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:54.097154   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:56.633694   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:56.651731   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:56.651791   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:56.698155   62386 cri.go:89] found id: ""
	I0912 23:03:56.698184   62386 logs.go:276] 0 containers: []
	W0912 23:03:56.698194   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:56.698202   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:56.698263   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:56.730291   62386 cri.go:89] found id: ""
	I0912 23:03:56.730322   62386 logs.go:276] 0 containers: []
	W0912 23:03:56.730332   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:56.730340   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:56.730434   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:56.763099   62386 cri.go:89] found id: ""
	I0912 23:03:56.763123   62386 logs.go:276] 0 containers: []
	W0912 23:03:56.763133   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:56.763140   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:56.763201   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:56.796744   62386 cri.go:89] found id: ""
	I0912 23:03:56.796770   62386 logs.go:276] 0 containers: []
	W0912 23:03:56.796780   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:56.796787   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:56.796846   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:56.831809   62386 cri.go:89] found id: ""
	I0912 23:03:56.831839   62386 logs.go:276] 0 containers: []
	W0912 23:03:56.831851   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:56.831858   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:56.831927   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:56.867213   62386 cri.go:89] found id: ""
	I0912 23:03:56.867239   62386 logs.go:276] 0 containers: []
	W0912 23:03:56.867246   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:56.867252   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:56.867332   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:56.907242   62386 cri.go:89] found id: ""
	I0912 23:03:56.907270   62386 logs.go:276] 0 containers: []
	W0912 23:03:56.907279   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:56.907286   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:56.907399   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:56.941841   62386 cri.go:89] found id: ""
	I0912 23:03:56.941871   62386 logs.go:276] 0 containers: []
	W0912 23:03:56.941879   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:56.941888   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:56.941899   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:56.955468   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:56.955498   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:57.025069   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:57.025089   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:57.025101   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:57.109543   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:57.109579   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:57.150908   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:57.150932   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:59.700564   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:59.713097   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:59.713175   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:59.746662   62386 cri.go:89] found id: ""
	I0912 23:03:59.746684   62386 logs.go:276] 0 containers: []
	W0912 23:03:59.746694   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:59.746702   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:59.746760   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:59.780100   62386 cri.go:89] found id: ""
	I0912 23:03:59.780127   62386 logs.go:276] 0 containers: []
	W0912 23:03:59.780137   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:59.780144   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:59.780205   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:59.814073   62386 cri.go:89] found id: ""
	I0912 23:03:59.814103   62386 logs.go:276] 0 containers: []
	W0912 23:03:59.814115   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:59.814122   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:59.814170   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:59.849832   62386 cri.go:89] found id: ""
	I0912 23:03:59.849860   62386 logs.go:276] 0 containers: []
	W0912 23:03:59.849873   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:59.849881   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:59.849937   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:59.884644   62386 cri.go:89] found id: ""
	I0912 23:03:59.884674   62386 logs.go:276] 0 containers: []
	W0912 23:03:59.884685   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:59.884692   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:59.884757   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:59.922575   62386 cri.go:89] found id: ""
	I0912 23:03:59.922601   62386 logs.go:276] 0 containers: []
	W0912 23:03:59.922609   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:59.922615   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:59.922671   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:59.959405   62386 cri.go:89] found id: ""
	I0912 23:03:59.959454   62386 logs.go:276] 0 containers: []
	W0912 23:03:59.959467   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:59.959503   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:59.959572   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:59.992850   62386 cri.go:89] found id: ""
	I0912 23:03:59.992882   62386 logs.go:276] 0 containers: []
	W0912 23:03:59.992891   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:59.992898   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:59.992910   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:00.007112   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:00.007147   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:00.077737   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:00.077762   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:00.077777   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:00.156823   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:00.156860   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:00.194294   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:00.194388   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:02.746340   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:02.759723   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:02.759780   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:02.795753   62386 cri.go:89] found id: ""
	I0912 23:04:02.795778   62386 logs.go:276] 0 containers: []
	W0912 23:04:02.795787   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:02.795794   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:02.795849   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:02.830757   62386 cri.go:89] found id: ""
	I0912 23:04:02.830781   62386 logs.go:276] 0 containers: []
	W0912 23:04:02.830790   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:02.830797   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:02.830859   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:02.866266   62386 cri.go:89] found id: ""
	I0912 23:04:02.866301   62386 logs.go:276] 0 containers: []
	W0912 23:04:02.866312   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:02.866319   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:02.866373   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:02.900332   62386 cri.go:89] found id: ""
	I0912 23:04:02.900359   62386 logs.go:276] 0 containers: []
	W0912 23:04:02.900370   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:02.900377   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:02.900436   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:02.937687   62386 cri.go:89] found id: ""
	I0912 23:04:02.937718   62386 logs.go:276] 0 containers: []
	W0912 23:04:02.937729   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:02.937736   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:02.937806   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:02.972960   62386 cri.go:89] found id: ""
	I0912 23:04:02.972988   62386 logs.go:276] 0 containers: []
	W0912 23:04:02.972998   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:02.973006   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:02.973067   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:03.006621   62386 cri.go:89] found id: ""
	I0912 23:04:03.006649   62386 logs.go:276] 0 containers: []
	W0912 23:04:03.006658   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:03.006663   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:03.006711   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:03.042450   62386 cri.go:89] found id: ""
	I0912 23:04:03.042475   62386 logs.go:276] 0 containers: []
	W0912 23:04:03.042484   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:03.042501   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:03.042514   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:03.082657   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:03.082688   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:03.136570   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:03.136605   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:03.150359   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:03.150388   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:03.217419   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:03.217440   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:03.217452   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:05.795553   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:05.808126   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:05.808197   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:05.841031   62386 cri.go:89] found id: ""
	I0912 23:04:05.841059   62386 logs.go:276] 0 containers: []
	W0912 23:04:05.841071   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:05.841078   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:05.841137   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:05.875865   62386 cri.go:89] found id: ""
	I0912 23:04:05.875891   62386 logs.go:276] 0 containers: []
	W0912 23:04:05.875903   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:05.875910   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:05.875971   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:05.911317   62386 cri.go:89] found id: ""
	I0912 23:04:05.911340   62386 logs.go:276] 0 containers: []
	W0912 23:04:05.911361   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:05.911372   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:05.911433   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:05.946603   62386 cri.go:89] found id: ""
	I0912 23:04:05.946634   62386 logs.go:276] 0 containers: []
	W0912 23:04:05.946645   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:05.946652   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:05.946707   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:05.982041   62386 cri.go:89] found id: ""
	I0912 23:04:05.982077   62386 logs.go:276] 0 containers: []
	W0912 23:04:05.982089   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:05.982099   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:05.982196   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:06.015777   62386 cri.go:89] found id: ""
	I0912 23:04:06.015808   62386 logs.go:276] 0 containers: []
	W0912 23:04:06.015816   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:06.015822   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:06.015870   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:06.047613   62386 cri.go:89] found id: ""
	I0912 23:04:06.047642   62386 logs.go:276] 0 containers: []
	W0912 23:04:06.047650   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:06.047656   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:06.047711   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:06.082817   62386 cri.go:89] found id: ""
	I0912 23:04:06.082855   62386 logs.go:276] 0 containers: []
	W0912 23:04:06.082863   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:06.082874   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:06.082889   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:06.148350   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:06.148370   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:06.148382   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:06.227819   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:06.227861   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:06.267783   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:06.267811   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:06.319531   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:06.319567   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:08.833715   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:08.846391   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:08.846457   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:08.882798   62386 cri.go:89] found id: ""
	I0912 23:04:08.882827   62386 logs.go:276] 0 containers: []
	W0912 23:04:08.882834   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:08.882839   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:08.882885   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:08.919637   62386 cri.go:89] found id: ""
	I0912 23:04:08.919660   62386 logs.go:276] 0 containers: []
	W0912 23:04:08.919669   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:08.919677   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:08.919737   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:08.957181   62386 cri.go:89] found id: ""
	I0912 23:04:08.957226   62386 logs.go:276] 0 containers: []
	W0912 23:04:08.957235   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:08.957241   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:08.957300   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:08.994391   62386 cri.go:89] found id: ""
	I0912 23:04:08.994425   62386 logs.go:276] 0 containers: []
	W0912 23:04:08.994435   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:08.994450   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:08.994517   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:09.026229   62386 cri.go:89] found id: ""
	I0912 23:04:09.026253   62386 logs.go:276] 0 containers: []
	W0912 23:04:09.026261   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:09.026270   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:09.026331   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:09.063522   62386 cri.go:89] found id: ""
	I0912 23:04:09.063552   62386 logs.go:276] 0 containers: []
	W0912 23:04:09.063562   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:09.063570   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:09.063633   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:09.095532   62386 cri.go:89] found id: ""
	I0912 23:04:09.095561   62386 logs.go:276] 0 containers: []
	W0912 23:04:09.095571   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:09.095578   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:09.095638   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:09.129364   62386 cri.go:89] found id: ""
	I0912 23:04:09.129396   62386 logs.go:276] 0 containers: []
	W0912 23:04:09.129405   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:09.129416   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:09.129430   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:09.210628   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:09.210663   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:09.249058   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:09.249086   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:09.301317   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:09.301346   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:09.314691   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:09.314720   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:09.379506   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:11.879682   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:11.892758   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:11.892816   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:11.929514   62386 cri.go:89] found id: ""
	I0912 23:04:11.929560   62386 logs.go:276] 0 containers: []
	W0912 23:04:11.929572   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:11.929580   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:11.929663   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:11.972066   62386 cri.go:89] found id: ""
	I0912 23:04:11.972091   62386 logs.go:276] 0 containers: []
	W0912 23:04:11.972099   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:11.972104   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:11.972153   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:12.005454   62386 cri.go:89] found id: ""
	I0912 23:04:12.005483   62386 logs.go:276] 0 containers: []
	W0912 23:04:12.005493   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:12.005500   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:12.005573   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:12.042189   62386 cri.go:89] found id: ""
	I0912 23:04:12.042221   62386 logs.go:276] 0 containers: []
	W0912 23:04:12.042232   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:12.042239   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:12.042292   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:12.077239   62386 cri.go:89] found id: ""
	I0912 23:04:12.077268   62386 logs.go:276] 0 containers: []
	W0912 23:04:12.077276   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:12.077282   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:12.077340   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:12.112573   62386 cri.go:89] found id: ""
	I0912 23:04:12.112602   62386 logs.go:276] 0 containers: []
	W0912 23:04:12.112610   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:12.112616   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:12.112661   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:12.147124   62386 cri.go:89] found id: ""
	I0912 23:04:12.147149   62386 logs.go:276] 0 containers: []
	W0912 23:04:12.147157   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:12.147163   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:12.147224   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:12.182051   62386 cri.go:89] found id: ""
	I0912 23:04:12.182074   62386 logs.go:276] 0 containers: []
	W0912 23:04:12.182082   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:12.182090   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:12.182103   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:12.238070   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:12.238103   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:12.250913   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:12.250937   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:12.315420   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:12.315448   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:12.315465   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:12.397338   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:12.397379   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:14.936982   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:14.949955   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:14.950019   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:14.993284   62386 cri.go:89] found id: ""
	I0912 23:04:14.993317   62386 logs.go:276] 0 containers: []
	W0912 23:04:14.993327   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:14.993356   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:14.993421   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:15.028310   62386 cri.go:89] found id: ""
	I0912 23:04:15.028338   62386 logs.go:276] 0 containers: []
	W0912 23:04:15.028347   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:15.028352   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:15.028424   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:15.064436   62386 cri.go:89] found id: ""
	I0912 23:04:15.064472   62386 logs.go:276] 0 containers: []
	W0912 23:04:15.064482   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:15.064490   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:15.064552   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:15.101547   62386 cri.go:89] found id: ""
	I0912 23:04:15.101578   62386 logs.go:276] 0 containers: []
	W0912 23:04:15.101587   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:15.101595   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:15.101672   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:15.137534   62386 cri.go:89] found id: ""
	I0912 23:04:15.137559   62386 logs.go:276] 0 containers: []
	W0912 23:04:15.137567   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:15.137575   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:15.137670   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:15.172549   62386 cri.go:89] found id: ""
	I0912 23:04:15.172581   62386 logs.go:276] 0 containers: []
	W0912 23:04:15.172593   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:15.172601   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:15.172661   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:15.207894   62386 cri.go:89] found id: ""
	I0912 23:04:15.207921   62386 logs.go:276] 0 containers: []
	W0912 23:04:15.207931   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:15.207939   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:15.207998   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:15.243684   62386 cri.go:89] found id: ""
	I0912 23:04:15.243713   62386 logs.go:276] 0 containers: []
	W0912 23:04:15.243724   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:15.243733   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:15.243744   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:15.297907   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:15.297948   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:15.312119   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:15.312151   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:15.375781   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:15.375815   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:15.375830   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:15.455792   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:15.455853   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:17.996749   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:18.009868   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:18.009927   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:18.048233   62386 cri.go:89] found id: ""
	I0912 23:04:18.048262   62386 logs.go:276] 0 containers: []
	W0912 23:04:18.048273   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:18.048280   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:18.048340   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:18.082525   62386 cri.go:89] found id: ""
	I0912 23:04:18.082554   62386 logs.go:276] 0 containers: []
	W0912 23:04:18.082565   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:18.082572   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:18.082634   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:18.117691   62386 cri.go:89] found id: ""
	I0912 23:04:18.117721   62386 logs.go:276] 0 containers: []
	W0912 23:04:18.117731   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:18.117738   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:18.117799   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:18.151975   62386 cri.go:89] found id: ""
	I0912 23:04:18.152004   62386 logs.go:276] 0 containers: []
	W0912 23:04:18.152013   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:18.152019   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:18.152073   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:18.187028   62386 cri.go:89] found id: ""
	I0912 23:04:18.187058   62386 logs.go:276] 0 containers: []
	W0912 23:04:18.187069   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:18.187075   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:18.187127   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:18.221292   62386 cri.go:89] found id: ""
	I0912 23:04:18.221324   62386 logs.go:276] 0 containers: []
	W0912 23:04:18.221331   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:18.221337   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:18.221383   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:18.255445   62386 cri.go:89] found id: ""
	I0912 23:04:18.255471   62386 logs.go:276] 0 containers: []
	W0912 23:04:18.255479   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:18.255484   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:18.255533   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:18.289977   62386 cri.go:89] found id: ""
	I0912 23:04:18.290008   62386 logs.go:276] 0 containers: []
	W0912 23:04:18.290019   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:18.290030   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:18.290045   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:18.303351   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:18.303380   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:18.371085   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:18.371114   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:18.371128   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:18.448748   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:18.448791   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:18.490580   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:18.490605   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:21.043479   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:21.056774   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:21.056834   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:21.089410   62386 cri.go:89] found id: ""
	I0912 23:04:21.089435   62386 logs.go:276] 0 containers: []
	W0912 23:04:21.089449   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:21.089460   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:21.089534   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:21.122922   62386 cri.go:89] found id: ""
	I0912 23:04:21.122954   62386 logs.go:276] 0 containers: []
	W0912 23:04:21.122964   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:21.122971   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:21.123025   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:21.157877   62386 cri.go:89] found id: ""
	I0912 23:04:21.157900   62386 logs.go:276] 0 containers: []
	W0912 23:04:21.157908   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:21.157914   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:21.157959   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:21.190953   62386 cri.go:89] found id: ""
	I0912 23:04:21.190983   62386 logs.go:276] 0 containers: []
	W0912 23:04:21.190994   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:21.191001   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:21.191050   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:21.225211   62386 cri.go:89] found id: ""
	I0912 23:04:21.225241   62386 logs.go:276] 0 containers: []
	W0912 23:04:21.225253   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:21.225260   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:21.225325   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:21.262459   62386 cri.go:89] found id: ""
	I0912 23:04:21.262486   62386 logs.go:276] 0 containers: []
	W0912 23:04:21.262497   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:21.262504   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:21.262578   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:21.296646   62386 cri.go:89] found id: ""
	I0912 23:04:21.296672   62386 logs.go:276] 0 containers: []
	W0912 23:04:21.296682   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:21.296687   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:21.296734   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:21.329911   62386 cri.go:89] found id: ""
	I0912 23:04:21.329933   62386 logs.go:276] 0 containers: []
	W0912 23:04:21.329939   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:21.329947   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:21.329958   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:21.371014   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:21.371043   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:21.419638   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:21.419671   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:21.433502   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:21.433533   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:21.502764   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:21.502787   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:21.502800   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:24.079800   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:24.094021   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:24.094099   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:24.128807   62386 cri.go:89] found id: ""
	I0912 23:04:24.128832   62386 logs.go:276] 0 containers: []
	W0912 23:04:24.128844   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:24.128851   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:24.128915   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:24.166381   62386 cri.go:89] found id: ""
	I0912 23:04:24.166409   62386 logs.go:276] 0 containers: []
	W0912 23:04:24.166416   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:24.166425   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:24.166481   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:24.202656   62386 cri.go:89] found id: ""
	I0912 23:04:24.202684   62386 logs.go:276] 0 containers: []
	W0912 23:04:24.202692   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:24.202699   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:24.202755   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:24.241177   62386 cri.go:89] found id: ""
	I0912 23:04:24.241204   62386 logs.go:276] 0 containers: []
	W0912 23:04:24.241212   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:24.241218   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:24.241274   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:24.278768   62386 cri.go:89] found id: ""
	I0912 23:04:24.278796   62386 logs.go:276] 0 containers: []
	W0912 23:04:24.278806   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:24.278813   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:24.278881   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:24.314429   62386 cri.go:89] found id: ""
	I0912 23:04:24.314456   62386 logs.go:276] 0 containers: []
	W0912 23:04:24.314466   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:24.314474   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:24.314540   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:24.352300   62386 cri.go:89] found id: ""
	I0912 23:04:24.352344   62386 logs.go:276] 0 containers: []
	W0912 23:04:24.352352   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:24.352357   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:24.352415   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:24.387465   62386 cri.go:89] found id: ""
	I0912 23:04:24.387496   62386 logs.go:276] 0 containers: []
	W0912 23:04:24.387503   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:24.387513   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:24.387526   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:24.437029   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:24.437061   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:24.450519   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:24.450555   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:24.516538   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:24.516566   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:24.516583   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:24.594321   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:24.594358   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:27.129976   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:27.142237   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:27.142293   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:27.173687   62386 cri.go:89] found id: ""
	I0912 23:04:27.173709   62386 logs.go:276] 0 containers: []
	W0912 23:04:27.173716   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:27.173721   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:27.173778   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:27.206078   62386 cri.go:89] found id: ""
	I0912 23:04:27.206099   62386 logs.go:276] 0 containers: []
	W0912 23:04:27.206107   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:27.206112   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:27.206156   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:27.238770   62386 cri.go:89] found id: ""
	I0912 23:04:27.238795   62386 logs.go:276] 0 containers: []
	W0912 23:04:27.238803   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:27.238808   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:27.238855   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:27.271230   62386 cri.go:89] found id: ""
	I0912 23:04:27.271262   62386 logs.go:276] 0 containers: []
	W0912 23:04:27.271273   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:27.271281   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:27.271351   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:27.304232   62386 cri.go:89] found id: ""
	I0912 23:04:27.304261   62386 logs.go:276] 0 containers: []
	W0912 23:04:27.304271   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:27.304278   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:27.304345   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:27.337542   62386 cri.go:89] found id: ""
	I0912 23:04:27.337571   62386 logs.go:276] 0 containers: []
	W0912 23:04:27.337586   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:27.337595   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:27.337668   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:27.369971   62386 cri.go:89] found id: ""
	I0912 23:04:27.369997   62386 logs.go:276] 0 containers: []
	W0912 23:04:27.370005   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:27.370012   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:27.370072   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:27.406844   62386 cri.go:89] found id: ""
	I0912 23:04:27.406868   62386 logs.go:276] 0 containers: []
	W0912 23:04:27.406875   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:27.406883   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:27.406894   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:27.493489   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:27.493524   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:27.530448   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:27.530481   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:27.585706   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:27.585744   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:27.599144   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:27.599177   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:27.672585   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:30.173309   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:30.187957   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:30.188037   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:30.226373   62386 cri.go:89] found id: ""
	I0912 23:04:30.226400   62386 logs.go:276] 0 containers: []
	W0912 23:04:30.226407   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:30.226412   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:30.226469   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:30.257956   62386 cri.go:89] found id: ""
	I0912 23:04:30.257988   62386 logs.go:276] 0 containers: []
	W0912 23:04:30.257997   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:30.258002   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:30.258053   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:30.291091   62386 cri.go:89] found id: ""
	I0912 23:04:30.291119   62386 logs.go:276] 0 containers: []
	W0912 23:04:30.291127   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:30.291132   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:30.291181   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:30.323564   62386 cri.go:89] found id: ""
	I0912 23:04:30.323589   62386 logs.go:276] 0 containers: []
	W0912 23:04:30.323597   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:30.323603   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:30.323652   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:30.361971   62386 cri.go:89] found id: ""
	I0912 23:04:30.361996   62386 logs.go:276] 0 containers: []
	W0912 23:04:30.362005   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:30.362014   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:30.362081   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:30.396952   62386 cri.go:89] found id: ""
	I0912 23:04:30.396986   62386 logs.go:276] 0 containers: []
	W0912 23:04:30.396996   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:30.397001   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:30.397052   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:30.453785   62386 cri.go:89] found id: ""
	I0912 23:04:30.453812   62386 logs.go:276] 0 containers: []
	W0912 23:04:30.453820   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:30.453825   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:30.453870   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:30.494072   62386 cri.go:89] found id: ""
	I0912 23:04:30.494099   62386 logs.go:276] 0 containers: []
	W0912 23:04:30.494108   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:30.494115   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:30.494133   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:30.543153   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:30.543187   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:30.556204   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:30.556242   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:30.630856   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:30.630885   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:30.630902   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:30.710205   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:30.710239   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:33.248218   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:33.261421   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:33.261504   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:33.295691   62386 cri.go:89] found id: ""
	I0912 23:04:33.295718   62386 logs.go:276] 0 containers: []
	W0912 23:04:33.295729   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:33.295736   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:33.295796   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:33.328578   62386 cri.go:89] found id: ""
	I0912 23:04:33.328607   62386 logs.go:276] 0 containers: []
	W0912 23:04:33.328618   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:33.328626   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:33.328743   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:33.367991   62386 cri.go:89] found id: ""
	I0912 23:04:33.368018   62386 logs.go:276] 0 containers: []
	W0912 23:04:33.368034   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:33.368041   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:33.368101   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:33.402537   62386 cri.go:89] found id: ""
	I0912 23:04:33.402566   62386 logs.go:276] 0 containers: []
	W0912 23:04:33.402578   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:33.402588   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:33.402649   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:33.437175   62386 cri.go:89] found id: ""
	I0912 23:04:33.437199   62386 logs.go:276] 0 containers: []
	W0912 23:04:33.437206   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:33.437216   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:33.437275   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:33.475108   62386 cri.go:89] found id: ""
	I0912 23:04:33.475134   62386 logs.go:276] 0 containers: []
	W0912 23:04:33.475144   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:33.475151   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:33.475202   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:33.508612   62386 cri.go:89] found id: ""
	I0912 23:04:33.508649   62386 logs.go:276] 0 containers: []
	W0912 23:04:33.508659   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:33.508664   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:33.508713   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:33.543351   62386 cri.go:89] found id: ""
	I0912 23:04:33.543380   62386 logs.go:276] 0 containers: []
	W0912 23:04:33.543387   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:33.543395   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:33.543406   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:33.595649   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:33.595688   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:33.609181   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:33.609210   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:33.686761   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:33.686782   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:33.686796   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:33.767443   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:33.767478   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:36.310374   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:36.324182   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:36.324260   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:36.359642   62386 cri.go:89] found id: ""
	I0912 23:04:36.359670   62386 logs.go:276] 0 containers: []
	W0912 23:04:36.359677   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:36.359684   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:36.359744   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:36.392841   62386 cri.go:89] found id: ""
	I0912 23:04:36.392865   62386 logs.go:276] 0 containers: []
	W0912 23:04:36.392874   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:36.392887   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:36.392951   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:36.430323   62386 cri.go:89] found id: ""
	I0912 23:04:36.430354   62386 logs.go:276] 0 containers: []
	W0912 23:04:36.430365   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:36.430373   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:36.430436   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:36.466712   62386 cri.go:89] found id: ""
	I0912 23:04:36.466737   62386 logs.go:276] 0 containers: []
	W0912 23:04:36.466745   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:36.466750   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:36.466808   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:36.502506   62386 cri.go:89] found id: ""
	I0912 23:04:36.502537   62386 logs.go:276] 0 containers: []
	W0912 23:04:36.502548   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:36.502555   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:36.502624   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:36.536530   62386 cri.go:89] found id: ""
	I0912 23:04:36.536559   62386 logs.go:276] 0 containers: []
	W0912 23:04:36.536569   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:36.536577   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:36.536648   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:36.570519   62386 cri.go:89] found id: ""
	I0912 23:04:36.570555   62386 logs.go:276] 0 containers: []
	W0912 23:04:36.570565   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:36.570573   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:36.570631   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:36.606107   62386 cri.go:89] found id: ""
	I0912 23:04:36.606136   62386 logs.go:276] 0 containers: []
	W0912 23:04:36.606146   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:36.606157   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:36.606171   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:36.643105   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:36.643138   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:36.690911   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:36.690944   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:36.703970   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:36.703998   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:36.776158   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:36.776183   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:36.776199   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:39.362032   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:39.375991   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:39.376090   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:39.412497   62386 cri.go:89] found id: ""
	I0912 23:04:39.412521   62386 logs.go:276] 0 containers: []
	W0912 23:04:39.412528   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:39.412534   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:39.412595   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:39.447783   62386 cri.go:89] found id: ""
	I0912 23:04:39.447807   62386 logs.go:276] 0 containers: []
	W0912 23:04:39.447815   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:39.447820   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:39.447886   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:39.483099   62386 cri.go:89] found id: ""
	I0912 23:04:39.483128   62386 logs.go:276] 0 containers: []
	W0912 23:04:39.483135   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:39.483143   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:39.483193   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:39.514898   62386 cri.go:89] found id: ""
	I0912 23:04:39.514932   62386 logs.go:276] 0 containers: []
	W0912 23:04:39.514941   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:39.514952   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:39.515033   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:39.546882   62386 cri.go:89] found id: ""
	I0912 23:04:39.546910   62386 logs.go:276] 0 containers: []
	W0912 23:04:39.546920   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:39.546927   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:39.546990   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:39.577899   62386 cri.go:89] found id: ""
	I0912 23:04:39.577929   62386 logs.go:276] 0 containers: []
	W0912 23:04:39.577939   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:39.577947   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:39.578006   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:39.613419   62386 cri.go:89] found id: ""
	I0912 23:04:39.613446   62386 logs.go:276] 0 containers: []
	W0912 23:04:39.613455   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:39.613461   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:39.613510   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:39.647661   62386 cri.go:89] found id: ""
	I0912 23:04:39.647694   62386 logs.go:276] 0 containers: []
	W0912 23:04:39.647708   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:39.647719   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:39.647733   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:39.696155   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:39.696190   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:39.709312   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:39.709342   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:39.778941   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:39.778968   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:39.778985   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:39.855991   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:39.856028   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:42.395179   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:42.408317   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:42.408449   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:42.441443   62386 cri.go:89] found id: ""
	I0912 23:04:42.441472   62386 logs.go:276] 0 containers: []
	W0912 23:04:42.441482   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:42.441489   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:42.441550   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:42.480655   62386 cri.go:89] found id: ""
	I0912 23:04:42.480678   62386 logs.go:276] 0 containers: []
	W0912 23:04:42.480685   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:42.480690   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:42.480734   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:42.513323   62386 cri.go:89] found id: ""
	I0912 23:04:42.513346   62386 logs.go:276] 0 containers: []
	W0912 23:04:42.513353   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:42.513359   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:42.513405   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:42.545696   62386 cri.go:89] found id: ""
	I0912 23:04:42.545715   62386 logs.go:276] 0 containers: []
	W0912 23:04:42.545723   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:42.545728   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:42.545775   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:42.584950   62386 cri.go:89] found id: ""
	I0912 23:04:42.584981   62386 logs.go:276] 0 containers: []
	W0912 23:04:42.584992   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:42.584999   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:42.585057   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:42.618434   62386 cri.go:89] found id: ""
	I0912 23:04:42.618468   62386 logs.go:276] 0 containers: []
	W0912 23:04:42.618481   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:42.618489   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:42.618557   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:42.665017   62386 cri.go:89] found id: ""
	I0912 23:04:42.665045   62386 logs.go:276] 0 containers: []
	W0912 23:04:42.665056   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:42.665064   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:42.665125   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:42.724365   62386 cri.go:89] found id: ""
	I0912 23:04:42.724389   62386 logs.go:276] 0 containers: []
	W0912 23:04:42.724399   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:42.724409   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:42.724422   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:42.762643   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:42.762671   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:42.815374   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:42.815417   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:42.829340   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:42.829376   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:42.901659   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:42.901690   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:42.901706   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:45.490536   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:45.504127   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:45.504191   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:45.537415   62386 cri.go:89] found id: ""
	I0912 23:04:45.537447   62386 logs.go:276] 0 containers: []
	W0912 23:04:45.537457   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:45.537464   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:45.537527   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:45.571342   62386 cri.go:89] found id: ""
	I0912 23:04:45.571384   62386 logs.go:276] 0 containers: []
	W0912 23:04:45.571404   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:45.571412   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:45.571471   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:45.608965   62386 cri.go:89] found id: ""
	I0912 23:04:45.608989   62386 logs.go:276] 0 containers: []
	W0912 23:04:45.608997   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:45.609002   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:45.609052   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:45.644770   62386 cri.go:89] found id: ""
	I0912 23:04:45.644798   62386 logs.go:276] 0 containers: []
	W0912 23:04:45.644806   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:45.644812   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:45.644859   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:45.678422   62386 cri.go:89] found id: ""
	I0912 23:04:45.678448   62386 logs.go:276] 0 containers: []
	W0912 23:04:45.678456   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:45.678462   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:45.678508   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:45.713808   62386 cri.go:89] found id: ""
	I0912 23:04:45.713831   62386 logs.go:276] 0 containers: []
	W0912 23:04:45.713838   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:45.713844   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:45.713891   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:45.747056   62386 cri.go:89] found id: ""
	I0912 23:04:45.747084   62386 logs.go:276] 0 containers: []
	W0912 23:04:45.747092   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:45.747097   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:45.747149   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:45.779787   62386 cri.go:89] found id: ""
	I0912 23:04:45.779809   62386 logs.go:276] 0 containers: []
	W0912 23:04:45.779817   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:45.779824   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:45.779835   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:45.833204   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:45.833239   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:45.846131   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:45.846159   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:45.923415   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:45.923435   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:45.923446   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:46.003597   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:46.003637   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:48.545043   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:48.560025   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:48.560085   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:48.599916   62386 cri.go:89] found id: ""
	I0912 23:04:48.599950   62386 logs.go:276] 0 containers: []
	W0912 23:04:48.599961   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:48.599969   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:48.600027   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:48.648909   62386 cri.go:89] found id: ""
	I0912 23:04:48.648938   62386 logs.go:276] 0 containers: []
	W0912 23:04:48.648946   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:48.648952   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:48.649010   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:48.693019   62386 cri.go:89] found id: ""
	I0912 23:04:48.693046   62386 logs.go:276] 0 containers: []
	W0912 23:04:48.693062   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:48.693081   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:48.693141   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:48.725778   62386 cri.go:89] found id: ""
	I0912 23:04:48.725811   62386 logs.go:276] 0 containers: []
	W0912 23:04:48.725822   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:48.725830   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:48.725891   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:48.760270   62386 cri.go:89] found id: ""
	I0912 23:04:48.760299   62386 logs.go:276] 0 containers: []
	W0912 23:04:48.760311   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:48.760318   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:48.760379   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:48.797235   62386 cri.go:89] found id: ""
	I0912 23:04:48.797264   62386 logs.go:276] 0 containers: []
	W0912 23:04:48.797275   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:48.797282   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:48.797348   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:48.834039   62386 cri.go:89] found id: ""
	I0912 23:04:48.834081   62386 logs.go:276] 0 containers: []
	W0912 23:04:48.834093   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:48.834100   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:48.834162   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:48.866681   62386 cri.go:89] found id: ""
	I0912 23:04:48.866704   62386 logs.go:276] 0 containers: []
	W0912 23:04:48.866712   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:48.866720   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:48.866731   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:48.917954   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:48.917999   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:48.931554   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:48.931582   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:49.008086   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:49.008115   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:49.008132   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:49.088699   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:49.088736   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:51.628564   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:51.643343   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:51.643445   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:51.680788   62386 cri.go:89] found id: ""
	I0912 23:04:51.680811   62386 logs.go:276] 0 containers: []
	W0912 23:04:51.680818   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:51.680824   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:51.680873   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:51.719793   62386 cri.go:89] found id: ""
	I0912 23:04:51.719822   62386 logs.go:276] 0 containers: []
	W0912 23:04:51.719835   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:51.719843   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:51.719909   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:51.756766   62386 cri.go:89] found id: ""
	I0912 23:04:51.756795   62386 logs.go:276] 0 containers: []
	W0912 23:04:51.756802   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:51.756808   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:51.756857   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:51.797758   62386 cri.go:89] found id: ""
	I0912 23:04:51.797781   62386 logs.go:276] 0 containers: []
	W0912 23:04:51.797789   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:51.797794   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:51.797844   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:51.830790   62386 cri.go:89] found id: ""
	I0912 23:04:51.830820   62386 logs.go:276] 0 containers: []
	W0912 23:04:51.830830   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:51.830837   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:51.830899   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:51.866782   62386 cri.go:89] found id: ""
	I0912 23:04:51.866806   62386 logs.go:276] 0 containers: []
	W0912 23:04:51.866813   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:51.866819   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:51.866874   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:51.902223   62386 cri.go:89] found id: ""
	I0912 23:04:51.902248   62386 logs.go:276] 0 containers: []
	W0912 23:04:51.902276   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:51.902284   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:51.902345   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:51.937029   62386 cri.go:89] found id: ""
	I0912 23:04:51.937057   62386 logs.go:276] 0 containers: []
	W0912 23:04:51.937064   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:51.937073   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:51.937084   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:51.987691   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:51.987727   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:52.001042   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:52.001067   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:52.076285   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:52.076305   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:52.076316   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:52.156087   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:52.156127   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:54.692355   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:54.705180   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:54.705258   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:54.736125   62386 cri.go:89] found id: ""
	I0912 23:04:54.736150   62386 logs.go:276] 0 containers: []
	W0912 23:04:54.736158   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:54.736164   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:54.736216   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:54.768743   62386 cri.go:89] found id: ""
	I0912 23:04:54.768769   62386 logs.go:276] 0 containers: []
	W0912 23:04:54.768776   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:54.768781   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:54.768827   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:54.802867   62386 cri.go:89] found id: ""
	I0912 23:04:54.802894   62386 logs.go:276] 0 containers: []
	W0912 23:04:54.802902   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:54.802908   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:54.802959   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:54.836774   62386 cri.go:89] found id: ""
	I0912 23:04:54.836800   62386 logs.go:276] 0 containers: []
	W0912 23:04:54.836808   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:54.836813   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:54.836870   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:54.870694   62386 cri.go:89] found id: ""
	I0912 23:04:54.870716   62386 logs.go:276] 0 containers: []
	W0912 23:04:54.870724   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:54.870730   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:54.870785   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:54.903969   62386 cri.go:89] found id: ""
	I0912 23:04:54.904002   62386 logs.go:276] 0 containers: []
	W0912 23:04:54.904012   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:54.904020   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:54.904070   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:54.937720   62386 cri.go:89] found id: ""
	I0912 23:04:54.937744   62386 logs.go:276] 0 containers: []
	W0912 23:04:54.937751   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:54.937756   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:54.937802   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:54.971370   62386 cri.go:89] found id: ""
	I0912 23:04:54.971397   62386 logs.go:276] 0 containers: []
	W0912 23:04:54.971413   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:54.971427   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:54.971441   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:55.021066   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:55.021101   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:55.034026   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:55.034056   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:55.116939   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:55.116966   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:55.116983   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:55.196410   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:55.196445   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:57.733985   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:57.747006   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:57.747068   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:57.784442   62386 cri.go:89] found id: ""
	I0912 23:04:57.784473   62386 logs.go:276] 0 containers: []
	W0912 23:04:57.784486   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:57.784500   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:57.784571   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:57.818314   62386 cri.go:89] found id: ""
	I0912 23:04:57.818341   62386 logs.go:276] 0 containers: []
	W0912 23:04:57.818352   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:57.818359   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:57.818420   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:57.852881   62386 cri.go:89] found id: ""
	I0912 23:04:57.852914   62386 logs.go:276] 0 containers: []
	W0912 23:04:57.852925   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:57.852932   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:57.852993   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:57.894454   62386 cri.go:89] found id: ""
	I0912 23:04:57.894479   62386 logs.go:276] 0 containers: []
	W0912 23:04:57.894487   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:57.894493   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:57.894540   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:57.930013   62386 cri.go:89] found id: ""
	I0912 23:04:57.930041   62386 logs.go:276] 0 containers: []
	W0912 23:04:57.930051   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:57.930059   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:57.930120   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:57.970535   62386 cri.go:89] found id: ""
	I0912 23:04:57.970697   62386 logs.go:276] 0 containers: []
	W0912 23:04:57.970751   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:57.970763   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:57.970829   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:58.008102   62386 cri.go:89] found id: ""
	I0912 23:04:58.008132   62386 logs.go:276] 0 containers: []
	W0912 23:04:58.008145   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:58.008151   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:58.008232   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:58.043507   62386 cri.go:89] found id: ""
	I0912 23:04:58.043541   62386 logs.go:276] 0 containers: []
	W0912 23:04:58.043552   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:58.043563   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:58.043577   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:58.127231   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:58.127291   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:58.164444   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:58.164476   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:58.212622   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:58.212658   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:58.227517   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:58.227546   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:58.291876   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:00.792084   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:00.804976   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:00.805046   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:00.837560   62386 cri.go:89] found id: ""
	I0912 23:05:00.837596   62386 logs.go:276] 0 containers: []
	W0912 23:05:00.837606   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:00.837629   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:00.837692   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:00.871503   62386 cri.go:89] found id: ""
	I0912 23:05:00.871526   62386 logs.go:276] 0 containers: []
	W0912 23:05:00.871534   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:00.871539   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:00.871594   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:00.909215   62386 cri.go:89] found id: ""
	I0912 23:05:00.909245   62386 logs.go:276] 0 containers: []
	W0912 23:05:00.909256   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:00.909263   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:00.909337   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:00.947935   62386 cri.go:89] found id: ""
	I0912 23:05:00.947961   62386 logs.go:276] 0 containers: []
	W0912 23:05:00.947972   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:00.947979   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:00.948043   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:00.989659   62386 cri.go:89] found id: ""
	I0912 23:05:00.989694   62386 logs.go:276] 0 containers: []
	W0912 23:05:00.989707   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:00.989717   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:00.989780   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:01.027073   62386 cri.go:89] found id: ""
	I0912 23:05:01.027103   62386 logs.go:276] 0 containers: []
	W0912 23:05:01.027114   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:01.027129   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:01.027187   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:01.063620   62386 cri.go:89] found id: ""
	I0912 23:05:01.063649   62386 logs.go:276] 0 containers: []
	W0912 23:05:01.063672   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:01.063681   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:01.063751   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:01.102398   62386 cri.go:89] found id: ""
	I0912 23:05:01.102428   62386 logs.go:276] 0 containers: []
	W0912 23:05:01.102438   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:01.102449   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:01.102463   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:01.115558   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:01.115585   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:01.190303   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:01.190324   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:01.190337   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:01.272564   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:01.272611   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:01.311954   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:01.311981   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:03.864507   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:03.878613   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:03.878713   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:03.911466   62386 cri.go:89] found id: ""
	I0912 23:05:03.911495   62386 logs.go:276] 0 containers: []
	W0912 23:05:03.911504   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:03.911513   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:03.911592   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:03.945150   62386 cri.go:89] found id: ""
	I0912 23:05:03.945175   62386 logs.go:276] 0 containers: []
	W0912 23:05:03.945188   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:03.945196   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:03.945256   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:03.984952   62386 cri.go:89] found id: ""
	I0912 23:05:03.984984   62386 logs.go:276] 0 containers: []
	W0912 23:05:03.984994   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:03.985001   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:03.985067   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:04.030708   62386 cri.go:89] found id: ""
	I0912 23:05:04.030732   62386 logs.go:276] 0 containers: []
	W0912 23:05:04.030740   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:04.030746   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:04.030798   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:04.072189   62386 cri.go:89] found id: ""
	I0912 23:05:04.072213   62386 logs.go:276] 0 containers: []
	W0912 23:05:04.072221   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:04.072227   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:04.072273   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:04.105068   62386 cri.go:89] found id: ""
	I0912 23:05:04.105100   62386 logs.go:276] 0 containers: []
	W0912 23:05:04.105108   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:04.105114   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:04.105175   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:04.139063   62386 cri.go:89] found id: ""
	I0912 23:05:04.139094   62386 logs.go:276] 0 containers: []
	W0912 23:05:04.139102   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:04.139109   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:04.139172   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:04.175559   62386 cri.go:89] found id: ""
	I0912 23:05:04.175589   62386 logs.go:276] 0 containers: []
	W0912 23:05:04.175599   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:04.175610   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:04.175626   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:04.252495   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:04.252541   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:04.292236   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:04.292263   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:04.347335   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:04.347377   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:04.360641   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:04.360678   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:04.431032   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:06.931904   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:06.946367   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:06.946445   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:06.985760   62386 cri.go:89] found id: ""
	I0912 23:05:06.985788   62386 logs.go:276] 0 containers: []
	W0912 23:05:06.985796   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:06.985802   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:06.985852   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:07.020076   62386 cri.go:89] found id: ""
	I0912 23:05:07.020106   62386 logs.go:276] 0 containers: []
	W0912 23:05:07.020115   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:07.020120   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:07.020165   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:07.056374   62386 cri.go:89] found id: ""
	I0912 23:05:07.056408   62386 logs.go:276] 0 containers: []
	W0912 23:05:07.056417   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:07.056423   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:07.056479   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:07.091022   62386 cri.go:89] found id: ""
	I0912 23:05:07.091049   62386 logs.go:276] 0 containers: []
	W0912 23:05:07.091059   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:07.091067   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:07.091133   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:07.131604   62386 cri.go:89] found id: ""
	I0912 23:05:07.131631   62386 logs.go:276] 0 containers: []
	W0912 23:05:07.131641   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:07.131648   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:07.131708   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:07.164548   62386 cri.go:89] found id: ""
	I0912 23:05:07.164575   62386 logs.go:276] 0 containers: []
	W0912 23:05:07.164586   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:07.164593   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:07.164655   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:07.199147   62386 cri.go:89] found id: ""
	I0912 23:05:07.199169   62386 logs.go:276] 0 containers: []
	W0912 23:05:07.199176   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:07.199182   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:07.199245   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:07.231727   62386 cri.go:89] found id: ""
	I0912 23:05:07.231762   62386 logs.go:276] 0 containers: []
	W0912 23:05:07.231773   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:07.231788   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:07.231802   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:07.285773   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:07.285809   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:07.299926   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:07.299958   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:07.378838   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:07.378862   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:07.378876   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:07.459903   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:07.459939   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:09.999598   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:10.012258   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:10.012328   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:10.047975   62386 cri.go:89] found id: ""
	I0912 23:05:10.048002   62386 logs.go:276] 0 containers: []
	W0912 23:05:10.048011   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:10.048018   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:10.048074   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:10.081827   62386 cri.go:89] found id: ""
	I0912 23:05:10.081856   62386 logs.go:276] 0 containers: []
	W0912 23:05:10.081866   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:10.081872   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:10.081942   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:10.115594   62386 cri.go:89] found id: ""
	I0912 23:05:10.115625   62386 logs.go:276] 0 containers: []
	W0912 23:05:10.115635   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:10.115642   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:10.115692   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:10.147412   62386 cri.go:89] found id: ""
	I0912 23:05:10.147442   62386 logs.go:276] 0 containers: []
	W0912 23:05:10.147452   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:10.147460   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:10.147516   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:10.181118   62386 cri.go:89] found id: ""
	I0912 23:05:10.181147   62386 logs.go:276] 0 containers: []
	W0912 23:05:10.181157   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:10.181164   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:10.181228   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:10.214240   62386 cri.go:89] found id: ""
	I0912 23:05:10.214267   62386 logs.go:276] 0 containers: []
	W0912 23:05:10.214277   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:10.214284   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:10.214352   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:10.248497   62386 cri.go:89] found id: ""
	I0912 23:05:10.248522   62386 logs.go:276] 0 containers: []
	W0912 23:05:10.248530   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:10.248543   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:10.248610   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:10.280864   62386 cri.go:89] found id: ""
	I0912 23:05:10.280892   62386 logs.go:276] 0 containers: []
	W0912 23:05:10.280902   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:10.280913   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:10.280927   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:10.318517   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:10.318542   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:10.370087   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:10.370123   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:10.385213   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:10.385247   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:10.448226   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:10.448246   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:10.448257   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:13.027828   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:13.040546   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:13.040620   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:13.073501   62386 cri.go:89] found id: ""
	I0912 23:05:13.073525   62386 logs.go:276] 0 containers: []
	W0912 23:05:13.073533   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:13.073538   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:13.073584   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:13.105790   62386 cri.go:89] found id: ""
	I0912 23:05:13.105819   62386 logs.go:276] 0 containers: []
	W0912 23:05:13.105830   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:13.105836   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:13.105898   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:13.139307   62386 cri.go:89] found id: ""
	I0912 23:05:13.139331   62386 logs.go:276] 0 containers: []
	W0912 23:05:13.139338   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:13.139344   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:13.139403   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:13.171019   62386 cri.go:89] found id: ""
	I0912 23:05:13.171044   62386 logs.go:276] 0 containers: []
	W0912 23:05:13.171053   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:13.171060   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:13.171119   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:13.202372   62386 cri.go:89] found id: ""
	I0912 23:05:13.202412   62386 logs.go:276] 0 containers: []
	W0912 23:05:13.202423   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:13.202431   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:13.202481   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:13.234046   62386 cri.go:89] found id: ""
	I0912 23:05:13.234069   62386 logs.go:276] 0 containers: []
	W0912 23:05:13.234076   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:13.234083   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:13.234138   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:13.265577   62386 cri.go:89] found id: ""
	I0912 23:05:13.265604   62386 logs.go:276] 0 containers: []
	W0912 23:05:13.265632   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:13.265641   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:13.265696   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:13.303462   62386 cri.go:89] found id: ""
	I0912 23:05:13.303489   62386 logs.go:276] 0 containers: []
	W0912 23:05:13.303499   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:13.303521   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:13.303536   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:13.378844   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:13.378867   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:13.378883   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:13.464768   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:13.464806   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:13.502736   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:13.502764   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:13.553473   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:13.553503   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:16.067463   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:16.081169   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:16.081269   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:16.115663   62386 cri.go:89] found id: ""
	I0912 23:05:16.115688   62386 logs.go:276] 0 containers: []
	W0912 23:05:16.115696   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:16.115705   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:16.115761   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:16.153429   62386 cri.go:89] found id: ""
	I0912 23:05:16.153460   62386 logs.go:276] 0 containers: []
	W0912 23:05:16.153469   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:16.153476   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:16.153535   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:16.187935   62386 cri.go:89] found id: ""
	I0912 23:05:16.187957   62386 logs.go:276] 0 containers: []
	W0912 23:05:16.187965   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:16.187971   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:16.188029   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:16.221249   62386 cri.go:89] found id: ""
	I0912 23:05:16.221273   62386 logs.go:276] 0 containers: []
	W0912 23:05:16.221281   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:16.221287   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:16.221336   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:16.256441   62386 cri.go:89] found id: ""
	I0912 23:05:16.256466   62386 logs.go:276] 0 containers: []
	W0912 23:05:16.256474   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:16.256479   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:16.256546   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:16.290930   62386 cri.go:89] found id: ""
	I0912 23:05:16.290963   62386 logs.go:276] 0 containers: []
	W0912 23:05:16.290976   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:16.290985   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:16.291039   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:16.326665   62386 cri.go:89] found id: ""
	I0912 23:05:16.326689   62386 logs.go:276] 0 containers: []
	W0912 23:05:16.326697   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:16.326702   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:16.326749   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:16.365418   62386 cri.go:89] found id: ""
	I0912 23:05:16.365441   62386 logs.go:276] 0 containers: []
	W0912 23:05:16.365448   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:16.365458   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:16.365469   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:16.420003   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:16.420039   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:16.434561   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:16.434595   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:16.505201   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:16.505224   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:16.505295   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:16.584877   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:16.584914   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:19.121479   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:19.134519   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:19.134586   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:19.170401   62386 cri.go:89] found id: ""
	I0912 23:05:19.170433   62386 logs.go:276] 0 containers: []
	W0912 23:05:19.170444   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:19.170455   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:19.170530   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:19.204750   62386 cri.go:89] found id: ""
	I0912 23:05:19.204779   62386 logs.go:276] 0 containers: []
	W0912 23:05:19.204790   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:19.204797   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:19.204862   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:19.243938   62386 cri.go:89] found id: ""
	I0912 23:05:19.243966   62386 logs.go:276] 0 containers: []
	W0912 23:05:19.243975   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:19.243983   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:19.244041   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:19.284424   62386 cri.go:89] found id: ""
	I0912 23:05:19.284453   62386 logs.go:276] 0 containers: []
	W0912 23:05:19.284463   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:19.284469   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:19.284535   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:19.318962   62386 cri.go:89] found id: ""
	I0912 23:05:19.318990   62386 logs.go:276] 0 containers: []
	W0912 23:05:19.319000   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:19.319011   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:19.319068   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:19.356456   62386 cri.go:89] found id: ""
	I0912 23:05:19.356487   62386 logs.go:276] 0 containers: []
	W0912 23:05:19.356498   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:19.356505   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:19.356587   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:19.390344   62386 cri.go:89] found id: ""
	I0912 23:05:19.390369   62386 logs.go:276] 0 containers: []
	W0912 23:05:19.390377   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:19.390382   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:19.390429   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:19.425481   62386 cri.go:89] found id: ""
	I0912 23:05:19.425507   62386 logs.go:276] 0 containers: []
	W0912 23:05:19.425528   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:19.425536   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:19.425553   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:19.482051   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:19.482081   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:19.495732   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:19.495758   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:19.565385   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:19.565411   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:19.565428   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:19.640053   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:19.640084   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:22.179292   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:22.191905   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:22.191979   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:22.231402   62386 cri.go:89] found id: ""
	I0912 23:05:22.231429   62386 logs.go:276] 0 containers: []
	W0912 23:05:22.231439   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:22.231446   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:22.231501   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:22.265310   62386 cri.go:89] found id: ""
	I0912 23:05:22.265343   62386 logs.go:276] 0 containers: []
	W0912 23:05:22.265351   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:22.265356   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:22.265425   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:22.297487   62386 cri.go:89] found id: ""
	I0912 23:05:22.297516   62386 logs.go:276] 0 containers: []
	W0912 23:05:22.297532   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:22.297540   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:22.297598   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:22.335344   62386 cri.go:89] found id: ""
	I0912 23:05:22.335374   62386 logs.go:276] 0 containers: []
	W0912 23:05:22.335384   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:22.335391   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:22.335449   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:22.376379   62386 cri.go:89] found id: ""
	I0912 23:05:22.376404   62386 logs.go:276] 0 containers: []
	W0912 23:05:22.376413   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:22.376421   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:22.376484   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:22.416121   62386 cri.go:89] found id: ""
	I0912 23:05:22.416147   62386 logs.go:276] 0 containers: []
	W0912 23:05:22.416154   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:22.416160   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:22.416217   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:22.475037   62386 cri.go:89] found id: ""
	I0912 23:05:22.475114   62386 logs.go:276] 0 containers: []
	W0912 23:05:22.475127   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:22.475143   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:22.475207   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:22.509756   62386 cri.go:89] found id: ""
	I0912 23:05:22.509784   62386 logs.go:276] 0 containers: []
	W0912 23:05:22.509794   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:22.509804   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:22.509823   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:22.559071   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:22.559112   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:22.571951   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:22.571980   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:22.643017   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:22.643034   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:22.643045   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:22.728074   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:22.728113   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:25.268293   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:25.281825   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:25.281906   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:25.315282   62386 cri.go:89] found id: ""
	I0912 23:05:25.315318   62386 logs.go:276] 0 containers: []
	W0912 23:05:25.315328   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:25.315336   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:25.315385   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:25.348647   62386 cri.go:89] found id: ""
	I0912 23:05:25.348679   62386 logs.go:276] 0 containers: []
	W0912 23:05:25.348690   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:25.348697   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:25.348758   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:25.382266   62386 cri.go:89] found id: ""
	I0912 23:05:25.382294   62386 logs.go:276] 0 containers: []
	W0912 23:05:25.382304   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:25.382311   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:25.382378   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:25.420016   62386 cri.go:89] found id: ""
	I0912 23:05:25.420044   62386 logs.go:276] 0 containers: []
	W0912 23:05:25.420056   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:25.420063   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:25.420126   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:25.456435   62386 cri.go:89] found id: ""
	I0912 23:05:25.456457   62386 logs.go:276] 0 containers: []
	W0912 23:05:25.456465   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:25.456470   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:25.456539   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:25.491658   62386 cri.go:89] found id: ""
	I0912 23:05:25.491715   62386 logs.go:276] 0 containers: []
	W0912 23:05:25.491729   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:25.491737   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:25.491790   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:25.526948   62386 cri.go:89] found id: ""
	I0912 23:05:25.526980   62386 logs.go:276] 0 containers: []
	W0912 23:05:25.526991   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:25.526998   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:25.527064   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:25.560291   62386 cri.go:89] found id: ""
	I0912 23:05:25.560323   62386 logs.go:276] 0 containers: []
	W0912 23:05:25.560345   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:25.560357   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:25.560372   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:25.612232   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:25.612276   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:25.626991   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:25.627028   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:25.695005   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:25.695038   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:25.695055   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:25.784310   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:25.784345   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:28.331410   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:28.343903   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:28.343967   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:28.380946   62386 cri.go:89] found id: ""
	I0912 23:05:28.380973   62386 logs.go:276] 0 containers: []
	W0912 23:05:28.380979   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:28.380985   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:28.381039   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:28.415013   62386 cri.go:89] found id: ""
	I0912 23:05:28.415042   62386 logs.go:276] 0 containers: []
	W0912 23:05:28.415052   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:28.415059   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:28.415120   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:28.451060   62386 cri.go:89] found id: ""
	I0912 23:05:28.451093   62386 logs.go:276] 0 containers: []
	W0912 23:05:28.451105   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:28.451113   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:28.451171   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:28.485664   62386 cri.go:89] found id: ""
	I0912 23:05:28.485693   62386 logs.go:276] 0 containers: []
	W0912 23:05:28.485704   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:28.485712   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:28.485774   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:28.520307   62386 cri.go:89] found id: ""
	I0912 23:05:28.520338   62386 logs.go:276] 0 containers: []
	W0912 23:05:28.520349   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:28.520359   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:28.520417   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:28.553111   62386 cri.go:89] found id: ""
	I0912 23:05:28.553139   62386 logs.go:276] 0 containers: []
	W0912 23:05:28.553147   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:28.553152   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:28.553208   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:28.586778   62386 cri.go:89] found id: ""
	I0912 23:05:28.586808   62386 logs.go:276] 0 containers: []
	W0912 23:05:28.586816   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:28.586822   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:28.586874   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:28.620760   62386 cri.go:89] found id: ""
	I0912 23:05:28.620784   62386 logs.go:276] 0 containers: []
	W0912 23:05:28.620791   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:28.620799   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:28.620811   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:28.701431   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:28.701481   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:28.741398   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:28.741431   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:28.793431   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:28.793469   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:28.809572   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:28.809600   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:28.894914   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:31.395663   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:31.408079   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:31.408160   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:31.445176   62386 cri.go:89] found id: ""
	I0912 23:05:31.445207   62386 logs.go:276] 0 containers: []
	W0912 23:05:31.445215   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:31.445221   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:31.445280   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:31.483446   62386 cri.go:89] found id: ""
	I0912 23:05:31.483472   62386 logs.go:276] 0 containers: []
	W0912 23:05:31.483480   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:31.483486   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:31.483544   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:31.519958   62386 cri.go:89] found id: ""
	I0912 23:05:31.519989   62386 logs.go:276] 0 containers: []
	W0912 23:05:31.519997   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:31.520003   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:31.520057   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:31.556719   62386 cri.go:89] found id: ""
	I0912 23:05:31.556748   62386 logs.go:276] 0 containers: []
	W0912 23:05:31.556759   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:31.556771   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:31.556832   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:31.596465   62386 cri.go:89] found id: ""
	I0912 23:05:31.596491   62386 logs.go:276] 0 containers: []
	W0912 23:05:31.596502   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:31.596508   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:31.596572   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:31.634562   62386 cri.go:89] found id: ""
	I0912 23:05:31.634592   62386 logs.go:276] 0 containers: []
	W0912 23:05:31.634601   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:31.634607   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:31.634665   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:31.669305   62386 cri.go:89] found id: ""
	I0912 23:05:31.669337   62386 logs.go:276] 0 containers: []
	W0912 23:05:31.669348   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:31.669356   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:31.669422   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:31.703081   62386 cri.go:89] found id: ""
	I0912 23:05:31.703111   62386 logs.go:276] 0 containers: []
	W0912 23:05:31.703121   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:31.703133   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:31.703148   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:31.742613   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:31.742635   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:31.797827   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:31.797872   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:31.811970   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:31.811999   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:31.888872   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:31.888896   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:31.888910   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:34.469724   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:34.483511   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:34.483579   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:34.516198   62386 cri.go:89] found id: ""
	I0912 23:05:34.516222   62386 logs.go:276] 0 containers: []
	W0912 23:05:34.516229   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:34.516235   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:34.516301   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:34.550166   62386 cri.go:89] found id: ""
	I0912 23:05:34.550199   62386 logs.go:276] 0 containers: []
	W0912 23:05:34.550210   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:34.550218   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:34.550274   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:34.593361   62386 cri.go:89] found id: ""
	I0912 23:05:34.593401   62386 logs.go:276] 0 containers: []
	W0912 23:05:34.593412   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:34.593420   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:34.593483   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:34.639593   62386 cri.go:89] found id: ""
	I0912 23:05:34.639633   62386 logs.go:276] 0 containers: []
	W0912 23:05:34.639653   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:34.639661   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:34.639729   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:34.690382   62386 cri.go:89] found id: ""
	I0912 23:05:34.690410   62386 logs.go:276] 0 containers: []
	W0912 23:05:34.690417   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:34.690423   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:34.690483   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:34.727943   62386 cri.go:89] found id: ""
	I0912 23:05:34.727970   62386 logs.go:276] 0 containers: []
	W0912 23:05:34.727978   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:34.727983   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:34.728051   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:34.765558   62386 cri.go:89] found id: ""
	I0912 23:05:34.765586   62386 logs.go:276] 0 containers: []
	W0912 23:05:34.765593   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:34.765598   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:34.765663   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:34.801455   62386 cri.go:89] found id: ""
	I0912 23:05:34.801484   62386 logs.go:276] 0 containers: []
	W0912 23:05:34.801492   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:34.801500   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:34.801511   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:34.880260   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:34.880295   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:34.922827   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:34.922855   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:34.974609   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:34.974639   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:34.987945   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:34.987972   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:35.062008   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:37.562965   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:37.575149   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:37.575226   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:37.611980   62386 cri.go:89] found id: ""
	I0912 23:05:37.612014   62386 logs.go:276] 0 containers: []
	W0912 23:05:37.612026   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:37.612035   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:37.612102   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:37.645664   62386 cri.go:89] found id: ""
	I0912 23:05:37.645693   62386 logs.go:276] 0 containers: []
	W0912 23:05:37.645703   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:37.645711   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:37.645771   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:37.685333   62386 cri.go:89] found id: ""
	I0912 23:05:37.685356   62386 logs.go:276] 0 containers: []
	W0912 23:05:37.685364   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:37.685369   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:37.685428   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:37.719017   62386 cri.go:89] found id: ""
	I0912 23:05:37.719052   62386 logs.go:276] 0 containers: []
	W0912 23:05:37.719063   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:37.719071   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:37.719133   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:37.751534   62386 cri.go:89] found id: ""
	I0912 23:05:37.751569   62386 logs.go:276] 0 containers: []
	W0912 23:05:37.751579   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:37.751588   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:37.751647   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:37.785583   62386 cri.go:89] found id: ""
	I0912 23:05:37.785608   62386 logs.go:276] 0 containers: []
	W0912 23:05:37.785635   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:37.785642   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:37.785702   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:37.818396   62386 cri.go:89] found id: ""
	I0912 23:05:37.818428   62386 logs.go:276] 0 containers: []
	W0912 23:05:37.818438   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:37.818445   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:37.818504   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:37.853767   62386 cri.go:89] found id: ""
	I0912 23:05:37.853798   62386 logs.go:276] 0 containers: []
	W0912 23:05:37.853806   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:37.853814   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:37.853830   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:37.926273   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:37.926300   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:37.926315   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:38.014243   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:38.014279   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:38.052431   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:38.052455   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:38.103154   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:38.103188   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
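
The repeating block above is minikube's control-plane restart poll: it looks for a running kube-apiserver process with pgrep, finds none, asks CRI-O (via crictl) for each expected control-plane container, and re-gathers the kubelet, dmesg, describe-nodes, CRI-O and container-status logs before polling again. A minimal sketch of the same per-component check, reusing the commands quoted in the log (the loop wrapper is illustrative, not part of minikube):

    # no apiserver process means the control plane never came back up
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no kube-apiserver process"

    # ask CRI-O for every expected control-plane container, running or exited
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager; do
        echo "== $name =="
        sudo crictl ps -a --quiet --name="$name"
    done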
	I0912 23:05:40.617399   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:40.629412   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:40.629483   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:40.666668   62386 cri.go:89] found id: ""
	I0912 23:05:40.666693   62386 logs.go:276] 0 containers: []
	W0912 23:05:40.666700   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:40.666706   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:40.666751   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:40.697548   62386 cri.go:89] found id: ""
	I0912 23:05:40.697573   62386 logs.go:276] 0 containers: []
	W0912 23:05:40.697580   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:40.697585   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:40.697659   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:40.729426   62386 cri.go:89] found id: ""
	I0912 23:05:40.729450   62386 logs.go:276] 0 containers: []
	W0912 23:05:40.729458   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:40.729468   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:40.729517   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:40.766769   62386 cri.go:89] found id: ""
	I0912 23:05:40.766793   62386 logs.go:276] 0 containers: []
	W0912 23:05:40.766800   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:40.766804   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:40.766860   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:40.801523   62386 cri.go:89] found id: ""
	I0912 23:05:40.801550   62386 logs.go:276] 0 containers: []
	W0912 23:05:40.801557   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:40.801563   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:40.801641   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:40.839943   62386 cri.go:89] found id: ""
	I0912 23:05:40.839975   62386 logs.go:276] 0 containers: []
	W0912 23:05:40.839987   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:40.839993   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:40.840055   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:40.873231   62386 cri.go:89] found id: ""
	I0912 23:05:40.873260   62386 logs.go:276] 0 containers: []
	W0912 23:05:40.873268   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:40.873276   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:40.873325   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:40.920007   62386 cri.go:89] found id: ""
	I0912 23:05:40.920040   62386 logs.go:276] 0 containers: []
	W0912 23:05:40.920049   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:40.920057   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:40.920069   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:40.972684   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:40.972716   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:40.986768   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:40.986802   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:41.052454   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:41.052479   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:41.052494   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:41.133810   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:41.133850   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:43.672432   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:43.684493   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:43.684552   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:43.718130   62386 cri.go:89] found id: ""
	I0912 23:05:43.718155   62386 logs.go:276] 0 containers: []
	W0912 23:05:43.718163   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:43.718169   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:43.718228   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:43.751866   62386 cri.go:89] found id: ""
	I0912 23:05:43.751895   62386 logs.go:276] 0 containers: []
	W0912 23:05:43.751905   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:43.751912   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:43.751974   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:43.785544   62386 cri.go:89] found id: ""
	I0912 23:05:43.785571   62386 logs.go:276] 0 containers: []
	W0912 23:05:43.785583   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:43.785589   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:43.785664   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:43.820588   62386 cri.go:89] found id: ""
	I0912 23:05:43.820616   62386 logs.go:276] 0 containers: []
	W0912 23:05:43.820624   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:43.820630   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:43.820677   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:43.853567   62386 cri.go:89] found id: ""
	I0912 23:05:43.853600   62386 logs.go:276] 0 containers: []
	W0912 23:05:43.853631   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:43.853640   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:43.853696   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:43.888646   62386 cri.go:89] found id: ""
	I0912 23:05:43.888671   62386 logs.go:276] 0 containers: []
	W0912 23:05:43.888679   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:43.888684   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:43.888731   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:43.922563   62386 cri.go:89] found id: ""
	I0912 23:05:43.922596   62386 logs.go:276] 0 containers: []
	W0912 23:05:43.922607   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:43.922614   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:43.922667   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:43.956786   62386 cri.go:89] found id: ""
	I0912 23:05:43.956817   62386 logs.go:276] 0 containers: []
	W0912 23:05:43.956825   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:43.956834   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:43.956845   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:44.035351   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:44.035388   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:44.073301   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:44.073338   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:44.124754   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:44.124788   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:44.138899   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:44.138924   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:44.208682   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:46.709822   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:46.722782   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:46.722905   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:46.767512   62386 cri.go:89] found id: ""
	I0912 23:05:46.767537   62386 logs.go:276] 0 containers: []
	W0912 23:05:46.767545   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:46.767551   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:46.767603   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:46.812486   62386 cri.go:89] found id: ""
	I0912 23:05:46.812523   62386 logs.go:276] 0 containers: []
	W0912 23:05:46.812533   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:46.812541   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:46.812602   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:46.855093   62386 cri.go:89] found id: ""
	I0912 23:05:46.855125   62386 logs.go:276] 0 containers: []
	W0912 23:05:46.855134   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:46.855141   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:46.855214   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:46.899067   62386 cri.go:89] found id: ""
	I0912 23:05:46.899101   62386 logs.go:276] 0 containers: []
	W0912 23:05:46.899113   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:46.899121   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:46.899184   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:46.939775   62386 cri.go:89] found id: ""
	I0912 23:05:46.939802   62386 logs.go:276] 0 containers: []
	W0912 23:05:46.939810   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:46.939816   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:46.939863   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:46.975288   62386 cri.go:89] found id: ""
	I0912 23:05:46.975319   62386 logs.go:276] 0 containers: []
	W0912 23:05:46.975329   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:46.975343   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:46.975426   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:47.012985   62386 cri.go:89] found id: ""
	I0912 23:05:47.013018   62386 logs.go:276] 0 containers: []
	W0912 23:05:47.013030   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:47.013038   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:47.013104   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:47.052124   62386 cri.go:89] found id: ""
	I0912 23:05:47.052154   62386 logs.go:276] 0 containers: []
	W0912 23:05:47.052164   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:47.052175   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:47.052189   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:47.108769   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:47.108811   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:47.124503   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:47.124530   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:47.195340   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:47.195362   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:47.195380   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:47.297155   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:47.297204   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:49.841253   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:49.854221   62386 kubeadm.go:597] duration metric: took 4m1.913192999s to restartPrimaryControlPlane
	W0912 23:05:49.854297   62386 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0912 23:05:49.854335   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0912 23:05:51.221029   62386 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.366663525s)
	I0912 23:05:51.221129   62386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 23:05:51.238493   62386 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0912 23:05:51.250943   62386 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0912 23:05:51.264325   62386 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0912 23:05:51.264348   62386 kubeadm.go:157] found existing configuration files:
	
	I0912 23:05:51.264393   62386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0912 23:05:51.273514   62386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0912 23:05:51.273570   62386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0912 23:05:51.282967   62386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0912 23:05:51.291978   62386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0912 23:05:51.292037   62386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0912 23:05:51.301520   62386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0912 23:05:51.310439   62386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0912 23:05:51.310530   62386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0912 23:05:51.319803   62386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0912 23:05:51.328881   62386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0912 23:05:51.328956   62386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
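
The grep/rm sequence above is the stale-config cleanup minikube runs before retrying kubeadm init: each kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, and here every grep exits with status 2 simply because the files were removed by the preceding kubeadm reset. A compact sketch of the same check, using the endpoint quoted in the log:

    endpoint='https://control-plane.minikube.internal:8443'
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
        # drop any config that is missing or does not reference the expected endpoint
        sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done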
	I0912 23:05:51.337946   62386 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0912 23:05:51.565945   62386 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0912 23:07:47.603025   62386 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0912 23:07:47.603235   62386 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0912 23:07:47.604779   62386 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0912 23:07:47.604883   62386 kubeadm.go:310] [preflight] Running pre-flight checks
	I0912 23:07:47.605084   62386 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0912 23:07:47.605337   62386 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0912 23:07:47.605566   62386 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0912 23:07:47.605831   62386 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0912 23:07:47.607788   62386 out.go:235]   - Generating certificates and keys ...
	I0912 23:07:47.607900   62386 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0912 23:07:47.608013   62386 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0912 23:07:47.608164   62386 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0912 23:07:47.608343   62386 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0912 23:07:47.608510   62386 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0912 23:07:47.608593   62386 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0912 23:07:47.608669   62386 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0912 23:07:47.608742   62386 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0912 23:07:47.608833   62386 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0912 23:07:47.608899   62386 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0912 23:07:47.608932   62386 kubeadm.go:310] [certs] Using the existing "sa" key
	I0912 23:07:47.608991   62386 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0912 23:07:47.609042   62386 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0912 23:07:47.609118   62386 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0912 23:07:47.609216   62386 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0912 23:07:47.609310   62386 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0912 23:07:47.609448   62386 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0912 23:07:47.609540   62386 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0912 23:07:47.609604   62386 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0912 23:07:47.609731   62386 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0912 23:07:47.611516   62386 out.go:235]   - Booting up control plane ...
	I0912 23:07:47.611622   62386 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0912 23:07:47.611724   62386 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0912 23:07:47.611811   62386 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0912 23:07:47.611912   62386 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0912 23:07:47.612092   62386 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0912 23:07:47.612156   62386 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0912 23:07:47.612234   62386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0912 23:07:47.612485   62386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0912 23:07:47.612557   62386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0912 23:07:47.612746   62386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0912 23:07:47.612836   62386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0912 23:07:47.613060   62386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0912 23:07:47.613145   62386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0912 23:07:47.613347   62386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0912 23:07:47.613406   62386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0912 23:07:47.613573   62386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0912 23:07:47.613583   62386 kubeadm.go:310] 
	I0912 23:07:47.613646   62386 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0912 23:07:47.613700   62386 kubeadm.go:310] 		timed out waiting for the condition
	I0912 23:07:47.613712   62386 kubeadm.go:310] 
	I0912 23:07:47.613756   62386 kubeadm.go:310] 	This error is likely caused by:
	I0912 23:07:47.613804   62386 kubeadm.go:310] 		- The kubelet is not running
	I0912 23:07:47.613912   62386 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0912 23:07:47.613924   62386 kubeadm.go:310] 
	I0912 23:07:47.614027   62386 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0912 23:07:47.614062   62386 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0912 23:07:47.614110   62386 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0912 23:07:47.614123   62386 kubeadm.go:310] 
	I0912 23:07:47.614256   62386 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0912 23:07:47.614381   62386 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0912 23:07:47.614393   62386 kubeadm.go:310] 
	I0912 23:07:47.614480   62386 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0912 23:07:47.614626   62386 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0912 23:07:47.614724   62386 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0912 23:07:47.614825   62386 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0912 23:07:47.614854   62386 kubeadm.go:310] 
	W0912 23:07:47.614957   62386 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
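
Both the prefixed log lines and the repeated stdout block above report the same failure: kubeadm's wait-control-plane phase timed out because the kubelet never answered its health probe on port 10248. The troubleshooting commands kubeadm suggests can be run directly on the node; a minimal sketch, using only the endpoints and socket path quoted in the log:

    # is the kubelet running, and if not, why?
    sudo systemctl status kubelet --no-pager
    sudo journalctl -xeu kubelet --no-pager | tail -n 100

    # the health endpoint kubeadm polls while waiting for the control plane
    curl -sSL http://localhost:10248/healthz

    # control-plane containers that CRI-O may have started and then lost
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause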
	
	I0912 23:07:47.615000   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0912 23:07:48.085695   62386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 23:07:48.100416   62386 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0912 23:07:48.109607   62386 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0912 23:07:48.109635   62386 kubeadm.go:157] found existing configuration files:
	
	I0912 23:07:48.109686   62386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0912 23:07:48.118174   62386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0912 23:07:48.118235   62386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0912 23:07:48.127100   62386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0912 23:07:48.135945   62386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0912 23:07:48.136006   62386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0912 23:07:48.145057   62386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0912 23:07:48.153832   62386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0912 23:07:48.153899   62386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0912 23:07:48.163261   62386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0912 23:07:48.172155   62386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0912 23:07:48.172208   62386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0912 23:07:48.181592   62386 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0912 23:07:48.253671   62386 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0912 23:07:48.253728   62386 kubeadm.go:310] [preflight] Running pre-flight checks
	I0912 23:07:48.394463   62386 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0912 23:07:48.394622   62386 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0912 23:07:48.394773   62386 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0912 23:07:48.581336   62386 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0912 23:07:48.583286   62386 out.go:235]   - Generating certificates and keys ...
	I0912 23:07:48.583391   62386 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0912 23:07:48.583461   62386 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0912 23:07:48.583576   62386 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0912 23:07:48.583668   62386 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0912 23:07:48.583751   62386 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0912 23:07:48.583830   62386 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0912 23:07:48.583935   62386 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0912 23:07:48.584060   62386 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0912 23:07:48.584176   62386 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0912 23:07:48.584291   62386 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0912 23:07:48.584349   62386 kubeadm.go:310] [certs] Using the existing "sa" key
	I0912 23:07:48.584433   62386 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0912 23:07:48.823726   62386 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0912 23:07:49.148359   62386 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0912 23:07:49.679842   62386 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0912 23:07:50.116403   62386 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0912 23:07:50.137409   62386 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0912 23:07:50.137512   62386 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0912 23:07:50.137586   62386 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0912 23:07:50.279387   62386 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0912 23:07:50.281202   62386 out.go:235]   - Booting up control plane ...
	I0912 23:07:50.281311   62386 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0912 23:07:50.284914   62386 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0912 23:07:50.285938   62386 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0912 23:07:50.286646   62386 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0912 23:07:50.288744   62386 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0912 23:08:30.291301   62386 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0912 23:08:30.291387   62386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0912 23:08:30.291586   62386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0912 23:08:35.292084   62386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0912 23:08:35.292299   62386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0912 23:08:45.293141   62386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0912 23:08:45.293363   62386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0912 23:09:05.293977   62386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0912 23:09:05.294218   62386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0912 23:09:45.292498   62386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0912 23:09:45.292713   62386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0912 23:09:45.292752   62386 kubeadm.go:310] 
	I0912 23:09:45.292839   62386 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0912 23:09:45.292884   62386 kubeadm.go:310] 		timed out waiting for the condition
	I0912 23:09:45.292892   62386 kubeadm.go:310] 
	I0912 23:09:45.292944   62386 kubeadm.go:310] 	This error is likely caused by:
	I0912 23:09:45.292998   62386 kubeadm.go:310] 		- The kubelet is not running
	I0912 23:09:45.293153   62386 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0912 23:09:45.293165   62386 kubeadm.go:310] 
	I0912 23:09:45.293277   62386 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0912 23:09:45.293333   62386 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0912 23:09:45.293361   62386 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0912 23:09:45.293378   62386 kubeadm.go:310] 
	I0912 23:09:45.293528   62386 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0912 23:09:45.293668   62386 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0912 23:09:45.293679   62386 kubeadm.go:310] 
	I0912 23:09:45.293840   62386 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0912 23:09:45.293962   62386 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0912 23:09:45.294033   62386 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0912 23:09:45.294142   62386 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0912 23:09:45.294155   62386 kubeadm.go:310] 
	I0912 23:09:45.294801   62386 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0912 23:09:45.294914   62386 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0912 23:09:45.295004   62386 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0912 23:09:45.295097   62386 kubeadm.go:394] duration metric: took 7m57.408601522s to StartCluster
	I0912 23:09:45.295168   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:09:45.295233   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:09:45.336726   62386 cri.go:89] found id: ""
	I0912 23:09:45.336767   62386 logs.go:276] 0 containers: []
	W0912 23:09:45.336777   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:09:45.336785   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:09:45.336847   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:09:45.374528   62386 cri.go:89] found id: ""
	I0912 23:09:45.374555   62386 logs.go:276] 0 containers: []
	W0912 23:09:45.374576   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:09:45.374584   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:09:45.374649   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:09:45.409321   62386 cri.go:89] found id: ""
	I0912 23:09:45.409462   62386 logs.go:276] 0 containers: []
	W0912 23:09:45.409497   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:09:45.409508   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:09:45.409582   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:09:45.442204   62386 cri.go:89] found id: ""
	I0912 23:09:45.442228   62386 logs.go:276] 0 containers: []
	W0912 23:09:45.442238   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:09:45.442279   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:09:45.442339   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:09:45.478874   62386 cri.go:89] found id: ""
	I0912 23:09:45.478897   62386 logs.go:276] 0 containers: []
	W0912 23:09:45.478904   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:09:45.478909   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:09:45.478961   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:09:45.520162   62386 cri.go:89] found id: ""
	I0912 23:09:45.520191   62386 logs.go:276] 0 containers: []
	W0912 23:09:45.520199   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:09:45.520205   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:09:45.520251   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:09:45.551580   62386 cri.go:89] found id: ""
	I0912 23:09:45.551611   62386 logs.go:276] 0 containers: []
	W0912 23:09:45.551622   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:09:45.551629   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:09:45.551693   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:09:45.585468   62386 cri.go:89] found id: ""
	I0912 23:09:45.585498   62386 logs.go:276] 0 containers: []
	W0912 23:09:45.585505   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:09:45.585514   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:09:45.585525   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:09:45.640731   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:09:45.640782   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:09:45.656797   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:09:45.656833   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:09:45.735064   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:09:45.735083   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:09:45.735100   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:09:45.848695   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:09:45.848739   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0912 23:09:45.907495   62386 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0912 23:09:45.907561   62386 out.go:270] * 
	W0912 23:09:45.907628   62386 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0912 23:09:45.907646   62386 out.go:270] * 
	W0912 23:09:45.908494   62386 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0912 23:09:45.911502   62386 out.go:201] 
	W0912 23:09:45.912387   62386 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0912 23:09:45.912424   62386 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0912 23:09:45.912442   62386 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0912 23:09:45.913632   62386 out.go:201] 

                                                
                                                
** /stderr **
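The kubeadm output captured in the stderr block above repeatedly fails its kubelet health probe on http://localhost:10248/healthz and recommends 'systemctl status kubelet', 'journalctl -xeu kubelet', and crictl as the next diagnostic steps. A minimal bash sketch of those checks follows; the profile name old-k8s-version-642238 and the CRI-O socket path /var/run/crio/crio.sock are taken from the log itself, and reaching the node via 'minikube ssh' is an assumption about how one would run them.

	# Sketch of the diagnostics suggested by the kubeadm error above.
	minikube ssh -p old-k8s-version-642238 "sudo systemctl status kubelet"
	minikube ssh -p old-k8s-version-642238 "sudo journalctl -xeu kubelet | tail -n 100"
	# The health endpoint that kubeadm's wait-control-plane phase polls.
	minikube ssh -p old-k8s-version-642238 "curl -sSL http://localhost:10248/healthz"
	# Control-plane containers via CRI-O, as recommended in the error text.
	minikube ssh -p old-k8s-version-642238 "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"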
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-642238 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
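The log's closing suggestion (the out.go:270 line above) is to pin the kubelet cgroup driver to systemd and retry, citing minikube issue 4172 as related. A hedged way to apply that to this profile, reusing the flags from the failed invocation above, would be:

	# Sketch only: the failed start command plus the cgroup-driver override
	# suggested in the log; not verified to fix the v1.20.0 bring-up seen here.
	out/minikube-linux-amd64 start -p old-k8s-version-642238 --memory=2200 \
	  --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 \
	  --extra-config=kubelet.cgroup-driver=systemd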
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-642238 -n old-k8s-version-642238
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-642238 -n old-k8s-version-642238: exit status 2 (222.524257ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-642238 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-642238 logs -n 25: (1.598364031s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p embed-certs-378112            | embed-certs-378112           | jenkins | v1.34.0 | 12 Sep 24 22:54 UTC | 12 Sep 24 22:54 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-378112                                  | embed-certs-378112           | jenkins | v1.34.0 | 12 Sep 24 22:54 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-837491             | newest-cni-837491            | jenkins | v1.34.0 | 12 Sep 24 22:55 UTC | 12 Sep 24 22:55 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-837491                                   | newest-cni-837491            | jenkins | v1.34.0 | 12 Sep 24 22:55 UTC | 12 Sep 24 22:55 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-837491                  | newest-cni-837491            | jenkins | v1.34.0 | 12 Sep 24 22:55 UTC | 12 Sep 24 22:55 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-837491 --memory=2200 --alsologtostderr   | newest-cni-837491            | jenkins | v1.34.0 | 12 Sep 24 22:55 UTC | 12 Sep 24 22:55 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| image   | newest-cni-837491 image list                           | newest-cni-837491            | jenkins | v1.34.0 | 12 Sep 24 22:55 UTC | 12 Sep 24 22:55 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-837491                                   | newest-cni-837491            | jenkins | v1.34.0 | 12 Sep 24 22:55 UTC | 12 Sep 24 22:55 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-837491                                   | newest-cni-837491            | jenkins | v1.34.0 | 12 Sep 24 22:55 UTC | 12 Sep 24 22:55 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-837491                                   | newest-cni-837491            | jenkins | v1.34.0 | 12 Sep 24 22:55 UTC | 12 Sep 24 22:55 UTC |
	| delete  | -p newest-cni-837491                                   | newest-cni-837491            | jenkins | v1.34.0 | 12 Sep 24 22:55 UTC | 12 Sep 24 22:55 UTC |
	| delete  | -p                                                     | disable-driver-mounts-457722 | jenkins | v1.34.0 | 12 Sep 24 22:55 UTC | 12 Sep 24 22:55 UTC |
	|         | disable-driver-mounts-457722                           |                              |         |         |                     |                     |
	| start   | -p no-preload-380092                                   | no-preload-380092            | jenkins | v1.34.0 | 12 Sep 24 22:55 UTC | 12 Sep 24 22:57 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-702201       | default-k8s-diff-port-702201 | jenkins | v1.34.0 | 12 Sep 24 22:56 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-702201 | jenkins | v1.34.0 | 12 Sep 24 22:56 UTC | 12 Sep 24 23:07 UTC |
	|         | default-k8s-diff-port-702201                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-642238        | old-k8s-version-642238       | jenkins | v1.34.0 | 12 Sep 24 22:56 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-378112                 | embed-certs-378112           | jenkins | v1.34.0 | 12 Sep 24 22:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-378112                                  | embed-certs-378112           | jenkins | v1.34.0 | 12 Sep 24 22:57 UTC | 12 Sep 24 23:05 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-380092             | no-preload-380092            | jenkins | v1.34.0 | 12 Sep 24 22:57 UTC | 12 Sep 24 22:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-380092                                   | no-preload-380092            | jenkins | v1.34.0 | 12 Sep 24 22:57 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-642238                              | old-k8s-version-642238       | jenkins | v1.34.0 | 12 Sep 24 22:58 UTC | 12 Sep 24 22:58 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-642238             | old-k8s-version-642238       | jenkins | v1.34.0 | 12 Sep 24 22:58 UTC | 12 Sep 24 22:58 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-642238                              | old-k8s-version-642238       | jenkins | v1.34.0 | 12 Sep 24 22:58 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-380092                  | no-preload-380092            | jenkins | v1.34.0 | 12 Sep 24 23:00 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-380092                                   | no-preload-380092            | jenkins | v1.34.0 | 12 Sep 24 23:00 UTC | 12 Sep 24 23:07 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/12 23:00:21
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0912 23:00:21.889769   62943 out.go:345] Setting OutFile to fd 1 ...
	I0912 23:00:21.889990   62943 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 23:00:21.889999   62943 out.go:358] Setting ErrFile to fd 2...
	I0912 23:00:21.890003   62943 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 23:00:21.890181   62943 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-5891/.minikube/bin
	I0912 23:00:21.890675   62943 out.go:352] Setting JSON to false
	I0912 23:00:21.891538   62943 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6164,"bootTime":1726175858,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0912 23:00:21.891596   62943 start.go:139] virtualization: kvm guest
	I0912 23:00:21.894002   62943 out.go:177] * [no-preload-380092] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0912 23:00:21.895257   62943 notify.go:220] Checking for updates...
	I0912 23:00:21.895266   62943 out.go:177]   - MINIKUBE_LOCATION=19616
	I0912 23:00:21.896598   62943 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 23:00:21.898297   62943 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19616-5891/kubeconfig
	I0912 23:00:21.899605   62943 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19616-5891/.minikube
	I0912 23:00:21.900705   62943 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0912 23:00:21.901754   62943 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 23:00:21.903264   62943 config.go:182] Loaded profile config "no-preload-380092": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 23:00:21.903642   62943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:00:21.903699   62943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:00:21.918497   62943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39967
	I0912 23:00:21.918953   62943 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:00:21.919516   62943 main.go:141] libmachine: Using API Version  1
	I0912 23:00:21.919536   62943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:00:21.919831   62943 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:00:21.920002   62943 main.go:141] libmachine: (no-preload-380092) Calling .DriverName
	I0912 23:00:21.920213   62943 driver.go:394] Setting default libvirt URI to qemu:///system
	I0912 23:00:21.920527   62943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:00:21.920570   62943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:00:21.935755   62943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39641
	I0912 23:00:21.936135   62943 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:00:21.936625   62943 main.go:141] libmachine: Using API Version  1
	I0912 23:00:21.936643   62943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:00:21.936958   62943 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:00:21.937168   62943 main.go:141] libmachine: (no-preload-380092) Calling .DriverName
	I0912 23:00:21.971089   62943 out.go:177] * Using the kvm2 driver based on existing profile
	I0912 23:00:21.972555   62943 start.go:297] selected driver: kvm2
	I0912 23:00:21.972578   62943 start.go:901] validating driver "kvm2" against &{Name:no-preload-380092 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:no-preload-380092 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.253 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:262
80h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 23:00:21.972702   62943 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 23:00:21.973408   62943 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 23:00:21.973490   62943 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19616-5891/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0912 23:00:21.988802   62943 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0912 23:00:21.989203   62943 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 23:00:21.989290   62943 cni.go:84] Creating CNI manager for ""
	I0912 23:00:21.989305   62943 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0912 23:00:21.989357   62943 start.go:340] cluster config:
	{Name:no-preload-380092 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-380092 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.253 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p
2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 23:00:21.989504   62943 iso.go:125] acquiring lock: {Name:mk3ec3c4afd4210b7425f6425f55e7f581d9a5a9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 23:00:21.991829   62943 out.go:177] * Starting "no-preload-380092" primary control-plane node in "no-preload-380092" cluster
	I0912 23:00:20.185851   61354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.214:22: connect: no route to host
	I0912 23:00:21.993075   62943 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0912 23:00:21.993194   62943 profile.go:143] Saving config to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/no-preload-380092/config.json ...
	I0912 23:00:21.993282   62943 cache.go:107] acquiring lock: {Name:mk132f7515993883658c6f8f8c277c05a18c2bcb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 23:00:21.993282   62943 cache.go:107] acquiring lock: {Name:mkbf0dc68d9098b66db2e6425e6a1c64daedf32d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 23:00:21.993308   62943 cache.go:107] acquiring lock: {Name:mkb2372a7853b8fee762991ee2019645e77be1f6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 23:00:21.993360   62943 cache.go:115] /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 exists
	I0912 23:00:21.993376   62943 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.1" -> "/home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1" took 102.242µs
	I0912 23:00:21.993387   62943 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.1 -> /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 succeeded
	I0912 23:00:21.993346   62943 cache.go:107] acquiring lock: {Name:mkd3ef79aab2589c236ea8b2933d7ed6f90a65ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 23:00:21.993393   62943 cache.go:115] /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 exists
	I0912 23:00:21.993376   62943 cache.go:107] acquiring lock: {Name:mk1d88a2deb95bcad015d500fc00ce4b81f27038 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 23:00:21.993405   62943 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.1" -> "/home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1" took 112.903µs
	I0912 23:00:21.993415   62943 cache.go:115] /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 exists
	I0912 23:00:21.993421   62943 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.1 -> /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 succeeded
	I0912 23:00:21.993424   62943 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.1" -> "/home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1" took 90.812µs
	I0912 23:00:21.993432   62943 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.1 -> /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 succeeded
	I0912 23:00:21.993403   62943 cache.go:107] acquiring lock: {Name:mk9c879437d533fd75b73d75524fea14942316d9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 23:00:21.993435   62943 start.go:360] acquireMachinesLock for no-preload-380092: {Name:mkbb0a9e58b1349e86a63b6069c42d4248d92c3b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 23:00:21.993452   62943 cache.go:115] /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 exists
	I0912 23:00:21.993472   62943 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10" took 97.778µs
	I0912 23:00:21.993486   62943 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 succeeded
	I0912 23:00:21.993474   62943 cache.go:107] acquiring lock: {Name:mkd1cb269a32e304848dd20e7b275430f4a6b15a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 23:00:21.993496   62943 cache.go:115] /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0912 23:00:21.993526   62943 cache.go:115] /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 exists
	I0912 23:00:21.993545   62943 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0" took 179.269µs
	I0912 23:00:21.993568   62943 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0912 23:00:21.993520   62943 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 236.598µs
	I0912 23:00:21.993587   62943 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0912 23:00:21.993522   62943 cache.go:107] acquiring lock: {Name:mka5c76f3028cb928e97cce42a012066ced2727d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 23:00:21.993569   62943 cache.go:115] /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 exists
	I0912 23:00:21.993642   62943 cache.go:115] /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0912 23:00:21.993651   62943 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3" took 162.198µs
	I0912 23:00:21.993648   62943 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.1" -> "/home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1" took 220.493µs
	I0912 23:00:21.993662   62943 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0912 23:00:21.993668   62943 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.1 -> /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 succeeded
	I0912 23:00:21.993687   62943 cache.go:87] Successfully saved all images to host disk.
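
The cache.go lines above are minikube short-circuiting image pulls: for each image it checks whether a tarball already exists under .minikube/cache/images/amd64 and only downloads when it does not. A rough, self-contained sketch of that existence check, reusing the path layout visible in the log (the helper name cachedImagePath is invented for illustration):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// cachedImagePath maps an image ref like "registry.k8s.io/kube-scheduler:v1.31.1"
// to the on-disk tarball location used in the log above ("repo:tag" becomes "repo_tag").
func cachedImagePath(cacheDir, image string) string {
	name := strings.ReplaceAll(image, ":", "_")
	return filepath.Join(cacheDir, "images", "amd64", name)
}

func main() {
	cacheDir := os.ExpandEnv("$HOME/.minikube/cache")
	for _, img := range []string{
		"registry.k8s.io/kube-scheduler:v1.31.1",
		"registry.k8s.io/etcd:3.5.15-0",
	} {
		p := cachedImagePath(cacheDir, img)
		if _, err := os.Stat(p); err == nil {
			fmt.Printf("cache hit:  %s\n", p) // corresponds to "cache.go:115 ... exists"
		} else {
			fmt.Printf("cache miss: %s\n", p) // minikube would download and save the tarball here
		}
	}
}
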
	I0912 23:00:26.265938   61354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.214:22: connect: no route to host
	I0912 23:00:29.337872   61354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.214:22: connect: no route to host
	I0912 23:00:35.417928   61354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.214:22: connect: no route to host
	I0912 23:00:38.489932   61354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.214:22: connect: no route to host
	I0912 23:00:44.569877   61354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.214:22: connect: no route to host
	I0912 23:00:47.641914   61354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.214:22: connect: no route to host
	I0912 23:00:53.721910   61354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.214:22: connect: no route to host
	I0912 23:00:56.793972   61354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.214:22: connect: no route to host
	I0912 23:00:59.798765   61904 start.go:364] duration metric: took 3m43.915954079s to acquireMachinesLock for "embed-certs-378112"
	I0912 23:00:59.798812   61904 start.go:96] Skipping create...Using existing machine configuration
	I0912 23:00:59.798822   61904 fix.go:54] fixHost starting: 
	I0912 23:00:59.799124   61904 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:00:59.799159   61904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:00:59.814494   61904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41585
	I0912 23:00:59.815035   61904 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:00:59.815500   61904 main.go:141] libmachine: Using API Version  1
	I0912 23:00:59.815519   61904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:00:59.815820   61904 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:00:59.815997   61904 main.go:141] libmachine: (embed-certs-378112) Calling .DriverName
	I0912 23:00:59.816114   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetState
	I0912 23:00:59.817884   61904 fix.go:112] recreateIfNeeded on embed-certs-378112: state=Stopped err=<nil>
	I0912 23:00:59.817912   61904 main.go:141] libmachine: (embed-certs-378112) Calling .DriverName
	W0912 23:00:59.818088   61904 fix.go:138] unexpected machine state, will restart: <nil>
	I0912 23:00:59.820071   61904 out.go:177] * Restarting existing kvm2 VM for "embed-certs-378112" ...
	I0912 23:00:59.821271   61904 main.go:141] libmachine: (embed-certs-378112) Calling .Start
	I0912 23:00:59.821455   61904 main.go:141] libmachine: (embed-certs-378112) Ensuring networks are active...
	I0912 23:00:59.822528   61904 main.go:141] libmachine: (embed-certs-378112) Ensuring network default is active
	I0912 23:00:59.822941   61904 main.go:141] libmachine: (embed-certs-378112) Ensuring network mk-embed-certs-378112 is active
	I0912 23:00:59.823348   61904 main.go:141] libmachine: (embed-certs-378112) Getting domain xml...
	I0912 23:00:59.824031   61904 main.go:141] libmachine: (embed-certs-378112) Creating domain...
	I0912 23:00:59.796296   61354 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0912 23:00:59.796341   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetMachineName
	I0912 23:00:59.796635   61354 buildroot.go:166] provisioning hostname "default-k8s-diff-port-702201"
	I0912 23:00:59.796660   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetMachineName
	I0912 23:00:59.796845   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHHostname
	I0912 23:00:59.798593   61354 machine.go:96] duration metric: took 4m34.624878077s to provisionDockerMachine
	I0912 23:00:59.798633   61354 fix.go:56] duration metric: took 4m34.652510972s for fixHost
	I0912 23:00:59.798640   61354 start.go:83] releasing machines lock for "default-k8s-diff-port-702201", held for 4m34.652554084s
	W0912 23:00:59.798663   61354 start.go:714] error starting host: provision: host is not running
	W0912 23:00:59.798748   61354 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0912 23:00:59.798762   61354 start.go:729] Will try again in 5 seconds ...
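
The repeated "Error dialing TCP ... connect: no route to host" lines above are the provisioner failing to reach the guest on port 22 until fixHost gives up and schedules a retry. A minimal sketch of that kind of reachability probe, assuming a fixed poll interval (the interval and overall timeout here are illustrative, not minikube's actual values):

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSSH polls a TCP endpoint until it accepts a connection or the
// deadline expires, mirroring the dial-and-retry loop visible in the log.
func waitForSSH(addr string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		conn, err := net.DialTimeout("tcp", addr, 10*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		fmt.Printf("dial %s failed: %v\n", addr, err) // e.g. "connect: no route to host"
		if time.Now().After(deadline) {
			return fmt.Errorf("host %s not reachable within %s", addr, timeout)
		}
		time.Sleep(interval)
	}
}

func main() {
	if err := waitForSSH("192.168.39.214:22", 5*time.Second, time.Minute); err != nil {
		fmt.Println("StartHost would fail here and be retried:", err)
	}
}
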
	I0912 23:01:01.051149   61904 main.go:141] libmachine: (embed-certs-378112) Waiting to get IP...
	I0912 23:01:01.051945   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:01.052463   61904 main.go:141] libmachine: (embed-certs-378112) DBG | unable to find current IP address of domain embed-certs-378112 in network mk-embed-certs-378112
	I0912 23:01:01.052494   61904 main.go:141] libmachine: (embed-certs-378112) DBG | I0912 23:01:01.052421   63128 retry.go:31] will retry after 247.962572ms: waiting for machine to come up
	I0912 23:01:01.302159   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:01.302677   61904 main.go:141] libmachine: (embed-certs-378112) DBG | unable to find current IP address of domain embed-certs-378112 in network mk-embed-certs-378112
	I0912 23:01:01.302706   61904 main.go:141] libmachine: (embed-certs-378112) DBG | I0912 23:01:01.302624   63128 retry.go:31] will retry after 354.212029ms: waiting for machine to come up
	I0912 23:01:01.658402   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:01.658880   61904 main.go:141] libmachine: (embed-certs-378112) DBG | unable to find current IP address of domain embed-certs-378112 in network mk-embed-certs-378112
	I0912 23:01:01.658923   61904 main.go:141] libmachine: (embed-certs-378112) DBG | I0912 23:01:01.658848   63128 retry.go:31] will retry after 461.984481ms: waiting for machine to come up
	I0912 23:01:02.122592   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:02.122981   61904 main.go:141] libmachine: (embed-certs-378112) DBG | unable to find current IP address of domain embed-certs-378112 in network mk-embed-certs-378112
	I0912 23:01:02.123015   61904 main.go:141] libmachine: (embed-certs-378112) DBG | I0912 23:01:02.122930   63128 retry.go:31] will retry after 404.928951ms: waiting for machine to come up
	I0912 23:01:02.529423   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:02.529906   61904 main.go:141] libmachine: (embed-certs-378112) DBG | unable to find current IP address of domain embed-certs-378112 in network mk-embed-certs-378112
	I0912 23:01:02.529932   61904 main.go:141] libmachine: (embed-certs-378112) DBG | I0912 23:01:02.529856   63128 retry.go:31] will retry after 684.912015ms: waiting for machine to come up
	I0912 23:01:03.216924   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:03.217408   61904 main.go:141] libmachine: (embed-certs-378112) DBG | unable to find current IP address of domain embed-certs-378112 in network mk-embed-certs-378112
	I0912 23:01:03.217433   61904 main.go:141] libmachine: (embed-certs-378112) DBG | I0912 23:01:03.217357   63128 retry.go:31] will retry after 765.507778ms: waiting for machine to come up
	I0912 23:01:03.984272   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:03.984787   61904 main.go:141] libmachine: (embed-certs-378112) DBG | unable to find current IP address of domain embed-certs-378112 in network mk-embed-certs-378112
	I0912 23:01:03.984820   61904 main.go:141] libmachine: (embed-certs-378112) DBG | I0912 23:01:03.984726   63128 retry.go:31] will retry after 1.048709598s: waiting for machine to come up
	I0912 23:01:05.035381   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:05.035885   61904 main.go:141] libmachine: (embed-certs-378112) DBG | unable to find current IP address of domain embed-certs-378112 in network mk-embed-certs-378112
	I0912 23:01:05.035925   61904 main.go:141] libmachine: (embed-certs-378112) DBG | I0912 23:01:05.035809   63128 retry.go:31] will retry after 1.488143245s: waiting for machine to come up
	I0912 23:01:04.800694   61354 start.go:360] acquireMachinesLock for default-k8s-diff-port-702201: {Name:mkbb0a9e58b1349e86a63b6069c42d4248d92c3b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 23:01:06.526483   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:06.526858   61904 main.go:141] libmachine: (embed-certs-378112) DBG | unable to find current IP address of domain embed-certs-378112 in network mk-embed-certs-378112
	I0912 23:01:06.526896   61904 main.go:141] libmachine: (embed-certs-378112) DBG | I0912 23:01:06.526800   63128 retry.go:31] will retry after 1.272485972s: waiting for machine to come up
	I0912 23:01:07.801588   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:07.802071   61904 main.go:141] libmachine: (embed-certs-378112) DBG | unable to find current IP address of domain embed-certs-378112 in network mk-embed-certs-378112
	I0912 23:01:07.802103   61904 main.go:141] libmachine: (embed-certs-378112) DBG | I0912 23:01:07.802022   63128 retry.go:31] will retry after 1.559805672s: waiting for machine to come up
	I0912 23:01:09.363156   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:09.363662   61904 main.go:141] libmachine: (embed-certs-378112) DBG | unable to find current IP address of domain embed-certs-378112 in network mk-embed-certs-378112
	I0912 23:01:09.363683   61904 main.go:141] libmachine: (embed-certs-378112) DBG | I0912 23:01:09.363611   63128 retry.go:31] will retry after 1.893092295s: waiting for machine to come up
	I0912 23:01:11.258694   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:11.259346   61904 main.go:141] libmachine: (embed-certs-378112) DBG | unable to find current IP address of domain embed-certs-378112 in network mk-embed-certs-378112
	I0912 23:01:11.259376   61904 main.go:141] libmachine: (embed-certs-378112) DBG | I0912 23:01:11.259304   63128 retry.go:31] will retry after 3.533141843s: waiting for machine to come up
	I0912 23:01:14.796948   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:14.797444   61904 main.go:141] libmachine: (embed-certs-378112) DBG | unable to find current IP address of domain embed-certs-378112 in network mk-embed-certs-378112
	I0912 23:01:14.797468   61904 main.go:141] libmachine: (embed-certs-378112) DBG | I0912 23:01:14.797389   63128 retry.go:31] will retry after 3.889332888s: waiting for machine to come up
	I0912 23:01:19.958932   62386 start.go:364] duration metric: took 3m0.532494588s to acquireMachinesLock for "old-k8s-version-642238"
	I0912 23:01:19.958994   62386 start.go:96] Skipping create...Using existing machine configuration
	I0912 23:01:19.959005   62386 fix.go:54] fixHost starting: 
	I0912 23:01:19.959383   62386 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:01:19.959418   62386 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:01:19.976721   62386 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46263
	I0912 23:01:19.977134   62386 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:01:19.977648   62386 main.go:141] libmachine: Using API Version  1
	I0912 23:01:19.977673   62386 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:01:19.977988   62386 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:01:19.978166   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .DriverName
	I0912 23:01:19.978325   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetState
	I0912 23:01:19.979909   62386 fix.go:112] recreateIfNeeded on old-k8s-version-642238: state=Stopped err=<nil>
	I0912 23:01:19.979934   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .DriverName
	W0912 23:01:19.980079   62386 fix.go:138] unexpected machine state, will restart: <nil>
	I0912 23:01:19.982289   62386 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-642238" ...
	I0912 23:01:18.690761   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:18.691185   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has current primary IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:18.691206   61904 main.go:141] libmachine: (embed-certs-378112) Found IP for machine: 192.168.72.96
	I0912 23:01:18.691218   61904 main.go:141] libmachine: (embed-certs-378112) Reserving static IP address...
	I0912 23:01:18.691614   61904 main.go:141] libmachine: (embed-certs-378112) Reserved static IP address: 192.168.72.96
	I0912 23:01:18.691642   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "embed-certs-378112", mac: "52:54:00:71:b2:49", ip: "192.168.72.96"} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:18.691654   61904 main.go:141] libmachine: (embed-certs-378112) Waiting for SSH to be available...
	I0912 23:01:18.691678   61904 main.go:141] libmachine: (embed-certs-378112) DBG | skip adding static IP to network mk-embed-certs-378112 - found existing host DHCP lease matching {name: "embed-certs-378112", mac: "52:54:00:71:b2:49", ip: "192.168.72.96"}
	I0912 23:01:18.691690   61904 main.go:141] libmachine: (embed-certs-378112) DBG | Getting to WaitForSSH function...
	I0912 23:01:18.693747   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:18.694054   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:18.694077   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:18.694273   61904 main.go:141] libmachine: (embed-certs-378112) DBG | Using SSH client type: external
	I0912 23:01:18.694300   61904 main.go:141] libmachine: (embed-certs-378112) DBG | Using SSH private key: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/embed-certs-378112/id_rsa (-rw-------)
	I0912 23:01:18.694330   61904 main.go:141] libmachine: (embed-certs-378112) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.96 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19616-5891/.minikube/machines/embed-certs-378112/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0912 23:01:18.694345   61904 main.go:141] libmachine: (embed-certs-378112) DBG | About to run SSH command:
	I0912 23:01:18.694358   61904 main.go:141] libmachine: (embed-certs-378112) DBG | exit 0
	I0912 23:01:18.821647   61904 main.go:141] libmachine: (embed-certs-378112) DBG | SSH cmd err, output: <nil>: 
	I0912 23:01:18.822074   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetConfigRaw
	I0912 23:01:18.822765   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetIP
	I0912 23:01:18.825154   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:18.825481   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:18.825510   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:18.825842   61904 profile.go:143] Saving config to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/embed-certs-378112/config.json ...
	I0912 23:01:18.826026   61904 machine.go:93] provisionDockerMachine start ...
	I0912 23:01:18.826043   61904 main.go:141] libmachine: (embed-certs-378112) Calling .DriverName
	I0912 23:01:18.826248   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHHostname
	I0912 23:01:18.828540   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:18.828878   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:18.828906   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:18.829009   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHPort
	I0912 23:01:18.829224   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHKeyPath
	I0912 23:01:18.829429   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHKeyPath
	I0912 23:01:18.829555   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHUsername
	I0912 23:01:18.829750   61904 main.go:141] libmachine: Using SSH client type: native
	I0912 23:01:18.829926   61904 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.72.96 22 <nil> <nil>}
	I0912 23:01:18.829937   61904 main.go:141] libmachine: About to run SSH command:
	hostname
	I0912 23:01:18.941789   61904 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0912 23:01:18.941824   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetMachineName
	I0912 23:01:18.942076   61904 buildroot.go:166] provisioning hostname "embed-certs-378112"
	I0912 23:01:18.942099   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetMachineName
	I0912 23:01:18.942278   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHHostname
	I0912 23:01:18.944880   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:18.945173   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:18.945221   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:18.945347   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHPort
	I0912 23:01:18.945525   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHKeyPath
	I0912 23:01:18.945733   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHKeyPath
	I0912 23:01:18.945913   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHUsername
	I0912 23:01:18.946125   61904 main.go:141] libmachine: Using SSH client type: native
	I0912 23:01:18.946330   61904 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.72.96 22 <nil> <nil>}
	I0912 23:01:18.946350   61904 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-378112 && echo "embed-certs-378112" | sudo tee /etc/hostname
	I0912 23:01:19.071180   61904 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-378112
	
	I0912 23:01:19.071207   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHHostname
	I0912 23:01:19.074121   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.074553   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:19.074583   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.074803   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHPort
	I0912 23:01:19.075004   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHKeyPath
	I0912 23:01:19.075175   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHKeyPath
	I0912 23:01:19.075319   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHUsername
	I0912 23:01:19.075472   61904 main.go:141] libmachine: Using SSH client type: native
	I0912 23:01:19.075691   61904 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.72.96 22 <nil> <nil>}
	I0912 23:01:19.075710   61904 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-378112' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-378112/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-378112' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0912 23:01:19.198049   61904 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0912 23:01:19.198081   61904 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19616-5891/.minikube CaCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19616-5891/.minikube}
	I0912 23:01:19.198131   61904 buildroot.go:174] setting up certificates
	I0912 23:01:19.198140   61904 provision.go:84] configureAuth start
	I0912 23:01:19.198153   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetMachineName
	I0912 23:01:19.198461   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetIP
	I0912 23:01:19.201194   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.201504   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:19.201532   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.201729   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHHostname
	I0912 23:01:19.204100   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.204538   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:19.204562   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.204706   61904 provision.go:143] copyHostCerts
	I0912 23:01:19.204767   61904 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem, removing ...
	I0912 23:01:19.204782   61904 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem
	I0912 23:01:19.204851   61904 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem (1082 bytes)
	I0912 23:01:19.204951   61904 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem, removing ...
	I0912 23:01:19.204960   61904 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem
	I0912 23:01:19.204985   61904 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem (1123 bytes)
	I0912 23:01:19.205045   61904 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem, removing ...
	I0912 23:01:19.205053   61904 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem
	I0912 23:01:19.205076   61904 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem (1679 bytes)
	I0912 23:01:19.205132   61904 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem org=jenkins.embed-certs-378112 san=[127.0.0.1 192.168.72.96 embed-certs-378112 localhost minikube]
	I0912 23:01:19.311879   61904 provision.go:177] copyRemoteCerts
	I0912 23:01:19.311937   61904 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0912 23:01:19.311962   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHHostname
	I0912 23:01:19.314423   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.314821   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:19.314858   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.315029   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHPort
	I0912 23:01:19.315191   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHKeyPath
	I0912 23:01:19.315357   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHUsername
	I0912 23:01:19.315485   61904 sshutil.go:53] new ssh client: &{IP:192.168.72.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/embed-certs-378112/id_rsa Username:docker}
	I0912 23:01:19.399171   61904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0912 23:01:19.423218   61904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0912 23:01:19.446073   61904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0912 23:01:19.468351   61904 provision.go:87] duration metric: took 270.179029ms to configureAuth
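
provision.go:117 above generates a server certificate whose SANs are 127.0.0.1, 192.168.72.96, embed-certs-378112, localhost and minikube, and the scp lines then copy it to /etc/docker on the guest. A simplified sketch that produces a certificate with those SANs; it self-signs for brevity, whereas minikube signs with the CA under .minikube/certs, and the key size and validity period are arbitrary choices:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// SANs copied from the log line above; everything else is a simplification.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-378112"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"embed-certs-378112", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.96")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
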
	I0912 23:01:19.468380   61904 buildroot.go:189] setting minikube options for container-runtime
	I0912 23:01:19.468543   61904 config.go:182] Loaded profile config "embed-certs-378112": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 23:01:19.468609   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHHostname
	I0912 23:01:19.471457   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.471829   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:19.471857   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.472057   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHPort
	I0912 23:01:19.472257   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHKeyPath
	I0912 23:01:19.472438   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHKeyPath
	I0912 23:01:19.472614   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHUsername
	I0912 23:01:19.472756   61904 main.go:141] libmachine: Using SSH client type: native
	I0912 23:01:19.472915   61904 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.72.96 22 <nil> <nil>}
	I0912 23:01:19.472928   61904 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0912 23:01:19.710250   61904 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0912 23:01:19.710278   61904 machine.go:96] duration metric: took 884.238347ms to provisionDockerMachine
	I0912 23:01:19.710298   61904 start.go:293] postStartSetup for "embed-certs-378112" (driver="kvm2")
	I0912 23:01:19.710310   61904 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0912 23:01:19.710324   61904 main.go:141] libmachine: (embed-certs-378112) Calling .DriverName
	I0912 23:01:19.710640   61904 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0912 23:01:19.710668   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHHostname
	I0912 23:01:19.713442   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.713731   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:19.713759   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.713948   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHPort
	I0912 23:01:19.714180   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHKeyPath
	I0912 23:01:19.714347   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHUsername
	I0912 23:01:19.714491   61904 sshutil.go:53] new ssh client: &{IP:192.168.72.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/embed-certs-378112/id_rsa Username:docker}
	I0912 23:01:19.800949   61904 ssh_runner.go:195] Run: cat /etc/os-release
	I0912 23:01:19.805072   61904 info.go:137] Remote host: Buildroot 2023.02.9
	I0912 23:01:19.805103   61904 filesync.go:126] Scanning /home/jenkins/minikube-integration/19616-5891/.minikube/addons for local assets ...
	I0912 23:01:19.805212   61904 filesync.go:126] Scanning /home/jenkins/minikube-integration/19616-5891/.minikube/files for local assets ...
	I0912 23:01:19.805309   61904 filesync.go:149] local asset: /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem -> 130832.pem in /etc/ssl/certs
	I0912 23:01:19.805449   61904 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0912 23:01:19.815070   61904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem --> /etc/ssl/certs/130832.pem (1708 bytes)
	I0912 23:01:19.839585   61904 start.go:296] duration metric: took 129.271232ms for postStartSetup
	I0912 23:01:19.839634   61904 fix.go:56] duration metric: took 20.040811123s for fixHost
	I0912 23:01:19.839656   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHHostname
	I0912 23:01:19.843048   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.843354   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:19.843385   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.843547   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHPort
	I0912 23:01:19.843755   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHKeyPath
	I0912 23:01:19.843933   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHKeyPath
	I0912 23:01:19.844078   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHUsername
	I0912 23:01:19.844257   61904 main.go:141] libmachine: Using SSH client type: native
	I0912 23:01:19.844432   61904 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.72.96 22 <nil> <nil>}
	I0912 23:01:19.844443   61904 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0912 23:01:19.958747   61904 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726182079.929826480
	
	I0912 23:01:19.958771   61904 fix.go:216] guest clock: 1726182079.929826480
	I0912 23:01:19.958779   61904 fix.go:229] Guest: 2024-09-12 23:01:19.92982648 +0000 UTC Remote: 2024-09-12 23:01:19.839638734 +0000 UTC m=+244.095238395 (delta=90.187746ms)
	I0912 23:01:19.958826   61904 fix.go:200] guest clock delta is within tolerance: 90.187746ms
	I0912 23:01:19.958832   61904 start.go:83] releasing machines lock for "embed-certs-378112", held for 20.160038696s
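
The fix.go lines above run `date +%s.%N` on the guest, compare the result with the host clock, and accept the ~90ms delta as being within tolerance. A simplified sketch of that comparison; the parsing mirrors the seconds.nanoseconds output shown in the log, while the one-minute tolerance is only an assumption for illustration:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// guestClockDelta parses the guest's `date +%s.%N` output and returns how far
// the guest clock is from the supplied host time.
func guestClockDelta(guestOutput string, host time.Time) (time.Duration, error) {
	parts := strings.SplitN(strings.TrimSpace(guestOutput), ".", 2)
	secs, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, fmt.Errorf("parsing guest seconds in %q: %w", guestOutput, err)
	}
	var nanos int64
	if len(parts) == 2 {
		// pad or truncate the fractional part to exactly nanoseconds
		frac := (parts[1] + "000000000")[:9]
		if nanos, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return 0, fmt.Errorf("parsing guest nanoseconds in %q: %w", guestOutput, err)
		}
	}
	return host.Sub(time.Unix(secs, nanos)), nil
}

func main() {
	// Guest value taken from the log line: "SSH cmd err, output: <nil>: 1726182079.929826480".
	delta, err := guestClockDelta("1726182079.929826480", time.Now())
	if err != nil {
		panic(err)
	}
	abs := delta
	if abs < 0 {
		abs = -abs
	}
	const tolerance = time.Minute // hypothetical tolerance; the log only shows ~90ms being accepted
	if abs <= tolerance {
		fmt.Printf("guest clock delta %s is within tolerance %s\n", delta, tolerance)
	} else {
		fmt.Printf("guest clock delta %s exceeds tolerance %s\n", delta, tolerance)
	}
}
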
	I0912 23:01:19.958866   61904 main.go:141] libmachine: (embed-certs-378112) Calling .DriverName
	I0912 23:01:19.959202   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetIP
	I0912 23:01:19.962158   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.962528   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:19.962562   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.962743   61904 main.go:141] libmachine: (embed-certs-378112) Calling .DriverName
	I0912 23:01:19.963246   61904 main.go:141] libmachine: (embed-certs-378112) Calling .DriverName
	I0912 23:01:19.963421   61904 main.go:141] libmachine: (embed-certs-378112) Calling .DriverName
	I0912 23:01:19.963518   61904 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0912 23:01:19.963564   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHHostname
	I0912 23:01:19.963703   61904 ssh_runner.go:195] Run: cat /version.json
	I0912 23:01:19.963766   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHHostname
	I0912 23:01:19.966317   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.966517   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.966692   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:19.966723   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.966921   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHPort
	I0912 23:01:19.966977   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:19.967023   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.967100   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHKeyPath
	I0912 23:01:19.967191   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHPort
	I0912 23:01:19.967268   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHUsername
	I0912 23:01:19.967332   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHKeyPath
	I0912 23:01:19.967395   61904 sshutil.go:53] new ssh client: &{IP:192.168.72.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/embed-certs-378112/id_rsa Username:docker}
	I0912 23:01:19.967439   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHUsername
	I0912 23:01:19.967594   61904 sshutil.go:53] new ssh client: &{IP:192.168.72.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/embed-certs-378112/id_rsa Username:docker}
	I0912 23:01:20.054413   61904 ssh_runner.go:195] Run: systemctl --version
	I0912 23:01:20.087300   61904 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0912 23:01:20.235085   61904 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0912 23:01:20.240843   61904 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0912 23:01:20.240922   61904 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0912 23:01:20.256317   61904 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0912 23:01:20.256341   61904 start.go:495] detecting cgroup driver to use...
	I0912 23:01:20.256411   61904 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0912 23:01:20.271684   61904 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0912 23:01:20.285491   61904 docker.go:217] disabling cri-docker service (if available) ...
	I0912 23:01:20.285562   61904 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0912 23:01:20.298889   61904 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0912 23:01:20.314455   61904 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0912 23:01:20.438483   61904 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0912 23:01:20.594684   61904 docker.go:233] disabling docker service ...
	I0912 23:01:20.594761   61904 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0912 23:01:20.609090   61904 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0912 23:01:20.624440   61904 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0912 23:01:20.747699   61904 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0912 23:01:20.899726   61904 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0912 23:01:20.914107   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0912 23:01:20.933523   61904 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0912 23:01:20.933599   61904 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:20.946067   61904 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0912 23:01:20.946129   61904 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:20.957575   61904 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:20.968759   61904 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:20.980280   61904 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0912 23:01:20.991281   61904 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:21.002926   61904 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:21.021743   61904 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:21.032256   61904 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0912 23:01:21.041783   61904 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0912 23:01:21.041853   61904 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0912 23:01:21.054605   61904 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0912 23:01:21.064411   61904 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 23:01:21.198195   61904 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0912 23:01:21.289923   61904 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0912 23:01:21.290018   61904 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0912 23:01:21.294505   61904 start.go:563] Will wait 60s for crictl version
	I0912 23:01:21.294572   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:01:21.297928   61904 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0912 23:01:21.335650   61904 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0912 23:01:21.335734   61904 ssh_runner.go:195] Run: crio --version
	I0912 23:01:21.364876   61904 ssh_runner.go:195] Run: crio --version
	I0912 23:01:21.395463   61904 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
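
After restarting CRI-O, the log shows minikube waiting up to 60s for /var/run/crio/crio.sock to appear before probing `crictl version`. A minimal sketch of such a wait loop, assuming a simple stat-based poll (the 500ms poll interval is an arbitrary choice):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls for a unix socket path until it exists or the timeout
// elapses, similar to the "Will wait 60s for socket path" step in the log.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("socket %s did not appear within %s", path, timeout)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		os.Exit(1)
	}
	fmt.Println("crio.sock is present; crictl version can now be queried")
}
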
	I0912 23:01:19.983746   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .Start
	I0912 23:01:19.983971   62386 main.go:141] libmachine: (old-k8s-version-642238) Ensuring networks are active...
	I0912 23:01:19.984890   62386 main.go:141] libmachine: (old-k8s-version-642238) Ensuring network default is active
	I0912 23:01:19.985345   62386 main.go:141] libmachine: (old-k8s-version-642238) Ensuring network mk-old-k8s-version-642238 is active
	I0912 23:01:19.985788   62386 main.go:141] libmachine: (old-k8s-version-642238) Getting domain xml...
	I0912 23:01:19.986827   62386 main.go:141] libmachine: (old-k8s-version-642238) Creating domain...
	I0912 23:01:21.258792   62386 main.go:141] libmachine: (old-k8s-version-642238) Waiting to get IP...
	I0912 23:01:21.259838   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:21.260300   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 23:01:21.260434   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 23:01:21.260300   63267 retry.go:31] will retry after 272.429869ms: waiting for machine to come up
	I0912 23:01:21.534713   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:21.535102   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 23:01:21.535131   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 23:01:21.535060   63267 retry.go:31] will retry after 352.031053ms: waiting for machine to come up
	I0912 23:01:21.888724   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:21.889235   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 23:01:21.889260   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 23:01:21.889212   63267 retry.go:31] will retry after 405.51409ms: waiting for machine to come up
	I0912 23:01:22.296746   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:22.297242   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 23:01:22.297286   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 23:01:22.297190   63267 retry.go:31] will retry after 607.76308ms: waiting for machine to come up
	I0912 23:01:22.907030   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:22.907784   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 23:01:22.907824   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 23:01:22.907659   63267 retry.go:31] will retry after 692.773261ms: waiting for machine to come up
	I0912 23:01:23.602242   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:23.602679   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 23:01:23.602701   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 23:01:23.602642   63267 retry.go:31] will retry after 591.018151ms: waiting for machine to come up
	I0912 23:01:24.195571   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:24.196100   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 23:01:24.196130   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 23:01:24.196046   63267 retry.go:31] will retry after 1.185264475s: waiting for machine to come up
	I0912 23:01:21.396852   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetIP
	I0912 23:01:21.400018   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:21.400456   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:21.400488   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:21.400730   61904 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0912 23:01:21.404606   61904 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
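
The bash one-liner above refreshes the guest's /etc/hosts so that host.minikube.internal resolves to the gateway IP: it filters out any existing entry for that hostname and appends the new mapping. Below is a minimal Go sketch of the same idea, offered only as an illustration; the file name in main is a hypothetical local stand-in, since minikube itself runs the bash form over SSH.

// hosts_update.go: drop any existing "host.minikube.internal" line, then
// append the new mapping, mirroring the grep -v / echo pipeline in the log.
package main

import (
	"fmt"
	"os"
	"strings"
)

func refreshHostEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any existing line that maps some address to the managed hostname.
		if strings.HasSuffix(line, "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// "hosts.test" is a hypothetical local file; the real target is the guest's /etc/hosts.
	if err := refreshHostEntry("hosts.test", "192.168.72.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
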
	I0912 23:01:21.416408   61904 kubeadm.go:883] updating cluster {Name:embed-certs-378112 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.1 ClusterName:embed-certs-378112 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.96 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0912 23:01:21.416529   61904 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0912 23:01:21.416571   61904 ssh_runner.go:195] Run: sudo crictl images --output json
	I0912 23:01:21.449799   61904 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0912 23:01:21.449860   61904 ssh_runner.go:195] Run: which lz4
	I0912 23:01:21.453658   61904 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0912 23:01:21.457641   61904 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0912 23:01:21.457676   61904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0912 23:01:22.735022   61904 crio.go:462] duration metric: took 1.281408113s to copy over tarball
	I0912 23:01:22.735128   61904 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0912 23:01:24.783893   61904 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.048732092s)
	I0912 23:01:24.783935   61904 crio.go:469] duration metric: took 2.048876223s to extract the tarball
	I0912 23:01:24.783945   61904 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0912 23:01:24.820170   61904 ssh_runner.go:195] Run: sudo crictl images --output json
	I0912 23:01:24.866833   61904 crio.go:514] all images are preloaded for cri-o runtime.
	I0912 23:01:24.866861   61904 cache_images.go:84] Images are preloaded, skipping loading
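
The "all images are preloaded" decision above comes from asking CRI-O for its image list and checking for the expected control-plane tags; when the first check (before the tarball was extracted) found nothing, minikube copied and unpacked the preload archive, then re-ran the check. A rough stand-alone sketch of that check follows; the JSON field names ("images", "repoTags") are an assumption based on the usual crictl output shape.

// preloadcheck.go: run `crictl images --output json` and report whether a
// given image reference is present, as in the preload check above.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func hasImage(want string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		return false, err
	}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			if strings.Contains(tag, want) {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.31.1")
	fmt.Println(ok, err)
}
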
	I0912 23:01:24.866870   61904 kubeadm.go:934] updating node { 192.168.72.96 8443 v1.31.1 crio true true} ...
	I0912 23:01:24.866990   61904 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-378112 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.96
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-378112 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0912 23:01:24.867073   61904 ssh_runner.go:195] Run: crio config
	I0912 23:01:24.912893   61904 cni.go:84] Creating CNI manager for ""
	I0912 23:01:24.912924   61904 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0912 23:01:24.912940   61904 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0912 23:01:24.912967   61904 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.96 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-378112 NodeName:embed-certs-378112 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.96"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.96 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0912 23:01:24.913155   61904 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.96
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-378112"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.96
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.96"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0912 23:01:24.913230   61904 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0912 23:01:24.922946   61904 binaries.go:44] Found k8s binaries, skipping transfer
	I0912 23:01:24.923013   61904 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0912 23:01:24.932931   61904 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0912 23:01:24.949482   61904 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0912 23:01:24.965877   61904 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0912 23:01:24.983125   61904 ssh_runner.go:195] Run: grep 192.168.72.96	control-plane.minikube.internal$ /etc/hosts
	I0912 23:01:24.987056   61904 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.96	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0912 23:01:24.998939   61904 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 23:01:25.113496   61904 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0912 23:01:25.129703   61904 certs.go:68] Setting up /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/embed-certs-378112 for IP: 192.168.72.96
	I0912 23:01:25.129726   61904 certs.go:194] generating shared ca certs ...
	I0912 23:01:25.129741   61904 certs.go:226] acquiring lock for ca certs: {Name:mk5e19f2c2757edc3fcb6b43a39efee0885b349c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 23:01:25.129971   61904 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.key
	I0912 23:01:25.130086   61904 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.key
	I0912 23:01:25.130110   61904 certs.go:256] generating profile certs ...
	I0912 23:01:25.130237   61904 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/embed-certs-378112/client.key
	I0912 23:01:25.130340   61904 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/embed-certs-378112/apiserver.key.dbbe0c1f
	I0912 23:01:25.130407   61904 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/embed-certs-378112/proxy-client.key
	I0912 23:01:25.130579   61904 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083.pem (1338 bytes)
	W0912 23:01:25.130626   61904 certs.go:480] ignoring /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083_empty.pem, impossibly tiny 0 bytes
	I0912 23:01:25.130651   61904 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem (1679 bytes)
	I0912 23:01:25.130703   61904 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem (1082 bytes)
	I0912 23:01:25.130745   61904 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem (1123 bytes)
	I0912 23:01:25.130792   61904 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem (1679 bytes)
	I0912 23:01:25.130860   61904 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem (1708 bytes)
	I0912 23:01:25.131603   61904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0912 23:01:25.176163   61904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0912 23:01:25.220174   61904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0912 23:01:25.265831   61904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0912 23:01:25.296965   61904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/embed-certs-378112/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0912 23:01:25.321038   61904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/embed-certs-378112/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0912 23:01:25.345231   61904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/embed-certs-378112/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0912 23:01:25.369171   61904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/embed-certs-378112/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0912 23:01:25.394204   61904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083.pem --> /usr/share/ca-certificates/13083.pem (1338 bytes)
	I0912 23:01:25.417915   61904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem --> /usr/share/ca-certificates/130832.pem (1708 bytes)
	I0912 23:01:25.442303   61904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0912 23:01:25.465565   61904 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0912 23:01:25.482722   61904 ssh_runner.go:195] Run: openssl version
	I0912 23:01:25.488448   61904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130832.pem && ln -fs /usr/share/ca-certificates/130832.pem /etc/ssl/certs/130832.pem"
	I0912 23:01:25.499394   61904 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130832.pem
	I0912 23:01:25.503818   61904 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 12 21:47 /usr/share/ca-certificates/130832.pem
	I0912 23:01:25.503891   61904 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130832.pem
	I0912 23:01:25.509382   61904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130832.pem /etc/ssl/certs/3ec20f2e.0"
	I0912 23:01:25.519646   61904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0912 23:01:25.530205   61904 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0912 23:01:25.534926   61904 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 12 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0912 23:01:25.534995   61904 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0912 23:01:25.540498   61904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0912 23:01:25.551236   61904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13083.pem && ln -fs /usr/share/ca-certificates/13083.pem /etc/ssl/certs/13083.pem"
	I0912 23:01:25.561851   61904 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13083.pem
	I0912 23:01:25.566492   61904 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 12 21:47 /usr/share/ca-certificates/13083.pem
	I0912 23:01:25.566560   61904 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13083.pem
	I0912 23:01:25.572221   61904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13083.pem /etc/ssl/certs/51391683.0"
	I0912 23:01:25.582775   61904 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0912 23:01:25.587274   61904 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0912 23:01:25.593126   61904 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0912 23:01:25.598929   61904 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0912 23:01:25.604590   61904 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0912 23:01:25.610344   61904 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0912 23:01:25.615931   61904 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
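
The block of openssl calls above verifies that none of the existing control-plane certificates expire within the next 86400 seconds (24 hours); only then does the restart path reuse them. An equivalent check in Go, offered purely as an illustration of what `-checkend 86400` tests; the path in main is a placeholder for one of the certs under /var/lib/minikube/certs.

// certcheck.go: does the certificate's NotAfter fall before now + 24h?
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// Equivalent to `openssl x509 -checkend <seconds>`.
	return cert.NotAfter.Before(time.Now().Add(d)), nil
}

func main() {
	soon, err := expiresWithin("apiserver.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	if soon {
		fmt.Println("certificate expires within 24h")
	}
}
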
	I0912 23:01:25.621575   61904 kubeadm.go:392] StartCluster: {Name:embed-certs-378112 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.1 ClusterName:embed-certs-378112 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.96 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 23:01:25.621708   61904 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0912 23:01:25.621771   61904 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0912 23:01:25.659165   61904 cri.go:89] found id: ""
	I0912 23:01:25.659225   61904 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0912 23:01:25.670718   61904 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0912 23:01:25.670740   61904 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0912 23:01:25.670812   61904 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0912 23:01:25.680672   61904 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0912 23:01:25.681705   61904 kubeconfig.go:125] found "embed-certs-378112" server: "https://192.168.72.96:8443"
	I0912 23:01:25.683693   61904 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0912 23:01:25.693765   61904 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.96
	I0912 23:01:25.693795   61904 kubeadm.go:1160] stopping kube-system containers ...
	I0912 23:01:25.693805   61904 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0912 23:01:25.693874   61904 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0912 23:01:25.728800   61904 cri.go:89] found id: ""
	I0912 23:01:25.728879   61904 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0912 23:01:25.744949   61904 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0912 23:01:25.754735   61904 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0912 23:01:25.754756   61904 kubeadm.go:157] found existing configuration files:
	
	I0912 23:01:25.754820   61904 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0912 23:01:25.763678   61904 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0912 23:01:25.763740   61904 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0912 23:01:25.772744   61904 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0912 23:01:25.383446   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:25.383892   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 23:01:25.383912   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 23:01:25.383847   63267 retry.go:31] will retry after 1.399744787s: waiting for machine to come up
	I0912 23:01:26.785939   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:26.786489   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 23:01:26.786520   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 23:01:26.786425   63267 retry.go:31] will retry after 1.336566382s: waiting for machine to come up
	I0912 23:01:28.124647   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:28.125141   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 23:01:28.125172   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 23:01:28.125087   63267 retry.go:31] will retry after 1.527292388s: waiting for machine to come up
	I0912 23:01:25.782080   61904 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0912 23:01:25.782143   61904 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0912 23:01:25.791585   61904 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0912 23:01:25.801238   61904 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0912 23:01:25.801315   61904 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0912 23:01:25.810819   61904 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0912 23:01:25.819786   61904 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0912 23:01:25.819888   61904 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0912 23:01:25.829135   61904 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0912 23:01:25.838572   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:01:25.944339   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:01:26.566348   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:01:26.771125   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:01:26.859227   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:01:26.946762   61904 api_server.go:52] waiting for apiserver process to appear ...
	I0912 23:01:26.946884   61904 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:27.447964   61904 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:27.947775   61904 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:28.447415   61904 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:28.947184   61904 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:28.963513   61904 api_server.go:72] duration metric: took 2.016750981s to wait for apiserver process to appear ...
	I0912 23:01:28.963554   61904 api_server.go:88] waiting for apiserver healthz status ...
	I0912 23:01:28.963577   61904 api_server.go:253] Checking apiserver healthz at https://192.168.72.96:8443/healthz ...
	I0912 23:01:28.964155   61904 api_server.go:269] stopped: https://192.168.72.96:8443/healthz: Get "https://192.168.72.96:8443/healthz": dial tcp 192.168.72.96:8443: connect: connection refused
	I0912 23:01:29.463718   61904 api_server.go:253] Checking apiserver healthz at https://192.168.72.96:8443/healthz ...
	I0912 23:01:31.369513   61904 api_server.go:279] https://192.168.72.96:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0912 23:01:31.369555   61904 api_server.go:103] status: https://192.168.72.96:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0912 23:01:31.369571   61904 api_server.go:253] Checking apiserver healthz at https://192.168.72.96:8443/healthz ...
	I0912 23:01:31.423901   61904 api_server.go:279] https://192.168.72.96:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0912 23:01:31.423936   61904 api_server.go:103] status: https://192.168.72.96:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0912 23:01:31.464148   61904 api_server.go:253] Checking apiserver healthz at https://192.168.72.96:8443/healthz ...
	I0912 23:01:31.469495   61904 api_server.go:279] https://192.168.72.96:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0912 23:01:31.469522   61904 api_server.go:103] status: https://192.168.72.96:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0912 23:01:31.963894   61904 api_server.go:253] Checking apiserver healthz at https://192.168.72.96:8443/healthz ...
	I0912 23:01:31.972640   61904 api_server.go:279] https://192.168.72.96:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0912 23:01:31.972671   61904 api_server.go:103] status: https://192.168.72.96:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0912 23:01:32.463809   61904 api_server.go:253] Checking apiserver healthz at https://192.168.72.96:8443/healthz ...
	I0912 23:01:32.475603   61904 api_server.go:279] https://192.168.72.96:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0912 23:01:32.475640   61904 api_server.go:103] status: https://192.168.72.96:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0912 23:01:32.964250   61904 api_server.go:253] Checking apiserver healthz at https://192.168.72.96:8443/healthz ...
	I0912 23:01:32.968710   61904 api_server.go:279] https://192.168.72.96:8443/healthz returned 200:
	ok
	I0912 23:01:32.975414   61904 api_server.go:141] control plane version: v1.31.1
	I0912 23:01:32.975442   61904 api_server.go:131] duration metric: took 4.011879751s to wait for apiserver health ...
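
The probes above poll https://192.168.72.96:8443/healthz until the API server answers 200 "ok", treating the intermediate responses seen in the log (403 while "system:anonymous" is still forbidden, 500 while post-start hooks such as rbac/bootstrap-roles are pending) as "retry". A condensed sketch of such a loop follows; unlike minikube, which presents the cluster CA and client certificates, this stand-alone version skips TLS verification for brevity.

// healthzwait.go: poll /healthz every half second until it returns 200 or
// the overall timeout elapses.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200 "ok"
			}
			// 403 and 500 responses mean "not ready yet"; keep polling.
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.96:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
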
	I0912 23:01:32.975451   61904 cni.go:84] Creating CNI manager for ""
	I0912 23:01:32.975456   61904 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0912 23:01:32.977249   61904 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0912 23:01:29.654841   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:29.655236   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 23:01:29.655264   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 23:01:29.655183   63267 retry.go:31] will retry after 2.34568858s: waiting for machine to come up
	I0912 23:01:32.002617   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:32.003211   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 23:01:32.003242   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 23:01:32.003150   63267 retry.go:31] will retry after 2.273120763s: waiting for machine to come up
	I0912 23:01:34.279665   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:34.280098   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 23:01:34.280122   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 23:01:34.280064   63267 retry.go:31] will retry after 3.937702941s: waiting for machine to come up
	I0912 23:01:32.978610   61904 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0912 23:01:32.994079   61904 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0912 23:01:33.042253   61904 system_pods.go:43] waiting for kube-system pods to appear ...
	I0912 23:01:33.052323   61904 system_pods.go:59] 8 kube-system pods found
	I0912 23:01:33.052361   61904 system_pods.go:61] "coredns-7c65d6cfc9-m8t6h" [93c63198-ebd2-4e88-9be8-912425b1eb84] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0912 23:01:33.052369   61904 system_pods.go:61] "etcd-embed-certs-378112" [cc716756-abda-447a-ad36-bfc89c129bdf] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0912 23:01:33.052376   61904 system_pods.go:61] "kube-apiserver-embed-certs-378112" [039a7348-41bf-481f-9218-3ea0c2ff1373] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0912 23:01:33.052387   61904 system_pods.go:61] "kube-controller-manager-embed-certs-378112" [9bcb8af0-6e4b-405a-94a1-5be70d737cfa] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0912 23:01:33.052396   61904 system_pods.go:61] "kube-proxy-fvbbq" [b172754e-bb5a-40ba-a9be-a7632081defc] Running
	I0912 23:01:33.052406   61904 system_pods.go:61] "kube-scheduler-embed-certs-378112" [f7cb022f-6c15-4c70-916f-39313199effe] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0912 23:01:33.052418   61904 system_pods.go:61] "metrics-server-6867b74b74-kvpqz" [04e47cfd-bada-4cbd-8792-db4edebfb282] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0912 23:01:33.052426   61904 system_pods.go:61] "storage-provisioner" [a1840d2a-8e08-4fa2-9ed5-ac96fb0baf4d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0912 23:01:33.052438   61904 system_pods.go:74] duration metric: took 10.162234ms to wait for pod list to return data ...
	I0912 23:01:33.052448   61904 node_conditions.go:102] verifying NodePressure condition ...
	I0912 23:01:33.060217   61904 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0912 23:01:33.060263   61904 node_conditions.go:123] node cpu capacity is 2
	I0912 23:01:33.060284   61904 node_conditions.go:105] duration metric: took 7.831444ms to run NodePressure ...
	I0912 23:01:33.060338   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:01:33.331554   61904 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0912 23:01:33.337181   61904 kubeadm.go:739] kubelet initialised
	I0912 23:01:33.337202   61904 kubeadm.go:740] duration metric: took 5.622367ms waiting for restarted kubelet to initialise ...
	I0912 23:01:33.337209   61904 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 23:01:33.342427   61904 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-m8t6h" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:33.346602   61904 pod_ready.go:98] node "embed-certs-378112" hosting pod "coredns-7c65d6cfc9-m8t6h" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-378112" has status "Ready":"False"
	I0912 23:01:33.346624   61904 pod_ready.go:82] duration metric: took 4.167981ms for pod "coredns-7c65d6cfc9-m8t6h" in "kube-system" namespace to be "Ready" ...
	E0912 23:01:33.346635   61904 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-378112" hosting pod "coredns-7c65d6cfc9-m8t6h" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-378112" has status "Ready":"False"
	I0912 23:01:33.346643   61904 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-378112" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:33.350240   61904 pod_ready.go:98] node "embed-certs-378112" hosting pod "etcd-embed-certs-378112" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-378112" has status "Ready":"False"
	I0912 23:01:33.350258   61904 pod_ready.go:82] duration metric: took 3.605305ms for pod "etcd-embed-certs-378112" in "kube-system" namespace to be "Ready" ...
	E0912 23:01:33.350267   61904 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-378112" hosting pod "etcd-embed-certs-378112" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-378112" has status "Ready":"False"
	I0912 23:01:33.350274   61904 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-378112" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:33.353756   61904 pod_ready.go:98] node "embed-certs-378112" hosting pod "kube-apiserver-embed-certs-378112" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-378112" has status "Ready":"False"
	I0912 23:01:33.353775   61904 pod_ready.go:82] duration metric: took 3.492388ms for pod "kube-apiserver-embed-certs-378112" in "kube-system" namespace to be "Ready" ...
	E0912 23:01:33.353785   61904 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-378112" hosting pod "kube-apiserver-embed-certs-378112" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-378112" has status "Ready":"False"
	I0912 23:01:33.353792   61904 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-378112" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:33.445529   61904 pod_ready.go:98] node "embed-certs-378112" hosting pod "kube-controller-manager-embed-certs-378112" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-378112" has status "Ready":"False"
	I0912 23:01:33.445574   61904 pod_ready.go:82] duration metric: took 91.770466ms for pod "kube-controller-manager-embed-certs-378112" in "kube-system" namespace to be "Ready" ...
	E0912 23:01:33.445588   61904 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-378112" hosting pod "kube-controller-manager-embed-certs-378112" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-378112" has status "Ready":"False"
	I0912 23:01:33.445597   61904 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-fvbbq" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:33.845443   61904 pod_ready.go:98] node "embed-certs-378112" hosting pod "kube-proxy-fvbbq" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-378112" has status "Ready":"False"
	I0912 23:01:33.845470   61904 pod_ready.go:82] duration metric: took 399.864816ms for pod "kube-proxy-fvbbq" in "kube-system" namespace to be "Ready" ...
	E0912 23:01:33.845479   61904 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-378112" hosting pod "kube-proxy-fvbbq" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-378112" has status "Ready":"False"
	I0912 23:01:33.845484   61904 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-378112" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:34.245943   61904 pod_ready.go:98] node "embed-certs-378112" hosting pod "kube-scheduler-embed-certs-378112" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-378112" has status "Ready":"False"
	I0912 23:01:34.245969   61904 pod_ready.go:82] duration metric: took 400.478543ms for pod "kube-scheduler-embed-certs-378112" in "kube-system" namespace to be "Ready" ...
	E0912 23:01:34.245979   61904 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-378112" hosting pod "kube-scheduler-embed-certs-378112" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-378112" has status "Ready":"False"
	I0912 23:01:34.245985   61904 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:34.651801   61904 pod_ready.go:98] node "embed-certs-378112" hosting pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-378112" has status "Ready":"False"
	I0912 23:01:34.651826   61904 pod_ready.go:82] duration metric: took 405.832705ms for pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace to be "Ready" ...
	E0912 23:01:34.651836   61904 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-378112" hosting pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-378112" has status "Ready":"False"
	I0912 23:01:34.651843   61904 pod_ready.go:39] duration metric: took 1.314625851s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
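The pod_ready lines above poll each system pod for up to 4m0s but bail out early whenever the hosting node itself reports Ready=False. A minimal, hypothetical Go sketch of that poll-with-skip pattern follows; it is illustrative only and not minikube's actual pod_ready.go implementation (the real check queries the API server for the pod and for its node).

package main

import (
	"errors"
	"fmt"
	"time"
)

// checkResult is a hypothetical shape for one readiness probe:
// done means the pod is Ready, skip means the hosting node is not
// Ready, so there is no point waiting for this pod right now.
type checkResult struct {
	done bool
	skip bool
}

func waitPodCondition(check func() (checkResult, error), timeout, interval time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		res, err := check()
		if err != nil {
			return err
		}
		if res.skip {
			return errors.New("node hosting the pod is not Ready (skipping)")
		}
		if res.done {
			return nil
		}
		time.Sleep(interval)
	}
	return errors.New("timed out waiting for pod to be Ready")
}

func main() {
	// Demo check that always reports "node not Ready", mirroring the log above.
	err := waitPodCondition(func() (checkResult, error) {
		return checkResult{skip: true}, nil
	}, 4*time.Minute, 400*time.Millisecond)
	fmt.Println(err)
}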
	I0912 23:01:34.651859   61904 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0912 23:01:34.665332   61904 ops.go:34] apiserver oom_adj: -16
	I0912 23:01:34.665357   61904 kubeadm.go:597] duration metric: took 8.994610882s to restartPrimaryControlPlane
	I0912 23:01:34.665366   61904 kubeadm.go:394] duration metric: took 9.043796768s to StartCluster
	I0912 23:01:34.665381   61904 settings.go:142] acquiring lock: {Name:mk9c957feafb8d7ccd833ad0c106ef81ecfe5ba8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 23:01:34.665454   61904 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19616-5891/kubeconfig
	I0912 23:01:34.667036   61904 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/kubeconfig: {Name:mkffb46c3e9d2b8baebc7237b48bf41bccf1a52c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 23:01:34.667262   61904 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.96 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0912 23:01:34.667363   61904 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0912 23:01:34.667450   61904 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-378112"
	I0912 23:01:34.667468   61904 config.go:182] Loaded profile config "embed-certs-378112": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 23:01:34.667476   61904 addons.go:69] Setting default-storageclass=true in profile "embed-certs-378112"
	I0912 23:01:34.667543   61904 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-378112"
	I0912 23:01:34.667520   61904 addons.go:69] Setting metrics-server=true in profile "embed-certs-378112"
	I0912 23:01:34.667609   61904 addons.go:234] Setting addon metrics-server=true in "embed-certs-378112"
	W0912 23:01:34.667624   61904 addons.go:243] addon metrics-server should already be in state true
	I0912 23:01:34.667661   61904 host.go:66] Checking if "embed-certs-378112" exists ...
	I0912 23:01:34.667490   61904 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-378112"
	W0912 23:01:34.667710   61904 addons.go:243] addon storage-provisioner should already be in state true
	I0912 23:01:34.667778   61904 host.go:66] Checking if "embed-certs-378112" exists ...
	I0912 23:01:34.667994   61904 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:01:34.668049   61904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:01:34.668138   61904 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:01:34.668155   61904 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:01:34.668171   61904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:01:34.668180   61904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:01:34.670091   61904 out.go:177] * Verifying Kubernetes components...
	I0912 23:01:34.671777   61904 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 23:01:34.683876   61904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37413
	I0912 23:01:34.684025   61904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37371
	I0912 23:01:34.684434   61904 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:01:34.684541   61904 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:01:34.684995   61904 main.go:141] libmachine: Using API Version  1
	I0912 23:01:34.685014   61904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:01:34.685118   61904 main.go:141] libmachine: Using API Version  1
	I0912 23:01:34.685140   61904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:01:34.685468   61904 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:01:34.685468   61904 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:01:34.685668   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetState
	I0912 23:01:34.686104   61904 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:01:34.686156   61904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:01:34.688211   61904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39067
	I0912 23:01:34.688607   61904 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:01:34.689047   61904 addons.go:234] Setting addon default-storageclass=true in "embed-certs-378112"
	W0912 23:01:34.689066   61904 addons.go:243] addon default-storageclass should already be in state true
	I0912 23:01:34.689091   61904 host.go:66] Checking if "embed-certs-378112" exists ...
	I0912 23:01:34.689116   61904 main.go:141] libmachine: Using API Version  1
	I0912 23:01:34.689146   61904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:01:34.689478   61904 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:01:34.689501   61904 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:01:34.689511   61904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:01:34.690057   61904 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:01:34.690083   61904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:01:34.702965   61904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40825
	I0912 23:01:34.703535   61904 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:01:34.704131   61904 main.go:141] libmachine: Using API Version  1
	I0912 23:01:34.704151   61904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:01:34.704178   61904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39229
	I0912 23:01:34.704481   61904 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:01:34.704684   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetState
	I0912 23:01:34.704684   61904 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:01:34.705101   61904 main.go:141] libmachine: Using API Version  1
	I0912 23:01:34.705122   61904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:01:34.705413   61904 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:01:34.705561   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetState
	I0912 23:01:34.706872   61904 main.go:141] libmachine: (embed-certs-378112) Calling .DriverName
	I0912 23:01:34.707279   61904 main.go:141] libmachine: (embed-certs-378112) Calling .DriverName
	I0912 23:01:34.708583   61904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36665
	I0912 23:01:34.708752   61904 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 23:01:34.708828   61904 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0912 23:01:34.708966   61904 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:01:34.709420   61904 main.go:141] libmachine: Using API Version  1
	I0912 23:01:34.709442   61904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:01:34.709901   61904 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:01:34.710348   61904 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:01:34.710352   61904 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0912 23:01:34.710368   61904 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0912 23:01:34.710382   61904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:01:34.710397   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHHostname
	I0912 23:01:34.710705   61904 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0912 23:01:34.713777   61904 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0912 23:01:34.713809   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHHostname
	I0912 23:01:34.717857   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:34.718160   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:34.718335   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:34.718358   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:34.718442   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:34.718473   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:34.718651   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHPort
	I0912 23:01:34.718727   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHPort
	I0912 23:01:34.718812   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHKeyPath
	I0912 23:01:34.718866   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHKeyPath
	I0912 23:01:34.718988   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHUsername
	I0912 23:01:34.719039   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHUsername
	I0912 23:01:34.719144   61904 sshutil.go:53] new ssh client: &{IP:192.168.72.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/embed-certs-378112/id_rsa Username:docker}
	I0912 23:01:34.719169   61904 sshutil.go:53] new ssh client: &{IP:192.168.72.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/embed-certs-378112/id_rsa Username:docker}
	I0912 23:01:34.730675   61904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39163
	I0912 23:01:34.731210   61904 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:01:34.731901   61904 main.go:141] libmachine: Using API Version  1
	I0912 23:01:34.731934   61904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:01:34.732317   61904 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:01:34.732493   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetState
	I0912 23:01:34.734338   61904 main.go:141] libmachine: (embed-certs-378112) Calling .DriverName
	I0912 23:01:34.734601   61904 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0912 23:01:34.734615   61904 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0912 23:01:34.734637   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHHostname
	I0912 23:01:34.737958   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:34.738401   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:34.738429   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:34.738637   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHPort
	I0912 23:01:34.738823   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHKeyPath
	I0912 23:01:34.739015   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHUsername
	I0912 23:01:34.739166   61904 sshutil.go:53] new ssh client: &{IP:192.168.72.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/embed-certs-378112/id_rsa Username:docker}
	I0912 23:01:34.873510   61904 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0912 23:01:34.891329   61904 node_ready.go:35] waiting up to 6m0s for node "embed-certs-378112" to be "Ready" ...
	I0912 23:01:34.991135   61904 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0912 23:01:34.991169   61904 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0912 23:01:35.007241   61904 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0912 23:01:35.018684   61904 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0912 23:01:35.018712   61904 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0912 23:01:35.028842   61904 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0912 23:01:35.047693   61904 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0912 23:01:35.047720   61904 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0912 23:01:35.101399   61904 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0912 23:01:36.046822   61904 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.03953394s)
	I0912 23:01:36.046851   61904 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.017977641s)
	I0912 23:01:36.046882   61904 main.go:141] libmachine: Making call to close driver server
	I0912 23:01:36.046889   61904 main.go:141] libmachine: Making call to close driver server
	I0912 23:01:36.046900   61904 main.go:141] libmachine: (embed-certs-378112) Calling .Close
	I0912 23:01:36.046901   61904 main.go:141] libmachine: (embed-certs-378112) Calling .Close
	I0912 23:01:36.047207   61904 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:01:36.047221   61904 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:01:36.047230   61904 main.go:141] libmachine: Making call to close driver server
	I0912 23:01:36.047237   61904 main.go:141] libmachine: (embed-certs-378112) Calling .Close
	I0912 23:01:36.047269   61904 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:01:36.047280   61904 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:01:36.047312   61904 main.go:141] libmachine: Making call to close driver server
	I0912 23:01:36.047378   61904 main.go:141] libmachine: (embed-certs-378112) Calling .Close
	I0912 23:01:36.047577   61904 main.go:141] libmachine: (embed-certs-378112) DBG | Closing plugin on server side
	I0912 23:01:36.047624   61904 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:01:36.047637   61904 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:01:36.047639   61904 main.go:141] libmachine: (embed-certs-378112) DBG | Closing plugin on server side
	I0912 23:01:36.047691   61904 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:01:36.047705   61904 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:01:36.055732   61904 main.go:141] libmachine: Making call to close driver server
	I0912 23:01:36.055751   61904 main.go:141] libmachine: (embed-certs-378112) Calling .Close
	I0912 23:01:36.056018   61904 main.go:141] libmachine: (embed-certs-378112) DBG | Closing plugin on server side
	I0912 23:01:36.056072   61904 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:01:36.056085   61904 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:01:36.062586   61904 main.go:141] libmachine: Making call to close driver server
	I0912 23:01:36.062612   61904 main.go:141] libmachine: (embed-certs-378112) Calling .Close
	I0912 23:01:36.062906   61904 main.go:141] libmachine: (embed-certs-378112) DBG | Closing plugin on server side
	I0912 23:01:36.062920   61904 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:01:36.062936   61904 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:01:36.062955   61904 main.go:141] libmachine: Making call to close driver server
	I0912 23:01:36.062979   61904 main.go:141] libmachine: (embed-certs-378112) Calling .Close
	I0912 23:01:36.063225   61904 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:01:36.063243   61904 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:01:36.063254   61904 addons.go:475] Verifying addon metrics-server=true in "embed-certs-378112"
	I0912 23:01:36.065321   61904 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
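The addon flow above copies the metrics-server manifests into /etc/kubernetes/addons inside the guest and applies them with the bundled kubectl against the in-guest kubeconfig. The sketch below reconstructs that apply step as a small Go program; the paths mirror the log, it would have to run inside the guest (it shells out to sudo), and error handling is deliberately minimal.

package main

import (
	"fmt"
	"os/exec"
)

// applyAddonManifests runs the same command line the log shows:
// sudo KUBECONFIG=/var/lib/minikube/kubeconfig <kubectl> apply -f <manifest> ...
func applyAddonManifests(manifests []string) error {
	args := []string{
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.31.1/kubectl", "apply",
	}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	out, err := exec.Command("sudo", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
	}
	return nil
}

func main() {
	_ = applyAddonManifests([]string{
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml",
	})
}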
	I0912 23:01:38.221947   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.222408   62386 main.go:141] libmachine: (old-k8s-version-642238) Found IP for machine: 192.168.61.69
	I0912 23:01:38.222437   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has current primary IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.222447   62386 main.go:141] libmachine: (old-k8s-version-642238) Reserving static IP address...
	I0912 23:01:38.222943   62386 main.go:141] libmachine: (old-k8s-version-642238) Reserved static IP address: 192.168.61.69
	I0912 23:01:38.222983   62386 main.go:141] libmachine: (old-k8s-version-642238) Waiting for SSH to be available...
	I0912 23:01:38.223007   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "old-k8s-version-642238", mac: "52:54:00:75:cb:57", ip: "192.168.61.69"} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:38.223057   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | skip adding static IP to network mk-old-k8s-version-642238 - found existing host DHCP lease matching {name: "old-k8s-version-642238", mac: "52:54:00:75:cb:57", ip: "192.168.61.69"}
	I0912 23:01:38.223079   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | Getting to WaitForSSH function...
	I0912 23:01:38.225720   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.226121   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:38.226155   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.226286   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | Using SSH client type: external
	I0912 23:01:38.226308   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | Using SSH private key: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/old-k8s-version-642238/id_rsa (-rw-------)
	I0912 23:01:38.226341   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.69 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19616-5891/.minikube/machines/old-k8s-version-642238/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0912 23:01:38.226357   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | About to run SSH command:
	I0912 23:01:38.226368   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | exit 0
	I0912 23:01:38.357945   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | SSH cmd err, output: <nil>: 
	I0912 23:01:38.358320   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetConfigRaw
	I0912 23:01:38.358887   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetIP
	I0912 23:01:38.361728   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.362098   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:38.362133   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.362372   62386 profile.go:143] Saving config to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238/config.json ...
	I0912 23:01:38.362640   62386 machine.go:93] provisionDockerMachine start ...
	I0912 23:01:38.362663   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .DriverName
	I0912 23:01:38.362897   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHHostname
	I0912 23:01:38.365251   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.365627   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:38.365656   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.365798   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHPort
	I0912 23:01:38.365969   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 23:01:38.366123   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 23:01:38.366251   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHUsername
	I0912 23:01:38.366468   62386 main.go:141] libmachine: Using SSH client type: native
	I0912 23:01:38.366691   62386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.61.69 22 <nil> <nil>}
	I0912 23:01:38.366707   62386 main.go:141] libmachine: About to run SSH command:
	hostname
	I0912 23:01:38.477548   62386 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0912 23:01:38.477575   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetMachineName
	I0912 23:01:38.477818   62386 buildroot.go:166] provisioning hostname "old-k8s-version-642238"
	I0912 23:01:38.477843   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetMachineName
	I0912 23:01:38.478029   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHHostname
	I0912 23:01:38.480368   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.480660   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:38.480683   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.480802   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHPort
	I0912 23:01:38.480981   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 23:01:38.481142   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 23:01:38.481287   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHUsername
	I0912 23:01:38.481630   62386 main.go:141] libmachine: Using SSH client type: native
	I0912 23:01:38.481846   62386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.61.69 22 <nil> <nil>}
	I0912 23:01:38.481864   62386 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-642238 && echo "old-k8s-version-642238" | sudo tee /etc/hostname
	I0912 23:01:38.606686   62386 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-642238
	
	I0912 23:01:38.606721   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHHostname
	I0912 23:01:38.609331   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.609682   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:38.609705   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.609867   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHPort
	I0912 23:01:38.610071   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 23:01:38.610297   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 23:01:38.610463   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHUsername
	I0912 23:01:38.610792   62386 main.go:141] libmachine: Using SSH client type: native
	I0912 23:01:38.610974   62386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.61.69 22 <nil> <nil>}
	I0912 23:01:38.610991   62386 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-642238' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-642238/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-642238' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0912 23:01:38.729561   62386 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0912 23:01:38.729588   62386 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19616-5891/.minikube CaCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19616-5891/.minikube}
	I0912 23:01:38.729664   62386 buildroot.go:174] setting up certificates
	I0912 23:01:38.729674   62386 provision.go:84] configureAuth start
	I0912 23:01:38.729686   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetMachineName
	I0912 23:01:38.729945   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetIP
	I0912 23:01:38.732718   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.733269   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:38.733302   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.733481   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHHostname
	I0912 23:01:38.735610   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.735925   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:38.735950   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.736074   62386 provision.go:143] copyHostCerts
	I0912 23:01:38.736129   62386 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem, removing ...
	I0912 23:01:38.736142   62386 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem
	I0912 23:01:38.736197   62386 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem (1082 bytes)
	I0912 23:01:38.736293   62386 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem, removing ...
	I0912 23:01:38.736306   62386 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem
	I0912 23:01:38.736330   62386 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem (1123 bytes)
	I0912 23:01:38.736390   62386 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem, removing ...
	I0912 23:01:38.736397   62386 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem
	I0912 23:01:38.736413   62386 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem (1679 bytes)
	I0912 23:01:38.736460   62386 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-642238 san=[127.0.0.1 192.168.61.69 localhost minikube old-k8s-version-642238]
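The provision step above generates a server certificate for the machine with the organization and SANs listed in the log. As a rough illustration, the Go sketch below creates a certificate carrying those same SANs using only the standard library; it is self-signed for brevity, whereas minikube signs with the CA key (ca-key.pem) referenced in the log, and errors are ignored to keep the example short.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Illustrative only: self-signed instead of CA-signed, errors elided.
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-642238"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs taken from the provision.go line above.
		DNSNames:    []string{"localhost", "minikube", "old-k8s-version-642238"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.69")},
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}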
	I0912 23:01:38.940760   62386 provision.go:177] copyRemoteCerts
	I0912 23:01:38.940819   62386 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0912 23:01:38.940846   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHHostname
	I0912 23:01:38.943954   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.944274   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:38.944304   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.944479   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHPort
	I0912 23:01:38.944688   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 23:01:38.944884   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHUsername
	I0912 23:01:38.945023   62386 sshutil.go:53] new ssh client: &{IP:192.168.61.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/old-k8s-version-642238/id_rsa Username:docker}
	I0912 23:01:39.032396   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0912 23:01:39.055559   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0912 23:01:39.081979   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0912 23:01:39.108245   62386 provision.go:87] duration metric: took 378.558125ms to configureAuth
	I0912 23:01:39.108276   62386 buildroot.go:189] setting minikube options for container-runtime
	I0912 23:01:39.108456   62386 config.go:182] Loaded profile config "old-k8s-version-642238": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0912 23:01:39.108515   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHHostname
	I0912 23:01:39.111321   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:39.111737   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:39.111759   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:39.111956   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHPort
	I0912 23:01:39.112175   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 23:01:39.112399   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 23:01:39.112552   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHUsername
	I0912 23:01:39.112721   62386 main.go:141] libmachine: Using SSH client type: native
	I0912 23:01:39.112939   62386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.61.69 22 <nil> <nil>}
	I0912 23:01:39.112955   62386 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0912 23:01:39.582214   62943 start.go:364] duration metric: took 1m17.588760987s to acquireMachinesLock for "no-preload-380092"
	I0912 23:01:39.582282   62943 start.go:96] Skipping create...Using existing machine configuration
	I0912 23:01:39.582290   62943 fix.go:54] fixHost starting: 
	I0912 23:01:39.582684   62943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:01:39.582733   62943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:01:39.598752   62943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39263
	I0912 23:01:39.599113   62943 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:01:39.599558   62943 main.go:141] libmachine: Using API Version  1
	I0912 23:01:39.599578   62943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:01:39.599939   62943 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:01:39.600128   62943 main.go:141] libmachine: (no-preload-380092) Calling .DriverName
	I0912 23:01:39.600299   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetState
	I0912 23:01:39.601919   62943 fix.go:112] recreateIfNeeded on no-preload-380092: state=Stopped err=<nil>
	I0912 23:01:39.601948   62943 main.go:141] libmachine: (no-preload-380092) Calling .DriverName
	W0912 23:01:39.602105   62943 fix.go:138] unexpected machine state, will restart: <nil>
	I0912 23:01:39.604113   62943 out.go:177] * Restarting existing kvm2 VM for "no-preload-380092" ...
	I0912 23:01:36.066914   61904 addons.go:510] duration metric: took 1.399549943s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0912 23:01:36.894531   61904 node_ready.go:53] node "embed-certs-378112" has status "Ready":"False"
	I0912 23:01:38.895084   61904 node_ready.go:53] node "embed-certs-378112" has status "Ready":"False"
	I0912 23:01:39.333662   62386 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0912 23:01:39.333695   62386 machine.go:96] duration metric: took 971.039233ms to provisionDockerMachine
	I0912 23:01:39.333712   62386 start.go:293] postStartSetup for "old-k8s-version-642238" (driver="kvm2")
	I0912 23:01:39.333728   62386 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0912 23:01:39.333755   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .DriverName
	I0912 23:01:39.334078   62386 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0912 23:01:39.334110   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHHostname
	I0912 23:01:39.336759   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:39.337144   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:39.337185   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:39.337326   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHPort
	I0912 23:01:39.337492   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 23:01:39.337649   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHUsername
	I0912 23:01:39.337757   62386 sshutil.go:53] new ssh client: &{IP:192.168.61.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/old-k8s-version-642238/id_rsa Username:docker}
	I0912 23:01:39.424344   62386 ssh_runner.go:195] Run: cat /etc/os-release
	I0912 23:01:39.428560   62386 info.go:137] Remote host: Buildroot 2023.02.9
	I0912 23:01:39.428586   62386 filesync.go:126] Scanning /home/jenkins/minikube-integration/19616-5891/.minikube/addons for local assets ...
	I0912 23:01:39.428651   62386 filesync.go:126] Scanning /home/jenkins/minikube-integration/19616-5891/.minikube/files for local assets ...
	I0912 23:01:39.428720   62386 filesync.go:149] local asset: /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem -> 130832.pem in /etc/ssl/certs
	I0912 23:01:39.428822   62386 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0912 23:01:39.438578   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem --> /etc/ssl/certs/130832.pem (1708 bytes)
	I0912 23:01:39.466955   62386 start.go:296] duration metric: took 133.228748ms for postStartSetup
	I0912 23:01:39.466993   62386 fix.go:56] duration metric: took 19.507989112s for fixHost
	I0912 23:01:39.467011   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHHostname
	I0912 23:01:39.469732   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:39.470141   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:39.470177   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:39.470446   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHPort
	I0912 23:01:39.470662   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 23:01:39.470820   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 23:01:39.470952   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHUsername
	I0912 23:01:39.471079   62386 main.go:141] libmachine: Using SSH client type: native
	I0912 23:01:39.471234   62386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.61.69 22 <nil> <nil>}
	I0912 23:01:39.471243   62386 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0912 23:01:39.582078   62386 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726182099.559242358
	
	I0912 23:01:39.582101   62386 fix.go:216] guest clock: 1726182099.559242358
	I0912 23:01:39.582108   62386 fix.go:229] Guest: 2024-09-12 23:01:39.559242358 +0000 UTC Remote: 2024-09-12 23:01:39.466996536 +0000 UTC m=+200.180679357 (delta=92.245822ms)
	I0912 23:01:39.582148   62386 fix.go:200] guest clock delta is within tolerance: 92.245822ms
	I0912 23:01:39.582153   62386 start.go:83] releasing machines lock for "old-k8s-version-642238", held for 19.623187273s
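The guest-clock check above compares the time reported by date +%s.%N inside the VM against the host-side timestamp. Both stamps fall in the same whole second (1726182099), so the reported delta is simply the difference of the fractional parts, as the tiny sketch below verifies; the acceptable tolerance itself is not shown in the log.

package main

import "fmt"

func main() {
	// Fractional seconds of the two timestamps from the log.
	guestFrac := 0.559242358  // guest clock (date +%s.%N)
	remoteFrac := 0.466996536 // host-side reference time
	fmt.Printf("guest ahead by %.6f ms\n", (guestFrac-remoteFrac)*1000) // ~92.245822 ms
}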
	I0912 23:01:39.582177   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .DriverName
	I0912 23:01:39.582449   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetIP
	I0912 23:01:39.585170   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:39.585556   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:39.585595   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:39.585770   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .DriverName
	I0912 23:01:39.586282   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .DriverName
	I0912 23:01:39.586471   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .DriverName
	I0912 23:01:39.586548   62386 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0912 23:01:39.586590   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHHostname
	I0912 23:01:39.586706   62386 ssh_runner.go:195] Run: cat /version.json
	I0912 23:01:39.586734   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHHostname
	I0912 23:01:39.589355   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:39.589769   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:39.589802   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:39.589824   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:39.589990   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHPort
	I0912 23:01:39.590163   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 23:01:39.590229   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:39.590258   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:39.590331   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHUsername
	I0912 23:01:39.590413   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHPort
	I0912 23:01:39.590491   62386 sshutil.go:53] new ssh client: &{IP:192.168.61.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/old-k8s-version-642238/id_rsa Username:docker}
	I0912 23:01:39.590525   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 23:01:39.590621   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHUsername
	I0912 23:01:39.590717   62386 sshutil.go:53] new ssh client: &{IP:192.168.61.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/old-k8s-version-642238/id_rsa Username:docker}
	I0912 23:01:39.709188   62386 ssh_runner.go:195] Run: systemctl --version
	I0912 23:01:39.714703   62386 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0912 23:01:39.867112   62386 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0912 23:01:39.874818   62386 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0912 23:01:39.874897   62386 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0912 23:01:39.894532   62386 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
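The two commands above look for loopback and bridge/podman CNI configs under /etc/cni/net.d and rename the bridge ones out of the way so they stop being loaded. A hypothetical Go equivalent of that rename step is sketched below; the glob patterns and the .mk_disabled suffix are taken from the find expression in the log.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// disableBridgeCNIConfigs appends ".mk_disabled" to any bridge or podman CNI
// config so that only the CNI minikube manages stays active.
func disableBridgeCNIConfigs(dir string) error {
	for _, pattern := range []string{"*bridge*", "*podman*"} {
		matches, err := filepath.Glob(filepath.Join(dir, pattern))
		if err != nil {
			return err
		}
		for _, m := range matches {
			if filepath.Ext(m) == ".mk_disabled" {
				continue // already disabled
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				return err
			}
			fmt.Println("disabled", m)
		}
	}
	return nil
}

func main() {
	_ = disableBridgeCNIConfigs("/etc/cni/net.d")
}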
	I0912 23:01:39.894558   62386 start.go:495] detecting cgroup driver to use...
	I0912 23:01:39.894611   62386 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0912 23:01:39.911715   62386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0912 23:01:39.927113   62386 docker.go:217] disabling cri-docker service (if available) ...
	I0912 23:01:39.927181   62386 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0912 23:01:39.946720   62386 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0912 23:01:39.966602   62386 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0912 23:01:40.132813   62386 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0912 23:01:40.318613   62386 docker.go:233] disabling docker service ...
	I0912 23:01:40.318764   62386 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0912 23:01:40.337557   62386 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0912 23:01:40.355312   62386 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0912 23:01:40.507081   62386 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0912 23:01:40.623129   62386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0912 23:01:40.637980   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0912 23:01:40.658137   62386 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0912 23:01:40.658197   62386 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:40.672985   62386 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0912 23:01:40.673041   62386 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:40.687684   62386 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:40.699586   62386 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:40.711468   62386 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0912 23:01:40.722380   62386 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0912 23:01:40.733057   62386 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0912 23:01:40.733126   62386 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0912 23:01:40.748577   62386 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0912 23:01:40.758735   62386 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 23:01:40.883686   62386 ssh_runner.go:195] Run: sudo systemctl restart crio
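The sequence above rewrites the CRI-O drop-in at /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroupfs cgroup manager, conmon_cgroup = "pod"), clears /etc/cni/net.mk, loads br_netfilter, enables IPv4 forwarding, and restarts crio. The Go sketch below performs the same config-file edits as the sed commands; it is a hedged equivalent, not minikube's code, and it still relies on the separate systemctl daemon-reload and restart crio shown in the log.

package main

import (
	"fmt"
	"os"
	"regexp"
)

// configureCrio mirrors the sed edits logged above: pin the pause image,
// switch the cgroup manager to cgroupfs, and set conmon_cgroup to "pod".
func configureCrio(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	conf := string(data)
	// sed 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.2"`)
	// sed '/conmon_cgroup = .*/d' -- drop any pre-existing conmon_cgroup line
	conf = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*\n?`).ReplaceAllString(conf, "")
	// sed 's|^.*cgroup_manager = .*$|...|' followed by appending conmon_cgroup = "pod"
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
	return os.WriteFile(path, []byte(conf), 0o644)
}

func main() {
	if err := configureCrio("/etc/crio/crio.conf.d/02-crio.conf"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
	// systemctl daemon-reload and systemctl restart crio must still follow.
}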
	I0912 23:01:40.977996   62386 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0912 23:01:40.978065   62386 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0912 23:01:40.984192   62386 start.go:563] Will wait 60s for crictl version
	I0912 23:01:40.984257   62386 ssh_runner.go:195] Run: which crictl
	I0912 23:01:40.988379   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0912 23:01:41.027758   62386 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0912 23:01:41.027855   62386 ssh_runner.go:195] Run: crio --version
	I0912 23:01:41.057198   62386 ssh_runner.go:195] Run: crio --version
	I0912 23:01:41.091414   62386 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
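	The lines above show the runtime setup for this node: containerd and the docker shim are stopped and masked, crictl is pointed at /var/run/crio/crio.sock, and /etc/crio/crio.conf.d/02-crio.conf is rewritten (pause image, cgroup driver) before CRI-O is restarted. Below is a minimal local sketch of the same edits using plain exec calls; the sed expressions, paths and values are taken from the log, but the wrapper itself is illustrative and is not minikube's ssh_runner code.

	// Sketch only: replays the CRI-O edits shown in the log, run directly on the node.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func run(name string, args ...string) error {
		out, err := exec.Command(name, args...).CombinedOutput()
		if err != nil {
			return fmt.Errorf("%s %v: %v\n%s", name, args, err, out)
		}
		return nil
	}

	func main() {
		steps := [][]string{
			// Same sed expressions as the log, applied to 02-crio.conf.
			{"sudo", "sed", "-i", `s|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|`, "/etc/crio/crio.conf.d/02-crio.conf"},
			{"sudo", "sed", "-i", `s|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|`, "/etc/crio/crio.conf.d/02-crio.conf"},
			{"sudo", "systemctl", "daemon-reload"},
			{"sudo", "systemctl", "restart", "crio"},
		}
		for _, s := range steps {
			if err := run(s[0], s[1:]...); err != nil {
				fmt.Println("step failed:", err)
				return
			}
		}
		fmt.Println("cri-o reconfigured and restarted")
	}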
	I0912 23:01:39.605199   62943 main.go:141] libmachine: (no-preload-380092) Calling .Start
	I0912 23:01:39.605356   62943 main.go:141] libmachine: (no-preload-380092) Ensuring networks are active...
	I0912 23:01:39.606295   62943 main.go:141] libmachine: (no-preload-380092) Ensuring network default is active
	I0912 23:01:39.606540   62943 main.go:141] libmachine: (no-preload-380092) Ensuring network mk-no-preload-380092 is active
	I0912 23:01:39.606902   62943 main.go:141] libmachine: (no-preload-380092) Getting domain xml...
	I0912 23:01:39.607582   62943 main.go:141] libmachine: (no-preload-380092) Creating domain...
	I0912 23:01:40.958156   62943 main.go:141] libmachine: (no-preload-380092) Waiting to get IP...
	I0912 23:01:40.959304   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:40.959775   62943 main.go:141] libmachine: (no-preload-380092) DBG | unable to find current IP address of domain no-preload-380092 in network mk-no-preload-380092
	I0912 23:01:40.959848   62943 main.go:141] libmachine: (no-preload-380092) DBG | I0912 23:01:40.959761   63470 retry.go:31] will retry after 260.507819ms: waiting for machine to come up
	I0912 23:01:41.222360   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:41.222860   62943 main.go:141] libmachine: (no-preload-380092) DBG | unable to find current IP address of domain no-preload-380092 in network mk-no-preload-380092
	I0912 23:01:41.222897   62943 main.go:141] libmachine: (no-preload-380092) DBG | I0912 23:01:41.222793   63470 retry.go:31] will retry after 325.875384ms: waiting for machine to come up
	I0912 23:01:41.550174   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:41.550617   62943 main.go:141] libmachine: (no-preload-380092) DBG | unable to find current IP address of domain no-preload-380092 in network mk-no-preload-380092
	I0912 23:01:41.550642   62943 main.go:141] libmachine: (no-preload-380092) DBG | I0912 23:01:41.550563   63470 retry.go:31] will retry after 466.239328ms: waiting for machine to come up
	I0912 23:01:41.092686   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetIP
	I0912 23:01:41.096196   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:41.096806   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:41.096843   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:41.097167   62386 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0912 23:01:41.101509   62386 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0912 23:01:41.115914   62386 kubeadm.go:883] updating cluster {Name:old-k8s-version-642238 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-642238 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.69 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0912 23:01:41.116230   62386 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0912 23:01:41.116327   62386 ssh_runner.go:195] Run: sudo crictl images --output json
	I0912 23:01:41.164309   62386 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0912 23:01:41.164389   62386 ssh_runner.go:195] Run: which lz4
	I0912 23:01:41.168669   62386 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0912 23:01:41.172973   62386 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0912 23:01:41.173008   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0912 23:01:42.662843   62386 crio.go:462] duration metric: took 1.494204864s to copy over tarball
	I0912 23:01:42.662921   62386 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0912 23:01:40.895957   61904 node_ready.go:53] node "embed-certs-378112" has status "Ready":"False"
	I0912 23:01:41.896265   61904 node_ready.go:49] node "embed-certs-378112" has status "Ready":"True"
	I0912 23:01:41.896293   61904 node_ready.go:38] duration metric: took 7.004932553s for node "embed-certs-378112" to be "Ready" ...
	I0912 23:01:41.896304   61904 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 23:01:41.903665   61904 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-m8t6h" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:41.911837   61904 pod_ready.go:93] pod "coredns-7c65d6cfc9-m8t6h" in "kube-system" namespace has status "Ready":"True"
	I0912 23:01:41.911862   61904 pod_ready.go:82] duration metric: took 8.168974ms for pod "coredns-7c65d6cfc9-m8t6h" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:41.911875   61904 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-378112" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:41.920007   61904 pod_ready.go:93] pod "etcd-embed-certs-378112" in "kube-system" namespace has status "Ready":"True"
	I0912 23:01:41.920032   61904 pod_ready.go:82] duration metric: took 8.150491ms for pod "etcd-embed-certs-378112" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:41.920044   61904 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-378112" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:43.928585   61904 pod_ready.go:103] pod "kube-apiserver-embed-certs-378112" in "kube-system" namespace has status "Ready":"False"
	I0912 23:01:42.018082   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:42.018505   62943 main.go:141] libmachine: (no-preload-380092) DBG | unable to find current IP address of domain no-preload-380092 in network mk-no-preload-380092
	I0912 23:01:42.018534   62943 main.go:141] libmachine: (no-preload-380092) DBG | I0912 23:01:42.018465   63470 retry.go:31] will retry after 538.2428ms: waiting for machine to come up
	I0912 23:01:42.558175   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:42.558612   62943 main.go:141] libmachine: (no-preload-380092) DBG | unable to find current IP address of domain no-preload-380092 in network mk-no-preload-380092
	I0912 23:01:42.558649   62943 main.go:141] libmachine: (no-preload-380092) DBG | I0912 23:01:42.558579   63470 retry.go:31] will retry after 653.024741ms: waiting for machine to come up
	I0912 23:01:43.213349   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:43.213963   62943 main.go:141] libmachine: (no-preload-380092) DBG | unable to find current IP address of domain no-preload-380092 in network mk-no-preload-380092
	I0912 23:01:43.213991   62943 main.go:141] libmachine: (no-preload-380092) DBG | I0912 23:01:43.213926   63470 retry.go:31] will retry after 936.091256ms: waiting for machine to come up
	I0912 23:01:44.152459   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:44.152892   62943 main.go:141] libmachine: (no-preload-380092) DBG | unable to find current IP address of domain no-preload-380092 in network mk-no-preload-380092
	I0912 23:01:44.152931   62943 main.go:141] libmachine: (no-preload-380092) DBG | I0912 23:01:44.152841   63470 retry.go:31] will retry after 947.677491ms: waiting for machine to come up
	I0912 23:01:45.102330   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:45.102777   62943 main.go:141] libmachine: (no-preload-380092) DBG | unable to find current IP address of domain no-preload-380092 in network mk-no-preload-380092
	I0912 23:01:45.102803   62943 main.go:141] libmachine: (no-preload-380092) DBG | I0912 23:01:45.102730   63470 retry.go:31] will retry after 1.076341568s: waiting for machine to come up
	I0912 23:01:46.181138   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:46.181600   62943 main.go:141] libmachine: (no-preload-380092) DBG | unable to find current IP address of domain no-preload-380092 in network mk-no-preload-380092
	I0912 23:01:46.181659   62943 main.go:141] libmachine: (no-preload-380092) DBG | I0912 23:01:46.181529   63470 retry.go:31] will retry after 1.256599307s: waiting for machine to come up
	I0912 23:01:45.728604   62386 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.065648968s)
	I0912 23:01:45.728636   62386 crio.go:469] duration metric: took 3.065759694s to extract the tarball
	I0912 23:01:45.728646   62386 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0912 23:01:45.770020   62386 ssh_runner.go:195] Run: sudo crictl images --output json
	I0912 23:01:45.803238   62386 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0912 23:01:45.803263   62386 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0912 23:01:45.803356   62386 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0912 23:01:45.803393   62386 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0912 23:01:45.803411   62386 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0912 23:01:45.803433   62386 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0912 23:01:45.803482   62386 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0912 23:01:45.803487   62386 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0912 23:01:45.803358   62386 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 23:01:45.803456   62386 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0912 23:01:45.805495   62386 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0912 23:01:45.805522   62386 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0912 23:01:45.805549   62386 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0912 23:01:45.805538   62386 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0912 23:01:45.805583   62386 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0912 23:01:45.805500   62386 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0912 23:01:45.805498   62386 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 23:01:45.805503   62386 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0912 23:01:46.036001   62386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0912 23:01:46.053248   62386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0912 23:01:46.053339   62386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0912 23:01:46.055973   62386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0912 23:01:46.070206   62386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0912 23:01:46.079999   62386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0912 23:01:46.109937   62386 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0912 23:01:46.109989   62386 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0912 23:01:46.110039   62386 ssh_runner.go:195] Run: which crictl
	I0912 23:01:46.162798   62386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0912 23:01:46.224302   62386 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0912 23:01:46.224345   62386 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0912 23:01:46.224375   62386 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0912 23:01:46.224392   62386 ssh_runner.go:195] Run: which crictl
	I0912 23:01:46.224413   62386 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0912 23:01:46.224418   62386 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0912 23:01:46.224452   62386 ssh_runner.go:195] Run: which crictl
	I0912 23:01:46.224451   62386 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0912 23:01:46.224495   62386 ssh_runner.go:195] Run: which crictl
	I0912 23:01:46.224510   62386 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0912 23:01:46.224529   62386 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0912 23:01:46.224551   62386 ssh_runner.go:195] Run: which crictl
	I0912 23:01:46.243459   62386 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0912 23:01:46.243561   62386 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0912 23:01:46.243584   62386 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0912 23:01:46.243596   62386 ssh_runner.go:195] Run: which crictl
	I0912 23:01:46.243619   62386 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0912 23:01:46.243648   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0912 23:01:46.243658   62386 ssh_runner.go:195] Run: which crictl
	I0912 23:01:46.243619   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0912 23:01:46.243504   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0912 23:01:46.243737   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0912 23:01:46.243786   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0912 23:01:46.347085   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0912 23:01:46.347138   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0912 23:01:46.347184   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0912 23:01:46.354548   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0912 23:01:46.354548   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0912 23:01:46.354623   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0912 23:01:46.354658   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0912 23:01:46.490548   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0912 23:01:46.490655   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0912 23:01:46.490664   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0912 23:01:46.519541   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0912 23:01:46.519572   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0912 23:01:46.519583   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0912 23:01:46.519631   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0912 23:01:46.650941   62386 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0912 23:01:46.651102   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0912 23:01:46.651115   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0912 23:01:46.665864   62386 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0912 23:01:46.669346   62386 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0912 23:01:46.669393   62386 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0912 23:01:46.669433   62386 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0912 23:01:46.713909   62386 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0912 23:01:46.713928   62386 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0912 23:01:46.947952   62386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 23:01:47.093308   62386 cache_images.go:92] duration metric: took 1.29002863s to LoadCachedImages
	W0912 23:01:47.093414   62386 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
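	In the block above each "podman image inspect --format {{.Id}}" probe fails, so the image is marked as needing transfer, any stale tag is removed with crictl rmi, and the on-disk image cache is tried instead; here that cache file is missing, hence the warning. A sketch of the existence probe alone, assuming podman is installed on the node; the helper name is illustrative.

	// Sketch: ask podman whether an image is already in the local store,
	// mirroring the "podman image inspect --format {{.Id}}" probes in the log.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func imagePresent(ref string) (string, bool) {
		out, err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", ref).Output()
		if err != nil {
			return "", false // non-zero exit: image not in the store
		}
		return strings.TrimSpace(string(out)), true
	}

	func main() {
		for _, ref := range []string{
			"registry.k8s.io/pause:3.2",
			"registry.k8s.io/etcd:3.4.13-0",
		} {
			if id, ok := imagePresent(ref); ok {
				fmt.Printf("%s present as %s\n", ref, id)
			} else {
				fmt.Printf("%s needs transfer\n", ref)
			}
		}
	}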
	I0912 23:01:47.093432   62386 kubeadm.go:934] updating node { 192.168.61.69 8443 v1.20.0 crio true true} ...
	I0912 23:01:47.093567   62386 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-642238 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.69
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-642238 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0912 23:01:47.093677   62386 ssh_runner.go:195] Run: crio config
	I0912 23:01:47.140625   62386 cni.go:84] Creating CNI manager for ""
	I0912 23:01:47.140651   62386 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0912 23:01:47.140665   62386 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0912 23:01:47.140683   62386 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.69 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-642238 NodeName:old-k8s-version-642238 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.69"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.69 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0912 23:01:47.140848   62386 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.69
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-642238"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.69
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.69"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0912 23:01:47.140918   62386 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0912 23:01:47.151096   62386 binaries.go:44] Found k8s binaries, skipping transfer
	I0912 23:01:47.151174   62386 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0912 23:01:47.161100   62386 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0912 23:01:47.178267   62386 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0912 23:01:47.196468   62386 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0912 23:01:47.215215   62386 ssh_runner.go:195] Run: grep 192.168.61.69	control-plane.minikube.internal$ /etc/hosts
	I0912 23:01:47.219835   62386 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.69	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0912 23:01:47.234386   62386 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 23:01:47.374152   62386 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0912 23:01:47.394130   62386 certs.go:68] Setting up /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238 for IP: 192.168.61.69
	I0912 23:01:47.394155   62386 certs.go:194] generating shared ca certs ...
	I0912 23:01:47.394174   62386 certs.go:226] acquiring lock for ca certs: {Name:mk5e19f2c2757edc3fcb6b43a39efee0885b349c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 23:01:47.394399   62386 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.key
	I0912 23:01:47.394459   62386 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.key
	I0912 23:01:47.394474   62386 certs.go:256] generating profile certs ...
	I0912 23:01:47.394591   62386 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238/client.key
	I0912 23:01:47.394663   62386 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238/apiserver.key.fcb0a37b
	I0912 23:01:47.394713   62386 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238/proxy-client.key
	I0912 23:01:47.394881   62386 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083.pem (1338 bytes)
	W0912 23:01:47.394922   62386 certs.go:480] ignoring /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083_empty.pem, impossibly tiny 0 bytes
	I0912 23:01:47.394936   62386 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem (1679 bytes)
	I0912 23:01:47.394980   62386 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem (1082 bytes)
	I0912 23:01:47.395016   62386 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem (1123 bytes)
	I0912 23:01:47.395050   62386 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem (1679 bytes)
	I0912 23:01:47.395103   62386 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem (1708 bytes)
	I0912 23:01:47.396058   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0912 23:01:47.436356   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0912 23:01:47.470442   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0912 23:01:47.496440   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0912 23:01:47.522541   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0912 23:01:47.547406   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0912 23:01:47.575687   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0912 23:01:47.602110   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0912 23:01:47.628233   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0912 23:01:47.659161   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083.pem --> /usr/share/ca-certificates/13083.pem (1338 bytes)
	I0912 23:01:47.698813   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem --> /usr/share/ca-certificates/130832.pem (1708 bytes)
	I0912 23:01:47.722494   62386 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0912 23:01:47.739479   62386 ssh_runner.go:195] Run: openssl version
	I0912 23:01:47.745476   62386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130832.pem && ln -fs /usr/share/ca-certificates/130832.pem /etc/ssl/certs/130832.pem"
	I0912 23:01:47.756396   62386 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130832.pem
	I0912 23:01:47.760904   62386 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 12 21:47 /usr/share/ca-certificates/130832.pem
	I0912 23:01:47.760983   62386 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130832.pem
	I0912 23:01:47.767122   62386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130832.pem /etc/ssl/certs/3ec20f2e.0"
	I0912 23:01:47.778372   62386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0912 23:01:47.789359   62386 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0912 23:01:47.794138   62386 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 12 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0912 23:01:47.794205   62386 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0912 23:01:47.799780   62386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0912 23:01:47.810735   62386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13083.pem && ln -fs /usr/share/ca-certificates/13083.pem /etc/ssl/certs/13083.pem"
	I0912 23:01:47.821361   62386 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13083.pem
	I0912 23:01:47.825785   62386 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 12 21:47 /usr/share/ca-certificates/13083.pem
	I0912 23:01:47.825848   62386 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13083.pem
	I0912 23:01:47.832591   62386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13083.pem /etc/ssl/certs/51391683.0"
	I0912 23:01:47.844637   62386 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0912 23:01:47.849313   62386 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0912 23:01:47.855337   62386 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0912 23:01:47.861492   62386 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0912 23:01:47.868028   62386 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0912 23:01:47.874215   62386 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0912 23:01:47.880279   62386 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
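	Each "openssl x509 -noout -in ... -checkend 86400" call above asks whether the certificate will still be valid 24 hours from now. The same check expressed with Go's crypto/x509, for illustration only; the path is one of the files named in the log, and this is not how minikube itself performs the check.

	// Sketch: report whether a PEM certificate expires within a given duration.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		// Equivalent to openssl's -checkend: does validity end before now+d?
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Println("check failed:", err)
			return
		}
		fmt.Println("expires within 24h:", soon)
	}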
	I0912 23:01:47.886478   62386 kubeadm.go:392] StartCluster: {Name:old-k8s-version-642238 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-642238 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.69 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 23:01:47.886579   62386 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0912 23:01:47.886665   62386 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0912 23:01:47.929887   62386 cri.go:89] found id: ""
	I0912 23:01:47.929965   62386 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0912 23:01:47.940988   62386 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0912 23:01:47.941014   62386 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0912 23:01:47.941071   62386 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0912 23:01:47.951357   62386 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0912 23:01:47.952314   62386 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-642238" does not appear in /home/jenkins/minikube-integration/19616-5891/kubeconfig
	I0912 23:01:47.952929   62386 kubeconfig.go:62] /home/jenkins/minikube-integration/19616-5891/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-642238" cluster setting kubeconfig missing "old-k8s-version-642238" context setting]
	I0912 23:01:47.953869   62386 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/kubeconfig: {Name:mkffb46c3e9d2b8baebc7237b48bf41bccf1a52c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 23:01:47.961244   62386 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0912 23:01:47.973427   62386 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.69
	I0912 23:01:47.973462   62386 kubeadm.go:1160] stopping kube-system containers ...
	I0912 23:01:47.973476   62386 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0912 23:01:47.973530   62386 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0912 23:01:48.008401   62386 cri.go:89] found id: ""
	I0912 23:01:48.008479   62386 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0912 23:01:48.024605   62386 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0912 23:01:48.034256   62386 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0912 23:01:48.034282   62386 kubeadm.go:157] found existing configuration files:
	
	I0912 23:01:48.034341   62386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0912 23:01:48.043468   62386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0912 23:01:48.043533   62386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0912 23:01:48.053241   62386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0912 23:01:48.062653   62386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0912 23:01:48.062728   62386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0912 23:01:48.073213   62386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0912 23:01:48.085060   62386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0912 23:01:48.085136   62386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0912 23:01:48.095722   62386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0912 23:01:48.105099   62386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0912 23:01:48.105169   62386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0912 23:01:48.114362   62386 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0912 23:01:48.123856   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:01:48.250258   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:01:48.824441   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:01:49.045340   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:01:49.151009   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
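	The five commands above run individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated /var/tmp/minikube/kubeadm.yaml instead of a full kubeadm init, which is how the control plane is rebuilt during a restart. A compact sketch of the same sequence; the phase names and config path come straight from the log, while the wrapper is hypothetical.

	// Sketch: drive the same kubeadm phases the log shows, against the generated config.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func kubeadmPhase(phase ...string) error {
		args := append(append([]string{"init", "phase"}, phase...),
			"--config", "/var/tmp/minikube/kubeadm.yaml")
		out, err := exec.Command("kubeadm", args...).CombinedOutput()
		fmt.Print(string(out))
		return err
	}

	func main() {
		for _, p := range [][]string{
			{"certs", "all"},
			{"kubeconfig", "all"},
			{"kubelet-start"},
			{"control-plane", "all"},
			{"etcd", "local"},
		} {
			if err := kubeadmPhase(p...); err != nil {
				fmt.Println("phase failed:", err)
				return
			}
		}
	}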
	I0912 23:01:49.245161   62386 api_server.go:52] waiting for apiserver process to appear ...
	I0912 23:01:49.245239   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:45.927266   61904 pod_ready.go:93] pod "kube-apiserver-embed-certs-378112" in "kube-system" namespace has status "Ready":"True"
	I0912 23:01:45.927293   61904 pod_ready.go:82] duration metric: took 4.007240345s for pod "kube-apiserver-embed-certs-378112" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:45.927307   61904 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-378112" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:46.456083   61904 pod_ready.go:93] pod "kube-controller-manager-embed-certs-378112" in "kube-system" namespace has status "Ready":"True"
	I0912 23:01:46.456111   61904 pod_ready.go:82] duration metric: took 528.7947ms for pod "kube-controller-manager-embed-certs-378112" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:46.456125   61904 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fvbbq" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:46.461632   61904 pod_ready.go:93] pod "kube-proxy-fvbbq" in "kube-system" namespace has status "Ready":"True"
	I0912 23:01:46.461659   61904 pod_ready.go:82] duration metric: took 5.526604ms for pod "kube-proxy-fvbbq" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:46.461673   61904 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-378112" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:46.467128   61904 pod_ready.go:93] pod "kube-scheduler-embed-certs-378112" in "kube-system" namespace has status "Ready":"True"
	I0912 23:01:46.467160   61904 pod_ready.go:82] duration metric: took 5.477201ms for pod "kube-scheduler-embed-certs-378112" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:46.467174   61904 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:48.474736   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:01:50.474846   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
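	The pod_ready lines in this stream poll each system-critical pod until its Ready condition turns True; metrics-server-6867b74b74-kvpqz is still False at this point, which is what later turns into the AddonExistsAfterStop/UserAppExistsAfterStop timeouts. A hedged client-go sketch of that kind of wait, assuming a kubeconfig at the default location; it is the same idea, not minikube's pod_ready.go.

	// Sketch: wait for a pod's Ready condition using client-go.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(6 * time.Minute)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "metrics-server-6867b74b74-kvpqz", metav1.GetOptions{})
			if err == nil && podReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for pod to be Ready")
	}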
	I0912 23:01:47.439687   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:47.440281   62943 main.go:141] libmachine: (no-preload-380092) DBG | unable to find current IP address of domain no-preload-380092 in network mk-no-preload-380092
	I0912 23:01:47.440312   62943 main.go:141] libmachine: (no-preload-380092) DBG | I0912 23:01:47.440140   63470 retry.go:31] will retry after 1.600662248s: waiting for machine to come up
	I0912 23:01:49.042962   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:49.043536   62943 main.go:141] libmachine: (no-preload-380092) DBG | unable to find current IP address of domain no-preload-380092 in network mk-no-preload-380092
	I0912 23:01:49.043569   62943 main.go:141] libmachine: (no-preload-380092) DBG | I0912 23:01:49.043481   63470 retry.go:31] will retry after 2.53148931s: waiting for machine to come up
	I0912 23:01:51.577526   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:51.578022   62943 main.go:141] libmachine: (no-preload-380092) DBG | unable to find current IP address of domain no-preload-380092 in network mk-no-preload-380092
	I0912 23:01:51.578139   62943 main.go:141] libmachine: (no-preload-380092) DBG | I0912 23:01:51.577965   63470 retry.go:31] will retry after 2.603355474s: waiting for machine to come up
	I0912 23:01:49.745632   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:50.245841   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:50.746368   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:51.245741   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:51.745708   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:52.246143   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:52.745402   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:53.245790   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:53.745965   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:54.246368   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:52.973232   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:01:54.974788   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:01:54.183119   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:54.183702   62943 main.go:141] libmachine: (no-preload-380092) DBG | unable to find current IP address of domain no-preload-380092 in network mk-no-preload-380092
	I0912 23:01:54.183745   62943 main.go:141] libmachine: (no-preload-380092) DBG | I0912 23:01:54.183655   63470 retry.go:31] will retry after 2.867321114s: waiting for machine to come up
	I0912 23:01:58.698415   61354 start.go:364] duration metric: took 53.897667909s to acquireMachinesLock for "default-k8s-diff-port-702201"
	I0912 23:01:58.698489   61354 start.go:96] Skipping create...Using existing machine configuration
	I0912 23:01:58.698499   61354 fix.go:54] fixHost starting: 
	I0912 23:01:58.698908   61354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:01:58.698938   61354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:01:58.716203   61354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42739
	I0912 23:01:58.716658   61354 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:01:58.717117   61354 main.go:141] libmachine: Using API Version  1
	I0912 23:01:58.717141   61354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:01:58.717489   61354 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:01:58.717717   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .DriverName
	I0912 23:01:58.717873   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetState
	I0912 23:01:58.719787   61354 fix.go:112] recreateIfNeeded on default-k8s-diff-port-702201: state=Stopped err=<nil>
	I0912 23:01:58.719810   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .DriverName
	W0912 23:01:58.719957   61354 fix.go:138] unexpected machine state, will restart: <nil>
	I0912 23:01:58.723531   61354 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-702201" ...
	I0912 23:01:54.745915   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:55.245740   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:55.745435   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:56.245679   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:56.745309   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:57.246032   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:57.745362   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:58.245409   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:58.745470   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:59.245307   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:57.052229   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:57.052788   62943 main.go:141] libmachine: (no-preload-380092) Found IP for machine: 192.168.50.253
	I0912 23:01:57.052816   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has current primary IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:57.052822   62943 main.go:141] libmachine: (no-preload-380092) Reserving static IP address...
	I0912 23:01:57.053251   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "no-preload-380092", mac: "52:54:00:d6:80:d3", ip: "192.168.50.253"} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:01:57.053275   62943 main.go:141] libmachine: (no-preload-380092) Reserved static IP address: 192.168.50.253
	I0912 23:01:57.053285   62943 main.go:141] libmachine: (no-preload-380092) DBG | skip adding static IP to network mk-no-preload-380092 - found existing host DHCP lease matching {name: "no-preload-380092", mac: "52:54:00:d6:80:d3", ip: "192.168.50.253"}
	I0912 23:01:57.053299   62943 main.go:141] libmachine: (no-preload-380092) DBG | Getting to WaitForSSH function...
	I0912 23:01:57.053330   62943 main.go:141] libmachine: (no-preload-380092) Waiting for SSH to be available...
	I0912 23:01:57.055927   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:57.056326   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:01:57.056407   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:57.056569   62943 main.go:141] libmachine: (no-preload-380092) DBG | Using SSH client type: external
	I0912 23:01:57.056583   62943 main.go:141] libmachine: (no-preload-380092) DBG | Using SSH private key: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/no-preload-380092/id_rsa (-rw-------)
	I0912 23:01:57.056610   62943 main.go:141] libmachine: (no-preload-380092) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.253 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19616-5891/.minikube/machines/no-preload-380092/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0912 23:01:57.056622   62943 main.go:141] libmachine: (no-preload-380092) DBG | About to run SSH command:
	I0912 23:01:57.056631   62943 main.go:141] libmachine: (no-preload-380092) DBG | exit 0
	I0912 23:01:57.181479   62943 main.go:141] libmachine: (no-preload-380092) DBG | SSH cmd err, output: <nil>: 
	I0912 23:01:57.181842   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetConfigRaw
	I0912 23:01:57.182453   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetIP
	I0912 23:01:57.185257   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:57.185670   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:01:57.185709   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:57.185982   62943 profile.go:143] Saving config to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/no-preload-380092/config.json ...
	I0912 23:01:57.186232   62943 machine.go:93] provisionDockerMachine start ...
	I0912 23:01:57.186254   62943 main.go:141] libmachine: (no-preload-380092) Calling .DriverName
	I0912 23:01:57.186468   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHHostname
	I0912 23:01:57.188948   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:57.189336   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:01:57.189385   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:57.189533   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHPort
	I0912 23:01:57.189705   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHKeyPath
	I0912 23:01:57.189834   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHKeyPath
	I0912 23:01:57.189954   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHUsername
	I0912 23:01:57.190111   62943 main.go:141] libmachine: Using SSH client type: native
	I0912 23:01:57.190349   62943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.50.253 22 <nil> <nil>}
	I0912 23:01:57.190367   62943 main.go:141] libmachine: About to run SSH command:
	hostname
	I0912 23:01:57.293765   62943 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0912 23:01:57.293791   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetMachineName
	I0912 23:01:57.294045   62943 buildroot.go:166] provisioning hostname "no-preload-380092"
	I0912 23:01:57.294078   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetMachineName
	I0912 23:01:57.294327   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHHostname
	I0912 23:01:57.297031   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:57.297414   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:01:57.297437   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:57.297661   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHPort
	I0912 23:01:57.297840   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHKeyPath
	I0912 23:01:57.298018   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHKeyPath
	I0912 23:01:57.298210   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHUsername
	I0912 23:01:57.298412   62943 main.go:141] libmachine: Using SSH client type: native
	I0912 23:01:57.298635   62943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.50.253 22 <nil> <nil>}
	I0912 23:01:57.298655   62943 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-380092 && echo "no-preload-380092" | sudo tee /etc/hostname
	I0912 23:01:57.421188   62943 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-380092
	
	I0912 23:01:57.421215   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHHostname
	I0912 23:01:57.424496   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:57.424928   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:01:57.424965   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:57.425156   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHPort
	I0912 23:01:57.425396   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHKeyPath
	I0912 23:01:57.425591   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHKeyPath
	I0912 23:01:57.425761   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHUsername
	I0912 23:01:57.425948   62943 main.go:141] libmachine: Using SSH client type: native
	I0912 23:01:57.426157   62943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.50.253 22 <nil> <nil>}
	I0912 23:01:57.426183   62943 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-380092' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-380092/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-380092' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0912 23:01:57.537580   62943 main.go:141] libmachine: SSH cmd err, output: <nil>: 
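
The shell snippet above only touches /etc/hosts when the hostname mapping is missing. A minimal Go sketch of the same idempotent edit, purely for illustration (minikube performs this with the shell heredoc shown above, not with this helper):

package main

import (
	"fmt"
	"strings"
)

// ensureHostsEntry returns hosts unchanged if some line already ends with
// the machine name; otherwise it rewrites the 127.0.1.1 line or appends one.
func ensureHostsEntry(hosts, name string) string {
	lines := strings.Split(hosts, "\n")
	for _, l := range lines {
		f := strings.Fields(l)
		if len(f) >= 2 && f[len(f)-1] == name {
			return hosts // mapping already present, nothing to do
		}
	}
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + name
			return strings.Join(lines, "\n")
		}
	}
	return hosts + "\n127.0.1.1 " + name
}

func main() {
	fmt.Println(ensureHostsEntry("127.0.0.1 localhost\n127.0.1.1 minikube", "no-preload-380092"))
}
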
	I0912 23:01:57.537607   62943 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19616-5891/.minikube CaCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19616-5891/.minikube}
	I0912 23:01:57.537674   62943 buildroot.go:174] setting up certificates
	I0912 23:01:57.537683   62943 provision.go:84] configureAuth start
	I0912 23:01:57.537694   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetMachineName
	I0912 23:01:57.537951   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetIP
	I0912 23:01:57.540791   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:57.541288   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:01:57.541315   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:57.541519   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHHostname
	I0912 23:01:57.544027   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:57.544410   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:01:57.544430   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:57.544605   62943 provision.go:143] copyHostCerts
	I0912 23:01:57.544677   62943 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem, removing ...
	I0912 23:01:57.544694   62943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem
	I0912 23:01:57.544757   62943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem (1679 bytes)
	I0912 23:01:57.544880   62943 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem, removing ...
	I0912 23:01:57.544892   62943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem
	I0912 23:01:57.544919   62943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem (1082 bytes)
	I0912 23:01:57.545011   62943 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem, removing ...
	I0912 23:01:57.545020   62943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem
	I0912 23:01:57.545048   62943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem (1123 bytes)
	I0912 23:01:57.545127   62943 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem org=jenkins.no-preload-380092 san=[127.0.0.1 192.168.50.253 localhost minikube no-preload-380092]
	I0912 23:01:58.077226   62943 provision.go:177] copyRemoteCerts
	I0912 23:01:58.077299   62943 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0912 23:01:58.077350   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHHostname
	I0912 23:01:58.080045   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:58.080404   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:01:58.080433   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:58.080691   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHPort
	I0912 23:01:58.080930   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHKeyPath
	I0912 23:01:58.081101   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHUsername
	I0912 23:01:58.081281   62943 sshutil.go:53] new ssh client: &{IP:192.168.50.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/no-preload-380092/id_rsa Username:docker}
	I0912 23:01:58.164075   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0912 23:01:58.188273   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0912 23:01:58.211076   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0912 23:01:58.233745   62943 provision.go:87] duration metric: took 695.915392ms to configureAuth
	I0912 23:01:58.233788   62943 buildroot.go:189] setting minikube options for container-runtime
	I0912 23:01:58.233964   62943 config.go:182] Loaded profile config "no-preload-380092": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 23:01:58.234061   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHHostname
	I0912 23:01:58.236576   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:58.236915   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:01:58.236948   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:58.237165   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHPort
	I0912 23:01:58.237453   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHKeyPath
	I0912 23:01:58.237666   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHKeyPath
	I0912 23:01:58.237848   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHUsername
	I0912 23:01:58.238014   62943 main.go:141] libmachine: Using SSH client type: native
	I0912 23:01:58.238172   62943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.50.253 22 <nil> <nil>}
	I0912 23:01:58.238187   62943 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0912 23:01:58.461160   62943 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0912 23:01:58.461185   62943 machine.go:96] duration metric: took 1.274940476s to provisionDockerMachine
	I0912 23:01:58.461196   62943 start.go:293] postStartSetup for "no-preload-380092" (driver="kvm2")
	I0912 23:01:58.461206   62943 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0912 23:01:58.461220   62943 main.go:141] libmachine: (no-preload-380092) Calling .DriverName
	I0912 23:01:58.461531   62943 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0912 23:01:58.461560   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHHostname
	I0912 23:01:58.464374   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:58.464862   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:01:58.464892   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:58.465044   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHPort
	I0912 23:01:58.465280   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHKeyPath
	I0912 23:01:58.465462   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHUsername
	I0912 23:01:58.465639   62943 sshutil.go:53] new ssh client: &{IP:192.168.50.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/no-preload-380092/id_rsa Username:docker}
	I0912 23:01:58.553080   62943 ssh_runner.go:195] Run: cat /etc/os-release
	I0912 23:01:58.557294   62943 info.go:137] Remote host: Buildroot 2023.02.9
	I0912 23:01:58.557319   62943 filesync.go:126] Scanning /home/jenkins/minikube-integration/19616-5891/.minikube/addons for local assets ...
	I0912 23:01:58.557395   62943 filesync.go:126] Scanning /home/jenkins/minikube-integration/19616-5891/.minikube/files for local assets ...
	I0912 23:01:58.557494   62943 filesync.go:149] local asset: /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem -> 130832.pem in /etc/ssl/certs
	I0912 23:01:58.557647   62943 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0912 23:01:58.566823   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem --> /etc/ssl/certs/130832.pem (1708 bytes)
	I0912 23:01:58.590357   62943 start.go:296] duration metric: took 129.147272ms for postStartSetup
	I0912 23:01:58.590401   62943 fix.go:56] duration metric: took 19.008109979s for fixHost
	I0912 23:01:58.590425   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHHostname
	I0912 23:01:58.593131   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:58.593490   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:01:58.593519   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:58.593693   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHPort
	I0912 23:01:58.593894   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHKeyPath
	I0912 23:01:58.594075   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHKeyPath
	I0912 23:01:58.594242   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHUsername
	I0912 23:01:58.594415   62943 main.go:141] libmachine: Using SSH client type: native
	I0912 23:01:58.594612   62943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.50.253 22 <nil> <nil>}
	I0912 23:01:58.594625   62943 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0912 23:01:58.698233   62943 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726182118.655051061
	
	I0912 23:01:58.698261   62943 fix.go:216] guest clock: 1726182118.655051061
	I0912 23:01:58.698271   62943 fix.go:229] Guest: 2024-09-12 23:01:58.655051061 +0000 UTC Remote: 2024-09-12 23:01:58.590406505 +0000 UTC m=+96.733899188 (delta=64.644556ms)
	I0912 23:01:58.698327   62943 fix.go:200] guest clock delta is within tolerance: 64.644556ms
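
The two fix.go lines above compare the guest and host wall clocks and accept the 64.644556ms skew. A tiny self-contained sketch of that comparison, using the timestamps from the log; the 2-second tolerance is an assumption for illustration only, not minikube's configured value:

package main

import (
	"fmt"
	"time"
)

// withinClockTolerance reports whether the guest/host clock delta is small
// enough to skip forcing a time sync on the VM.
func withinClockTolerance(guest, host time.Time, tolerance time.Duration) bool {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta <= tolerance
}

func main() {
	host := time.Date(2024, 9, 12, 23, 1, 58, 590406505, time.UTC)  // "Remote" timestamp from the log
	guest := time.Date(2024, 9, 12, 23, 1, 58, 655051061, time.UTC) // "Guest" timestamp from the log
	fmt.Println(withinClockTolerance(guest, host, 2*time.Second))   // true: delta is about 64.644556ms
}
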
	I0912 23:01:58.698333   62943 start.go:83] releasing machines lock for "no-preload-380092", held for 19.116080043s
	I0912 23:01:58.698358   62943 main.go:141] libmachine: (no-preload-380092) Calling .DriverName
	I0912 23:01:58.698635   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetIP
	I0912 23:01:58.701676   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:58.702052   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:01:58.702088   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:58.702329   62943 main.go:141] libmachine: (no-preload-380092) Calling .DriverName
	I0912 23:01:58.702865   62943 main.go:141] libmachine: (no-preload-380092) Calling .DriverName
	I0912 23:01:58.703120   62943 main.go:141] libmachine: (no-preload-380092) Calling .DriverName
	I0912 23:01:58.703279   62943 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0912 23:01:58.703337   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHHostname
	I0912 23:01:58.703392   62943 ssh_runner.go:195] Run: cat /version.json
	I0912 23:01:58.703419   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHHostname
	I0912 23:01:58.706149   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:58.706381   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:58.706704   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:01:58.706773   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:58.706785   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHPort
	I0912 23:01:58.706804   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:01:58.706831   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:58.706976   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHKeyPath
	I0912 23:01:58.707009   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHPort
	I0912 23:01:58.707142   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHUsername
	I0912 23:01:58.707308   62943 sshutil.go:53] new ssh client: &{IP:192.168.50.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/no-preload-380092/id_rsa Username:docker}
	I0912 23:01:58.707323   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHKeyPath
	I0912 23:01:58.707505   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHUsername
	I0912 23:01:58.707644   62943 sshutil.go:53] new ssh client: &{IP:192.168.50.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/no-preload-380092/id_rsa Username:docker}
	I0912 23:01:58.822704   62943 ssh_runner.go:195] Run: systemctl --version
	I0912 23:01:58.828592   62943 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0912 23:01:58.970413   62943 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0912 23:01:58.976303   62943 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0912 23:01:58.976384   62943 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0912 23:01:58.991593   62943 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0912 23:01:58.991628   62943 start.go:495] detecting cgroup driver to use...
	I0912 23:01:58.991695   62943 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0912 23:01:59.007839   62943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0912 23:01:59.021107   62943 docker.go:217] disabling cri-docker service (if available) ...
	I0912 23:01:59.021176   62943 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0912 23:01:59.038570   62943 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0912 23:01:59.055392   62943 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0912 23:01:59.183649   62943 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0912 23:01:59.364825   62943 docker.go:233] disabling docker service ...
	I0912 23:01:59.364889   62943 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0912 23:01:59.382320   62943 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0912 23:01:59.397405   62943 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0912 23:01:59.528989   62943 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0912 23:01:59.653994   62943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0912 23:01:59.671437   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0912 23:01:59.693024   62943 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0912 23:01:59.693088   62943 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:59.704385   62943 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0912 23:01:59.704451   62943 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:59.715304   62943 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:59.726058   62943 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:59.736746   62943 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0912 23:01:59.749178   62943 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:59.761776   62943 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:59.779863   62943 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:59.790713   62943 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0912 23:01:59.801023   62943 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0912 23:01:59.801093   62943 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0912 23:01:59.815237   62943 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
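
The three commands above form a fallback chain: read the bridge-netfilter sysctl, load br_netfilter if that read fails, then enable IPv4 forwarding. A rough local sketch of the same sequence (minikube runs these over SSH via ssh_runner on the guest; the helper below is only an illustration):

package main

import (
	"fmt"
	"os/exec"
)

// ensureBridgeNetfilter loads br_netfilter when the bridge-nf sysctl is
// missing, then turns on IPv4 forwarding, mirroring the logged fallback.
func ensureBridgeNetfilter() error {
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		if err := exec.Command("sudo", "modprobe", "br_netfilter"); err != nil && err.Run() != nil {
			return fmt.Errorf("modprobe br_netfilter failed")
		}
	}
	return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Println("netfilter setup failed:", err)
	}
}
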
	I0912 23:01:59.825967   62943 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 23:01:59.952175   62943 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0912 23:02:00.050201   62943 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0912 23:02:00.050334   62943 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0912 23:02:00.055275   62943 start.go:563] Will wait 60s for crictl version
	I0912 23:02:00.055338   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:02:00.060075   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0912 23:02:00.100842   62943 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0912 23:02:00.100932   62943 ssh_runner.go:195] Run: crio --version
	I0912 23:02:00.127399   62943 ssh_runner.go:195] Run: crio --version
	I0912 23:02:00.161143   62943 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0912 23:01:57.474156   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:01:59.474331   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:00.162519   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetIP
	I0912 23:02:00.165323   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:02:00.165776   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:02:00.165806   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:02:00.166046   62943 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0912 23:02:00.170494   62943 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0912 23:02:00.186142   62943 kubeadm.go:883] updating cluster {Name:no-preload-380092 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-380092 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.253 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0912 23:02:00.186296   62943 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0912 23:02:00.186348   62943 ssh_runner.go:195] Run: sudo crictl images --output json
	I0912 23:02:00.221527   62943 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0912 23:02:00.221550   62943 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0912 23:02:00.221607   62943 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 23:02:00.221619   62943 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0912 23:02:00.221679   62943 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0912 23:02:00.221679   62943 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0912 23:02:00.221699   62943 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0912 23:02:00.221661   62943 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0912 23:02:00.221763   62943 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0912 23:02:00.221763   62943 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0912 23:02:00.223203   62943 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0912 23:02:00.223215   62943 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 23:02:00.223269   62943 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0912 23:02:00.223278   62943 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0912 23:02:00.223286   62943 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0912 23:02:00.223208   62943 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0912 23:02:00.223363   62943 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0912 23:02:00.223381   62943 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0912 23:02:00.451698   62943 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I0912 23:02:00.459278   62943 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I0912 23:02:00.459739   62943 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0912 23:02:00.463935   62943 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I0912 23:02:00.464136   62943 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I0912 23:02:00.468507   62943 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I0912 23:02:00.503388   62943 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0912 23:02:00.536792   62943 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I0912 23:02:00.536840   62943 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I0912 23:02:00.536897   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:02:00.599938   62943 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I0912 23:02:00.599985   62943 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I0912 23:02:00.600030   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:02:00.683783   62943 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0912 23:02:00.683826   62943 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0912 23:02:00.683852   62943 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I0912 23:02:00.683872   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:02:00.683883   62943 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I0912 23:02:00.683908   62943 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I0912 23:02:00.683939   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:02:00.683950   62943 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I0912 23:02:00.683886   62943 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I0912 23:02:00.683984   62943 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0912 23:02:00.684075   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:02:00.684008   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:02:00.736368   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0912 23:02:00.736438   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0912 23:02:00.736522   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0912 23:02:00.736549   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0912 23:02:00.736597   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0912 23:02:00.736620   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0912 23:02:00.864642   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0912 23:02:00.864677   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0912 23:02:00.864802   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0912 23:02:00.864856   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0912 23:02:00.869964   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0912 23:02:00.869998   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0912 23:02:00.996762   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0912 23:02:00.999239   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0912 23:02:00.999239   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0912 23:02:01.000760   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0912 23:02:01.000846   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0912 23:02:01.000895   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0912 23:02:01.101860   62943 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I0912 23:02:01.102057   62943 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0912 23:02:01.132743   62943 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0912 23:02:01.132926   62943 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0912 23:02:01.134809   62943 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I0912 23:02:01.134911   62943 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0912 23:02:01.135089   62943 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I0912 23:02:01.135167   62943 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I0912 23:02:01.143459   62943 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0912 23:02:01.143487   62943 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I0912 23:02:01.143503   62943 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0912 23:02:01.143510   62943 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0912 23:02:01.143549   62943 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0912 23:02:01.143584   62943 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0912 23:02:01.143584   62943 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I0912 23:02:01.147907   62943 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I0912 23:02:01.147935   62943 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I0912 23:02:01.148079   62943 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I0912 23:02:01.312549   62943 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
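
Several of the stat/copy lines above end in "copy: skipping ... (exists)": a cached image tarball is only pushed to the guest when it is absent or differs. A small sketch of that decision, with a size comparison standing in for the full stat -c "%s %y" check over SSH (illustrative only, not the actual ssh_runner code):

package main

import (
	"fmt"
	"os"
)

// needsCopy reports whether an image tarball still has to be transferred:
// copy when the destination is missing or its size differs from the source.
func needsCopy(dstPath string, srcSize int64) bool {
	info, err := os.Stat(dstPath)
	if err != nil {
		return true // destination missing, must copy
	}
	return info.Size() != srcSize
}

func main() {
	fmt.Println(needsCopy("/var/lib/minikube/images/kube-scheduler_v1.31.1", 1<<26))
}
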
	I0912 23:01:58.724795   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .Start
	I0912 23:01:58.724966   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Ensuring networks are active...
	I0912 23:01:58.725864   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Ensuring network default is active
	I0912 23:01:58.726231   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Ensuring network mk-default-k8s-diff-port-702201 is active
	I0912 23:01:58.726766   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Getting domain xml...
	I0912 23:01:58.727695   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Creating domain...
	I0912 23:02:00.060410   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting to get IP...
	I0912 23:02:00.061559   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:00.062006   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | unable to find current IP address of domain default-k8s-diff-port-702201 in network mk-default-k8s-diff-port-702201
	I0912 23:02:00.062101   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | I0912 23:02:00.061997   63646 retry.go:31] will retry after 232.302394ms: waiting for machine to come up
	I0912 23:02:00.295568   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:00.296234   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | unable to find current IP address of domain default-k8s-diff-port-702201 in network mk-default-k8s-diff-port-702201
	I0912 23:02:00.296288   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | I0912 23:02:00.296094   63646 retry.go:31] will retry after 304.721087ms: waiting for machine to come up
	I0912 23:02:00.602956   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:00.603436   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | unable to find current IP address of domain default-k8s-diff-port-702201 in network mk-default-k8s-diff-port-702201
	I0912 23:02:00.603464   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | I0912 23:02:00.603396   63646 retry.go:31] will retry after 370.621505ms: waiting for machine to come up
	I0912 23:02:00.975924   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:00.976418   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | unable to find current IP address of domain default-k8s-diff-port-702201 in network mk-default-k8s-diff-port-702201
	I0912 23:02:00.976452   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | I0912 23:02:00.976376   63646 retry.go:31] will retry after 454.623859ms: waiting for machine to come up
	I0912 23:02:01.433257   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:01.434024   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | unable to find current IP address of domain default-k8s-diff-port-702201 in network mk-default-k8s-diff-port-702201
	I0912 23:02:01.434056   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | I0912 23:02:01.433971   63646 retry.go:31] will retry after 726.658127ms: waiting for machine to come up
	I0912 23:02:02.162016   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:02.162562   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | unable to find current IP address of domain default-k8s-diff-port-702201 in network mk-default-k8s-diff-port-702201
	I0912 23:02:02.162592   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | I0912 23:02:02.162501   63646 retry.go:31] will retry after 756.903624ms: waiting for machine to come up
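
The retry.go lines above poll for the restarted VM's DHCP lease with a gradually growing, jittered delay (232ms, 304ms, 370ms, ...). A generic sketch of that polling pattern; the exact backoff policy below is an assumption for illustration, not minikube's implementation:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup until it returns an address, sleeping a little
// longer (plus jitter) after each failed attempt.
func waitForIP(lookup func() (string, error), attempts int) (string, error) {
	delay := 200 * time.Millisecond
	for i := 0; i < attempts; i++ {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		time.Sleep(delay + time.Duration(rand.Int63n(int64(delay)/2)))
		delay += delay / 2
	}
	return "", errors.New("machine never reported an IP address")
}

func main() {
	calls := 0
	ip, err := waitForIP(func() (string, error) {
		calls++
		if calls < 4 {
			return "", errors.New("no DHCP lease yet")
		}
		return "192.0.2.10", nil // documentation address, stands in for the real lease
	}, 10)
	fmt.Println(ip, err)
}
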
	I0912 23:01:59.746112   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:00.246227   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:00.745742   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:01.245741   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:01.746355   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:02.245345   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:02.745752   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:03.246089   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:03.745811   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:04.245382   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:01.474545   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:03.975249   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:03.307790   62943 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (2.164213632s)
	I0912 23:02:03.307822   62943 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I0912 23:02:03.307845   62943 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I0912 23:02:03.307869   62943 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3: (2.164220532s)
	I0912 23:02:03.307903   62943 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I0912 23:02:03.307906   62943 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I0912 23:02:03.307944   62943 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0: (2.164339277s)
	I0912 23:02:03.307963   62943 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0912 23:02:03.307999   62943 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.995423487s)
	I0912 23:02:03.308043   62943 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0912 23:02:03.308076   62943 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 23:02:03.308128   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:02:03.312883   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 23:02:05.481118   62943 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (2.173175236s)
	I0912 23:02:05.481159   62943 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I0912 23:02:05.481192   62943 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0912 23:02:05.481239   62943 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.168321222s)
	I0912 23:02:05.481245   62943 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0912 23:02:05.481303   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 23:02:05.516667   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 23:02:02.921557   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:02.922010   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | unable to find current IP address of domain default-k8s-diff-port-702201 in network mk-default-k8s-diff-port-702201
	I0912 23:02:02.922036   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | I0912 23:02:02.921968   63646 retry.go:31] will retry after 850.274218ms: waiting for machine to come up
	I0912 23:02:03.774125   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:03.774603   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | unable to find current IP address of domain default-k8s-diff-port-702201 in network mk-default-k8s-diff-port-702201
	I0912 23:02:03.774637   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | I0912 23:02:03.774549   63646 retry.go:31] will retry after 1.117484339s: waiting for machine to come up
	I0912 23:02:04.893960   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:04.894645   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | unable to find current IP address of domain default-k8s-diff-port-702201 in network mk-default-k8s-diff-port-702201
	I0912 23:02:04.894671   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | I0912 23:02:04.894572   63646 retry.go:31] will retry after 1.705444912s: waiting for machine to come up
	I0912 23:02:06.602765   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:06.603347   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | unable to find current IP address of domain default-k8s-diff-port-702201 in network mk-default-k8s-diff-port-702201
	I0912 23:02:06.603371   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | I0912 23:02:06.603270   63646 retry.go:31] will retry after 2.06008552s: waiting for machine to come up
	I0912 23:02:04.745649   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:05.245909   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:05.745777   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:06.245432   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:06.745472   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:07.245763   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:07.745416   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:08.245886   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:08.745493   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:09.246056   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:06.474009   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:08.474804   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:07.476441   62943 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (1.995147485s)
	I0912 23:02:07.476474   62943 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I0912 23:02:07.476497   62943 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0912 23:02:07.476545   62943 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0912 23:02:07.476556   62943 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.959857575s)
	I0912 23:02:07.476602   62943 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0912 23:02:07.476685   62943 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0912 23:02:09.332759   62943 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.856180957s)
	I0912 23:02:09.332804   62943 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I0912 23:02:09.332853   62943 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I0912 23:02:09.332762   62943 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.856053866s)
	I0912 23:02:09.332909   62943 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I0912 23:02:09.332947   62943 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0912 23:02:11.397888   62943 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.064939833s)
	I0912 23:02:11.397926   62943 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I0912 23:02:11.397954   62943 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0912 23:02:11.397992   62943 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0912 23:02:08.665520   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:08.666071   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | unable to find current IP address of domain default-k8s-diff-port-702201 in network mk-default-k8s-diff-port-702201
	I0912 23:02:08.666102   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | I0912 23:02:08.666014   63646 retry.go:31] will retry after 2.158544571s: waiting for machine to come up
	I0912 23:02:10.826850   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:10.827354   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | unable to find current IP address of domain default-k8s-diff-port-702201 in network mk-default-k8s-diff-port-702201
	I0912 23:02:10.827382   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | I0912 23:02:10.827290   63646 retry.go:31] will retry after 3.518596305s: waiting for machine to come up
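The retry.go lines above show minikube polling libvirt for the machine's DHCP lease and sleeping for a growing, jittered interval between attempts, which is why the logged delays (850ms, 1.1s, 1.7s, 2s, 3.5s, ...) are irregular. The sketch below is a minimal, hypothetical version of that wait-with-backoff loop; waitForIP and the timing constants are illustrative, not minikube's actual retry API.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup until it returns an address or the deadline passes.
// The base delay grows on every attempt and gets a random jitter added.
func waitForIP(lookup func() (string, error), deadline time.Duration) (string, error) {
	start := time.Now()
	delay := 500 * time.Millisecond
	for {
		if ip, err := lookup(); err == nil && ip != "" {
			return ip, nil
		}
		if time.Since(start) > deadline {
			return "", errors.New("timed out waiting for machine to come up")
		}
		time.Sleep(delay + time.Duration(rand.Int63n(int64(delay)))) // jittered sleep
		delay = delay * 3 / 2                                        // grow the base delay
	}
}

func main() {
	attempts := 0
	ip, err := waitForIP(func() (string, error) {
		attempts++
		if attempts < 4 {
			return "", errors.New("no DHCP lease yet")
		}
		return "192.168.39.214", nil
	}, 30*time.Second)
	fmt.Println(ip, err)
}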
	I0912 23:02:09.746171   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:10.246283   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:10.745675   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:11.245560   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:11.745384   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:12.245631   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:12.745749   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:13.245487   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:13.745849   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:14.245391   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:10.975044   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:13.473831   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:15.474321   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:14.664970   62943 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.266950326s)
	I0912 23:02:14.665018   62943 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0912 23:02:14.665063   62943 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0912 23:02:14.665138   62943 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0912 23:02:15.516503   62943 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0912 23:02:15.516549   62943 cache_images.go:123] Successfully loaded all cached images
	I0912 23:02:15.516556   62943 cache_images.go:92] duration metric: took 15.294994067s to LoadCachedImages
	I0912 23:02:15.516574   62943 kubeadm.go:934] updating node { 192.168.50.253 8443 v1.31.1 crio true true} ...
	I0912 23:02:15.516716   62943 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-380092 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.253
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-380092 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
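The kubeadm.go:946 block above is the systemd drop-in minikube writes for the kubelet: ExecStart= is first cleared and then redefined with node-specific flags (hostname override, node IP, kubeconfig paths). Below is a hedged text/template sketch of how such a drop-in could be rendered; the template text, struct and field names are illustrative stand-ins, not minikube's actual source.

package main

import (
	"os"
	"text/template"
)

// kubeletUnit holds the fields that vary per node in the drop-in above.
type kubeletUnit struct {
	KubeletPath string
	Hostname    string
	NodeIP      string
}

const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart={{.KubeletPath}} --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Hostname}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(dropIn))
	// The rendered content would normally be copied to
	// /etc/systemd/system/kubelet.service.d/10-kubeadm.conf on the node.
	_ = t.Execute(os.Stdout, kubeletUnit{
		KubeletPath: "/var/lib/minikube/binaries/v1.31.1/kubelet",
		Hostname:    "no-preload-380092",
		NodeIP:      "192.168.50.253",
	})
}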
	I0912 23:02:15.516811   62943 ssh_runner.go:195] Run: crio config
	I0912 23:02:15.570588   62943 cni.go:84] Creating CNI manager for ""
	I0912 23:02:15.570610   62943 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0912 23:02:15.570621   62943 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0912 23:02:15.570649   62943 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.253 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-380092 NodeName:no-preload-380092 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.253"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.253 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0912 23:02:15.570809   62943 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.253
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-380092"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.253
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.253"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
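The kubeadm config printed above is a single multi-document YAML file (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration) that is later copied to /var/tmp/minikube/kubeadm.yaml.new. Below is a small sketch, assuming gopkg.in/yaml.v3, that walks such a multi-document file and prints each document's apiVersion and kind; it only illustrates the file's shape and is not how minikube itself parses it.

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

// doc captures just the two identifying fields of each kubeadm YAML document.
type doc struct {
	APIVersion string `yaml:"apiVersion"`
	Kind       string `yaml:"kind"`
}

func main() {
	f, err := os.Open("kubeadm.yaml") // e.g. a copy of /var/tmp/minikube/kubeadm.yaml
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var d doc
		if err := dec.Decode(&d); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		fmt.Printf("%s / %s\n", d.APIVersion, d.Kind)
	}
}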
	
	I0912 23:02:15.570887   62943 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0912 23:02:15.581208   62943 binaries.go:44] Found k8s binaries, skipping transfer
	I0912 23:02:15.581272   62943 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0912 23:02:15.590463   62943 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0912 23:02:15.606240   62943 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0912 23:02:15.621579   62943 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0912 23:02:15.639566   62943 ssh_runner.go:195] Run: grep 192.168.50.253	control-plane.minikube.internal$ /etc/hosts
	I0912 23:02:15.643207   62943 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.253	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
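The two commands above first grep /etc/hosts for an existing control-plane.minikube.internal mapping and, when it is missing, rewrite the file by dropping any stale control-plane entry and appending the current IP. A minimal Go sketch of the same idempotent rewrite follows; the path and helper name are illustrative.

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any existing line for host and appends "ip\thost",
// mirroring the grep/rewrite pair in the log above.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // stale control-plane entry, drop it
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// Using a scratch file here; on the node this would be /etc/hosts.
	_ = os.WriteFile("hosts.test", []byte("127.0.0.1\tlocalhost\n"), 0644)
	if err := ensureHostsEntry("hosts.test", "192.168.50.253", "control-plane.minikube.internal"); err != nil {
		panic(err)
	}
}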
	I0912 23:02:15.654813   62943 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 23:02:15.767367   62943 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0912 23:02:15.784468   62943 certs.go:68] Setting up /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/no-preload-380092 for IP: 192.168.50.253
	I0912 23:02:15.784500   62943 certs.go:194] generating shared ca certs ...
	I0912 23:02:15.784523   62943 certs.go:226] acquiring lock for ca certs: {Name:mk5e19f2c2757edc3fcb6b43a39efee0885b349c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 23:02:15.784717   62943 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.key
	I0912 23:02:15.784811   62943 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.key
	I0912 23:02:15.784828   62943 certs.go:256] generating profile certs ...
	I0912 23:02:15.784946   62943 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/no-preload-380092/client.key
	I0912 23:02:15.785034   62943 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/no-preload-380092/apiserver.key.718f72e7
	I0912 23:02:15.785092   62943 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/no-preload-380092/proxy-client.key
	I0912 23:02:15.785295   62943 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083.pem (1338 bytes)
	W0912 23:02:15.785345   62943 certs.go:480] ignoring /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083_empty.pem, impossibly tiny 0 bytes
	I0912 23:02:15.785362   62943 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem (1679 bytes)
	I0912 23:02:15.785407   62943 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem (1082 bytes)
	I0912 23:02:15.785446   62943 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem (1123 bytes)
	I0912 23:02:15.785485   62943 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem (1679 bytes)
	I0912 23:02:15.785553   62943 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem (1708 bytes)
	I0912 23:02:15.786473   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0912 23:02:15.832614   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0912 23:02:15.867891   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0912 23:02:15.899262   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0912 23:02:15.930427   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/no-preload-380092/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0912 23:02:15.970193   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/no-preload-380092/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0912 23:02:15.995317   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/no-preload-380092/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0912 23:02:16.019282   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/no-preload-380092/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0912 23:02:16.042121   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem --> /usr/share/ca-certificates/130832.pem (1708 bytes)
	I0912 23:02:16.065744   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0912 23:02:16.088894   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083.pem --> /usr/share/ca-certificates/13083.pem (1338 bytes)
	I0912 23:02:16.111041   62943 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0912 23:02:16.127119   62943 ssh_runner.go:195] Run: openssl version
	I0912 23:02:16.132754   62943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130832.pem && ln -fs /usr/share/ca-certificates/130832.pem /etc/ssl/certs/130832.pem"
	I0912 23:02:16.142933   62943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130832.pem
	I0912 23:02:16.147311   62943 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 12 21:47 /usr/share/ca-certificates/130832.pem
	I0912 23:02:16.147367   62943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130832.pem
	I0912 23:02:16.152734   62943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130832.pem /etc/ssl/certs/3ec20f2e.0"
	I0912 23:02:16.163131   62943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0912 23:02:16.173390   62943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0912 23:02:16.177785   62943 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 12 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0912 23:02:16.177842   62943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0912 23:02:16.183047   62943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0912 23:02:16.192890   62943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13083.pem && ln -fs /usr/share/ca-certificates/13083.pem /etc/ssl/certs/13083.pem"
	I0912 23:02:16.202818   62943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13083.pem
	I0912 23:02:16.206815   62943 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 12 21:47 /usr/share/ca-certificates/13083.pem
	I0912 23:02:16.206871   62943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13083.pem
	I0912 23:02:16.212049   62943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13083.pem /etc/ssl/certs/51391683.0"
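The openssl/ln pairs above link each certificate copied under /usr/share/ca-certificates into /etc/ssl/certs and then symlink it again under its OpenSSL subject hash (for example b5213941.0 for minikubeCA.pem), which is how OpenSSL-based clients locate trusted CAs. A hedged sketch of the hash-and-link step, shelling out to openssl rather than reimplementing the subject hash; linkCACert is a made-up helper name.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert computes the OpenSSL subject hash of certPath and creates the
// <certsDir>/<hash>.0 symlink the TLS stack expects, as in the log above.
func linkCACert(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // equivalent of ln -fs: replace any existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}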
	I0912 23:02:16.222224   62943 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0912 23:02:16.226504   62943 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0912 23:02:16.232090   62943 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0912 23:02:16.237380   62943 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0912 23:02:16.243024   62943 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0912 23:02:16.248333   62943 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0912 23:02:16.258745   62943 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
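Each `openssl x509 -checkend 86400` run above exits non-zero if the certificate expires within the next 24 hours; this is how the restart path decides whether the existing control-plane certificates can be reused. The same check can be expressed directly with crypto/x509, as in this sketch (the file path and 24h window mirror the log, the helper name is illustrative):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// matching what `openssl x509 -checkend 86400` tests in the log above.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(soon, err)
}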
	I0912 23:02:16.274068   62943 kubeadm.go:392] StartCluster: {Name:no-preload-380092 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-380092 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.253 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 23:02:16.274168   62943 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0912 23:02:16.274216   62943 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0912 23:02:16.323688   62943 cri.go:89] found id: ""
	I0912 23:02:16.323751   62943 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0912 23:02:16.335130   62943 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0912 23:02:16.335152   62943 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0912 23:02:16.335192   62943 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0912 23:02:16.346285   62943 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0912 23:02:16.347271   62943 kubeconfig.go:125] found "no-preload-380092" server: "https://192.168.50.253:8443"
	I0912 23:02:16.349217   62943 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0912 23:02:16.360266   62943 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.253
	I0912 23:02:16.360308   62943 kubeadm.go:1160] stopping kube-system containers ...
	I0912 23:02:16.360319   62943 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0912 23:02:16.360361   62943 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0912 23:02:16.398876   62943 cri.go:89] found id: ""
	I0912 23:02:16.398942   62943 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0912 23:02:16.418893   62943 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0912 23:02:16.430531   62943 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0912 23:02:16.430558   62943 kubeadm.go:157] found existing configuration files:
	
	I0912 23:02:16.430602   62943 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0912 23:02:16.441036   62943 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0912 23:02:16.441093   62943 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0912 23:02:16.452768   62943 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0912 23:02:16.463317   62943 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0912 23:02:16.463394   62943 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0912 23:02:16.473412   62943 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0912 23:02:16.482470   62943 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0912 23:02:16.482530   62943 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0912 23:02:16.494488   62943 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0912 23:02:16.503873   62943 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0912 23:02:16.503955   62943 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0912 23:02:16.513052   62943 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0912 23:02:16.522738   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:02:16.630286   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
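The grep/rm pairs above implement the stale-config check: any kubeconfig under /etc/kubernetes that does not reference https://control-plane.minikube.internal:8443 is removed so the following `kubeadm init phase certs` and `kubeadm init phase kubeconfig` runs can regenerate it. A compact Go sketch of that check (the endpoint and file list come from the log; the function name is made up):

package main

import (
	"fmt"
	"os"
	"strings"
)

// removeStaleKubeconfigs deletes any of the given kubeconfig files that do not
// reference endpoint, mirroring the grep-then-rm sequence in the log above.
func removeStaleKubeconfigs(endpoint string, files []string) {
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing or pointing at the wrong apiserver: drop it so
			// "kubeadm init phase kubeconfig all" can recreate it.
			_ = os.Remove(f)
			fmt.Printf("removed stale %s\n", f)
		}
	}
}

func main() {
	removeStaleKubeconfigs("https://control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}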
	I0912 23:02:14.347758   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:14.348342   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | unable to find current IP address of domain default-k8s-diff-port-702201 in network mk-default-k8s-diff-port-702201
	I0912 23:02:14.348365   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | I0912 23:02:14.348276   63646 retry.go:31] will retry after 2.993143621s: waiting for machine to come up
	I0912 23:02:14.745599   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:15.245719   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:15.745787   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:16.245959   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:16.746271   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:17.245414   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:17.745343   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:18.246080   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:18.746025   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:19.245751   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
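The repeated `sudo pgrep -xnf kube-apiserver.*minikube.*` runs above, spaced roughly 500ms apart, are one node (PID 62386 in this log) waiting for a kube-apiserver process to appear after a restart. A hedged sketch of that poll loop; runCmd stands in for minikube's ssh_runner, which is not reproduced here.

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerProcess polls pgrep every 500ms until kube-apiserver shows
// up or the timeout expires, like the repeated ssh_runner lines above.
func waitForAPIServerProcess(runCmd func(name string, args ...string) error, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := runCmd("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*"); err == nil {
			return nil // process found
		}
		time.Sleep(500 * time.Millisecond)
	}
	return errors.New("timed out waiting for kube-apiserver process")
}

func main() {
	err := waitForAPIServerProcess(func(name string, args ...string) error {
		return exec.Command(name, args...).Run() // run locally here; over SSH in minikube
	}, 2*time.Minute)
	fmt.Println(err)
}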
	I0912 23:02:17.343758   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.344408   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Found IP for machine: 192.168.39.214
	I0912 23:02:17.344443   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has current primary IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.344453   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Reserving static IP address...
	I0912 23:02:17.344817   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Reserved static IP address: 192.168.39.214
	I0912 23:02:17.344848   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-702201", mac: "52:54:00:b4:fd:fb", ip: "192.168.39.214"} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:02:17.344857   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for SSH to be available...
	I0912 23:02:17.344886   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | skip adding static IP to network mk-default-k8s-diff-port-702201 - found existing host DHCP lease matching {name: "default-k8s-diff-port-702201", mac: "52:54:00:b4:fd:fb", ip: "192.168.39.214"}
	I0912 23:02:17.344903   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | Getting to WaitForSSH function...
	I0912 23:02:17.347627   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.348094   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:02:17.348128   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.348236   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | Using SSH client type: external
	I0912 23:02:17.348296   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | Using SSH private key: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/default-k8s-diff-port-702201/id_rsa (-rw-------)
	I0912 23:02:17.348330   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.214 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19616-5891/.minikube/machines/default-k8s-diff-port-702201/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0912 23:02:17.348353   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | About to run SSH command:
	I0912 23:02:17.348363   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | exit 0
	I0912 23:02:17.474375   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | SSH cmd err, output: <nil>: 
	I0912 23:02:17.474757   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetConfigRaw
	I0912 23:02:17.475391   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetIP
	I0912 23:02:17.478041   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.478557   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:02:17.478590   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.478791   61354 profile.go:143] Saving config to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/default-k8s-diff-port-702201/config.json ...
	I0912 23:02:17.479064   61354 machine.go:93] provisionDockerMachine start ...
	I0912 23:02:17.479087   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .DriverName
	I0912 23:02:17.479317   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHHostname
	I0912 23:02:17.482167   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.482584   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:02:17.482616   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.482805   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHPort
	I0912 23:02:17.482996   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHKeyPath
	I0912 23:02:17.483163   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHKeyPath
	I0912 23:02:17.483287   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHUsername
	I0912 23:02:17.483443   61354 main.go:141] libmachine: Using SSH client type: native
	I0912 23:02:17.483653   61354 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0912 23:02:17.483669   61354 main.go:141] libmachine: About to run SSH command:
	hostname
	I0912 23:02:17.590238   61354 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0912 23:02:17.590267   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetMachineName
	I0912 23:02:17.590549   61354 buildroot.go:166] provisioning hostname "default-k8s-diff-port-702201"
	I0912 23:02:17.590588   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetMachineName
	I0912 23:02:17.590766   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHHostname
	I0912 23:02:17.593804   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.594267   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:02:17.594320   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.594542   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHPort
	I0912 23:02:17.594761   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHKeyPath
	I0912 23:02:17.594956   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHKeyPath
	I0912 23:02:17.595111   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHUsername
	I0912 23:02:17.595333   61354 main.go:141] libmachine: Using SSH client type: native
	I0912 23:02:17.595575   61354 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0912 23:02:17.595591   61354 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-702201 && echo "default-k8s-diff-port-702201" | sudo tee /etc/hostname
	I0912 23:02:17.720928   61354 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-702201
	
	I0912 23:02:17.720961   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHHostname
	I0912 23:02:17.724174   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.724499   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:02:17.724522   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.724682   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHPort
	I0912 23:02:17.724847   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHKeyPath
	I0912 23:02:17.725026   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHKeyPath
	I0912 23:02:17.725199   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHUsername
	I0912 23:02:17.725350   61354 main.go:141] libmachine: Using SSH client type: native
	I0912 23:02:17.725528   61354 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0912 23:02:17.725550   61354 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-702201' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-702201/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-702201' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0912 23:02:17.842216   61354 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0912 23:02:17.842250   61354 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19616-5891/.minikube CaCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19616-5891/.minikube}
	I0912 23:02:17.842274   61354 buildroot.go:174] setting up certificates
	I0912 23:02:17.842289   61354 provision.go:84] configureAuth start
	I0912 23:02:17.842306   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetMachineName
	I0912 23:02:17.842597   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetIP
	I0912 23:02:17.845935   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.846372   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:02:17.846401   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.846546   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHHostname
	I0912 23:02:17.849376   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.849937   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:02:17.849971   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.850152   61354 provision.go:143] copyHostCerts
	I0912 23:02:17.850232   61354 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem, removing ...
	I0912 23:02:17.850253   61354 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem
	I0912 23:02:17.850356   61354 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem (1679 bytes)
	I0912 23:02:17.850448   61354 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem, removing ...
	I0912 23:02:17.850457   61354 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem
	I0912 23:02:17.850477   61354 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem (1082 bytes)
	I0912 23:02:17.850529   61354 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem, removing ...
	I0912 23:02:17.850537   61354 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem
	I0912 23:02:17.850555   61354 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem (1123 bytes)
	I0912 23:02:17.850601   61354 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-702201 san=[127.0.0.1 192.168.39.214 default-k8s-diff-port-702201 localhost minikube]
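The provision.go:117 line above generates a per-machine server certificate signed by the shared CA, with the SAN list shown (127.0.0.1, the machine IP, the machine name, localhost, minikube). The sketch below builds a certificate with that SAN set using crypto/x509; it is self-signed for brevity, whereas minikube signs with its CA key, and the 26280h validity mirrors the CertExpiration value from the cluster config earlier in this log.

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-702201"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SAN list matching the provision.go line above.
		DNSNames:    []string{"default-k8s-diff-port-702201", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.214")},
	}
	// Self-signed for brevity; minikube signs this with the ca-key.pem instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
}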
	I0912 23:02:17.911340   61354 provision.go:177] copyRemoteCerts
	I0912 23:02:17.911392   61354 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0912 23:02:17.911413   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHHostname
	I0912 23:02:17.914514   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.914937   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:02:17.914969   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.915250   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHPort
	I0912 23:02:17.915449   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHKeyPath
	I0912 23:02:17.915648   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHUsername
	I0912 23:02:17.915800   61354 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/default-k8s-diff-port-702201/id_rsa Username:docker}
	I0912 23:02:18.003351   61354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0912 23:02:18.032117   61354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0912 23:02:18.057665   61354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0912 23:02:18.084003   61354 provision.go:87] duration metric: took 241.697336ms to configureAuth
	I0912 23:02:18.084043   61354 buildroot.go:189] setting minikube options for container-runtime
	I0912 23:02:18.084256   61354 config.go:182] Loaded profile config "default-k8s-diff-port-702201": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 23:02:18.084379   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHHostname
	I0912 23:02:18.087408   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:18.087786   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:02:18.087813   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:18.088070   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHPort
	I0912 23:02:18.088263   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHKeyPath
	I0912 23:02:18.088441   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHKeyPath
	I0912 23:02:18.088576   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHUsername
	I0912 23:02:18.088706   61354 main.go:141] libmachine: Using SSH client type: native
	I0912 23:02:18.088874   61354 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0912 23:02:18.088893   61354 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0912 23:02:18.308716   61354 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0912 23:02:18.308743   61354 machine.go:96] duration metric: took 829.664034ms to provisionDockerMachine
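The SSH command a few lines above writes /etc/sysconfig/crio.minikube with CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12' and restarts CRI-O, so images can be pulled from registries in the cluster service range without TLS. A tiny sketch that renders the same sysconfig body (the service CIDR is taken from the log; the helper name is made up):

package main

import (
	"fmt"
	"os"
)

// crioSysconfig renders the /etc/sysconfig/crio.minikube content seen above,
// passing the cluster service CIDR to CRI-O as an insecure-registry range.
func crioSysconfig(serviceCIDR string) string {
	return fmt.Sprintf("\nCRIO_MINIKUBE_OPTIONS='--insecure-registry %s '\n", serviceCIDR)
}

func main() {
	// On the node this content is piped through sudo tee and followed by
	// `systemctl restart crio`; here it is just printed.
	fmt.Fprint(os.Stdout, crioSysconfig("10.96.0.0/12"))
}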
	I0912 23:02:18.308753   61354 start.go:293] postStartSetup for "default-k8s-diff-port-702201" (driver="kvm2")
	I0912 23:02:18.308765   61354 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0912 23:02:18.308780   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .DriverName
	I0912 23:02:18.309119   61354 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0912 23:02:18.309156   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHHostname
	I0912 23:02:18.311782   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:18.312112   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:02:18.312138   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:18.312258   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHPort
	I0912 23:02:18.312429   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHKeyPath
	I0912 23:02:18.312562   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHUsername
	I0912 23:02:18.312686   61354 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/default-k8s-diff-port-702201/id_rsa Username:docker}
	I0912 23:02:18.400164   61354 ssh_runner.go:195] Run: cat /etc/os-release
	I0912 23:02:18.404437   61354 info.go:137] Remote host: Buildroot 2023.02.9
	I0912 23:02:18.404465   61354 filesync.go:126] Scanning /home/jenkins/minikube-integration/19616-5891/.minikube/addons for local assets ...
	I0912 23:02:18.404539   61354 filesync.go:126] Scanning /home/jenkins/minikube-integration/19616-5891/.minikube/files for local assets ...
	I0912 23:02:18.404634   61354 filesync.go:149] local asset: /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem -> 130832.pem in /etc/ssl/certs
	I0912 23:02:18.404748   61354 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0912 23:02:18.414148   61354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem --> /etc/ssl/certs/130832.pem (1708 bytes)
	I0912 23:02:18.438745   61354 start.go:296] duration metric: took 129.977307ms for postStartSetup
	I0912 23:02:18.438815   61354 fix.go:56] duration metric: took 19.740295621s for fixHost
	I0912 23:02:18.438839   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHHostname
	I0912 23:02:18.441655   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:18.442034   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:02:18.442063   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:18.442229   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHPort
	I0912 23:02:18.442424   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHKeyPath
	I0912 23:02:18.442637   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHKeyPath
	I0912 23:02:18.442782   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHUsername
	I0912 23:02:18.442983   61354 main.go:141] libmachine: Using SSH client type: native
	I0912 23:02:18.443140   61354 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0912 23:02:18.443150   61354 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0912 23:02:18.550399   61354 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726182138.510495585
	
	I0912 23:02:18.550429   61354 fix.go:216] guest clock: 1726182138.510495585
	I0912 23:02:18.550460   61354 fix.go:229] Guest: 2024-09-12 23:02:18.510495585 +0000 UTC Remote: 2024-09-12 23:02:18.438824041 +0000 UTC m=+356.198385709 (delta=71.671544ms)
	I0912 23:02:18.550493   61354 fix.go:200] guest clock delta is within tolerance: 71.671544ms
	I0912 23:02:18.550501   61354 start.go:83] releasing machines lock for "default-k8s-diff-port-702201", held for 19.852037366s
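The lines above show the guest-clock check: "date +%s.%N" is run on the node over SSH, the result is compared against the host clock, and the ~72ms delta is accepted as within tolerance. As a rough illustration only (not minikube's actual fix.go code; the plain ssh invocation, the host address and the one-second tolerance are assumptions for this sketch), the same check can be written in Go:

// Sketch: compare the guest clock (via "date +%s.%N" over SSH) to the host clock.
package main

import (
	"fmt"
	"math"
	"os/exec"
	"strconv"
	"strings"
	"time"
)

// guestClockDelta runs "date +%s.%N" on the guest (here through the ssh CLI,
// an assumption for the sketch) and returns the signed host-minus-guest offset.
func guestClockDelta(host string) (time.Duration, error) {
	out, err := exec.Command("ssh", host, "date +%s.%N").Output()
	if err != nil {
		return 0, err
	}
	secs, err := strconv.ParseFloat(strings.TrimSpace(string(out)), 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return time.Since(guest), nil
}

func main() {
	const tolerance = time.Second // hypothetical tolerance for the sketch
	delta, err := guestClockDelta("docker@192.168.39.214")
	if err != nil {
		panic(err)
	}
	if math.Abs(float64(delta)) <= float64(tolerance) {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, clock sync needed\n", delta)
	}
}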
	I0912 23:02:18.550549   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .DriverName
	I0912 23:02:18.550842   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetIP
	I0912 23:02:18.553957   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:18.554416   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:02:18.554450   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:18.554624   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .DriverName
	I0912 23:02:18.555224   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .DriverName
	I0912 23:02:18.555446   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .DriverName
	I0912 23:02:18.555554   61354 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0912 23:02:18.555597   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHHostname
	I0912 23:02:18.555718   61354 ssh_runner.go:195] Run: cat /version.json
	I0912 23:02:18.555753   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHHostname
	I0912 23:02:18.558797   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:18.558822   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:18.559205   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:02:18.559236   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:18.559283   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:02:18.559300   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:18.559532   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHPort
	I0912 23:02:18.559538   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHPort
	I0912 23:02:18.559735   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHKeyPath
	I0912 23:02:18.559736   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHKeyPath
	I0912 23:02:18.559921   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHUsername
	I0912 23:02:18.560042   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHUsername
	I0912 23:02:18.560109   61354 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/default-k8s-diff-port-702201/id_rsa Username:docker}
	I0912 23:02:18.560199   61354 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/default-k8s-diff-port-702201/id_rsa Username:docker}
	I0912 23:02:18.672716   61354 ssh_runner.go:195] Run: systemctl --version
	I0912 23:02:18.681305   61354 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0912 23:02:18.833032   61354 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0912 23:02:18.838723   61354 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0912 23:02:18.838800   61354 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0912 23:02:18.854769   61354 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0912 23:02:18.854796   61354 start.go:495] detecting cgroup driver to use...
	I0912 23:02:18.854867   61354 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0912 23:02:18.872157   61354 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0912 23:02:18.887144   61354 docker.go:217] disabling cri-docker service (if available) ...
	I0912 23:02:18.887199   61354 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0912 23:02:18.901811   61354 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0912 23:02:18.920495   61354 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0912 23:02:19.060252   61354 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0912 23:02:19.211418   61354 docker.go:233] disabling docker service ...
	I0912 23:02:19.211492   61354 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0912 23:02:19.226829   61354 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0912 23:02:19.240390   61354 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0912 23:02:19.398676   61354 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0912 23:02:19.539078   61354 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0912 23:02:19.552847   61354 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0912 23:02:19.574121   61354 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0912 23:02:19.574198   61354 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:02:19.585231   61354 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0912 23:02:19.585298   61354 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:02:19.596560   61354 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:02:19.606732   61354 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:02:19.620125   61354 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0912 23:02:19.635153   61354 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:02:19.648779   61354 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:02:19.666387   61354 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:02:19.680339   61354 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0912 23:02:19.693115   61354 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0912 23:02:19.693193   61354 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0912 23:02:19.710075   61354 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0912 23:02:19.722305   61354 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 23:02:19.855658   61354 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0912 23:02:19.958871   61354 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0912 23:02:19.958934   61354 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0912 23:02:19.964103   61354 start.go:563] Will wait 60s for crictl version
	I0912 23:02:19.964174   61354 ssh_runner.go:195] Run: which crictl
	I0912 23:02:19.968265   61354 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0912 23:02:20.006530   61354 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0912 23:02:20.006608   61354 ssh_runner.go:195] Run: crio --version
	I0912 23:02:20.034570   61354 ssh_runner.go:195] Run: crio --version
	I0912 23:02:20.065312   61354 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
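The block above prepares CRI-O as a sequence of idempotent shell edits: point crictl at the CRI-O socket, rewrite the pause image and cgroup driver in /etc/crio/crio.conf.d/02-crio.conf with sed, load br_netfilter, enable IP forwarding, then daemon-reload and restart crio. A minimal sketch of the same sequence driven from Go, assuming a plain ssh CLI and a hypothetical target address (this is not minikube's ssh_runner implementation):

// Sketch: apply the CRI-O preparation steps over SSH, stopping at the first failure.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	steps := []string{
		// point crictl at the CRI-O socket
		`sudo mkdir -p /etc && printf '%s\n' 'runtime-endpoint: unix:///var/run/crio/crio.sock' | sudo tee /etc/crictl.yaml`,
		// pin the pause image and cgroup driver in the CRI-O drop-in
		`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
		// make sure bridge netfilter and IP forwarding are available
		`sudo modprobe br_netfilter`,
		`sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'`,
		// pick up the changes
		`sudo systemctl daemon-reload && sudo systemctl restart crio`,
	}
	for _, step := range steps {
		// hypothetical target host for the sketch
		cmd := exec.Command("ssh", "docker@192.168.39.214", step)
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Printf("step failed: %s\n%s\n", step, out)
			return
		}
	}
	fmt.Println("CRI-O configured and restarted")
}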
	I0912 23:02:17.474542   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:19.975107   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:17.616860   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:02:17.845456   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:02:17.916359   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:02:18.000828   62943 api_server.go:52] waiting for apiserver process to appear ...
	I0912 23:02:18.000924   62943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:18.501381   62943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:19.001136   62943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:19.017346   62943 api_server.go:72] duration metric: took 1.016512434s to wait for apiserver process to appear ...
	I0912 23:02:19.017382   62943 api_server.go:88] waiting for apiserver healthz status ...
	I0912 23:02:19.017453   62943 api_server.go:253] Checking apiserver healthz at https://192.168.50.253:8443/healthz ...
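Both restarting clusters in this log wait for the apiserver process by re-running "sudo pgrep -xnf kube-apiserver.*minikube.*" roughly every 500ms until it exits 0. A minimal sketch of that poll loop, assuming a generic runSSH helper and a hypothetical node address (not the actual api_server.go code):

// Sketch: poll for the kube-apiserver process until pgrep succeeds or a deadline passes.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForAPIServerProcess(runSSH func(cmd string) error, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 only when a matching process exists.
		if err := runSSH(`sudo pgrep -xnf kube-apiserver.*minikube.*`); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver process did not appear within %v", timeout)
}

func main() {
	run := func(cmd string) error {
		// hypothetical node address for the sketch
		return exec.Command("ssh", "docker@192.168.50.253", cmd).Run()
	}
	if err := waitForAPIServerProcess(run, 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}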
	I0912 23:02:20.066529   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetIP
	I0912 23:02:20.069310   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:20.069719   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:02:20.069748   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:20.070001   61354 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0912 23:02:20.074059   61354 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0912 23:02:20.085892   61354 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-702201 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-702201 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.214 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0912 23:02:20.086016   61354 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0912 23:02:20.086054   61354 ssh_runner.go:195] Run: sudo crictl images --output json
	I0912 23:02:20.130495   61354 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0912 23:02:20.130570   61354 ssh_runner.go:195] Run: which lz4
	I0912 23:02:20.134677   61354 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0912 23:02:20.138918   61354 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0912 23:02:20.138956   61354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0912 23:02:21.380259   61354 crio.go:462] duration metric: took 1.245620408s to copy over tarball
	I0912 23:02:21.380357   61354 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0912 23:02:19.745707   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:20.246273   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:20.746109   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:21.246160   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:21.745863   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:22.245390   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:22.745716   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:23.245475   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:23.746069   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:24.245487   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:22.474250   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:24.974136   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:24.018305   62943 api_server.go:269] stopped: https://192.168.50.253:8443/healthz: Get "https://192.168.50.253:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 23:02:24.018354   62943 api_server.go:253] Checking apiserver healthz at https://192.168.50.253:8443/healthz ...
	I0912 23:02:23.453059   61354 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.072658804s)
	I0912 23:02:23.453094   61354 crio.go:469] duration metric: took 2.072807363s to extract the tarball
	I0912 23:02:23.453102   61354 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0912 23:02:23.492566   61354 ssh_runner.go:195] Run: sudo crictl images --output json
	I0912 23:02:23.535129   61354 crio.go:514] all images are preloaded for cri-o runtime.
	I0912 23:02:23.535152   61354 cache_images.go:84] Images are preloaded, skipping loading
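Above, the preload decision is made by listing images with "sudo crictl images --output json": when an expected image such as registry.k8s.io/kube-apiserver:v1.31.1 is missing, the preloaded tarball is copied to the node and extracted; once the listing shows it, loading is skipped. A simplified sketch of that check (the ssh invocation and host address are assumptions; the JSON shape follows crictl's standard output with an images list carrying repoTags):

// Sketch: decide whether the preload tarball is needed by inspecting crictl's image list.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func hasImage(jsonOut []byte, want string) (bool, error) {
	var parsed crictlImages
	if err := json.Unmarshal(jsonOut, &parsed); err != nil {
		return false, err
	}
	for _, img := range parsed.Images {
		for _, tag := range img.RepoTags {
			if strings.Contains(tag, want) {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	// hypothetical node address for the sketch
	out, err := exec.Command("ssh", "docker@192.168.39.214",
		"sudo crictl images --output json").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	ok, err := hasImage(out, "registry.k8s.io/kube-apiserver:v1.31.1")
	if err != nil {
		fmt.Println("parse failed:", err)
		return
	}
	if !ok {
		fmt.Println("images not preloaded: copy and extract /preloaded.tar.lz4")
	} else {
		fmt.Println("all images are preloaded, skipping loading")
	}
}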
	I0912 23:02:23.535160   61354 kubeadm.go:934] updating node { 192.168.39.214 8444 v1.31.1 crio true true} ...
	I0912 23:02:23.535251   61354 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-702201 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.214
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-702201 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
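The kubelet drop-in printed above is generated from the node's name, IP and Kubernetes version. A small sketch of rendering such a drop-in with text/template (the template text mirrors the unit above; writing a local 10-kubeadm.conf file is an assumption for the sketch, whereas the real flow copies the rendered content to the node):

// Sketch: render a kubelet systemd drop-in from node parameters.
package main

import (
	"os"
	"text/template"
)

const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	tmpl := template.Must(template.New("kubelet").Parse(dropIn))
	params := struct {
		KubernetesVersion, NodeName, NodeIP string
	}{"v1.31.1", "default-k8s-diff-port-702201", "192.168.39.214"}
	// Local output file for the sketch; the real flow ships this to the node.
	f, err := os.Create("10-kubeadm.conf")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	if err := tmpl.Execute(f, params); err != nil {
		panic(err)
	}
}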
	I0912 23:02:23.535311   61354 ssh_runner.go:195] Run: crio config
	I0912 23:02:23.586110   61354 cni.go:84] Creating CNI manager for ""
	I0912 23:02:23.586128   61354 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0912 23:02:23.586137   61354 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0912 23:02:23.586156   61354 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.214 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-702201 NodeName:default-k8s-diff-port-702201 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.214"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.214 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0912 23:02:23.586280   61354 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.214
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-702201"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.214
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.214"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0912 23:02:23.586337   61354 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0912 23:02:23.595675   61354 binaries.go:44] Found k8s binaries, skipping transfer
	I0912 23:02:23.595744   61354 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0912 23:02:23.605126   61354 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0912 23:02:23.621542   61354 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0912 23:02:23.637919   61354 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0912 23:02:23.654869   61354 ssh_runner.go:195] Run: grep 192.168.39.214	control-plane.minikube.internal$ /etc/hosts
	I0912 23:02:23.658860   61354 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.214	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
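The two commands above make the control-plane.minikube.internal mapping idempotent: any existing line for that name is filtered out of /etc/hosts and the current IP is appended before the file is copied back. The same rewrite expressed in Go, operating on a local hosts-format file (the hosts.sample path is a placeholder for the sketch, not the guest's /etc/hosts):

// Sketch: remove any old mapping for a hostname and append the current one.
package main

import (
	"fmt"
	"os"
	"strings"
)

func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		// drop any previous mapping for this name
		if strings.HasSuffix(line, "\t"+name) {
			continue
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// hypothetical local copy of the guest's hosts file for the sketch
	if err := ensureHostsEntry("hosts.sample", "192.168.39.214", "control-plane.minikube.internal"); err != nil {
		panic(err)
	}
}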
	I0912 23:02:23.670648   61354 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 23:02:23.787949   61354 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0912 23:02:23.804668   61354 certs.go:68] Setting up /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/default-k8s-diff-port-702201 for IP: 192.168.39.214
	I0912 23:02:23.804697   61354 certs.go:194] generating shared ca certs ...
	I0912 23:02:23.804718   61354 certs.go:226] acquiring lock for ca certs: {Name:mk5e19f2c2757edc3fcb6b43a39efee0885b349c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 23:02:23.804937   61354 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.key
	I0912 23:02:23.804998   61354 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.key
	I0912 23:02:23.805012   61354 certs.go:256] generating profile certs ...
	I0912 23:02:23.805110   61354 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/default-k8s-diff-port-702201/client.key
	I0912 23:02:23.805184   61354 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/default-k8s-diff-port-702201/apiserver.key.9ca3177b
	I0912 23:02:23.805231   61354 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/default-k8s-diff-port-702201/proxy-client.key
	I0912 23:02:23.805379   61354 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083.pem (1338 bytes)
	W0912 23:02:23.805411   61354 certs.go:480] ignoring /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083_empty.pem, impossibly tiny 0 bytes
	I0912 23:02:23.805420   61354 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem (1679 bytes)
	I0912 23:02:23.805449   61354 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem (1082 bytes)
	I0912 23:02:23.805480   61354 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem (1123 bytes)
	I0912 23:02:23.805519   61354 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem (1679 bytes)
	I0912 23:02:23.805574   61354 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem (1708 bytes)
	I0912 23:02:23.806196   61354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0912 23:02:23.834789   61354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0912 23:02:23.863030   61354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0912 23:02:23.890538   61354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0912 23:02:23.923946   61354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/default-k8s-diff-port-702201/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0912 23:02:23.952990   61354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/default-k8s-diff-port-702201/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0912 23:02:23.984025   61354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/default-k8s-diff-port-702201/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0912 23:02:24.013727   61354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/default-k8s-diff-port-702201/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0912 23:02:24.038060   61354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem --> /usr/share/ca-certificates/130832.pem (1708 bytes)
	I0912 23:02:24.061285   61354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0912 23:02:24.085128   61354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083.pem --> /usr/share/ca-certificates/13083.pem (1338 bytes)
	I0912 23:02:24.110174   61354 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0912 23:02:24.127185   61354 ssh_runner.go:195] Run: openssl version
	I0912 23:02:24.133215   61354 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0912 23:02:24.144390   61354 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0912 23:02:24.149357   61354 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 12 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0912 23:02:24.149432   61354 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0912 23:02:24.155228   61354 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0912 23:02:24.167254   61354 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13083.pem && ln -fs /usr/share/ca-certificates/13083.pem /etc/ssl/certs/13083.pem"
	I0912 23:02:24.178264   61354 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13083.pem
	I0912 23:02:24.183163   61354 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 12 21:47 /usr/share/ca-certificates/13083.pem
	I0912 23:02:24.183216   61354 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13083.pem
	I0912 23:02:24.188891   61354 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13083.pem /etc/ssl/certs/51391683.0"
	I0912 23:02:24.199682   61354 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130832.pem && ln -fs /usr/share/ca-certificates/130832.pem /etc/ssl/certs/130832.pem"
	I0912 23:02:24.210810   61354 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130832.pem
	I0912 23:02:24.215244   61354 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 12 21:47 /usr/share/ca-certificates/130832.pem
	I0912 23:02:24.215321   61354 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130832.pem
	I0912 23:02:24.221160   61354 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130832.pem /etc/ssl/certs/3ec20f2e.0"
	I0912 23:02:24.232246   61354 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0912 23:02:24.236796   61354 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0912 23:02:24.243930   61354 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0912 23:02:24.250402   61354 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0912 23:02:24.256470   61354 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0912 23:02:24.262495   61354 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0912 23:02:24.268433   61354 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
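Each "openssl x509 -noout -checkend 86400" call above asks whether a certificate will still be valid 24 hours from now; a failing check would trigger regeneration before cluster restart. An equivalent check with crypto/x509 (the certificate paths here are placeholder local copies, not the /var/lib/minikube/certs paths on the node):

// Sketch: report certificates that expire within a given window, like openssl -checkend.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// true when the certificate's NotAfter falls inside the next d.
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	certs := []string{
		"apiserver-kubelet-client.crt", // hypothetical local copies for the sketch
		"etcd-server.crt",
		"front-proxy-client.crt",
	}
	for _, c := range certs {
		soon, err := expiresWithin(c, 24*time.Hour)
		if err != nil {
			fmt.Println(err)
			continue
		}
		if soon {
			fmt.Printf("%s expires within 24h, regeneration needed\n", c)
		}
	}
}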
	I0912 23:02:24.274410   61354 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-702201 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-702201 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.214 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 23:02:24.274499   61354 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0912 23:02:24.274574   61354 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0912 23:02:24.315011   61354 cri.go:89] found id: ""
	I0912 23:02:24.315073   61354 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0912 23:02:24.325319   61354 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0912 23:02:24.325341   61354 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0912 23:02:24.325384   61354 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0912 23:02:24.335529   61354 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0912 23:02:24.336936   61354 kubeconfig.go:125] found "default-k8s-diff-port-702201" server: "https://192.168.39.214:8444"
	I0912 23:02:24.340116   61354 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0912 23:02:24.350831   61354 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.214
	I0912 23:02:24.350869   61354 kubeadm.go:1160] stopping kube-system containers ...
	I0912 23:02:24.350883   61354 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0912 23:02:24.350974   61354 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0912 23:02:24.393329   61354 cri.go:89] found id: ""
	I0912 23:02:24.393405   61354 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0912 23:02:24.410979   61354 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0912 23:02:24.423185   61354 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0912 23:02:24.423201   61354 kubeadm.go:157] found existing configuration files:
	
	I0912 23:02:24.423243   61354 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0912 23:02:24.434365   61354 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0912 23:02:24.434424   61354 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0912 23:02:24.444193   61354 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0912 23:02:24.453990   61354 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0912 23:02:24.454047   61354 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0912 23:02:24.464493   61354 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0912 23:02:24.475213   61354 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0912 23:02:24.475290   61354 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0912 23:02:24.484665   61354 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0912 23:02:24.493882   61354 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0912 23:02:24.493943   61354 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0912 23:02:24.503337   61354 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0912 23:02:24.513303   61354 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:02:24.620334   61354 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:02:25.379199   61354 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:02:25.605374   61354 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:02:25.689838   61354 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:02:25.787873   61354 api_server.go:52] waiting for apiserver process to appear ...
	I0912 23:02:25.787952   61354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:26.288869   61354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:26.788863   61354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:24.746085   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:25.245836   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:25.745805   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:26.246312   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:26.745772   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:27.245309   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:27.745530   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:28.245792   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:28.745917   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:29.245542   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:27.474741   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:29.974093   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:29.019453   62943 api_server.go:269] stopped: https://192.168.50.253:8443/healthz: Get "https://192.168.50.253:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 23:02:29.019501   62943 api_server.go:253] Checking apiserver healthz at https://192.168.50.253:8443/healthz ...
	I0912 23:02:27.288650   61354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:27.788577   61354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:27.803146   61354 api_server.go:72] duration metric: took 2.015269708s to wait for apiserver process to appear ...
	I0912 23:02:27.803175   61354 api_server.go:88] waiting for apiserver healthz status ...
	I0912 23:02:27.803196   61354 api_server.go:253] Checking apiserver healthz at https://192.168.39.214:8444/healthz ...
	I0912 23:02:27.803838   61354 api_server.go:269] stopped: https://192.168.39.214:8444/healthz: Get "https://192.168.39.214:8444/healthz": dial tcp 192.168.39.214:8444: connect: connection refused
	I0912 23:02:28.304001   61354 api_server.go:253] Checking apiserver healthz at https://192.168.39.214:8444/healthz ...
	I0912 23:02:30.918251   61354 api_server.go:279] https://192.168.39.214:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0912 23:02:30.918285   61354 api_server.go:103] status: https://192.168.39.214:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0912 23:02:30.918300   61354 api_server.go:253] Checking apiserver healthz at https://192.168.39.214:8444/healthz ...
	I0912 23:02:30.985245   61354 api_server.go:279] https://192.168.39.214:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0912 23:02:30.985276   61354 api_server.go:103] status: https://192.168.39.214:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0912 23:02:31.303790   61354 api_server.go:253] Checking apiserver healthz at https://192.168.39.214:8444/healthz ...
	I0912 23:02:31.309221   61354 api_server.go:279] https://192.168.39.214:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0912 23:02:31.309255   61354 api_server.go:103] status: https://192.168.39.214:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0912 23:02:31.803907   61354 api_server.go:253] Checking apiserver healthz at https://192.168.39.214:8444/healthz ...
	I0912 23:02:31.808683   61354 api_server.go:279] https://192.168.39.214:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0912 23:02:31.808708   61354 api_server.go:103] status: https://192.168.39.214:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0912 23:02:32.303720   61354 api_server.go:253] Checking apiserver healthz at https://192.168.39.214:8444/healthz ...
	I0912 23:02:32.309378   61354 api_server.go:279] https://192.168.39.214:8444/healthz returned 200:
	ok
	I0912 23:02:32.318177   61354 api_server.go:141] control plane version: v1.31.1
	I0912 23:02:32.318207   61354 api_server.go:131] duration metric: took 4.515025163s to wait for apiserver health ...
	I0912 23:02:32.318217   61354 cni.go:84] Creating CNI manager for ""
	I0912 23:02:32.318225   61354 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0912 23:02:32.319660   61354 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
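	The api_server.go lines above show the restart logic polling the apiserver's /healthz endpoint, treating a 500 with failed post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) as "not ready yet" and stopping once it returns 200. A minimal sketch of such a poll loop, assuming a hypothetical standalone client; the URL, interval, deadline, and TLS handling here are illustrative assumptions, not minikube's actual implementation (its client authenticates with the cluster certificates rather than skipping verification):

	// healthz_poll.go - illustrative sketch of waiting for an apiserver /healthz to return 200.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func waitForAPIServer(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Assumption: verification skipped for brevity; a real client would
				// load the cluster CA and client cert/key instead.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz returned 200: ok
				}
				// A 500 listing "[-]poststarthook/... failed" means the apiserver is
				// still finishing its post-start hooks; wait and retry.
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	}

	func main() {
		if err := waitForAPIServer("https://192.168.39.214:8444/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}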
	I0912 23:02:29.746186   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:30.245501   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:30.745636   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:31.245440   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:31.745457   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:32.246318   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:32.745369   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:33.246152   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:33.746183   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:34.245452   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:31.974622   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:34.473549   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:34.019784   62943 api_server.go:269] stopped: https://192.168.50.253:8443/healthz: Get "https://192.168.50.253:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 23:02:34.019838   62943 api_server.go:253] Checking apiserver healthz at https://192.168.50.253:8443/healthz ...
	I0912 23:02:32.320695   61354 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0912 23:02:32.338749   61354 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0912 23:02:32.369921   61354 system_pods.go:43] waiting for kube-system pods to appear ...
	I0912 23:02:32.385934   61354 system_pods.go:59] 8 kube-system pods found
	I0912 23:02:32.385966   61354 system_pods.go:61] "coredns-7c65d6cfc9-ffms7" [d341bfb6-115b-4a9b-8ee5-ac0f6e0cf97a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0912 23:02:32.385986   61354 system_pods.go:61] "etcd-default-k8s-diff-port-702201" [c0c55fa9-3c65-4299-a1bb-59a55585a525] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0912 23:02:32.385996   61354 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-702201" [bf79734c-4cbc-4924-9358-f0196b357303] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0912 23:02:32.386007   61354 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-702201" [92a6ae59-ae75-4c08-a7dc-a77841be564b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0912 23:02:32.386019   61354 system_pods.go:61] "kube-proxy-x8hg2" [ef603b08-213d-4edb-85e6-e8b91f8fbbba] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0912 23:02:32.386027   61354 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-702201" [10021400-9446-46f6-aff0-e3eb3c0be96a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0912 23:02:32.386041   61354 system_pods.go:61] "metrics-server-6867b74b74-q5vlk" [d6719976-8c0c-444f-a1ea-dd3bdb0d5707] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0912 23:02:32.386051   61354 system_pods.go:61] "storage-provisioner" [6fdb298d-7e96-4cbb-b755-d866514e44b9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0912 23:02:32.386063   61354 system_pods.go:74] duration metric: took 16.120876ms to wait for pod list to return data ...
	I0912 23:02:32.386074   61354 node_conditions.go:102] verifying NodePressure condition ...
	I0912 23:02:32.391917   61354 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0912 23:02:32.391949   61354 node_conditions.go:123] node cpu capacity is 2
	I0912 23:02:32.391961   61354 node_conditions.go:105] duration metric: took 5.88075ms to run NodePressure ...
	I0912 23:02:32.391981   61354 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:02:32.671906   61354 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0912 23:02:32.677468   61354 kubeadm.go:739] kubelet initialised
	I0912 23:02:32.677494   61354 kubeadm.go:740] duration metric: took 5.561384ms waiting for restarted kubelet to initialise ...
	I0912 23:02:32.677503   61354 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 23:02:32.682823   61354 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-ffms7" in "kube-system" namespace to be "Ready" ...
	I0912 23:02:34.689536   61354 pod_ready.go:103] pod "coredns-7c65d6cfc9-ffms7" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:36.689748   61354 pod_ready.go:103] pod "coredns-7c65d6cfc9-ffms7" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:34.746241   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:35.246108   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:35.746087   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:36.245732   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:36.745659   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:37.245760   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:37.746137   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:38.245355   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:38.745905   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:39.246196   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:36.976523   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:39.473513   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:39.020907   62943 api_server.go:269] stopped: https://192.168.50.253:8443/healthz: Get "https://192.168.50.253:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 23:02:39.020949   62943 api_server.go:253] Checking apiserver healthz at https://192.168.50.253:8443/healthz ...
	I0912 23:02:39.398775   62943 api_server.go:269] stopped: https://192.168.50.253:8443/healthz: Get "https://192.168.50.253:8443/healthz": read tcp 192.168.50.1:34338->192.168.50.253:8443: read: connection reset by peer
	I0912 23:02:39.518000   62943 api_server.go:253] Checking apiserver healthz at https://192.168.50.253:8443/healthz ...
	I0912 23:02:39.518572   62943 api_server.go:269] stopped: https://192.168.50.253:8443/healthz: Get "https://192.168.50.253:8443/healthz": dial tcp 192.168.50.253:8443: connect: connection refused
	I0912 23:02:40.018526   62943 api_server.go:253] Checking apiserver healthz at https://192.168.50.253:8443/healthz ...
	I0912 23:02:40.019085   62943 api_server.go:269] stopped: https://192.168.50.253:8443/healthz: Get "https://192.168.50.253:8443/healthz": dial tcp 192.168.50.253:8443: connect: connection refused
	I0912 23:02:40.518456   62943 api_server.go:253] Checking apiserver healthz at https://192.168.50.253:8443/healthz ...
	I0912 23:02:37.692070   61354 pod_ready.go:93] pod "coredns-7c65d6cfc9-ffms7" in "kube-system" namespace has status "Ready":"True"
	I0912 23:02:37.692105   61354 pod_ready.go:82] duration metric: took 5.009256797s for pod "coredns-7c65d6cfc9-ffms7" in "kube-system" namespace to be "Ready" ...
	I0912 23:02:37.692119   61354 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-702201" in "kube-system" namespace to be "Ready" ...
	I0912 23:02:39.703004   61354 pod_ready.go:93] pod "etcd-default-k8s-diff-port-702201" in "kube-system" namespace has status "Ready":"True"
	I0912 23:02:39.703029   61354 pod_ready.go:82] duration metric: took 2.010902876s for pod "etcd-default-k8s-diff-port-702201" in "kube-system" namespace to be "Ready" ...
	I0912 23:02:39.703038   61354 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-702201" in "kube-system" namespace to be "Ready" ...
	I0912 23:02:41.709956   61354 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-702201" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:39.745643   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:40.245485   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:40.745582   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:41.245599   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:41.746339   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:42.246155   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:42.746334   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:43.245368   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:43.745371   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:44.246050   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:41.473779   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:43.475011   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:45.519472   62943 api_server.go:269] stopped: https://192.168.50.253:8443/healthz: Get "https://192.168.50.253:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 23:02:45.519513   62943 api_server.go:253] Checking apiserver healthz at https://192.168.50.253:8443/healthz ...
	I0912 23:02:44.210871   61354 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-702201" in "kube-system" namespace has status "Ready":"True"
	I0912 23:02:44.210896   61354 pod_ready.go:82] duration metric: took 4.507851295s for pod "kube-apiserver-default-k8s-diff-port-702201" in "kube-system" namespace to be "Ready" ...
	I0912 23:02:44.210905   61354 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-702201" in "kube-system" namespace to be "Ready" ...
	I0912 23:02:44.216677   61354 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-702201" in "kube-system" namespace has status "Ready":"True"
	I0912 23:02:44.216698   61354 pod_ready.go:82] duration metric: took 5.785493ms for pod "kube-controller-manager-default-k8s-diff-port-702201" in "kube-system" namespace to be "Ready" ...
	I0912 23:02:44.216708   61354 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-x8hg2" in "kube-system" namespace to be "Ready" ...
	I0912 23:02:44.220720   61354 pod_ready.go:93] pod "kube-proxy-x8hg2" in "kube-system" namespace has status "Ready":"True"
	I0912 23:02:44.220744   61354 pod_ready.go:82] duration metric: took 4.031371ms for pod "kube-proxy-x8hg2" in "kube-system" namespace to be "Ready" ...
	I0912 23:02:44.220753   61354 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-702201" in "kube-system" namespace to be "Ready" ...
	I0912 23:02:45.727199   61354 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-702201" in "kube-system" namespace has status "Ready":"True"
	I0912 23:02:45.727226   61354 pod_ready.go:82] duration metric: took 1.506465715s for pod "kube-scheduler-default-k8s-diff-port-702201" in "kube-system" namespace to be "Ready" ...
	I0912 23:02:45.727238   61354 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace to be "Ready" ...
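	The pod_ready.go lines record per-pod waits for the Ready condition of each system-critical pod. A minimal client-go sketch of the same kind of check, assuming a kubeconfig path, namespace, pod name, and poll interval supplied by the caller; this is an illustrative sketch, not minikube's pod_ready implementation:

	// pod_ready_sketch.go - poll a pod until its PodReady condition is True.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the pod's PodReady condition is True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	// waitPodReady polls until the pod is Ready or the timeout expires.
	func waitPodReady(kubeconfig, namespace, name string, timeout time.Duration) error {
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			return err
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			return err
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pod, err := client.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
			if err == nil && isPodReady(pod) {
				return nil
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("pod %s/%s not Ready within %s", namespace, name, timeout)
	}

	func main() {
		err := waitPodReady("/home/jenkins/minikube-integration/19616-5891/kubeconfig",
			"kube-system", "metrics-server-6867b74b74-q5vlk", 4*time.Minute)
		fmt.Println(err)
	}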
	I0912 23:02:44.746354   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:45.245964   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:45.745631   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:46.246314   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:46.745483   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:47.245554   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:47.746311   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:48.246160   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:48.745999   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:49.246000   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:02:49.246093   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:02:49.286022   62386 cri.go:89] found id: ""
	I0912 23:02:49.286052   62386 logs.go:276] 0 containers: []
	W0912 23:02:49.286063   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:02:49.286070   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:02:49.286121   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:02:49.320469   62386 cri.go:89] found id: ""
	I0912 23:02:49.320508   62386 logs.go:276] 0 containers: []
	W0912 23:02:49.320527   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:02:49.320535   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:02:49.320635   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:02:45.973431   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:47.973882   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:49.974075   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:50.520522   62943 api_server.go:269] stopped: https://192.168.50.253:8443/healthz: Get "https://192.168.50.253:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 23:02:50.520570   62943 api_server.go:253] Checking apiserver healthz at https://192.168.50.253:8443/healthz ...
	I0912 23:02:47.732861   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:49.735642   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:52.232946   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:49.355651   62386 cri.go:89] found id: ""
	I0912 23:02:49.355682   62386 logs.go:276] 0 containers: []
	W0912 23:02:49.355694   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:02:49.355702   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:02:49.355757   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:02:49.387928   62386 cri.go:89] found id: ""
	I0912 23:02:49.387956   62386 logs.go:276] 0 containers: []
	W0912 23:02:49.387966   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:02:49.387980   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:02:49.388042   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:02:49.421154   62386 cri.go:89] found id: ""
	I0912 23:02:49.421184   62386 logs.go:276] 0 containers: []
	W0912 23:02:49.421192   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:02:49.421198   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:02:49.421258   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:02:49.460122   62386 cri.go:89] found id: ""
	I0912 23:02:49.460147   62386 logs.go:276] 0 containers: []
	W0912 23:02:49.460154   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:02:49.460159   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:02:49.460204   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:02:49.493113   62386 cri.go:89] found id: ""
	I0912 23:02:49.493136   62386 logs.go:276] 0 containers: []
	W0912 23:02:49.493144   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:02:49.493150   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:02:49.493196   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:02:49.525750   62386 cri.go:89] found id: ""
	I0912 23:02:49.525773   62386 logs.go:276] 0 containers: []
	W0912 23:02:49.525780   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:02:49.525790   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:02:49.525800   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:02:49.578720   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:02:49.578757   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:02:49.591483   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:02:49.591510   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:02:49.711769   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:02:49.711836   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:02:49.711854   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:02:49.792569   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:02:49.792620   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
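	Each retry round above runs "sudo crictl ps -a --quiet --name=<component>" over SSH and reports "No container was found matching ..." when the output is empty. A standalone sketch of that check via os/exec, assuming crictl is available locally rather than reached over ssh_runner as in the log:

	// crictl_check.go - look for control-plane containers the way the log's crictl calls do.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// findContainerIDs returns the IDs of all containers (running or exited)
	// whose name matches, or an empty slice when none exist.
	func findContainerIDs(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		for _, component := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
			ids, err := findContainerIDs(component)
			if err != nil {
				fmt.Println(component, "error:", err)
				continue
			}
			if len(ids) == 0 {
				fmt.Printf("no container was found matching %q\n", component)
				continue
			}
			fmt.Println(component, "containers:", ids)
		}
	}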
	I0912 23:02:52.333723   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:52.346359   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:02:52.346428   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:02:52.379990   62386 cri.go:89] found id: ""
	I0912 23:02:52.380017   62386 logs.go:276] 0 containers: []
	W0912 23:02:52.380025   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:02:52.380032   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:02:52.380089   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:02:52.413963   62386 cri.go:89] found id: ""
	I0912 23:02:52.413994   62386 logs.go:276] 0 containers: []
	W0912 23:02:52.414002   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:02:52.414007   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:02:52.414064   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:02:52.463982   62386 cri.go:89] found id: ""
	I0912 23:02:52.464012   62386 logs.go:276] 0 containers: []
	W0912 23:02:52.464024   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:02:52.464031   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:02:52.464119   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:02:52.497797   62386 cri.go:89] found id: ""
	I0912 23:02:52.497830   62386 logs.go:276] 0 containers: []
	W0912 23:02:52.497840   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:02:52.497848   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:02:52.497914   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:02:52.531946   62386 cri.go:89] found id: ""
	I0912 23:02:52.531974   62386 logs.go:276] 0 containers: []
	W0912 23:02:52.531982   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:02:52.531987   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:02:52.532036   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:02:52.563802   62386 cri.go:89] found id: ""
	I0912 23:02:52.563837   62386 logs.go:276] 0 containers: []
	W0912 23:02:52.563846   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:02:52.563859   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:02:52.563914   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:02:52.597408   62386 cri.go:89] found id: ""
	I0912 23:02:52.597437   62386 logs.go:276] 0 containers: []
	W0912 23:02:52.597447   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:02:52.597457   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:02:52.597529   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:02:52.634991   62386 cri.go:89] found id: ""
	I0912 23:02:52.635026   62386 logs.go:276] 0 containers: []
	W0912 23:02:52.635037   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:02:52.635049   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:02:52.635061   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:02:52.711072   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:02:52.711112   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:02:52.755335   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:02:52.755359   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:02:52.806660   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:02:52.806694   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:02:52.819718   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:02:52.819751   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:02:52.897247   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:02:52.474466   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:54.974351   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:55.520831   62943 api_server.go:269] stopped: https://192.168.50.253:8443/healthz: Get "https://192.168.50.253:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 23:02:55.520879   62943 api_server.go:253] Checking apiserver healthz at https://192.168.50.253:8443/healthz ...
	I0912 23:02:54.233244   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:56.234057   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:55.398028   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:55.411839   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:02:55.411920   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:02:55.446367   62386 cri.go:89] found id: ""
	I0912 23:02:55.446402   62386 logs.go:276] 0 containers: []
	W0912 23:02:55.446414   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:02:55.446421   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:02:55.446489   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:02:55.481672   62386 cri.go:89] found id: ""
	I0912 23:02:55.481696   62386 logs.go:276] 0 containers: []
	W0912 23:02:55.481704   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:02:55.481709   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:02:55.481766   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:02:55.517577   62386 cri.go:89] found id: ""
	I0912 23:02:55.517628   62386 logs.go:276] 0 containers: []
	W0912 23:02:55.517640   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:02:55.517651   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:02:55.517724   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:02:55.553526   62386 cri.go:89] found id: ""
	I0912 23:02:55.553554   62386 logs.go:276] 0 containers: []
	W0912 23:02:55.553565   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:02:55.553572   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:02:55.553659   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:02:55.585628   62386 cri.go:89] found id: ""
	I0912 23:02:55.585658   62386 logs.go:276] 0 containers: []
	W0912 23:02:55.585666   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:02:55.585673   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:02:55.585729   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:02:55.619504   62386 cri.go:89] found id: ""
	I0912 23:02:55.619529   62386 logs.go:276] 0 containers: []
	W0912 23:02:55.619537   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:02:55.619543   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:02:55.619612   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:02:55.652478   62386 cri.go:89] found id: ""
	I0912 23:02:55.652505   62386 logs.go:276] 0 containers: []
	W0912 23:02:55.652513   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:02:55.652519   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:02:55.652571   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:02:55.685336   62386 cri.go:89] found id: ""
	I0912 23:02:55.685367   62386 logs.go:276] 0 containers: []
	W0912 23:02:55.685378   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:02:55.685389   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:02:55.685405   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:02:55.766786   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:02:55.766820   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:02:55.805897   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:02:55.805921   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:02:55.858536   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:02:55.858578   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:02:55.872300   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:02:55.872330   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:02:55.940023   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:02:58.440335   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:58.454063   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:02:58.454146   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:02:58.495390   62386 cri.go:89] found id: ""
	I0912 23:02:58.495418   62386 logs.go:276] 0 containers: []
	W0912 23:02:58.495429   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:02:58.495436   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:02:58.495491   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:02:58.533323   62386 cri.go:89] found id: ""
	I0912 23:02:58.533361   62386 logs.go:276] 0 containers: []
	W0912 23:02:58.533369   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:02:58.533374   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:02:58.533426   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:02:58.570749   62386 cri.go:89] found id: ""
	I0912 23:02:58.570772   62386 logs.go:276] 0 containers: []
	W0912 23:02:58.570779   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:02:58.570785   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:02:58.570838   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:02:58.602812   62386 cri.go:89] found id: ""
	I0912 23:02:58.602841   62386 logs.go:276] 0 containers: []
	W0912 23:02:58.602852   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:02:58.602861   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:02:58.602920   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:02:58.641837   62386 cri.go:89] found id: ""
	I0912 23:02:58.641868   62386 logs.go:276] 0 containers: []
	W0912 23:02:58.641875   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:02:58.641881   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:02:58.641951   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:02:58.679411   62386 cri.go:89] found id: ""
	I0912 23:02:58.679437   62386 logs.go:276] 0 containers: []
	W0912 23:02:58.679444   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:02:58.679449   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:02:58.679495   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:02:58.715666   62386 cri.go:89] found id: ""
	I0912 23:02:58.715693   62386 logs.go:276] 0 containers: []
	W0912 23:02:58.715701   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:02:58.715707   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:02:58.715765   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:02:58.750345   62386 cri.go:89] found id: ""
	I0912 23:02:58.750367   62386 logs.go:276] 0 containers: []
	W0912 23:02:58.750375   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:02:58.750383   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:02:58.750395   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:02:58.803683   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:02:58.803722   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:02:58.819479   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:02:58.819512   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:02:58.939708   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:02:58.939733   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:02:58.939752   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:02:59.031209   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:02:59.031241   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:02:58.535050   62943 api_server.go:279] https://192.168.50.253:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0912 23:02:58.535080   62943 api_server.go:103] status: https://192.168.50.253:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0912 23:02:58.535094   62943 api_server.go:253] Checking apiserver healthz at https://192.168.50.253:8443/healthz ...
	I0912 23:02:58.552759   62943 api_server.go:279] https://192.168.50.253:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0912 23:02:58.552792   62943 api_server.go:103] status: https://192.168.50.253:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0912 23:02:59.018401   62943 api_server.go:253] Checking apiserver healthz at https://192.168.50.253:8443/healthz ...
	I0912 23:02:59.026830   62943 api_server.go:279] https://192.168.50.253:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0912 23:02:59.026861   62943 api_server.go:103] status: https://192.168.50.253:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0912 23:02:59.518413   62943 api_server.go:253] Checking apiserver healthz at https://192.168.50.253:8443/healthz ...
	I0912 23:02:59.523435   62943 api_server.go:279] https://192.168.50.253:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0912 23:02:59.523469   62943 api_server.go:103] status: https://192.168.50.253:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0912 23:03:00.018452   62943 api_server.go:253] Checking apiserver healthz at https://192.168.50.253:8443/healthz ...
	I0912 23:03:00.023786   62943 api_server.go:279] https://192.168.50.253:8443/healthz returned 200:
	ok
	I0912 23:03:00.033543   62943 api_server.go:141] control plane version: v1.31.1
	I0912 23:03:00.033575   62943 api_server.go:131] duration metric: took 41.016185943s to wait for apiserver health ...
	I0912 23:03:00.033585   62943 cni.go:84] Creating CNI manager for ""
	I0912 23:03:00.033595   62943 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0912 23:03:00.035383   62943 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0912 23:02:56.975435   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:59.473968   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:00.036655   62943 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0912 23:03:00.051876   62943 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0912 23:03:00.082432   62943 system_pods.go:43] waiting for kube-system pods to appear ...
	I0912 23:03:00.101427   62943 system_pods.go:59] 8 kube-system pods found
	I0912 23:03:00.101465   62943 system_pods.go:61] "coredns-7c65d6cfc9-twck7" [2fb00aff-8a30-4634-a804-1419eabfe727] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0912 23:03:00.101477   62943 system_pods.go:61] "etcd-no-preload-380092" [69b6be54-dd29-47c7-b990-a64335dd6d7b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0912 23:03:00.101488   62943 system_pods.go:61] "kube-apiserver-no-preload-380092" [10ff70db-3c74-42ad-841d-d2241de4b98e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0912 23:03:00.101498   62943 system_pods.go:61] "kube-controller-manager-no-preload-380092" [6e91c5b2-36fc-404e-9f09-c1bc9da46774] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0912 23:03:00.101512   62943 system_pods.go:61] "kube-proxy-z4rcx" [d17caa2e-d0fe-45e8-a96c-d1cc1b55e665] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0912 23:03:00.101518   62943 system_pods.go:61] "kube-scheduler-no-preload-380092" [5c634cac-6b28-4757-ba85-891c4c2fa34e] Running
	I0912 23:03:00.101526   62943 system_pods.go:61] "metrics-server-6867b74b74-4v7f5" [10c8c536-9ca6-4e75-96f2-7324f3d3d379] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0912 23:03:00.101537   62943 system_pods.go:61] "storage-provisioner" [f173a1f6-3772-4f08-8e40-2215cc9d2878] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0912 23:03:00.101554   62943 system_pods.go:74] duration metric: took 19.092541ms to wait for pod list to return data ...
	I0912 23:03:00.101566   62943 node_conditions.go:102] verifying NodePressure condition ...
	I0912 23:03:00.105149   62943 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0912 23:03:00.105183   62943 node_conditions.go:123] node cpu capacity is 2
	I0912 23:03:00.105197   62943 node_conditions.go:105] duration metric: took 3.62458ms to run NodePressure ...
	I0912 23:03:00.105218   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:03:00.583613   62943 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0912 23:03:00.588976   62943 kubeadm.go:739] kubelet initialised
	I0912 23:03:00.589000   62943 kubeadm.go:740] duration metric: took 5.359605ms waiting for restarted kubelet to initialise ...
	I0912 23:03:00.589010   62943 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 23:03:00.598717   62943 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-twck7" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:00.619126   62943 pod_ready.go:98] node "no-preload-380092" hosting pod "coredns-7c65d6cfc9-twck7" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-380092" has status "Ready":"False"
	I0912 23:03:00.619153   62943 pod_ready.go:82] duration metric: took 20.405609ms for pod "coredns-7c65d6cfc9-twck7" in "kube-system" namespace to be "Ready" ...
	E0912 23:03:00.619162   62943 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-380092" hosting pod "coredns-7c65d6cfc9-twck7" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-380092" has status "Ready":"False"
	I0912 23:03:00.619169   62943 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-380092" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:00.628727   62943 pod_ready.go:98] node "no-preload-380092" hosting pod "etcd-no-preload-380092" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-380092" has status "Ready":"False"
	I0912 23:03:00.628766   62943 pod_ready.go:82] duration metric: took 9.588722ms for pod "etcd-no-preload-380092" in "kube-system" namespace to be "Ready" ...
	E0912 23:03:00.628778   62943 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-380092" hosting pod "etcd-no-preload-380092" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-380092" has status "Ready":"False"
	I0912 23:03:00.628786   62943 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-380092" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:00.638502   62943 pod_ready.go:98] node "no-preload-380092" hosting pod "kube-apiserver-no-preload-380092" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-380092" has status "Ready":"False"
	I0912 23:03:00.638531   62943 pod_ready.go:82] duration metric: took 9.737333ms for pod "kube-apiserver-no-preload-380092" in "kube-system" namespace to be "Ready" ...
	E0912 23:03:00.638545   62943 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-380092" hosting pod "kube-apiserver-no-preload-380092" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-380092" has status "Ready":"False"
	I0912 23:03:00.638554   62943 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-380092" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:00.644886   62943 pod_ready.go:98] node "no-preload-380092" hosting pod "kube-controller-manager-no-preload-380092" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-380092" has status "Ready":"False"
	I0912 23:03:00.644917   62943 pod_ready.go:82] duration metric: took 6.353295ms for pod "kube-controller-manager-no-preload-380092" in "kube-system" namespace to be "Ready" ...
	E0912 23:03:00.644928   62943 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-380092" hosting pod "kube-controller-manager-no-preload-380092" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-380092" has status "Ready":"False"
	I0912 23:03:00.644936   62943 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-z4rcx" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:00.987565   62943 pod_ready.go:98] node "no-preload-380092" hosting pod "kube-proxy-z4rcx" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-380092" has status "Ready":"False"
	I0912 23:03:00.987592   62943 pod_ready.go:82] duration metric: took 342.646574ms for pod "kube-proxy-z4rcx" in "kube-system" namespace to be "Ready" ...
	E0912 23:03:00.987605   62943 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-380092" hosting pod "kube-proxy-z4rcx" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-380092" has status "Ready":"False"
	I0912 23:03:00.987614   62943 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-380092" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:01.386942   62943 pod_ready.go:98] node "no-preload-380092" hosting pod "kube-scheduler-no-preload-380092" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-380092" has status "Ready":"False"
	I0912 23:03:01.386970   62943 pod_ready.go:82] duration metric: took 399.349066ms for pod "kube-scheduler-no-preload-380092" in "kube-system" namespace to be "Ready" ...
	E0912 23:03:01.386983   62943 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-380092" hosting pod "kube-scheduler-no-preload-380092" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-380092" has status "Ready":"False"
	I0912 23:03:01.386991   62943 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:01.787866   62943 pod_ready.go:98] node "no-preload-380092" hosting pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-380092" has status "Ready":"False"
	I0912 23:03:01.787897   62943 pod_ready.go:82] duration metric: took 400.896489ms for pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace to be "Ready" ...
	E0912 23:03:01.787906   62943 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-380092" hosting pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-380092" has status "Ready":"False"
	I0912 23:03:01.787913   62943 pod_ready.go:39] duration metric: took 1.198893167s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 23:03:01.787929   62943 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0912 23:03:01.803486   62943 ops.go:34] apiserver oom_adj: -16
	I0912 23:03:01.803507   62943 kubeadm.go:597] duration metric: took 45.468348317s to restartPrimaryControlPlane
	I0912 23:03:01.803518   62943 kubeadm.go:394] duration metric: took 45.529458545s to StartCluster
	I0912 23:03:01.803533   62943 settings.go:142] acquiring lock: {Name:mk9c957feafb8d7ccd833ad0c106ef81ecfe5ba8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 23:03:01.803615   62943 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19616-5891/kubeconfig
	I0912 23:03:01.806430   62943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/kubeconfig: {Name:mkffb46c3e9d2b8baebc7237b48bf41bccf1a52c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 23:03:01.806730   62943 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.253 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0912 23:03:01.806804   62943 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0912 23:03:01.806874   62943 addons.go:69] Setting storage-provisioner=true in profile "no-preload-380092"
	I0912 23:03:01.806898   62943 addons.go:69] Setting default-storageclass=true in profile "no-preload-380092"
	I0912 23:03:01.806914   62943 addons.go:69] Setting metrics-server=true in profile "no-preload-380092"
	I0912 23:03:01.806932   62943 addons.go:234] Setting addon metrics-server=true in "no-preload-380092"
	I0912 23:03:01.806937   62943 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-380092"
	W0912 23:03:01.806944   62943 addons.go:243] addon metrics-server should already be in state true
	I0912 23:03:01.806948   62943 config.go:182] Loaded profile config "no-preload-380092": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 23:03:01.806978   62943 host.go:66] Checking if "no-preload-380092" exists ...
	I0912 23:03:01.806909   62943 addons.go:234] Setting addon storage-provisioner=true in "no-preload-380092"
	W0912 23:03:01.806995   62943 addons.go:243] addon storage-provisioner should already be in state true
	I0912 23:03:01.807018   62943 host.go:66] Checking if "no-preload-380092" exists ...
	I0912 23:03:01.807284   62943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:03:01.807301   62943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:03:01.807309   62943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:03:01.807349   62943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:03:01.807363   62943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:03:01.807373   62943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:03:01.809540   62943 out.go:177] * Verifying Kubernetes components...
	I0912 23:03:01.810843   62943 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 23:03:01.824985   62943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32987
	I0912 23:03:01.825219   62943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45739
	I0912 23:03:01.825700   62943 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:03:01.826207   62943 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:03:01.826562   62943 main.go:141] libmachine: Using API Version  1
	I0912 23:03:01.826586   62943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:03:01.826737   62943 main.go:141] libmachine: Using API Version  1
	I0912 23:03:01.826759   62943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:03:01.826970   62943 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:03:01.827047   62943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35143
	I0912 23:03:01.827219   62943 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:03:01.827623   62943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:03:01.827668   62943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:03:01.827724   62943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:03:01.827752   62943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:03:01.827946   62943 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:03:01.828629   62943 main.go:141] libmachine: Using API Version  1
	I0912 23:03:01.828652   62943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:03:01.829143   62943 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:03:01.829336   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetState
	I0912 23:03:01.833298   62943 addons.go:234] Setting addon default-storageclass=true in "no-preload-380092"
	W0912 23:03:01.833320   62943 addons.go:243] addon default-storageclass should already be in state true
	I0912 23:03:01.833348   62943 host.go:66] Checking if "no-preload-380092" exists ...
	I0912 23:03:01.833737   62943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:03:01.833768   62943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:03:01.847465   62943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40485
	I0912 23:03:01.848132   62943 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:03:01.848218   62943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46487
	I0912 23:03:01.848635   62943 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:03:01.849006   62943 main.go:141] libmachine: Using API Version  1
	I0912 23:03:01.849024   62943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:03:01.849185   62943 main.go:141] libmachine: Using API Version  1
	I0912 23:03:01.849197   62943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:03:01.849589   62943 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:03:01.849756   62943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41723
	I0912 23:03:01.849909   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetState
	I0912 23:03:01.850287   62943 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:03:01.850375   62943 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:03:01.850446   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetState
	I0912 23:03:01.851043   62943 main.go:141] libmachine: Using API Version  1
	I0912 23:03:01.851061   62943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:03:01.851397   62943 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:03:01.851935   62943 main.go:141] libmachine: (no-preload-380092) Calling .DriverName
	I0912 23:03:01.852036   62943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:03:01.852082   62943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:03:01.852907   62943 main.go:141] libmachine: (no-preload-380092) Calling .DriverName
	I0912 23:03:01.854324   62943 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0912 23:03:01.855272   62943 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 23:03:01.856071   62943 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0912 23:03:01.856092   62943 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0912 23:03:01.856115   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHHostname
	I0912 23:03:01.857163   62943 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0912 23:03:01.857184   62943 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0912 23:03:01.857206   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHHostname
	I0912 23:03:01.861326   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:03:01.861344   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:03:01.861874   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:03:01.861894   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:03:01.862197   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHPort
	I0912 23:03:01.862292   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:03:01.862588   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:03:01.862627   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHKeyPath
	I0912 23:03:01.862668   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHPort
	I0912 23:03:01.862751   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHUsername
	I0912 23:03:01.862900   62943 sshutil.go:53] new ssh client: &{IP:192.168.50.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/no-preload-380092/id_rsa Username:docker}
	I0912 23:03:01.862917   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHKeyPath
	I0912 23:03:01.863057   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHUsername
	I0912 23:03:01.863161   62943 sshutil.go:53] new ssh client: &{IP:192.168.50.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/no-preload-380092/id_rsa Username:docker}
	I0912 23:03:01.872673   62943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42483
	I0912 23:03:01.873156   62943 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:03:01.873848   62943 main.go:141] libmachine: Using API Version  1
	I0912 23:03:01.873924   62943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:03:01.874438   62943 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:03:01.874719   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetState
	I0912 23:03:01.876928   62943 main.go:141] libmachine: (no-preload-380092) Calling .DriverName
	I0912 23:03:01.877226   62943 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0912 23:03:01.877252   62943 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0912 23:03:01.877268   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHHostname
	I0912 23:03:01.880966   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:03:01.881372   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:03:01.881399   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:03:01.881915   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHPort
	I0912 23:03:01.885353   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHKeyPath
	I0912 23:03:01.885585   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHUsername
	I0912 23:03:01.885765   62943 sshutil.go:53] new ssh client: &{IP:192.168.50.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/no-preload-380092/id_rsa Username:docker}
	I0912 23:02:58.234446   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:00.235816   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:02.035632   62943 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0912 23:03:02.065690   62943 node_ready.go:35] waiting up to 6m0s for node "no-preload-380092" to be "Ready" ...
	I0912 23:03:02.132250   62943 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0912 23:03:02.148150   62943 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0912 23:03:02.270629   62943 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0912 23:03:02.270652   62943 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0912 23:03:02.346093   62943 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0912 23:03:02.346119   62943 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0912 23:03:02.371110   62943 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0912 23:03:02.371133   62943 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0912 23:03:02.415856   62943 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0912 23:03:03.287692   62943 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.13950787s)
	I0912 23:03:03.287695   62943 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.155412179s)
	I0912 23:03:03.287752   62943 main.go:141] libmachine: Making call to close driver server
	I0912 23:03:03.287756   62943 main.go:141] libmachine: Making call to close driver server
	I0912 23:03:03.287764   62943 main.go:141] libmachine: (no-preload-380092) Calling .Close
	I0912 23:03:03.287769   62943 main.go:141] libmachine: (no-preload-380092) Calling .Close
	I0912 23:03:03.288100   62943 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:03:03.288115   62943 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:03:03.288124   62943 main.go:141] libmachine: Making call to close driver server
	I0912 23:03:03.288130   62943 main.go:141] libmachine: (no-preload-380092) Calling .Close
	I0912 23:03:03.288252   62943 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:03:03.288270   62943 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:03:03.288293   62943 main.go:141] libmachine: (no-preload-380092) DBG | Closing plugin on server side
	I0912 23:03:03.288297   62943 main.go:141] libmachine: Making call to close driver server
	I0912 23:03:03.288454   62943 main.go:141] libmachine: (no-preload-380092) Calling .Close
	I0912 23:03:03.288321   62943 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:03:03.288507   62943 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:03:03.288346   62943 main.go:141] libmachine: (no-preload-380092) DBG | Closing plugin on server side
	I0912 23:03:03.288671   62943 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:03:03.288682   62943 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:03:03.294958   62943 main.go:141] libmachine: Making call to close driver server
	I0912 23:03:03.294982   62943 main.go:141] libmachine: (no-preload-380092) Calling .Close
	I0912 23:03:03.295233   62943 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:03:03.295252   62943 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:03:03.295254   62943 main.go:141] libmachine: (no-preload-380092) DBG | Closing plugin on server side
	I0912 23:03:03.492450   62943 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.076542284s)
	I0912 23:03:03.492503   62943 main.go:141] libmachine: Making call to close driver server
	I0912 23:03:03.492516   62943 main.go:141] libmachine: (no-preload-380092) Calling .Close
	I0912 23:03:03.492830   62943 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:03:03.492855   62943 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:03:03.492866   62943 main.go:141] libmachine: Making call to close driver server
	I0912 23:03:03.492885   62943 main.go:141] libmachine: (no-preload-380092) Calling .Close
	I0912 23:03:03.493108   62943 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:03:03.493121   62943 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:03:03.493132   62943 addons.go:475] Verifying addon metrics-server=true in "no-preload-380092"
	I0912 23:03:03.495865   62943 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0912 23:03:01.578409   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:01.591929   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:01.592004   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:01.626295   62386 cri.go:89] found id: ""
	I0912 23:03:01.626327   62386 logs.go:276] 0 containers: []
	W0912 23:03:01.626339   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:01.626346   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:01.626406   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:01.660489   62386 cri.go:89] found id: ""
	I0912 23:03:01.660520   62386 logs.go:276] 0 containers: []
	W0912 23:03:01.660543   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:01.660563   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:01.660618   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:01.694378   62386 cri.go:89] found id: ""
	I0912 23:03:01.694401   62386 logs.go:276] 0 containers: []
	W0912 23:03:01.694408   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:01.694414   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:01.694467   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:01.733170   62386 cri.go:89] found id: ""
	I0912 23:03:01.733202   62386 logs.go:276] 0 containers: []
	W0912 23:03:01.733211   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:01.733237   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:01.733307   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:01.766419   62386 cri.go:89] found id: ""
	I0912 23:03:01.766449   62386 logs.go:276] 0 containers: []
	W0912 23:03:01.766457   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:01.766467   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:01.766530   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:01.802964   62386 cri.go:89] found id: ""
	I0912 23:03:01.802988   62386 logs.go:276] 0 containers: []
	W0912 23:03:01.802995   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:01.803001   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:01.803047   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:01.846231   62386 cri.go:89] found id: ""
	I0912 23:03:01.846257   62386 logs.go:276] 0 containers: []
	W0912 23:03:01.846268   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:01.846276   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:01.846340   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:01.889353   62386 cri.go:89] found id: ""
	I0912 23:03:01.889379   62386 logs.go:276] 0 containers: []
	W0912 23:03:01.889387   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:01.889396   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:01.889407   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:01.904850   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:01.904876   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:01.986288   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:01.986311   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:01.986328   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:02.070616   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:02.070646   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:02.111931   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:02.111959   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:01.474395   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:03.974266   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:03.497285   62943 addons.go:510] duration metric: took 1.690482366s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0912 23:03:04.069715   62943 node_ready.go:53] node "no-preload-380092" has status "Ready":"False"
	I0912 23:03:06.070086   62943 node_ready.go:53] node "no-preload-380092" has status "Ready":"False"
	I0912 23:03:02.734363   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:04.735355   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:07.235634   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:04.676429   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:04.689177   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:04.689240   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:04.721393   62386 cri.go:89] found id: ""
	I0912 23:03:04.721420   62386 logs.go:276] 0 containers: []
	W0912 23:03:04.721431   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:04.721437   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:04.721494   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:04.754239   62386 cri.go:89] found id: ""
	I0912 23:03:04.754270   62386 logs.go:276] 0 containers: []
	W0912 23:03:04.754281   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:04.754288   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:04.754340   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:04.787546   62386 cri.go:89] found id: ""
	I0912 23:03:04.787576   62386 logs.go:276] 0 containers: []
	W0912 23:03:04.787590   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:04.787597   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:04.787657   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:04.821051   62386 cri.go:89] found id: ""
	I0912 23:03:04.821141   62386 logs.go:276] 0 containers: []
	W0912 23:03:04.821151   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:04.821157   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:04.821210   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:04.853893   62386 cri.go:89] found id: ""
	I0912 23:03:04.853918   62386 logs.go:276] 0 containers: []
	W0912 23:03:04.853928   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:04.853935   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:04.854013   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:04.887798   62386 cri.go:89] found id: ""
	I0912 23:03:04.887832   62386 logs.go:276] 0 containers: []
	W0912 23:03:04.887843   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:04.887850   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:04.887911   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:04.921562   62386 cri.go:89] found id: ""
	I0912 23:03:04.921587   62386 logs.go:276] 0 containers: []
	W0912 23:03:04.921595   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:04.921600   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:04.921667   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:04.956794   62386 cri.go:89] found id: ""
	I0912 23:03:04.956828   62386 logs.go:276] 0 containers: []
	W0912 23:03:04.956836   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:04.956845   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:04.956856   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:04.993926   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:04.993956   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:05.045381   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:05.045425   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:05.058626   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:05.058665   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:05.128158   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:05.128187   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:05.128205   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:07.707336   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:07.720573   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:07.720646   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:07.756694   62386 cri.go:89] found id: ""
	I0912 23:03:07.756716   62386 logs.go:276] 0 containers: []
	W0912 23:03:07.756724   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:07.756730   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:07.756777   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:07.789255   62386 cri.go:89] found id: ""
	I0912 23:03:07.789286   62386 logs.go:276] 0 containers: []
	W0912 23:03:07.789295   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:07.789318   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:07.789405   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:07.822472   62386 cri.go:89] found id: ""
	I0912 23:03:07.822510   62386 logs.go:276] 0 containers: []
	W0912 23:03:07.822525   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:07.822534   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:07.822594   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:07.859070   62386 cri.go:89] found id: ""
	I0912 23:03:07.859102   62386 logs.go:276] 0 containers: []
	W0912 23:03:07.859114   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:07.859122   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:07.859190   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:07.895128   62386 cri.go:89] found id: ""
	I0912 23:03:07.895155   62386 logs.go:276] 0 containers: []
	W0912 23:03:07.895163   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:07.895169   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:07.895225   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:07.927397   62386 cri.go:89] found id: ""
	I0912 23:03:07.927425   62386 logs.go:276] 0 containers: []
	W0912 23:03:07.927435   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:07.927442   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:07.927506   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:07.965500   62386 cri.go:89] found id: ""
	I0912 23:03:07.965534   62386 logs.go:276] 0 containers: []
	W0912 23:03:07.965546   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:07.965555   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:07.965635   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:08.002921   62386 cri.go:89] found id: ""
	I0912 23:03:08.002952   62386 logs.go:276] 0 containers: []
	W0912 23:03:08.002964   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:08.002974   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:08.002989   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:08.054610   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:08.054646   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:08.071096   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:08.071127   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:08.145573   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:08.145603   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:08.145641   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:08.232606   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:08.232639   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:05.974395   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:08.473180   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:10.474725   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:08.076176   62943 node_ready.go:53] node "no-preload-380092" has status "Ready":"False"
	I0912 23:03:09.570274   62943 node_ready.go:49] node "no-preload-380092" has status "Ready":"True"
	I0912 23:03:09.570298   62943 node_ready.go:38] duration metric: took 7.504574956s for node "no-preload-380092" to be "Ready" ...
	I0912 23:03:09.570308   62943 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 23:03:09.576111   62943 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-twck7" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:09.581239   62943 pod_ready.go:93] pod "coredns-7c65d6cfc9-twck7" in "kube-system" namespace has status "Ready":"True"
	I0912 23:03:09.581261   62943 pod_ready.go:82] duration metric: took 5.122813ms for pod "coredns-7c65d6cfc9-twck7" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:09.581277   62943 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-380092" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:09.585918   62943 pod_ready.go:93] pod "etcd-no-preload-380092" in "kube-system" namespace has status "Ready":"True"
	I0912 23:03:09.585942   62943 pod_ready.go:82] duration metric: took 4.657444ms for pod "etcd-no-preload-380092" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:09.585951   62943 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-380092" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:09.591114   62943 pod_ready.go:93] pod "kube-apiserver-no-preload-380092" in "kube-system" namespace has status "Ready":"True"
	I0912 23:03:09.591136   62943 pod_ready.go:82] duration metric: took 5.179585ms for pod "kube-apiserver-no-preload-380092" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:09.591145   62943 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-380092" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:11.598000   62943 pod_ready.go:103] pod "kube-controller-manager-no-preload-380092" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:09.734628   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:12.233572   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:10.770737   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:10.783728   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:10.783803   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:10.818792   62386 cri.go:89] found id: ""
	I0912 23:03:10.818827   62386 logs.go:276] 0 containers: []
	W0912 23:03:10.818839   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:10.818847   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:10.818913   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:10.851711   62386 cri.go:89] found id: ""
	I0912 23:03:10.851738   62386 logs.go:276] 0 containers: []
	W0912 23:03:10.851750   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:10.851757   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:10.851817   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:10.886935   62386 cri.go:89] found id: ""
	I0912 23:03:10.886963   62386 logs.go:276] 0 containers: []
	W0912 23:03:10.886973   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:10.886979   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:10.887033   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:10.923175   62386 cri.go:89] found id: ""
	I0912 23:03:10.923201   62386 logs.go:276] 0 containers: []
	W0912 23:03:10.923208   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:10.923214   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:10.923261   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:10.959865   62386 cri.go:89] found id: ""
	I0912 23:03:10.959890   62386 logs.go:276] 0 containers: []
	W0912 23:03:10.959897   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:10.959902   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:10.959952   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:10.995049   62386 cri.go:89] found id: ""
	I0912 23:03:10.995079   62386 logs.go:276] 0 containers: []
	W0912 23:03:10.995090   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:10.995097   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:10.995156   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:11.030132   62386 cri.go:89] found id: ""
	I0912 23:03:11.030157   62386 logs.go:276] 0 containers: []
	W0912 23:03:11.030166   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:11.030173   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:11.030242   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:11.062899   62386 cri.go:89] found id: ""
	I0912 23:03:11.062928   62386 logs.go:276] 0 containers: []
	W0912 23:03:11.062936   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:11.062945   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:11.062956   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:11.116511   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:11.116546   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:11.131472   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:11.131504   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:11.202744   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:11.202765   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:11.202781   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:11.293973   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:11.294011   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:13.833125   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:13.846624   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:13.846737   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:13.881744   62386 cri.go:89] found id: ""
	I0912 23:03:13.881784   62386 logs.go:276] 0 containers: []
	W0912 23:03:13.881794   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:13.881802   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:13.881861   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:13.921678   62386 cri.go:89] found id: ""
	I0912 23:03:13.921703   62386 logs.go:276] 0 containers: []
	W0912 23:03:13.921713   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:13.921719   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:13.921778   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:13.960039   62386 cri.go:89] found id: ""
	I0912 23:03:13.960067   62386 logs.go:276] 0 containers: []
	W0912 23:03:13.960077   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:13.960084   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:13.960150   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:14.001255   62386 cri.go:89] found id: ""
	I0912 23:03:14.001281   62386 logs.go:276] 0 containers: []
	W0912 23:03:14.001293   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:14.001318   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:14.001374   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:14.037212   62386 cri.go:89] found id: ""
	I0912 23:03:14.037241   62386 logs.go:276] 0 containers: []
	W0912 23:03:14.037252   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:14.037259   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:14.037319   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:14.071538   62386 cri.go:89] found id: ""
	I0912 23:03:14.071574   62386 logs.go:276] 0 containers: []
	W0912 23:03:14.071582   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:14.071588   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:14.071639   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:14.105561   62386 cri.go:89] found id: ""
	I0912 23:03:14.105590   62386 logs.go:276] 0 containers: []
	W0912 23:03:14.105598   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:14.105604   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:14.105682   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:14.139407   62386 cri.go:89] found id: ""
	I0912 23:03:14.139432   62386 logs.go:276] 0 containers: []
	W0912 23:03:14.139440   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:14.139449   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:14.139463   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:14.195367   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:14.195402   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:14.208632   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:14.208656   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:14.283274   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:14.283292   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:14.283306   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:12.973716   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:15.473265   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:12.097813   62943 pod_ready.go:93] pod "kube-controller-manager-no-preload-380092" in "kube-system" namespace has status "Ready":"True"
	I0912 23:03:12.097844   62943 pod_ready.go:82] duration metric: took 2.506691651s for pod "kube-controller-manager-no-preload-380092" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:12.097858   62943 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-z4rcx" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:12.102303   62943 pod_ready.go:93] pod "kube-proxy-z4rcx" in "kube-system" namespace has status "Ready":"True"
	I0912 23:03:12.102332   62943 pod_ready.go:82] duration metric: took 4.465993ms for pod "kube-proxy-z4rcx" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:12.102344   62943 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-380092" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:12.370318   62943 pod_ready.go:93] pod "kube-scheduler-no-preload-380092" in "kube-system" namespace has status "Ready":"True"
	I0912 23:03:12.370342   62943 pod_ready.go:82] duration metric: took 267.990034ms for pod "kube-scheduler-no-preload-380092" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:12.370351   62943 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:14.377234   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:16.378403   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:14.234341   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:16.733799   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:14.361800   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:14.361839   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:16.900725   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:16.913987   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:16.914047   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:16.950481   62386 cri.go:89] found id: ""
	I0912 23:03:16.950505   62386 logs.go:276] 0 containers: []
	W0912 23:03:16.950513   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:16.950518   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:16.950574   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:16.985928   62386 cri.go:89] found id: ""
	I0912 23:03:16.985955   62386 logs.go:276] 0 containers: []
	W0912 23:03:16.985964   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:16.985969   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:16.986019   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:17.022383   62386 cri.go:89] found id: ""
	I0912 23:03:17.022408   62386 logs.go:276] 0 containers: []
	W0912 23:03:17.022419   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:17.022425   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:17.022483   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:17.060621   62386 cri.go:89] found id: ""
	I0912 23:03:17.060646   62386 logs.go:276] 0 containers: []
	W0912 23:03:17.060655   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:17.060661   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:17.060714   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:17.093465   62386 cri.go:89] found id: ""
	I0912 23:03:17.093496   62386 logs.go:276] 0 containers: []
	W0912 23:03:17.093507   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:17.093513   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:17.093562   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:17.127750   62386 cri.go:89] found id: ""
	I0912 23:03:17.127780   62386 logs.go:276] 0 containers: []
	W0912 23:03:17.127790   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:17.127796   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:17.127850   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:17.167000   62386 cri.go:89] found id: ""
	I0912 23:03:17.167033   62386 logs.go:276] 0 containers: []
	W0912 23:03:17.167042   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:17.167051   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:17.167114   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:17.201116   62386 cri.go:89] found id: ""
	I0912 23:03:17.201140   62386 logs.go:276] 0 containers: []
	W0912 23:03:17.201149   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:17.201160   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:17.201175   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:17.279890   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:17.279917   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:17.279930   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:17.362638   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:17.362682   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:17.402507   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:17.402538   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:17.456039   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:17.456072   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:17.473792   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:19.973369   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:18.877668   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:20.879319   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:19.233574   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:21.233847   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:19.970539   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:19.984338   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:19.984442   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:20.019006   62386 cri.go:89] found id: ""
	I0912 23:03:20.019036   62386 logs.go:276] 0 containers: []
	W0912 23:03:20.019047   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:20.019055   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:20.019115   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:20.051600   62386 cri.go:89] found id: ""
	I0912 23:03:20.051626   62386 logs.go:276] 0 containers: []
	W0912 23:03:20.051634   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:20.051640   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:20.051691   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:20.085770   62386 cri.go:89] found id: ""
	I0912 23:03:20.085792   62386 logs.go:276] 0 containers: []
	W0912 23:03:20.085799   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:20.085804   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:20.085852   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:20.118453   62386 cri.go:89] found id: ""
	I0912 23:03:20.118482   62386 logs.go:276] 0 containers: []
	W0912 23:03:20.118493   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:20.118501   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:20.118570   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:20.149794   62386 cri.go:89] found id: ""
	I0912 23:03:20.149824   62386 logs.go:276] 0 containers: []
	W0912 23:03:20.149835   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:20.149842   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:20.149889   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:20.187189   62386 cri.go:89] found id: ""
	I0912 23:03:20.187222   62386 logs.go:276] 0 containers: []
	W0912 23:03:20.187233   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:20.187239   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:20.187308   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:20.225488   62386 cri.go:89] found id: ""
	I0912 23:03:20.225517   62386 logs.go:276] 0 containers: []
	W0912 23:03:20.225525   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:20.225531   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:20.225593   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:20.263430   62386 cri.go:89] found id: ""
	I0912 23:03:20.263599   62386 logs.go:276] 0 containers: []
	W0912 23:03:20.263618   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:20.263633   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:20.263651   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:20.317633   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:20.317669   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:20.331121   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:20.331146   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:20.409078   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:20.409102   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:20.409114   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:20.485192   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:20.485226   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:23.024366   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:23.036837   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:23.036919   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:23.072034   62386 cri.go:89] found id: ""
	I0912 23:03:23.072068   62386 logs.go:276] 0 containers: []
	W0912 23:03:23.072080   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:23.072087   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:23.072151   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:23.105917   62386 cri.go:89] found id: ""
	I0912 23:03:23.105942   62386 logs.go:276] 0 containers: []
	W0912 23:03:23.105950   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:23.105956   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:23.106001   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:23.138601   62386 cri.go:89] found id: ""
	I0912 23:03:23.138631   62386 logs.go:276] 0 containers: []
	W0912 23:03:23.138643   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:23.138650   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:23.138700   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:23.173543   62386 cri.go:89] found id: ""
	I0912 23:03:23.173584   62386 logs.go:276] 0 containers: []
	W0912 23:03:23.173596   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:23.173606   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:23.173686   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:23.206143   62386 cri.go:89] found id: ""
	I0912 23:03:23.206171   62386 logs.go:276] 0 containers: []
	W0912 23:03:23.206182   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:23.206189   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:23.206258   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:23.241893   62386 cri.go:89] found id: ""
	I0912 23:03:23.241914   62386 logs.go:276] 0 containers: []
	W0912 23:03:23.241921   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:23.241927   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:23.241985   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:23.276885   62386 cri.go:89] found id: ""
	I0912 23:03:23.276937   62386 logs.go:276] 0 containers: []
	W0912 23:03:23.276946   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:23.276953   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:23.277004   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:23.311719   62386 cri.go:89] found id: ""
	I0912 23:03:23.311744   62386 logs.go:276] 0 containers: []
	W0912 23:03:23.311752   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:23.311759   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:23.311772   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:23.351581   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:23.351614   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:23.406831   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:23.406868   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:23.420716   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:23.420748   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:23.491298   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:23.491332   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:23.491347   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:22.474320   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:24.974016   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:23.377977   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:25.876937   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:23.235471   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:25.733684   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:26.075754   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:26.088671   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:26.088746   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:26.123263   62386 cri.go:89] found id: ""
	I0912 23:03:26.123289   62386 logs.go:276] 0 containers: []
	W0912 23:03:26.123298   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:26.123320   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:26.123380   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:26.156957   62386 cri.go:89] found id: ""
	I0912 23:03:26.156986   62386 logs.go:276] 0 containers: []
	W0912 23:03:26.156997   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:26.157004   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:26.157063   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:26.191697   62386 cri.go:89] found id: ""
	I0912 23:03:26.191749   62386 logs.go:276] 0 containers: []
	W0912 23:03:26.191774   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:26.191782   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:26.191841   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:26.223915   62386 cri.go:89] found id: ""
	I0912 23:03:26.223938   62386 logs.go:276] 0 containers: []
	W0912 23:03:26.223945   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:26.223951   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:26.224011   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:26.256467   62386 cri.go:89] found id: ""
	I0912 23:03:26.256494   62386 logs.go:276] 0 containers: []
	W0912 23:03:26.256505   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:26.256511   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:26.256587   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:26.288778   62386 cri.go:89] found id: ""
	I0912 23:03:26.288803   62386 logs.go:276] 0 containers: []
	W0912 23:03:26.288811   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:26.288816   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:26.288889   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:26.325717   62386 cri.go:89] found id: ""
	I0912 23:03:26.325745   62386 logs.go:276] 0 containers: []
	W0912 23:03:26.325755   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:26.325762   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:26.325829   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:26.359729   62386 cri.go:89] found id: ""
	I0912 23:03:26.359758   62386 logs.go:276] 0 containers: []
	W0912 23:03:26.359767   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:26.359780   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:26.359799   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:26.416414   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:26.416455   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:26.430440   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:26.430478   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:26.506980   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:26.507012   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:26.507043   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:26.583797   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:26.583846   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:29.122222   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:29.135287   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:29.135367   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:29.169020   62386 cri.go:89] found id: ""
	I0912 23:03:29.169043   62386 logs.go:276] 0 containers: []
	W0912 23:03:29.169051   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:29.169061   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:29.169114   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:29.201789   62386 cri.go:89] found id: ""
	I0912 23:03:29.201816   62386 logs.go:276] 0 containers: []
	W0912 23:03:29.201825   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:29.201831   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:29.201886   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:29.237011   62386 cri.go:89] found id: ""
	I0912 23:03:29.237031   62386 logs.go:276] 0 containers: []
	W0912 23:03:29.237038   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:29.237044   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:29.237100   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:29.275292   62386 cri.go:89] found id: ""
	I0912 23:03:29.275315   62386 logs.go:276] 0 containers: []
	W0912 23:03:29.275322   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:29.275328   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:29.275391   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:29.311927   62386 cri.go:89] found id: ""
	I0912 23:03:29.311954   62386 logs.go:276] 0 containers: []
	W0912 23:03:29.311961   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:29.311967   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:29.312020   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:26.974332   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:29.473816   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:27.877800   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:30.378675   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:27.735811   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:30.233647   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:32.233706   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:29.351411   62386 cri.go:89] found id: ""
	I0912 23:03:29.351441   62386 logs.go:276] 0 containers: []
	W0912 23:03:29.351452   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:29.351460   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:29.351520   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:29.386655   62386 cri.go:89] found id: ""
	I0912 23:03:29.386683   62386 logs.go:276] 0 containers: []
	W0912 23:03:29.386693   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:29.386700   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:29.386753   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:29.419722   62386 cri.go:89] found id: ""
	I0912 23:03:29.419752   62386 logs.go:276] 0 containers: []
	W0912 23:03:29.419762   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:29.419775   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:29.419789   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:29.474358   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:29.474396   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:29.488410   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:29.488437   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:29.554675   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:29.554701   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:29.554715   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:29.630647   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:29.630681   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:32.167614   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:32.180592   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:32.180669   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:32.213596   62386 cri.go:89] found id: ""
	I0912 23:03:32.213643   62386 logs.go:276] 0 containers: []
	W0912 23:03:32.213655   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:32.213663   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:32.213723   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:32.246790   62386 cri.go:89] found id: ""
	I0912 23:03:32.246824   62386 logs.go:276] 0 containers: []
	W0912 23:03:32.246836   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:32.246846   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:32.246910   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:32.289423   62386 cri.go:89] found id: ""
	I0912 23:03:32.289446   62386 logs.go:276] 0 containers: []
	W0912 23:03:32.289454   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:32.289459   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:32.289515   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:32.321515   62386 cri.go:89] found id: ""
	I0912 23:03:32.321542   62386 logs.go:276] 0 containers: []
	W0912 23:03:32.321555   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:32.321561   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:32.321637   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:32.354633   62386 cri.go:89] found id: ""
	I0912 23:03:32.354660   62386 logs.go:276] 0 containers: []
	W0912 23:03:32.354670   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:32.354675   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:32.354734   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:32.389692   62386 cri.go:89] found id: ""
	I0912 23:03:32.389717   62386 logs.go:276] 0 containers: []
	W0912 23:03:32.389725   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:32.389730   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:32.389782   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:32.423086   62386 cri.go:89] found id: ""
	I0912 23:03:32.423109   62386 logs.go:276] 0 containers: []
	W0912 23:03:32.423115   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:32.423121   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:32.423167   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:32.456145   62386 cri.go:89] found id: ""
	I0912 23:03:32.456173   62386 logs.go:276] 0 containers: []
	W0912 23:03:32.456184   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:32.456194   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:32.456213   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:32.468329   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:32.468354   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:32.535454   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:32.535480   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:32.535495   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:32.615219   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:32.615256   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:32.655380   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:32.655407   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:31.473904   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:33.474104   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:32.876734   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:34.876831   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:36.877698   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:34.732792   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:36.733997   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:35.209155   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:35.223993   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:35.224074   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:35.260226   62386 cri.go:89] found id: ""
	I0912 23:03:35.260257   62386 logs.go:276] 0 containers: []
	W0912 23:03:35.260268   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:35.260275   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:35.260346   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:35.295762   62386 cri.go:89] found id: ""
	I0912 23:03:35.295790   62386 logs.go:276] 0 containers: []
	W0912 23:03:35.295801   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:35.295808   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:35.295873   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:35.329749   62386 cri.go:89] found id: ""
	I0912 23:03:35.329778   62386 logs.go:276] 0 containers: []
	W0912 23:03:35.329789   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:35.329796   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:35.329855   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:35.363051   62386 cri.go:89] found id: ""
	I0912 23:03:35.363082   62386 logs.go:276] 0 containers: []
	W0912 23:03:35.363091   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:35.363098   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:35.363156   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:35.399777   62386 cri.go:89] found id: ""
	I0912 23:03:35.399805   62386 logs.go:276] 0 containers: []
	W0912 23:03:35.399816   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:35.399823   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:35.399882   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:35.436380   62386 cri.go:89] found id: ""
	I0912 23:03:35.436409   62386 logs.go:276] 0 containers: []
	W0912 23:03:35.436419   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:35.436427   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:35.436489   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:35.474014   62386 cri.go:89] found id: ""
	I0912 23:03:35.474040   62386 logs.go:276] 0 containers: []
	W0912 23:03:35.474050   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:35.474057   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:35.474115   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:35.514579   62386 cri.go:89] found id: ""
	I0912 23:03:35.514606   62386 logs.go:276] 0 containers: []
	W0912 23:03:35.514615   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:35.514625   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:35.514636   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:35.566626   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:35.566665   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:35.581394   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:35.581421   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:35.653434   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:35.653465   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:35.653477   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:35.732486   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:35.732525   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:38.268409   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:38.281766   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:38.281833   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:38.315951   62386 cri.go:89] found id: ""
	I0912 23:03:38.315977   62386 logs.go:276] 0 containers: []
	W0912 23:03:38.315987   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:38.315994   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:38.316053   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:38.355249   62386 cri.go:89] found id: ""
	I0912 23:03:38.355279   62386 logs.go:276] 0 containers: []
	W0912 23:03:38.355289   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:38.355296   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:38.355365   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:38.392754   62386 cri.go:89] found id: ""
	I0912 23:03:38.392777   62386 logs.go:276] 0 containers: []
	W0912 23:03:38.392784   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:38.392790   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:38.392836   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:38.427406   62386 cri.go:89] found id: ""
	I0912 23:03:38.427434   62386 logs.go:276] 0 containers: []
	W0912 23:03:38.427442   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:38.427447   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:38.427497   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:38.473523   62386 cri.go:89] found id: ""
	I0912 23:03:38.473551   62386 logs.go:276] 0 containers: []
	W0912 23:03:38.473567   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:38.473575   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:38.473660   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:38.507184   62386 cri.go:89] found id: ""
	I0912 23:03:38.507217   62386 logs.go:276] 0 containers: []
	W0912 23:03:38.507228   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:38.507235   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:38.507297   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:38.541325   62386 cri.go:89] found id: ""
	I0912 23:03:38.541357   62386 logs.go:276] 0 containers: []
	W0912 23:03:38.541367   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:38.541374   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:38.541435   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:38.576839   62386 cri.go:89] found id: ""
	I0912 23:03:38.576866   62386 logs.go:276] 0 containers: []
	W0912 23:03:38.576877   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:38.576889   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:38.576906   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:38.613107   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:38.613138   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:38.667256   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:38.667300   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:38.681179   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:38.681210   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:38.750560   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:38.750584   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:38.750600   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:35.974072   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:37.974920   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:40.473150   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:39.376361   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:41.378062   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:38.734402   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:41.233881   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:41.327862   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:41.340904   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:41.340967   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:41.379282   62386 cri.go:89] found id: ""
	I0912 23:03:41.379301   62386 logs.go:276] 0 containers: []
	W0912 23:03:41.379309   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:41.379316   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:41.379366   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:41.412915   62386 cri.go:89] found id: ""
	I0912 23:03:41.412940   62386 logs.go:276] 0 containers: []
	W0912 23:03:41.412947   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:41.412954   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:41.413003   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:41.446824   62386 cri.go:89] found id: ""
	I0912 23:03:41.446851   62386 logs.go:276] 0 containers: []
	W0912 23:03:41.446861   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:41.446868   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:41.446929   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:41.483157   62386 cri.go:89] found id: ""
	I0912 23:03:41.483186   62386 logs.go:276] 0 containers: []
	W0912 23:03:41.483194   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:41.483200   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:41.483258   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:41.517751   62386 cri.go:89] found id: ""
	I0912 23:03:41.517783   62386 logs.go:276] 0 containers: []
	W0912 23:03:41.517794   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:41.517801   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:41.517865   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:41.551665   62386 cri.go:89] found id: ""
	I0912 23:03:41.551692   62386 logs.go:276] 0 containers: []
	W0912 23:03:41.551700   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:41.551706   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:41.551756   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:41.586401   62386 cri.go:89] found id: ""
	I0912 23:03:41.586437   62386 logs.go:276] 0 containers: []
	W0912 23:03:41.586447   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:41.586455   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:41.586518   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:41.621764   62386 cri.go:89] found id: ""
	I0912 23:03:41.621788   62386 logs.go:276] 0 containers: []
	W0912 23:03:41.621796   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:41.621806   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:41.621821   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:41.703663   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:41.703708   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:41.741813   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:41.741838   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:41.794237   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:41.794276   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:41.807194   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:41.807219   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:41.874328   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:42.973710   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:44.973792   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:43.877009   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:46.376468   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:43.234202   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:45.733192   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:44.374745   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:44.389334   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:44.389414   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:44.427163   62386 cri.go:89] found id: ""
	I0912 23:03:44.427193   62386 logs.go:276] 0 containers: []
	W0912 23:03:44.427204   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:44.427214   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:44.427261   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:44.461483   62386 cri.go:89] found id: ""
	I0912 23:03:44.461516   62386 logs.go:276] 0 containers: []
	W0912 23:03:44.461526   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:44.461539   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:44.461603   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:44.499529   62386 cri.go:89] found id: ""
	I0912 23:03:44.499557   62386 logs.go:276] 0 containers: []
	W0912 23:03:44.499569   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:44.499576   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:44.499640   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:44.536827   62386 cri.go:89] found id: ""
	I0912 23:03:44.536859   62386 logs.go:276] 0 containers: []
	W0912 23:03:44.536871   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:44.536878   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:44.536927   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:44.574764   62386 cri.go:89] found id: ""
	I0912 23:03:44.574794   62386 logs.go:276] 0 containers: []
	W0912 23:03:44.574802   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:44.574808   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:44.574866   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:44.612491   62386 cri.go:89] found id: ""
	I0912 23:03:44.612524   62386 logs.go:276] 0 containers: []
	W0912 23:03:44.612537   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:44.612545   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:44.612618   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:44.651419   62386 cri.go:89] found id: ""
	I0912 23:03:44.651449   62386 logs.go:276] 0 containers: []
	W0912 23:03:44.651459   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:44.651466   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:44.651516   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:44.686635   62386 cri.go:89] found id: ""
	I0912 23:03:44.686665   62386 logs.go:276] 0 containers: []
	W0912 23:03:44.686674   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:44.686681   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:44.686693   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:44.738906   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:44.738938   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:44.752485   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:44.752512   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:44.831175   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:44.831205   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:44.831222   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:44.917405   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:44.917442   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:47.466262   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:47.479701   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:47.479758   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:47.514737   62386 cri.go:89] found id: ""
	I0912 23:03:47.514763   62386 logs.go:276] 0 containers: []
	W0912 23:03:47.514770   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:47.514776   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:47.514828   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:47.551163   62386 cri.go:89] found id: ""
	I0912 23:03:47.551195   62386 logs.go:276] 0 containers: []
	W0912 23:03:47.551207   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:47.551215   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:47.551276   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:47.585189   62386 cri.go:89] found id: ""
	I0912 23:03:47.585213   62386 logs.go:276] 0 containers: []
	W0912 23:03:47.585221   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:47.585226   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:47.585284   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:47.619831   62386 cri.go:89] found id: ""
	I0912 23:03:47.619855   62386 logs.go:276] 0 containers: []
	W0912 23:03:47.619863   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:47.619869   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:47.619914   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:47.652364   62386 cri.go:89] found id: ""
	I0912 23:03:47.652398   62386 logs.go:276] 0 containers: []
	W0912 23:03:47.652409   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:47.652417   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:47.652478   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:47.686796   62386 cri.go:89] found id: ""
	I0912 23:03:47.686828   62386 logs.go:276] 0 containers: []
	W0912 23:03:47.686837   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:47.686844   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:47.686902   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:47.718735   62386 cri.go:89] found id: ""
	I0912 23:03:47.718758   62386 logs.go:276] 0 containers: []
	W0912 23:03:47.718768   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:47.718776   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:47.718838   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:47.751880   62386 cri.go:89] found id: ""
	I0912 23:03:47.751917   62386 logs.go:276] 0 containers: []
	W0912 23:03:47.751929   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:47.751940   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:47.751972   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:47.821972   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:47.821995   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:47.822011   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:47.914569   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:47.914606   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:47.952931   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:47.952959   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:48.006294   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:48.006336   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:47.472805   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:49.474941   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:48.377557   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:50.877244   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:47.734734   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:50.233681   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:50.521664   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:50.535244   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:50.535319   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:50.572459   62386 cri.go:89] found id: ""
	I0912 23:03:50.572489   62386 logs.go:276] 0 containers: []
	W0912 23:03:50.572497   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:50.572504   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:50.572560   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:50.613752   62386 cri.go:89] found id: ""
	I0912 23:03:50.613784   62386 logs.go:276] 0 containers: []
	W0912 23:03:50.613793   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:50.613800   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:50.613859   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:50.669798   62386 cri.go:89] found id: ""
	I0912 23:03:50.669829   62386 logs.go:276] 0 containers: []
	W0912 23:03:50.669840   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:50.669845   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:50.669970   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:50.703629   62386 cri.go:89] found id: ""
	I0912 23:03:50.703669   62386 logs.go:276] 0 containers: []
	W0912 23:03:50.703682   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:50.703691   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:50.703752   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:50.743683   62386 cri.go:89] found id: ""
	I0912 23:03:50.743710   62386 logs.go:276] 0 containers: []
	W0912 23:03:50.743720   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:50.743728   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:50.743784   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:50.776387   62386 cri.go:89] found id: ""
	I0912 23:03:50.776416   62386 logs.go:276] 0 containers: []
	W0912 23:03:50.776428   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:50.776437   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:50.776494   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:50.810778   62386 cri.go:89] found id: ""
	I0912 23:03:50.810805   62386 logs.go:276] 0 containers: []
	W0912 23:03:50.810817   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:50.810825   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:50.810892   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:50.842488   62386 cri.go:89] found id: ""
	I0912 23:03:50.842510   62386 logs.go:276] 0 containers: []
	W0912 23:03:50.842518   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:50.842526   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:50.842542   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:50.895086   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:50.895124   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:50.908540   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:50.908586   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:50.976108   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:50.976138   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:50.976153   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:51.052291   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:51.052327   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:53.594005   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:53.606622   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:53.606706   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:53.641109   62386 cri.go:89] found id: ""
	I0912 23:03:53.641140   62386 logs.go:276] 0 containers: []
	W0912 23:03:53.641151   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:53.641159   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:53.641214   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:53.673336   62386 cri.go:89] found id: ""
	I0912 23:03:53.673358   62386 logs.go:276] 0 containers: []
	W0912 23:03:53.673366   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:53.673371   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:53.673417   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:53.707931   62386 cri.go:89] found id: ""
	I0912 23:03:53.707965   62386 logs.go:276] 0 containers: []
	W0912 23:03:53.707975   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:53.707982   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:53.708032   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:53.741801   62386 cri.go:89] found id: ""
	I0912 23:03:53.741832   62386 logs.go:276] 0 containers: []
	W0912 23:03:53.741840   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:53.741847   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:53.741898   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:53.775491   62386 cri.go:89] found id: ""
	I0912 23:03:53.775517   62386 logs.go:276] 0 containers: []
	W0912 23:03:53.775526   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:53.775533   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:53.775596   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:53.811802   62386 cri.go:89] found id: ""
	I0912 23:03:53.811832   62386 logs.go:276] 0 containers: []
	W0912 23:03:53.811843   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:53.811851   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:53.811916   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:53.844901   62386 cri.go:89] found id: ""
	I0912 23:03:53.844926   62386 logs.go:276] 0 containers: []
	W0912 23:03:53.844934   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:53.844939   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:53.844989   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:53.878342   62386 cri.go:89] found id: ""
	I0912 23:03:53.878363   62386 logs.go:276] 0 containers: []
	W0912 23:03:53.878370   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:53.878377   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:53.878387   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:53.935010   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:53.935053   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:53.948443   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:53.948474   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:54.020155   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:54.020178   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:54.020192   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:54.097113   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:54.097154   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:51.974178   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:54.473802   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:53.376802   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:55.377267   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:52.733232   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:54.734448   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:56.734623   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:56.633694   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:56.651731   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:56.651791   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:56.698155   62386 cri.go:89] found id: ""
	I0912 23:03:56.698184   62386 logs.go:276] 0 containers: []
	W0912 23:03:56.698194   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:56.698202   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:56.698263   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:56.730291   62386 cri.go:89] found id: ""
	I0912 23:03:56.730322   62386 logs.go:276] 0 containers: []
	W0912 23:03:56.730332   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:56.730340   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:56.730434   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:56.763099   62386 cri.go:89] found id: ""
	I0912 23:03:56.763123   62386 logs.go:276] 0 containers: []
	W0912 23:03:56.763133   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:56.763140   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:56.763201   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:56.796744   62386 cri.go:89] found id: ""
	I0912 23:03:56.796770   62386 logs.go:276] 0 containers: []
	W0912 23:03:56.796780   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:56.796787   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:56.796846   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:56.831809   62386 cri.go:89] found id: ""
	I0912 23:03:56.831839   62386 logs.go:276] 0 containers: []
	W0912 23:03:56.831851   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:56.831858   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:56.831927   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:56.867213   62386 cri.go:89] found id: ""
	I0912 23:03:56.867239   62386 logs.go:276] 0 containers: []
	W0912 23:03:56.867246   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:56.867252   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:56.867332   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:56.907242   62386 cri.go:89] found id: ""
	I0912 23:03:56.907270   62386 logs.go:276] 0 containers: []
	W0912 23:03:56.907279   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:56.907286   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:56.907399   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:56.941841   62386 cri.go:89] found id: ""
	I0912 23:03:56.941871   62386 logs.go:276] 0 containers: []
	W0912 23:03:56.941879   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:56.941888   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:56.941899   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:56.955468   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:56.955498   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:57.025069   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:57.025089   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:57.025101   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:57.109543   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:57.109579   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:57.150908   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:57.150932   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:56.473964   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:58.974245   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:57.377540   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:59.878300   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:59.233419   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:01.733916   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:59.700564   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:59.713097   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:59.713175   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:59.746662   62386 cri.go:89] found id: ""
	I0912 23:03:59.746684   62386 logs.go:276] 0 containers: []
	W0912 23:03:59.746694   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:59.746702   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:59.746760   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:59.780100   62386 cri.go:89] found id: ""
	I0912 23:03:59.780127   62386 logs.go:276] 0 containers: []
	W0912 23:03:59.780137   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:59.780144   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:59.780205   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:59.814073   62386 cri.go:89] found id: ""
	I0912 23:03:59.814103   62386 logs.go:276] 0 containers: []
	W0912 23:03:59.814115   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:59.814122   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:59.814170   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:59.849832   62386 cri.go:89] found id: ""
	I0912 23:03:59.849860   62386 logs.go:276] 0 containers: []
	W0912 23:03:59.849873   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:59.849881   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:59.849937   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:59.884644   62386 cri.go:89] found id: ""
	I0912 23:03:59.884674   62386 logs.go:276] 0 containers: []
	W0912 23:03:59.884685   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:59.884692   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:59.884757   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:59.922575   62386 cri.go:89] found id: ""
	I0912 23:03:59.922601   62386 logs.go:276] 0 containers: []
	W0912 23:03:59.922609   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:59.922615   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:59.922671   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:59.959405   62386 cri.go:89] found id: ""
	I0912 23:03:59.959454   62386 logs.go:276] 0 containers: []
	W0912 23:03:59.959467   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:59.959503   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:59.959572   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:59.992850   62386 cri.go:89] found id: ""
	I0912 23:03:59.992882   62386 logs.go:276] 0 containers: []
	W0912 23:03:59.992891   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:59.992898   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:59.992910   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:00.007112   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:00.007147   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:00.077737   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:00.077762   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:00.077777   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:00.156823   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:00.156860   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:00.194294   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:00.194388   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:02.746340   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:02.759723   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:02.759780   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:02.795753   62386 cri.go:89] found id: ""
	I0912 23:04:02.795778   62386 logs.go:276] 0 containers: []
	W0912 23:04:02.795787   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:02.795794   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:02.795849   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:02.830757   62386 cri.go:89] found id: ""
	I0912 23:04:02.830781   62386 logs.go:276] 0 containers: []
	W0912 23:04:02.830790   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:02.830797   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:02.830859   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:02.866266   62386 cri.go:89] found id: ""
	I0912 23:04:02.866301   62386 logs.go:276] 0 containers: []
	W0912 23:04:02.866312   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:02.866319   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:02.866373   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:02.900332   62386 cri.go:89] found id: ""
	I0912 23:04:02.900359   62386 logs.go:276] 0 containers: []
	W0912 23:04:02.900370   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:02.900377   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:02.900436   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:02.937687   62386 cri.go:89] found id: ""
	I0912 23:04:02.937718   62386 logs.go:276] 0 containers: []
	W0912 23:04:02.937729   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:02.937736   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:02.937806   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:02.972960   62386 cri.go:89] found id: ""
	I0912 23:04:02.972988   62386 logs.go:276] 0 containers: []
	W0912 23:04:02.972998   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:02.973006   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:02.973067   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:03.006621   62386 cri.go:89] found id: ""
	I0912 23:04:03.006649   62386 logs.go:276] 0 containers: []
	W0912 23:04:03.006658   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:03.006663   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:03.006711   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:03.042450   62386 cri.go:89] found id: ""
	I0912 23:04:03.042475   62386 logs.go:276] 0 containers: []
	W0912 23:04:03.042484   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:03.042501   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:03.042514   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:03.082657   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:03.082688   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:03.136570   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:03.136605   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:03.150359   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:03.150388   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:03.217419   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:03.217440   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:03.217452   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:01.473231   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:03.474382   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:05.475943   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:02.376721   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:04.376797   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:06.377573   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:03.734198   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:06.234489   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:05.795553   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:05.808126   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:05.808197   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:05.841031   62386 cri.go:89] found id: ""
	I0912 23:04:05.841059   62386 logs.go:276] 0 containers: []
	W0912 23:04:05.841071   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:05.841078   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:05.841137   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:05.875865   62386 cri.go:89] found id: ""
	I0912 23:04:05.875891   62386 logs.go:276] 0 containers: []
	W0912 23:04:05.875903   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:05.875910   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:05.875971   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:05.911317   62386 cri.go:89] found id: ""
	I0912 23:04:05.911340   62386 logs.go:276] 0 containers: []
	W0912 23:04:05.911361   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:05.911372   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:05.911433   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:05.946603   62386 cri.go:89] found id: ""
	I0912 23:04:05.946634   62386 logs.go:276] 0 containers: []
	W0912 23:04:05.946645   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:05.946652   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:05.946707   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:05.982041   62386 cri.go:89] found id: ""
	I0912 23:04:05.982077   62386 logs.go:276] 0 containers: []
	W0912 23:04:05.982089   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:05.982099   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:05.982196   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:06.015777   62386 cri.go:89] found id: ""
	I0912 23:04:06.015808   62386 logs.go:276] 0 containers: []
	W0912 23:04:06.015816   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:06.015822   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:06.015870   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:06.047613   62386 cri.go:89] found id: ""
	I0912 23:04:06.047642   62386 logs.go:276] 0 containers: []
	W0912 23:04:06.047650   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:06.047656   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:06.047711   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:06.082817   62386 cri.go:89] found id: ""
	I0912 23:04:06.082855   62386 logs.go:276] 0 containers: []
	W0912 23:04:06.082863   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:06.082874   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:06.082889   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:06.148350   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:06.148370   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:06.148382   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:06.227819   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:06.227861   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:06.267783   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:06.267811   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:06.319531   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:06.319567   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:08.833715   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:08.846391   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:08.846457   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:08.882798   62386 cri.go:89] found id: ""
	I0912 23:04:08.882827   62386 logs.go:276] 0 containers: []
	W0912 23:04:08.882834   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:08.882839   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:08.882885   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:08.919637   62386 cri.go:89] found id: ""
	I0912 23:04:08.919660   62386 logs.go:276] 0 containers: []
	W0912 23:04:08.919669   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:08.919677   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:08.919737   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:08.957181   62386 cri.go:89] found id: ""
	I0912 23:04:08.957226   62386 logs.go:276] 0 containers: []
	W0912 23:04:08.957235   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:08.957241   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:08.957300   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:08.994391   62386 cri.go:89] found id: ""
	I0912 23:04:08.994425   62386 logs.go:276] 0 containers: []
	W0912 23:04:08.994435   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:08.994450   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:08.994517   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:09.026229   62386 cri.go:89] found id: ""
	I0912 23:04:09.026253   62386 logs.go:276] 0 containers: []
	W0912 23:04:09.026261   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:09.026270   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:09.026331   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:09.063522   62386 cri.go:89] found id: ""
	I0912 23:04:09.063552   62386 logs.go:276] 0 containers: []
	W0912 23:04:09.063562   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:09.063570   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:09.063633   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:09.095532   62386 cri.go:89] found id: ""
	I0912 23:04:09.095561   62386 logs.go:276] 0 containers: []
	W0912 23:04:09.095571   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:09.095578   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:09.095638   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:09.129364   62386 cri.go:89] found id: ""
	I0912 23:04:09.129396   62386 logs.go:276] 0 containers: []
	W0912 23:04:09.129405   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:09.129416   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:09.129430   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:09.210628   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:09.210663   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:09.249058   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:09.249086   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:09.301317   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:09.301346   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:09.314691   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:09.314720   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 23:04:07.974160   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:10.473970   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:08.877389   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:11.376421   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:08.733271   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:10.737700   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	W0912 23:04:09.379506   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
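	(Illustrative sketch, not part of the captured output.) This stretch of the log repeats one diagnostic pass over and over: minikube probes each expected control-plane component with crictl, finds no containers, gathers the kubelet, dmesg, CRI-O and container-status logs, and the "kubectl describe nodes" step fails on every pass because nothing is serving on localhost:8443. A minimal shell sketch of that pass, assembled only from the commands visible in these log lines (the kubeconfig path and the v1.20.0 binary path are copied from the log and are specific to this run):

	  # Probe each expected control-plane container; empty output means "not found".
	  for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	              kube-controller-manager kindnet kubernetes-dashboard; do
	    sudo crictl ps -a --quiet --name="$name"
	  done
	  # Collect the supporting logs that minikube gathers on each pass.
	  sudo journalctl -u kubelet -n 400
	  sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	  sudo journalctl -u crio -n 400
	  sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
	  # The step that keeps failing with "The connection to the server localhost:8443 was refused".
	  sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
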
	I0912 23:04:11.879682   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:11.892758   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:11.892816   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:11.929514   62386 cri.go:89] found id: ""
	I0912 23:04:11.929560   62386 logs.go:276] 0 containers: []
	W0912 23:04:11.929572   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:11.929580   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:11.929663   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:11.972066   62386 cri.go:89] found id: ""
	I0912 23:04:11.972091   62386 logs.go:276] 0 containers: []
	W0912 23:04:11.972099   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:11.972104   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:11.972153   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:12.005454   62386 cri.go:89] found id: ""
	I0912 23:04:12.005483   62386 logs.go:276] 0 containers: []
	W0912 23:04:12.005493   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:12.005500   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:12.005573   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:12.042189   62386 cri.go:89] found id: ""
	I0912 23:04:12.042221   62386 logs.go:276] 0 containers: []
	W0912 23:04:12.042232   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:12.042239   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:12.042292   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:12.077239   62386 cri.go:89] found id: ""
	I0912 23:04:12.077268   62386 logs.go:276] 0 containers: []
	W0912 23:04:12.077276   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:12.077282   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:12.077340   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:12.112573   62386 cri.go:89] found id: ""
	I0912 23:04:12.112602   62386 logs.go:276] 0 containers: []
	W0912 23:04:12.112610   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:12.112616   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:12.112661   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:12.147124   62386 cri.go:89] found id: ""
	I0912 23:04:12.147149   62386 logs.go:276] 0 containers: []
	W0912 23:04:12.147157   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:12.147163   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:12.147224   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:12.182051   62386 cri.go:89] found id: ""
	I0912 23:04:12.182074   62386 logs.go:276] 0 containers: []
	W0912 23:04:12.182082   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:12.182090   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:12.182103   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:12.238070   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:12.238103   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:12.250913   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:12.250937   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:12.315420   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:12.315448   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:12.315465   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:12.397338   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:12.397379   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:12.974531   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:15.479539   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:13.377855   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:15.379901   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:13.233099   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:15.234506   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:14.936982   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:14.949955   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:14.950019   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:14.993284   62386 cri.go:89] found id: ""
	I0912 23:04:14.993317   62386 logs.go:276] 0 containers: []
	W0912 23:04:14.993327   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:14.993356   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:14.993421   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:15.028310   62386 cri.go:89] found id: ""
	I0912 23:04:15.028338   62386 logs.go:276] 0 containers: []
	W0912 23:04:15.028347   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:15.028352   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:15.028424   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:15.064436   62386 cri.go:89] found id: ""
	I0912 23:04:15.064472   62386 logs.go:276] 0 containers: []
	W0912 23:04:15.064482   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:15.064490   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:15.064552   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:15.101547   62386 cri.go:89] found id: ""
	I0912 23:04:15.101578   62386 logs.go:276] 0 containers: []
	W0912 23:04:15.101587   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:15.101595   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:15.101672   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:15.137534   62386 cri.go:89] found id: ""
	I0912 23:04:15.137559   62386 logs.go:276] 0 containers: []
	W0912 23:04:15.137567   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:15.137575   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:15.137670   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:15.172549   62386 cri.go:89] found id: ""
	I0912 23:04:15.172581   62386 logs.go:276] 0 containers: []
	W0912 23:04:15.172593   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:15.172601   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:15.172661   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:15.207894   62386 cri.go:89] found id: ""
	I0912 23:04:15.207921   62386 logs.go:276] 0 containers: []
	W0912 23:04:15.207931   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:15.207939   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:15.207998   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:15.243684   62386 cri.go:89] found id: ""
	I0912 23:04:15.243713   62386 logs.go:276] 0 containers: []
	W0912 23:04:15.243724   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:15.243733   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:15.243744   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:15.297907   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:15.297948   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:15.312119   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:15.312151   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:15.375781   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:15.375815   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:15.375830   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:15.455792   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:15.455853   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:17.996749   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:18.009868   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:18.009927   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:18.048233   62386 cri.go:89] found id: ""
	I0912 23:04:18.048262   62386 logs.go:276] 0 containers: []
	W0912 23:04:18.048273   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:18.048280   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:18.048340   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:18.082525   62386 cri.go:89] found id: ""
	I0912 23:04:18.082554   62386 logs.go:276] 0 containers: []
	W0912 23:04:18.082565   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:18.082572   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:18.082634   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:18.117691   62386 cri.go:89] found id: ""
	I0912 23:04:18.117721   62386 logs.go:276] 0 containers: []
	W0912 23:04:18.117731   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:18.117738   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:18.117799   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:18.151975   62386 cri.go:89] found id: ""
	I0912 23:04:18.152004   62386 logs.go:276] 0 containers: []
	W0912 23:04:18.152013   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:18.152019   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:18.152073   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:18.187028   62386 cri.go:89] found id: ""
	I0912 23:04:18.187058   62386 logs.go:276] 0 containers: []
	W0912 23:04:18.187069   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:18.187075   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:18.187127   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:18.221292   62386 cri.go:89] found id: ""
	I0912 23:04:18.221324   62386 logs.go:276] 0 containers: []
	W0912 23:04:18.221331   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:18.221337   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:18.221383   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:18.255445   62386 cri.go:89] found id: ""
	I0912 23:04:18.255471   62386 logs.go:276] 0 containers: []
	W0912 23:04:18.255479   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:18.255484   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:18.255533   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:18.289977   62386 cri.go:89] found id: ""
	I0912 23:04:18.290008   62386 logs.go:276] 0 containers: []
	W0912 23:04:18.290019   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:18.290030   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:18.290045   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:18.303351   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:18.303380   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:18.371085   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:18.371114   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:18.371128   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:18.448748   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:18.448791   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:18.490580   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:18.490605   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:17.973604   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:20.473541   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:17.878221   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:20.377651   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:17.733784   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:19.734292   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:22.232832   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:21.043479   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:21.056774   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:21.056834   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:21.089410   62386 cri.go:89] found id: ""
	I0912 23:04:21.089435   62386 logs.go:276] 0 containers: []
	W0912 23:04:21.089449   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:21.089460   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:21.089534   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:21.122922   62386 cri.go:89] found id: ""
	I0912 23:04:21.122954   62386 logs.go:276] 0 containers: []
	W0912 23:04:21.122964   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:21.122971   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:21.123025   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:21.157877   62386 cri.go:89] found id: ""
	I0912 23:04:21.157900   62386 logs.go:276] 0 containers: []
	W0912 23:04:21.157908   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:21.157914   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:21.157959   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:21.190953   62386 cri.go:89] found id: ""
	I0912 23:04:21.190983   62386 logs.go:276] 0 containers: []
	W0912 23:04:21.190994   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:21.191001   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:21.191050   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:21.225211   62386 cri.go:89] found id: ""
	I0912 23:04:21.225241   62386 logs.go:276] 0 containers: []
	W0912 23:04:21.225253   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:21.225260   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:21.225325   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:21.262459   62386 cri.go:89] found id: ""
	I0912 23:04:21.262486   62386 logs.go:276] 0 containers: []
	W0912 23:04:21.262497   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:21.262504   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:21.262578   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:21.296646   62386 cri.go:89] found id: ""
	I0912 23:04:21.296672   62386 logs.go:276] 0 containers: []
	W0912 23:04:21.296682   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:21.296687   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:21.296734   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:21.329911   62386 cri.go:89] found id: ""
	I0912 23:04:21.329933   62386 logs.go:276] 0 containers: []
	W0912 23:04:21.329939   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:21.329947   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:21.329958   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:21.371014   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:21.371043   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:21.419638   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:21.419671   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:21.433502   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:21.433533   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:21.502764   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:21.502787   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:21.502800   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:24.079800   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:24.094021   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:24.094099   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:24.128807   62386 cri.go:89] found id: ""
	I0912 23:04:24.128832   62386 logs.go:276] 0 containers: []
	W0912 23:04:24.128844   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:24.128851   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:24.128915   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:24.166381   62386 cri.go:89] found id: ""
	I0912 23:04:24.166409   62386 logs.go:276] 0 containers: []
	W0912 23:04:24.166416   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:24.166425   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:24.166481   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:24.202656   62386 cri.go:89] found id: ""
	I0912 23:04:24.202684   62386 logs.go:276] 0 containers: []
	W0912 23:04:24.202692   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:24.202699   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:24.202755   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:24.241177   62386 cri.go:89] found id: ""
	I0912 23:04:24.241204   62386 logs.go:276] 0 containers: []
	W0912 23:04:24.241212   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:24.241218   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:24.241274   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:24.278768   62386 cri.go:89] found id: ""
	I0912 23:04:24.278796   62386 logs.go:276] 0 containers: []
	W0912 23:04:24.278806   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:24.278813   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:24.278881   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:24.314429   62386 cri.go:89] found id: ""
	I0912 23:04:24.314456   62386 logs.go:276] 0 containers: []
	W0912 23:04:24.314466   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:24.314474   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:24.314540   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:22.972334   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:24.974435   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:22.877248   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:25.376758   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:24.233814   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:26.733537   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:24.352300   62386 cri.go:89] found id: ""
	I0912 23:04:24.352344   62386 logs.go:276] 0 containers: []
	W0912 23:04:24.352352   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:24.352357   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:24.352415   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:24.387465   62386 cri.go:89] found id: ""
	I0912 23:04:24.387496   62386 logs.go:276] 0 containers: []
	W0912 23:04:24.387503   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:24.387513   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:24.387526   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:24.437029   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:24.437061   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:24.450519   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:24.450555   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:24.516538   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:24.516566   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:24.516583   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:24.594321   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:24.594358   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:27.129976   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:27.142237   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:27.142293   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:27.173687   62386 cri.go:89] found id: ""
	I0912 23:04:27.173709   62386 logs.go:276] 0 containers: []
	W0912 23:04:27.173716   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:27.173721   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:27.173778   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:27.206078   62386 cri.go:89] found id: ""
	I0912 23:04:27.206099   62386 logs.go:276] 0 containers: []
	W0912 23:04:27.206107   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:27.206112   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:27.206156   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:27.238770   62386 cri.go:89] found id: ""
	I0912 23:04:27.238795   62386 logs.go:276] 0 containers: []
	W0912 23:04:27.238803   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:27.238808   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:27.238855   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:27.271230   62386 cri.go:89] found id: ""
	I0912 23:04:27.271262   62386 logs.go:276] 0 containers: []
	W0912 23:04:27.271273   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:27.271281   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:27.271351   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:27.304232   62386 cri.go:89] found id: ""
	I0912 23:04:27.304261   62386 logs.go:276] 0 containers: []
	W0912 23:04:27.304271   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:27.304278   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:27.304345   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:27.337542   62386 cri.go:89] found id: ""
	I0912 23:04:27.337571   62386 logs.go:276] 0 containers: []
	W0912 23:04:27.337586   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:27.337595   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:27.337668   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:27.369971   62386 cri.go:89] found id: ""
	I0912 23:04:27.369997   62386 logs.go:276] 0 containers: []
	W0912 23:04:27.370005   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:27.370012   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:27.370072   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:27.406844   62386 cri.go:89] found id: ""
	I0912 23:04:27.406868   62386 logs.go:276] 0 containers: []
	W0912 23:04:27.406875   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:27.406883   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:27.406894   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:27.493489   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:27.493524   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:27.530448   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:27.530481   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:27.585706   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:27.585744   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:27.599144   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:27.599177   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:27.672585   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:27.473942   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:29.474058   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:27.376867   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:29.377474   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:31.877233   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:29.234068   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:31.733528   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:30.173309   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:30.187957   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:30.188037   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:30.226373   62386 cri.go:89] found id: ""
	I0912 23:04:30.226400   62386 logs.go:276] 0 containers: []
	W0912 23:04:30.226407   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:30.226412   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:30.226469   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:30.257956   62386 cri.go:89] found id: ""
	I0912 23:04:30.257988   62386 logs.go:276] 0 containers: []
	W0912 23:04:30.257997   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:30.258002   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:30.258053   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:30.291091   62386 cri.go:89] found id: ""
	I0912 23:04:30.291119   62386 logs.go:276] 0 containers: []
	W0912 23:04:30.291127   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:30.291132   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:30.291181   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:30.323564   62386 cri.go:89] found id: ""
	I0912 23:04:30.323589   62386 logs.go:276] 0 containers: []
	W0912 23:04:30.323597   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:30.323603   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:30.323652   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:30.361971   62386 cri.go:89] found id: ""
	I0912 23:04:30.361996   62386 logs.go:276] 0 containers: []
	W0912 23:04:30.362005   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:30.362014   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:30.362081   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:30.396952   62386 cri.go:89] found id: ""
	I0912 23:04:30.396986   62386 logs.go:276] 0 containers: []
	W0912 23:04:30.396996   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:30.397001   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:30.397052   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:30.453785   62386 cri.go:89] found id: ""
	I0912 23:04:30.453812   62386 logs.go:276] 0 containers: []
	W0912 23:04:30.453820   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:30.453825   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:30.453870   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:30.494072   62386 cri.go:89] found id: ""
	I0912 23:04:30.494099   62386 logs.go:276] 0 containers: []
	W0912 23:04:30.494108   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:30.494115   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:30.494133   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:30.543153   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:30.543187   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:30.556204   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:30.556242   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:30.630856   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:30.630885   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:30.630902   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:30.710205   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:30.710239   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:33.248218   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:33.261421   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:33.261504   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:33.295691   62386 cri.go:89] found id: ""
	I0912 23:04:33.295718   62386 logs.go:276] 0 containers: []
	W0912 23:04:33.295729   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:33.295736   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:33.295796   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:33.328578   62386 cri.go:89] found id: ""
	I0912 23:04:33.328607   62386 logs.go:276] 0 containers: []
	W0912 23:04:33.328618   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:33.328626   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:33.328743   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:33.367991   62386 cri.go:89] found id: ""
	I0912 23:04:33.368018   62386 logs.go:276] 0 containers: []
	W0912 23:04:33.368034   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:33.368041   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:33.368101   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:33.402537   62386 cri.go:89] found id: ""
	I0912 23:04:33.402566   62386 logs.go:276] 0 containers: []
	W0912 23:04:33.402578   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:33.402588   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:33.402649   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:33.437175   62386 cri.go:89] found id: ""
	I0912 23:04:33.437199   62386 logs.go:276] 0 containers: []
	W0912 23:04:33.437206   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:33.437216   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:33.437275   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:33.475108   62386 cri.go:89] found id: ""
	I0912 23:04:33.475134   62386 logs.go:276] 0 containers: []
	W0912 23:04:33.475144   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:33.475151   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:33.475202   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:33.508612   62386 cri.go:89] found id: ""
	I0912 23:04:33.508649   62386 logs.go:276] 0 containers: []
	W0912 23:04:33.508659   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:33.508664   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:33.508713   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:33.543351   62386 cri.go:89] found id: ""
	I0912 23:04:33.543380   62386 logs.go:276] 0 containers: []
	W0912 23:04:33.543387   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:33.543395   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:33.543406   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:33.595649   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:33.595688   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:33.609181   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:33.609210   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:33.686761   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:33.686782   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:33.686796   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:33.767443   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:33.767478   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:31.474444   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:33.474510   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:34.376900   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:36.377015   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:33.734282   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:36.233730   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:36.310374   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:36.324182   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:36.324260   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:36.359642   62386 cri.go:89] found id: ""
	I0912 23:04:36.359670   62386 logs.go:276] 0 containers: []
	W0912 23:04:36.359677   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:36.359684   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:36.359744   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:36.392841   62386 cri.go:89] found id: ""
	I0912 23:04:36.392865   62386 logs.go:276] 0 containers: []
	W0912 23:04:36.392874   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:36.392887   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:36.392951   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:36.430323   62386 cri.go:89] found id: ""
	I0912 23:04:36.430354   62386 logs.go:276] 0 containers: []
	W0912 23:04:36.430365   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:36.430373   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:36.430436   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:36.466712   62386 cri.go:89] found id: ""
	I0912 23:04:36.466737   62386 logs.go:276] 0 containers: []
	W0912 23:04:36.466745   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:36.466750   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:36.466808   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:36.502506   62386 cri.go:89] found id: ""
	I0912 23:04:36.502537   62386 logs.go:276] 0 containers: []
	W0912 23:04:36.502548   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:36.502555   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:36.502624   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:36.536530   62386 cri.go:89] found id: ""
	I0912 23:04:36.536559   62386 logs.go:276] 0 containers: []
	W0912 23:04:36.536569   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:36.536577   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:36.536648   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:36.570519   62386 cri.go:89] found id: ""
	I0912 23:04:36.570555   62386 logs.go:276] 0 containers: []
	W0912 23:04:36.570565   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:36.570573   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:36.570631   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:36.606107   62386 cri.go:89] found id: ""
	I0912 23:04:36.606136   62386 logs.go:276] 0 containers: []
	W0912 23:04:36.606146   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:36.606157   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:36.606171   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:36.643105   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:36.643138   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:36.690911   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:36.690944   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:36.703970   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:36.703998   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:36.776158   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:36.776183   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:36.776199   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:35.973095   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:37.974153   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:40.473010   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:38.377221   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:40.877439   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:38.732826   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:40.734523   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:39.362032   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:39.375991   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:39.376090   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:39.412497   62386 cri.go:89] found id: ""
	I0912 23:04:39.412521   62386 logs.go:276] 0 containers: []
	W0912 23:04:39.412528   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:39.412534   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:39.412595   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:39.447783   62386 cri.go:89] found id: ""
	I0912 23:04:39.447807   62386 logs.go:276] 0 containers: []
	W0912 23:04:39.447815   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:39.447820   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:39.447886   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:39.483099   62386 cri.go:89] found id: ""
	I0912 23:04:39.483128   62386 logs.go:276] 0 containers: []
	W0912 23:04:39.483135   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:39.483143   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:39.483193   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:39.514898   62386 cri.go:89] found id: ""
	I0912 23:04:39.514932   62386 logs.go:276] 0 containers: []
	W0912 23:04:39.514941   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:39.514952   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:39.515033   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:39.546882   62386 cri.go:89] found id: ""
	I0912 23:04:39.546910   62386 logs.go:276] 0 containers: []
	W0912 23:04:39.546920   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:39.546927   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:39.546990   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:39.577899   62386 cri.go:89] found id: ""
	I0912 23:04:39.577929   62386 logs.go:276] 0 containers: []
	W0912 23:04:39.577939   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:39.577947   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:39.578006   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:39.613419   62386 cri.go:89] found id: ""
	I0912 23:04:39.613446   62386 logs.go:276] 0 containers: []
	W0912 23:04:39.613455   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:39.613461   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:39.613510   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:39.647661   62386 cri.go:89] found id: ""
	I0912 23:04:39.647694   62386 logs.go:276] 0 containers: []
	W0912 23:04:39.647708   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:39.647719   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:39.647733   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:39.696155   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:39.696190   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:39.709312   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:39.709342   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:39.778941   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:39.778968   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:39.778985   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:39.855991   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:39.856028   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:42.395179   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:42.408317   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:42.408449   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:42.441443   62386 cri.go:89] found id: ""
	I0912 23:04:42.441472   62386 logs.go:276] 0 containers: []
	W0912 23:04:42.441482   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:42.441489   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:42.441550   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:42.480655   62386 cri.go:89] found id: ""
	I0912 23:04:42.480678   62386 logs.go:276] 0 containers: []
	W0912 23:04:42.480685   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:42.480690   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:42.480734   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:42.513323   62386 cri.go:89] found id: ""
	I0912 23:04:42.513346   62386 logs.go:276] 0 containers: []
	W0912 23:04:42.513353   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:42.513359   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:42.513405   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:42.545696   62386 cri.go:89] found id: ""
	I0912 23:04:42.545715   62386 logs.go:276] 0 containers: []
	W0912 23:04:42.545723   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:42.545728   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:42.545775   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:42.584950   62386 cri.go:89] found id: ""
	I0912 23:04:42.584981   62386 logs.go:276] 0 containers: []
	W0912 23:04:42.584992   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:42.584999   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:42.585057   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:42.618434   62386 cri.go:89] found id: ""
	I0912 23:04:42.618468   62386 logs.go:276] 0 containers: []
	W0912 23:04:42.618481   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:42.618489   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:42.618557   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:42.665017   62386 cri.go:89] found id: ""
	I0912 23:04:42.665045   62386 logs.go:276] 0 containers: []
	W0912 23:04:42.665056   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:42.665064   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:42.665125   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:42.724365   62386 cri.go:89] found id: ""
	I0912 23:04:42.724389   62386 logs.go:276] 0 containers: []
	W0912 23:04:42.724399   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:42.724409   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:42.724422   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:42.762643   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:42.762671   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:42.815374   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:42.815417   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:42.829340   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:42.829376   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:42.901659   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:42.901690   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:42.901706   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:42.475194   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:44.973902   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:43.376849   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:45.378144   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:42.734908   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:45.234296   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:45.490536   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:45.504127   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:45.504191   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:45.537415   62386 cri.go:89] found id: ""
	I0912 23:04:45.537447   62386 logs.go:276] 0 containers: []
	W0912 23:04:45.537457   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:45.537464   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:45.537527   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:45.571342   62386 cri.go:89] found id: ""
	I0912 23:04:45.571384   62386 logs.go:276] 0 containers: []
	W0912 23:04:45.571404   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:45.571412   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:45.571471   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:45.608965   62386 cri.go:89] found id: ""
	I0912 23:04:45.608989   62386 logs.go:276] 0 containers: []
	W0912 23:04:45.608997   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:45.609002   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:45.609052   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:45.644770   62386 cri.go:89] found id: ""
	I0912 23:04:45.644798   62386 logs.go:276] 0 containers: []
	W0912 23:04:45.644806   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:45.644812   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:45.644859   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:45.678422   62386 cri.go:89] found id: ""
	I0912 23:04:45.678448   62386 logs.go:276] 0 containers: []
	W0912 23:04:45.678456   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:45.678462   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:45.678508   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:45.713808   62386 cri.go:89] found id: ""
	I0912 23:04:45.713831   62386 logs.go:276] 0 containers: []
	W0912 23:04:45.713838   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:45.713844   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:45.713891   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:45.747056   62386 cri.go:89] found id: ""
	I0912 23:04:45.747084   62386 logs.go:276] 0 containers: []
	W0912 23:04:45.747092   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:45.747097   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:45.747149   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:45.779787   62386 cri.go:89] found id: ""
	I0912 23:04:45.779809   62386 logs.go:276] 0 containers: []
	W0912 23:04:45.779817   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:45.779824   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:45.779835   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:45.833204   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:45.833239   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:45.846131   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:45.846159   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:45.923415   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:45.923435   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:45.923446   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:46.003597   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:46.003637   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:48.545043   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:48.560025   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:48.560085   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:48.599916   62386 cri.go:89] found id: ""
	I0912 23:04:48.599950   62386 logs.go:276] 0 containers: []
	W0912 23:04:48.599961   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:48.599969   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:48.600027   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:48.648909   62386 cri.go:89] found id: ""
	I0912 23:04:48.648938   62386 logs.go:276] 0 containers: []
	W0912 23:04:48.648946   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:48.648952   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:48.649010   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:48.693019   62386 cri.go:89] found id: ""
	I0912 23:04:48.693046   62386 logs.go:276] 0 containers: []
	W0912 23:04:48.693062   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:48.693081   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:48.693141   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:48.725778   62386 cri.go:89] found id: ""
	I0912 23:04:48.725811   62386 logs.go:276] 0 containers: []
	W0912 23:04:48.725822   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:48.725830   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:48.725891   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:48.760270   62386 cri.go:89] found id: ""
	I0912 23:04:48.760299   62386 logs.go:276] 0 containers: []
	W0912 23:04:48.760311   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:48.760318   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:48.760379   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:48.797235   62386 cri.go:89] found id: ""
	I0912 23:04:48.797264   62386 logs.go:276] 0 containers: []
	W0912 23:04:48.797275   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:48.797282   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:48.797348   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:48.834039   62386 cri.go:89] found id: ""
	I0912 23:04:48.834081   62386 logs.go:276] 0 containers: []
	W0912 23:04:48.834093   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:48.834100   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:48.834162   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:48.866681   62386 cri.go:89] found id: ""
	I0912 23:04:48.866704   62386 logs.go:276] 0 containers: []
	W0912 23:04:48.866712   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:48.866720   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:48.866731   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:48.917954   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:48.917999   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:48.931554   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:48.931582   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:49.008086   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:49.008115   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:49.008132   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:49.088699   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:49.088736   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
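	[editor's note] Each retry cycle above follows the same pattern: minikube probes for a running kube-apiserver process, asks crictl for containers matching every control-plane component, finds none, and then falls back to gathering kubelet, dmesg, "describe nodes", CRI-O, and container-status output. The repeated "The connection to the server localhost:8443 was refused" error is consistent with the API server never having come up on this node. A minimal, illustrative Go sketch of that probe-and-fallback loop is shown below; it is a simplification for readability, not minikube's actual logs.go/cri.go code, and the component names and commands are taken from the log lines above.

	```go
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// Control-plane components the log above checks for, in the same order.
	var components = []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}

	// listContainers mirrors the "sudo crictl ps -a --quiet --name=<component>"
	// calls in the log: it returns any matching container IDs.
	func listContainers(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		for _, c := range components {
			ids, err := listContainers(c)
			if err != nil || len(ids) == 0 {
				fmt.Printf("No container was found matching %q\n", c)
			}
		}
		// With no containers found, the fallback is host-level log gathering,
		// roughly the commands visible above:
		//   journalctl -u kubelet -n 400
		//   dmesg --level warn,err,crit,alert,emerg | tail -n 400
		//   kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
		//   journalctl -u crio -n 400
		//   crictl ps -a
	}
	```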
	I0912 23:04:46.974115   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:49.475562   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:47.876644   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:49.877976   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:47.733587   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:50.232852   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:51.628564   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:51.643343   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:51.643445   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:51.680788   62386 cri.go:89] found id: ""
	I0912 23:04:51.680811   62386 logs.go:276] 0 containers: []
	W0912 23:04:51.680818   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:51.680824   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:51.680873   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:51.719793   62386 cri.go:89] found id: ""
	I0912 23:04:51.719822   62386 logs.go:276] 0 containers: []
	W0912 23:04:51.719835   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:51.719843   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:51.719909   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:51.756766   62386 cri.go:89] found id: ""
	I0912 23:04:51.756795   62386 logs.go:276] 0 containers: []
	W0912 23:04:51.756802   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:51.756808   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:51.756857   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:51.797758   62386 cri.go:89] found id: ""
	I0912 23:04:51.797781   62386 logs.go:276] 0 containers: []
	W0912 23:04:51.797789   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:51.797794   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:51.797844   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:51.830790   62386 cri.go:89] found id: ""
	I0912 23:04:51.830820   62386 logs.go:276] 0 containers: []
	W0912 23:04:51.830830   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:51.830837   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:51.830899   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:51.866782   62386 cri.go:89] found id: ""
	I0912 23:04:51.866806   62386 logs.go:276] 0 containers: []
	W0912 23:04:51.866813   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:51.866819   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:51.866874   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:51.902223   62386 cri.go:89] found id: ""
	I0912 23:04:51.902248   62386 logs.go:276] 0 containers: []
	W0912 23:04:51.902276   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:51.902284   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:51.902345   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:51.937029   62386 cri.go:89] found id: ""
	I0912 23:04:51.937057   62386 logs.go:276] 0 containers: []
	W0912 23:04:51.937064   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:51.937073   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:51.937084   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:51.987691   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:51.987727   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:52.001042   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:52.001067   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:52.076285   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:52.076305   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:52.076316   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:52.156087   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:52.156127   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:51.973991   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:53.974657   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:52.377379   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:54.877566   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:56.878413   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:52.734348   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:55.233890   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:54.692355   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:54.705180   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:54.705258   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:54.736125   62386 cri.go:89] found id: ""
	I0912 23:04:54.736150   62386 logs.go:276] 0 containers: []
	W0912 23:04:54.736158   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:54.736164   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:54.736216   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:54.768743   62386 cri.go:89] found id: ""
	I0912 23:04:54.768769   62386 logs.go:276] 0 containers: []
	W0912 23:04:54.768776   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:54.768781   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:54.768827   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:54.802867   62386 cri.go:89] found id: ""
	I0912 23:04:54.802894   62386 logs.go:276] 0 containers: []
	W0912 23:04:54.802902   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:54.802908   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:54.802959   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:54.836774   62386 cri.go:89] found id: ""
	I0912 23:04:54.836800   62386 logs.go:276] 0 containers: []
	W0912 23:04:54.836808   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:54.836813   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:54.836870   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:54.870694   62386 cri.go:89] found id: ""
	I0912 23:04:54.870716   62386 logs.go:276] 0 containers: []
	W0912 23:04:54.870724   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:54.870730   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:54.870785   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:54.903969   62386 cri.go:89] found id: ""
	I0912 23:04:54.904002   62386 logs.go:276] 0 containers: []
	W0912 23:04:54.904012   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:54.904020   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:54.904070   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:54.937720   62386 cri.go:89] found id: ""
	I0912 23:04:54.937744   62386 logs.go:276] 0 containers: []
	W0912 23:04:54.937751   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:54.937756   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:54.937802   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:54.971370   62386 cri.go:89] found id: ""
	I0912 23:04:54.971397   62386 logs.go:276] 0 containers: []
	W0912 23:04:54.971413   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:54.971427   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:54.971441   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:55.021066   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:55.021101   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:55.034026   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:55.034056   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:55.116939   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:55.116966   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:55.116983   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:55.196410   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:55.196445   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:57.733985   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:57.747006   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:57.747068   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:57.784442   62386 cri.go:89] found id: ""
	I0912 23:04:57.784473   62386 logs.go:276] 0 containers: []
	W0912 23:04:57.784486   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:57.784500   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:57.784571   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:57.818314   62386 cri.go:89] found id: ""
	I0912 23:04:57.818341   62386 logs.go:276] 0 containers: []
	W0912 23:04:57.818352   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:57.818359   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:57.818420   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:57.852881   62386 cri.go:89] found id: ""
	I0912 23:04:57.852914   62386 logs.go:276] 0 containers: []
	W0912 23:04:57.852925   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:57.852932   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:57.852993   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:57.894454   62386 cri.go:89] found id: ""
	I0912 23:04:57.894479   62386 logs.go:276] 0 containers: []
	W0912 23:04:57.894487   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:57.894493   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:57.894540   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:57.930013   62386 cri.go:89] found id: ""
	I0912 23:04:57.930041   62386 logs.go:276] 0 containers: []
	W0912 23:04:57.930051   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:57.930059   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:57.930120   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:57.970535   62386 cri.go:89] found id: ""
	I0912 23:04:57.970697   62386 logs.go:276] 0 containers: []
	W0912 23:04:57.970751   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:57.970763   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:57.970829   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:58.008102   62386 cri.go:89] found id: ""
	I0912 23:04:58.008132   62386 logs.go:276] 0 containers: []
	W0912 23:04:58.008145   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:58.008151   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:58.008232   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:58.043507   62386 cri.go:89] found id: ""
	I0912 23:04:58.043541   62386 logs.go:276] 0 containers: []
	W0912 23:04:58.043552   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:58.043563   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:58.043577   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:58.127231   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:58.127291   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:58.164444   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:58.164476   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:58.212622   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:58.212658   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:58.227517   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:58.227546   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:58.291876   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:56.474801   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:58.973083   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:59.378702   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:01.876871   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:57.735810   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:00.234854   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:00.792084   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:00.804976   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:00.805046   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:00.837560   62386 cri.go:89] found id: ""
	I0912 23:05:00.837596   62386 logs.go:276] 0 containers: []
	W0912 23:05:00.837606   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:00.837629   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:00.837692   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:00.871503   62386 cri.go:89] found id: ""
	I0912 23:05:00.871526   62386 logs.go:276] 0 containers: []
	W0912 23:05:00.871534   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:00.871539   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:00.871594   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:00.909215   62386 cri.go:89] found id: ""
	I0912 23:05:00.909245   62386 logs.go:276] 0 containers: []
	W0912 23:05:00.909256   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:00.909263   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:00.909337   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:00.947935   62386 cri.go:89] found id: ""
	I0912 23:05:00.947961   62386 logs.go:276] 0 containers: []
	W0912 23:05:00.947972   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:00.947979   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:00.948043   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:00.989659   62386 cri.go:89] found id: ""
	I0912 23:05:00.989694   62386 logs.go:276] 0 containers: []
	W0912 23:05:00.989707   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:00.989717   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:00.989780   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:01.027073   62386 cri.go:89] found id: ""
	I0912 23:05:01.027103   62386 logs.go:276] 0 containers: []
	W0912 23:05:01.027114   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:01.027129   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:01.027187   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:01.063620   62386 cri.go:89] found id: ""
	I0912 23:05:01.063649   62386 logs.go:276] 0 containers: []
	W0912 23:05:01.063672   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:01.063681   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:01.063751   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:01.102398   62386 cri.go:89] found id: ""
	I0912 23:05:01.102428   62386 logs.go:276] 0 containers: []
	W0912 23:05:01.102438   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:01.102449   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:01.102463   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:01.115558   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:01.115585   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:01.190303   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:01.190324   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:01.190337   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:01.272564   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:01.272611   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:01.311954   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:01.311981   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:03.864507   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:03.878613   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:03.878713   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:03.911466   62386 cri.go:89] found id: ""
	I0912 23:05:03.911495   62386 logs.go:276] 0 containers: []
	W0912 23:05:03.911504   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:03.911513   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:03.911592   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:03.945150   62386 cri.go:89] found id: ""
	I0912 23:05:03.945175   62386 logs.go:276] 0 containers: []
	W0912 23:05:03.945188   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:03.945196   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:03.945256   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:03.984952   62386 cri.go:89] found id: ""
	I0912 23:05:03.984984   62386 logs.go:276] 0 containers: []
	W0912 23:05:03.984994   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:03.985001   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:03.985067   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:04.030708   62386 cri.go:89] found id: ""
	I0912 23:05:04.030732   62386 logs.go:276] 0 containers: []
	W0912 23:05:04.030740   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:04.030746   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:04.030798   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:04.072189   62386 cri.go:89] found id: ""
	I0912 23:05:04.072213   62386 logs.go:276] 0 containers: []
	W0912 23:05:04.072221   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:04.072227   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:04.072273   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:04.105068   62386 cri.go:89] found id: ""
	I0912 23:05:04.105100   62386 logs.go:276] 0 containers: []
	W0912 23:05:04.105108   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:04.105114   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:04.105175   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:04.139063   62386 cri.go:89] found id: ""
	I0912 23:05:04.139094   62386 logs.go:276] 0 containers: []
	W0912 23:05:04.139102   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:04.139109   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:04.139172   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:04.175559   62386 cri.go:89] found id: ""
	I0912 23:05:04.175589   62386 logs.go:276] 0 containers: []
	W0912 23:05:04.175599   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:04.175610   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:04.175626   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:04.252495   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:04.252541   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:04.292236   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:04.292263   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:00.974816   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:03.473566   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:05.474006   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:04.377506   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:06.378058   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:02.733379   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:04.734050   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:07.234892   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:04.347335   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:04.347377   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:04.360641   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:04.360678   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:04.431032   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:06.931904   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:06.946367   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:06.946445   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:06.985760   62386 cri.go:89] found id: ""
	I0912 23:05:06.985788   62386 logs.go:276] 0 containers: []
	W0912 23:05:06.985796   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:06.985802   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:06.985852   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:07.020076   62386 cri.go:89] found id: ""
	I0912 23:05:07.020106   62386 logs.go:276] 0 containers: []
	W0912 23:05:07.020115   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:07.020120   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:07.020165   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:07.056374   62386 cri.go:89] found id: ""
	I0912 23:05:07.056408   62386 logs.go:276] 0 containers: []
	W0912 23:05:07.056417   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:07.056423   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:07.056479   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:07.091022   62386 cri.go:89] found id: ""
	I0912 23:05:07.091049   62386 logs.go:276] 0 containers: []
	W0912 23:05:07.091059   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:07.091067   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:07.091133   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:07.131604   62386 cri.go:89] found id: ""
	I0912 23:05:07.131631   62386 logs.go:276] 0 containers: []
	W0912 23:05:07.131641   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:07.131648   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:07.131708   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:07.164548   62386 cri.go:89] found id: ""
	I0912 23:05:07.164575   62386 logs.go:276] 0 containers: []
	W0912 23:05:07.164586   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:07.164593   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:07.164655   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:07.199147   62386 cri.go:89] found id: ""
	I0912 23:05:07.199169   62386 logs.go:276] 0 containers: []
	W0912 23:05:07.199176   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:07.199182   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:07.199245   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:07.231727   62386 cri.go:89] found id: ""
	I0912 23:05:07.231762   62386 logs.go:276] 0 containers: []
	W0912 23:05:07.231773   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:07.231788   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:07.231802   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:07.285773   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:07.285809   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:07.299926   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:07.299958   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:07.378838   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:07.378862   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:07.378876   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:07.459903   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:07.459939   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:07.475025   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:09.973692   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:08.877117   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:11.377274   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:09.732632   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:11.734119   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:09.999598   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:10.012258   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:10.012328   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:10.047975   62386 cri.go:89] found id: ""
	I0912 23:05:10.048002   62386 logs.go:276] 0 containers: []
	W0912 23:05:10.048011   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:10.048018   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:10.048074   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:10.081827   62386 cri.go:89] found id: ""
	I0912 23:05:10.081856   62386 logs.go:276] 0 containers: []
	W0912 23:05:10.081866   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:10.081872   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:10.081942   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:10.115594   62386 cri.go:89] found id: ""
	I0912 23:05:10.115625   62386 logs.go:276] 0 containers: []
	W0912 23:05:10.115635   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:10.115642   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:10.115692   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:10.147412   62386 cri.go:89] found id: ""
	I0912 23:05:10.147442   62386 logs.go:276] 0 containers: []
	W0912 23:05:10.147452   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:10.147460   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:10.147516   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:10.181118   62386 cri.go:89] found id: ""
	I0912 23:05:10.181147   62386 logs.go:276] 0 containers: []
	W0912 23:05:10.181157   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:10.181164   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:10.181228   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:10.214240   62386 cri.go:89] found id: ""
	I0912 23:05:10.214267   62386 logs.go:276] 0 containers: []
	W0912 23:05:10.214277   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:10.214284   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:10.214352   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:10.248497   62386 cri.go:89] found id: ""
	I0912 23:05:10.248522   62386 logs.go:276] 0 containers: []
	W0912 23:05:10.248530   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:10.248543   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:10.248610   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:10.280864   62386 cri.go:89] found id: ""
	I0912 23:05:10.280892   62386 logs.go:276] 0 containers: []
	W0912 23:05:10.280902   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:10.280913   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:10.280927   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:10.318517   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:10.318542   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:10.370087   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:10.370123   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:10.385213   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:10.385247   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:10.448226   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:10.448246   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:10.448257   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:13.027828   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:13.040546   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:13.040620   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:13.073501   62386 cri.go:89] found id: ""
	I0912 23:05:13.073525   62386 logs.go:276] 0 containers: []
	W0912 23:05:13.073533   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:13.073538   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:13.073584   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:13.105790   62386 cri.go:89] found id: ""
	I0912 23:05:13.105819   62386 logs.go:276] 0 containers: []
	W0912 23:05:13.105830   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:13.105836   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:13.105898   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:13.139307   62386 cri.go:89] found id: ""
	I0912 23:05:13.139331   62386 logs.go:276] 0 containers: []
	W0912 23:05:13.139338   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:13.139344   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:13.139403   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:13.171019   62386 cri.go:89] found id: ""
	I0912 23:05:13.171044   62386 logs.go:276] 0 containers: []
	W0912 23:05:13.171053   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:13.171060   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:13.171119   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:13.202372   62386 cri.go:89] found id: ""
	I0912 23:05:13.202412   62386 logs.go:276] 0 containers: []
	W0912 23:05:13.202423   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:13.202431   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:13.202481   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:13.234046   62386 cri.go:89] found id: ""
	I0912 23:05:13.234069   62386 logs.go:276] 0 containers: []
	W0912 23:05:13.234076   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:13.234083   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:13.234138   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:13.265577   62386 cri.go:89] found id: ""
	I0912 23:05:13.265604   62386 logs.go:276] 0 containers: []
	W0912 23:05:13.265632   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:13.265641   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:13.265696   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:13.303462   62386 cri.go:89] found id: ""
	I0912 23:05:13.303489   62386 logs.go:276] 0 containers: []
	W0912 23:05:13.303499   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:13.303521   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:13.303536   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:13.378844   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:13.378867   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:13.378883   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:13.464768   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:13.464806   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:13.502736   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:13.502764   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:13.553473   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:13.553503   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
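	[editor's note] The only error surfaced in every one of these cycles is kubectl's failure to reach localhost:8443. A quick way to confirm from the node that nothing is listening on the API server port is sketched below; this is an illustrative snippet for the reader, not something executed as part of the test run captured above.

	```go
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Dial the same endpoint kubectl keeps failing against in the log above.
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			// A "connection refused" here matches the describe-nodes errors above.
			fmt.Println("apiserver not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("something is listening on localhost:8443")
	}
	```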
	I0912 23:05:12.473027   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:14.973842   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:13.876334   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:15.877134   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:14.234722   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:16.734222   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:16.067463   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:16.081169   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:16.081269   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:16.115663   62386 cri.go:89] found id: ""
	I0912 23:05:16.115688   62386 logs.go:276] 0 containers: []
	W0912 23:05:16.115696   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:16.115705   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:16.115761   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:16.153429   62386 cri.go:89] found id: ""
	I0912 23:05:16.153460   62386 logs.go:276] 0 containers: []
	W0912 23:05:16.153469   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:16.153476   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:16.153535   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:16.187935   62386 cri.go:89] found id: ""
	I0912 23:05:16.187957   62386 logs.go:276] 0 containers: []
	W0912 23:05:16.187965   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:16.187971   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:16.188029   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:16.221249   62386 cri.go:89] found id: ""
	I0912 23:05:16.221273   62386 logs.go:276] 0 containers: []
	W0912 23:05:16.221281   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:16.221287   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:16.221336   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:16.256441   62386 cri.go:89] found id: ""
	I0912 23:05:16.256466   62386 logs.go:276] 0 containers: []
	W0912 23:05:16.256474   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:16.256479   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:16.256546   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:16.290930   62386 cri.go:89] found id: ""
	I0912 23:05:16.290963   62386 logs.go:276] 0 containers: []
	W0912 23:05:16.290976   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:16.290985   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:16.291039   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:16.326665   62386 cri.go:89] found id: ""
	I0912 23:05:16.326689   62386 logs.go:276] 0 containers: []
	W0912 23:05:16.326697   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:16.326702   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:16.326749   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:16.365418   62386 cri.go:89] found id: ""
	I0912 23:05:16.365441   62386 logs.go:276] 0 containers: []
	W0912 23:05:16.365448   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:16.365458   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:16.365469   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:16.420003   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:16.420039   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:16.434561   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:16.434595   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:16.505201   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:16.505224   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:16.505295   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:16.584877   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:16.584914   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:19.121479   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:19.134519   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:19.134586   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:19.170401   62386 cri.go:89] found id: ""
	I0912 23:05:19.170433   62386 logs.go:276] 0 containers: []
	W0912 23:05:19.170444   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:19.170455   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:19.170530   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:19.204750   62386 cri.go:89] found id: ""
	I0912 23:05:19.204779   62386 logs.go:276] 0 containers: []
	W0912 23:05:19.204790   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:19.204797   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:19.204862   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:19.243938   62386 cri.go:89] found id: ""
	I0912 23:05:19.243966   62386 logs.go:276] 0 containers: []
	W0912 23:05:19.243975   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:19.243983   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:19.244041   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:19.284424   62386 cri.go:89] found id: ""
	I0912 23:05:19.284453   62386 logs.go:276] 0 containers: []
	W0912 23:05:19.284463   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:19.284469   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:19.284535   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:19.318962   62386 cri.go:89] found id: ""
	I0912 23:05:19.318990   62386 logs.go:276] 0 containers: []
	W0912 23:05:19.319000   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:19.319011   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:19.319068   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:17.474175   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:19.474829   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:18.376670   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:20.876863   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:19.234144   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:21.734549   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:19.356456   62386 cri.go:89] found id: ""
	I0912 23:05:19.356487   62386 logs.go:276] 0 containers: []
	W0912 23:05:19.356498   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:19.356505   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:19.356587   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:19.390344   62386 cri.go:89] found id: ""
	I0912 23:05:19.390369   62386 logs.go:276] 0 containers: []
	W0912 23:05:19.390377   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:19.390382   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:19.390429   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:19.425481   62386 cri.go:89] found id: ""
	I0912 23:05:19.425507   62386 logs.go:276] 0 containers: []
	W0912 23:05:19.425528   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:19.425536   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:19.425553   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:19.482051   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:19.482081   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:19.495732   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:19.495758   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:19.565385   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:19.565411   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:19.565428   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:19.640053   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:19.640084   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:22.179292   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:22.191905   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:22.191979   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:22.231402   62386 cri.go:89] found id: ""
	I0912 23:05:22.231429   62386 logs.go:276] 0 containers: []
	W0912 23:05:22.231439   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:22.231446   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:22.231501   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:22.265310   62386 cri.go:89] found id: ""
	I0912 23:05:22.265343   62386 logs.go:276] 0 containers: []
	W0912 23:05:22.265351   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:22.265356   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:22.265425   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:22.297487   62386 cri.go:89] found id: ""
	I0912 23:05:22.297516   62386 logs.go:276] 0 containers: []
	W0912 23:05:22.297532   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:22.297540   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:22.297598   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:22.335344   62386 cri.go:89] found id: ""
	I0912 23:05:22.335374   62386 logs.go:276] 0 containers: []
	W0912 23:05:22.335384   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:22.335391   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:22.335449   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:22.376379   62386 cri.go:89] found id: ""
	I0912 23:05:22.376404   62386 logs.go:276] 0 containers: []
	W0912 23:05:22.376413   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:22.376421   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:22.376484   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:22.416121   62386 cri.go:89] found id: ""
	I0912 23:05:22.416147   62386 logs.go:276] 0 containers: []
	W0912 23:05:22.416154   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:22.416160   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:22.416217   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:22.475037   62386 cri.go:89] found id: ""
	I0912 23:05:22.475114   62386 logs.go:276] 0 containers: []
	W0912 23:05:22.475127   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:22.475143   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:22.475207   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:22.509756   62386 cri.go:89] found id: ""
	I0912 23:05:22.509784   62386 logs.go:276] 0 containers: []
	W0912 23:05:22.509794   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:22.509804   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:22.509823   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:22.559071   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:22.559112   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:22.571951   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:22.571980   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:22.643017   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:22.643034   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:22.643045   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:22.728074   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:22.728113   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:21.475126   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:23.975217   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:22.876979   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:24.877525   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:26.879248   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:24.235855   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:26.734384   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:25.268293   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:25.281825   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:25.281906   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:25.315282   62386 cri.go:89] found id: ""
	I0912 23:05:25.315318   62386 logs.go:276] 0 containers: []
	W0912 23:05:25.315328   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:25.315336   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:25.315385   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:25.348647   62386 cri.go:89] found id: ""
	I0912 23:05:25.348679   62386 logs.go:276] 0 containers: []
	W0912 23:05:25.348690   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:25.348697   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:25.348758   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:25.382266   62386 cri.go:89] found id: ""
	I0912 23:05:25.382294   62386 logs.go:276] 0 containers: []
	W0912 23:05:25.382304   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:25.382311   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:25.382378   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:25.420016   62386 cri.go:89] found id: ""
	I0912 23:05:25.420044   62386 logs.go:276] 0 containers: []
	W0912 23:05:25.420056   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:25.420063   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:25.420126   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:25.456435   62386 cri.go:89] found id: ""
	I0912 23:05:25.456457   62386 logs.go:276] 0 containers: []
	W0912 23:05:25.456465   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:25.456470   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:25.456539   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:25.491658   62386 cri.go:89] found id: ""
	I0912 23:05:25.491715   62386 logs.go:276] 0 containers: []
	W0912 23:05:25.491729   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:25.491737   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:25.491790   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:25.526948   62386 cri.go:89] found id: ""
	I0912 23:05:25.526980   62386 logs.go:276] 0 containers: []
	W0912 23:05:25.526991   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:25.526998   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:25.527064   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:25.560291   62386 cri.go:89] found id: ""
	I0912 23:05:25.560323   62386 logs.go:276] 0 containers: []
	W0912 23:05:25.560345   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:25.560357   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:25.560372   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:25.612232   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:25.612276   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:25.626991   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:25.627028   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:25.695005   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:25.695038   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:25.695055   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:25.784310   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:25.784345   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:28.331410   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:28.343903   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:28.343967   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:28.380946   62386 cri.go:89] found id: ""
	I0912 23:05:28.380973   62386 logs.go:276] 0 containers: []
	W0912 23:05:28.380979   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:28.380985   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:28.381039   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:28.415013   62386 cri.go:89] found id: ""
	I0912 23:05:28.415042   62386 logs.go:276] 0 containers: []
	W0912 23:05:28.415052   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:28.415059   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:28.415120   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:28.451060   62386 cri.go:89] found id: ""
	I0912 23:05:28.451093   62386 logs.go:276] 0 containers: []
	W0912 23:05:28.451105   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:28.451113   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:28.451171   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:28.485664   62386 cri.go:89] found id: ""
	I0912 23:05:28.485693   62386 logs.go:276] 0 containers: []
	W0912 23:05:28.485704   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:28.485712   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:28.485774   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:28.520307   62386 cri.go:89] found id: ""
	I0912 23:05:28.520338   62386 logs.go:276] 0 containers: []
	W0912 23:05:28.520349   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:28.520359   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:28.520417   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:28.553111   62386 cri.go:89] found id: ""
	I0912 23:05:28.553139   62386 logs.go:276] 0 containers: []
	W0912 23:05:28.553147   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:28.553152   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:28.553208   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:28.586778   62386 cri.go:89] found id: ""
	I0912 23:05:28.586808   62386 logs.go:276] 0 containers: []
	W0912 23:05:28.586816   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:28.586822   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:28.586874   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:28.620760   62386 cri.go:89] found id: ""
	I0912 23:05:28.620784   62386 logs.go:276] 0 containers: []
	W0912 23:05:28.620791   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:28.620799   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:28.620811   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:28.701431   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:28.701481   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:28.741398   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:28.741431   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:28.793431   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:28.793469   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:28.809572   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:28.809600   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:28.894914   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:26.473222   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:28.474342   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:29.377090   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:31.378238   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:29.234479   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:31.734265   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:31.395663   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:31.408079   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:31.408160   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:31.445176   62386 cri.go:89] found id: ""
	I0912 23:05:31.445207   62386 logs.go:276] 0 containers: []
	W0912 23:05:31.445215   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:31.445221   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:31.445280   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:31.483446   62386 cri.go:89] found id: ""
	I0912 23:05:31.483472   62386 logs.go:276] 0 containers: []
	W0912 23:05:31.483480   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:31.483486   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:31.483544   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:31.519958   62386 cri.go:89] found id: ""
	I0912 23:05:31.519989   62386 logs.go:276] 0 containers: []
	W0912 23:05:31.519997   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:31.520003   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:31.520057   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:31.556719   62386 cri.go:89] found id: ""
	I0912 23:05:31.556748   62386 logs.go:276] 0 containers: []
	W0912 23:05:31.556759   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:31.556771   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:31.556832   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:31.596465   62386 cri.go:89] found id: ""
	I0912 23:05:31.596491   62386 logs.go:276] 0 containers: []
	W0912 23:05:31.596502   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:31.596508   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:31.596572   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:31.634562   62386 cri.go:89] found id: ""
	I0912 23:05:31.634592   62386 logs.go:276] 0 containers: []
	W0912 23:05:31.634601   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:31.634607   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:31.634665   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:31.669305   62386 cri.go:89] found id: ""
	I0912 23:05:31.669337   62386 logs.go:276] 0 containers: []
	W0912 23:05:31.669348   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:31.669356   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:31.669422   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:31.703081   62386 cri.go:89] found id: ""
	I0912 23:05:31.703111   62386 logs.go:276] 0 containers: []
	W0912 23:05:31.703121   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:31.703133   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:31.703148   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:31.742613   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:31.742635   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:31.797827   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:31.797872   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:31.811970   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:31.811999   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:31.888872   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:31.888896   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:31.888910   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:30.974024   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:32.974606   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:35.473280   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:33.876698   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:35.877749   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:33.734760   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:36.233363   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:34.469724   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:34.483511   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:34.483579   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:34.516198   62386 cri.go:89] found id: ""
	I0912 23:05:34.516222   62386 logs.go:276] 0 containers: []
	W0912 23:05:34.516229   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:34.516235   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:34.516301   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:34.550166   62386 cri.go:89] found id: ""
	I0912 23:05:34.550199   62386 logs.go:276] 0 containers: []
	W0912 23:05:34.550210   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:34.550218   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:34.550274   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:34.593361   62386 cri.go:89] found id: ""
	I0912 23:05:34.593401   62386 logs.go:276] 0 containers: []
	W0912 23:05:34.593412   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:34.593420   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:34.593483   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:34.639593   62386 cri.go:89] found id: ""
	I0912 23:05:34.639633   62386 logs.go:276] 0 containers: []
	W0912 23:05:34.639653   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:34.639661   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:34.639729   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:34.690382   62386 cri.go:89] found id: ""
	I0912 23:05:34.690410   62386 logs.go:276] 0 containers: []
	W0912 23:05:34.690417   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:34.690423   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:34.690483   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:34.727943   62386 cri.go:89] found id: ""
	I0912 23:05:34.727970   62386 logs.go:276] 0 containers: []
	W0912 23:05:34.727978   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:34.727983   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:34.728051   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:34.765558   62386 cri.go:89] found id: ""
	I0912 23:05:34.765586   62386 logs.go:276] 0 containers: []
	W0912 23:05:34.765593   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:34.765598   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:34.765663   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:34.801455   62386 cri.go:89] found id: ""
	I0912 23:05:34.801484   62386 logs.go:276] 0 containers: []
	W0912 23:05:34.801492   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:34.801500   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:34.801511   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:34.880260   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:34.880295   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:34.922827   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:34.922855   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:34.974609   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:34.974639   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:34.987945   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:34.987972   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:35.062008   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:37.562965   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:37.575149   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:37.575226   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:37.611980   62386 cri.go:89] found id: ""
	I0912 23:05:37.612014   62386 logs.go:276] 0 containers: []
	W0912 23:05:37.612026   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:37.612035   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:37.612102   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:37.645664   62386 cri.go:89] found id: ""
	I0912 23:05:37.645693   62386 logs.go:276] 0 containers: []
	W0912 23:05:37.645703   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:37.645711   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:37.645771   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:37.685333   62386 cri.go:89] found id: ""
	I0912 23:05:37.685356   62386 logs.go:276] 0 containers: []
	W0912 23:05:37.685364   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:37.685369   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:37.685428   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:37.719017   62386 cri.go:89] found id: ""
	I0912 23:05:37.719052   62386 logs.go:276] 0 containers: []
	W0912 23:05:37.719063   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:37.719071   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:37.719133   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:37.751534   62386 cri.go:89] found id: ""
	I0912 23:05:37.751569   62386 logs.go:276] 0 containers: []
	W0912 23:05:37.751579   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:37.751588   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:37.751647   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:37.785583   62386 cri.go:89] found id: ""
	I0912 23:05:37.785608   62386 logs.go:276] 0 containers: []
	W0912 23:05:37.785635   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:37.785642   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:37.785702   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:37.818396   62386 cri.go:89] found id: ""
	I0912 23:05:37.818428   62386 logs.go:276] 0 containers: []
	W0912 23:05:37.818438   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:37.818445   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:37.818504   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:37.853767   62386 cri.go:89] found id: ""
	I0912 23:05:37.853798   62386 logs.go:276] 0 containers: []
	W0912 23:05:37.853806   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:37.853814   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:37.853830   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:37.926273   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:37.926300   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:37.926315   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:38.014243   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:38.014279   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:38.052431   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:38.052455   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:38.103154   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:38.103188   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:37.972774   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:39.973976   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:37.878631   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:40.378366   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:38.234131   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:40.733727   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:40.617399   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:40.629412   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:40.629483   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:40.666668   62386 cri.go:89] found id: ""
	I0912 23:05:40.666693   62386 logs.go:276] 0 containers: []
	W0912 23:05:40.666700   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:40.666706   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:40.666751   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:40.697548   62386 cri.go:89] found id: ""
	I0912 23:05:40.697573   62386 logs.go:276] 0 containers: []
	W0912 23:05:40.697580   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:40.697585   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:40.697659   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:40.729426   62386 cri.go:89] found id: ""
	I0912 23:05:40.729450   62386 logs.go:276] 0 containers: []
	W0912 23:05:40.729458   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:40.729468   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:40.729517   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:40.766769   62386 cri.go:89] found id: ""
	I0912 23:05:40.766793   62386 logs.go:276] 0 containers: []
	W0912 23:05:40.766800   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:40.766804   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:40.766860   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:40.801523   62386 cri.go:89] found id: ""
	I0912 23:05:40.801550   62386 logs.go:276] 0 containers: []
	W0912 23:05:40.801557   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:40.801563   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:40.801641   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:40.839943   62386 cri.go:89] found id: ""
	I0912 23:05:40.839975   62386 logs.go:276] 0 containers: []
	W0912 23:05:40.839987   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:40.839993   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:40.840055   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:40.873231   62386 cri.go:89] found id: ""
	I0912 23:05:40.873260   62386 logs.go:276] 0 containers: []
	W0912 23:05:40.873268   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:40.873276   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:40.873325   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:40.920007   62386 cri.go:89] found id: ""
	I0912 23:05:40.920040   62386 logs.go:276] 0 containers: []
	W0912 23:05:40.920049   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:40.920057   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:40.920069   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:40.972684   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:40.972716   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:40.986768   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:40.986802   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:41.052454   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:41.052479   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:41.052494   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:41.133810   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:41.133850   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:43.672432   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:43.684493   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:43.684552   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:43.718130   62386 cri.go:89] found id: ""
	I0912 23:05:43.718155   62386 logs.go:276] 0 containers: []
	W0912 23:05:43.718163   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:43.718169   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:43.718228   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:43.751866   62386 cri.go:89] found id: ""
	I0912 23:05:43.751895   62386 logs.go:276] 0 containers: []
	W0912 23:05:43.751905   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:43.751912   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:43.751974   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:43.785544   62386 cri.go:89] found id: ""
	I0912 23:05:43.785571   62386 logs.go:276] 0 containers: []
	W0912 23:05:43.785583   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:43.785589   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:43.785664   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:43.820588   62386 cri.go:89] found id: ""
	I0912 23:05:43.820616   62386 logs.go:276] 0 containers: []
	W0912 23:05:43.820624   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:43.820630   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:43.820677   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:43.853567   62386 cri.go:89] found id: ""
	I0912 23:05:43.853600   62386 logs.go:276] 0 containers: []
	W0912 23:05:43.853631   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:43.853640   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:43.853696   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:43.888646   62386 cri.go:89] found id: ""
	I0912 23:05:43.888671   62386 logs.go:276] 0 containers: []
	W0912 23:05:43.888679   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:43.888684   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:43.888731   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:43.922563   62386 cri.go:89] found id: ""
	I0912 23:05:43.922596   62386 logs.go:276] 0 containers: []
	W0912 23:05:43.922607   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:43.922614   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:43.922667   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:43.956786   62386 cri.go:89] found id: ""
	I0912 23:05:43.956817   62386 logs.go:276] 0 containers: []
	W0912 23:05:43.956825   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:43.956834   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:43.956845   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:44.035351   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:44.035388   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:44.073301   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:44.073338   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:44.124754   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:44.124788   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:44.138899   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:44.138924   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:44.208682   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:42.474139   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:44.974214   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:42.876306   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:44.877310   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:46.878568   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:43.233358   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:45.233823   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:47.234529   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:46.709822   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:46.722782   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:46.722905   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:46.767512   62386 cri.go:89] found id: ""
	I0912 23:05:46.767537   62386 logs.go:276] 0 containers: []
	W0912 23:05:46.767545   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:46.767551   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:46.767603   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:46.812486   62386 cri.go:89] found id: ""
	I0912 23:05:46.812523   62386 logs.go:276] 0 containers: []
	W0912 23:05:46.812533   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:46.812541   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:46.812602   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:46.855093   62386 cri.go:89] found id: ""
	I0912 23:05:46.855125   62386 logs.go:276] 0 containers: []
	W0912 23:05:46.855134   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:46.855141   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:46.855214   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:46.899067   62386 cri.go:89] found id: ""
	I0912 23:05:46.899101   62386 logs.go:276] 0 containers: []
	W0912 23:05:46.899113   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:46.899121   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:46.899184   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:46.939775   62386 cri.go:89] found id: ""
	I0912 23:05:46.939802   62386 logs.go:276] 0 containers: []
	W0912 23:05:46.939810   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:46.939816   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:46.939863   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:46.975288   62386 cri.go:89] found id: ""
	I0912 23:05:46.975319   62386 logs.go:276] 0 containers: []
	W0912 23:05:46.975329   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:46.975343   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:46.975426   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:47.012985   62386 cri.go:89] found id: ""
	I0912 23:05:47.013018   62386 logs.go:276] 0 containers: []
	W0912 23:05:47.013030   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:47.013038   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:47.013104   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:47.052124   62386 cri.go:89] found id: ""
	I0912 23:05:47.052154   62386 logs.go:276] 0 containers: []
	W0912 23:05:47.052164   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:47.052175   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:47.052189   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:47.108769   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:47.108811   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:47.124503   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:47.124530   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:47.195340   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:47.195362   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:47.195380   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:47.297155   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:47.297204   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:46.473252   61904 pod_ready.go:82] duration metric: took 4m0.006064954s for pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace to be "Ready" ...
	E0912 23:05:46.473275   61904 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0912 23:05:46.473282   61904 pod_ready.go:39] duration metric: took 4m4.576962836s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 23:05:46.473309   61904 api_server.go:52] waiting for apiserver process to appear ...
	I0912 23:05:46.473336   61904 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:46.473378   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:46.513731   61904 cri.go:89] found id: "115e1e7911747aa5930ece5298af89f0209bf07e0336cd598b8901e2a2af2e09"
	I0912 23:05:46.513759   61904 cri.go:89] found id: ""
	I0912 23:05:46.513768   61904 logs.go:276] 1 containers: [115e1e7911747aa5930ece5298af89f0209bf07e0336cd598b8901e2a2af2e09]
	I0912 23:05:46.513827   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:46.519031   61904 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:46.519099   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:46.560521   61904 cri.go:89] found id: "e099ac110cb9ec18f36a0fbbdb769c4bb0c2642bf081442d3054c280c663730f"
	I0912 23:05:46.560548   61904 cri.go:89] found id: ""
	I0912 23:05:46.560560   61904 logs.go:276] 1 containers: [e099ac110cb9ec18f36a0fbbdb769c4bb0c2642bf081442d3054c280c663730f]
	I0912 23:05:46.560623   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:46.564340   61904 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:46.564399   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:46.598825   61904 cri.go:89] found id: "7841230606dafeab68b501c35a871c84f440d0d9f4cde95a302770671152f168"
	I0912 23:05:46.598848   61904 cri.go:89] found id: ""
	I0912 23:05:46.598857   61904 logs.go:276] 1 containers: [7841230606dafeab68b501c35a871c84f440d0d9f4cde95a302770671152f168]
	I0912 23:05:46.598909   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:46.602944   61904 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:46.603005   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:46.640315   61904 cri.go:89] found id: "dc8c605cca940c298fda4fb0d3a0e55234ab799e5f4d684817c1c33aa6752880"
	I0912 23:05:46.640335   61904 cri.go:89] found id: ""
	I0912 23:05:46.640343   61904 logs.go:276] 1 containers: [dc8c605cca940c298fda4fb0d3a0e55234ab799e5f4d684817c1c33aa6752880]
	I0912 23:05:46.640395   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:46.644061   61904 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:46.644119   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:46.681114   61904 cri.go:89] found id: "0b058233860f229334437280ee5ccf5d203eaf352bae41a6af7d4602b55a3d64"
	I0912 23:05:46.681143   61904 cri.go:89] found id: ""
	I0912 23:05:46.681153   61904 logs.go:276] 1 containers: [0b058233860f229334437280ee5ccf5d203eaf352bae41a6af7d4602b55a3d64]
	I0912 23:05:46.681214   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:46.685151   61904 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:46.685223   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:46.723129   61904 cri.go:89] found id: "54dd60703518d12cf3cb62102bf8bec31d1e6b0f218f4c26391b234d58111e31"
	I0912 23:05:46.723150   61904 cri.go:89] found id: ""
	I0912 23:05:46.723160   61904 logs.go:276] 1 containers: [54dd60703518d12cf3cb62102bf8bec31d1e6b0f218f4c26391b234d58111e31]
	I0912 23:05:46.723208   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:46.727959   61904 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:46.728021   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:46.770194   61904 cri.go:89] found id: ""
	I0912 23:05:46.770219   61904 logs.go:276] 0 containers: []
	W0912 23:05:46.770229   61904 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:46.770236   61904 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0912 23:05:46.770296   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0912 23:05:46.819004   61904 cri.go:89] found id: "0e48efc9ba5a488b4984057a9785fbda6fcefcea047948ac3c83f09afd28efdb"
	I0912 23:05:46.819031   61904 cri.go:89] found id: "fdb0e5ac691d286cca5fe46e3435ece952c4b9cdc7df3481fec68adf1403689f"
	I0912 23:05:46.819037   61904 cri.go:89] found id: ""
	I0912 23:05:46.819045   61904 logs.go:276] 2 containers: [0e48efc9ba5a488b4984057a9785fbda6fcefcea047948ac3c83f09afd28efdb fdb0e5ac691d286cca5fe46e3435ece952c4b9cdc7df3481fec68adf1403689f]
	I0912 23:05:46.819105   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:46.824442   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:46.829336   61904 logs.go:123] Gathering logs for coredns [7841230606dafeab68b501c35a871c84f440d0d9f4cde95a302770671152f168] ...
	I0912 23:05:46.829367   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7841230606dafeab68b501c35a871c84f440d0d9f4cde95a302770671152f168"
	I0912 23:05:46.876170   61904 logs.go:123] Gathering logs for kube-controller-manager [54dd60703518d12cf3cb62102bf8bec31d1e6b0f218f4c26391b234d58111e31] ...
	I0912 23:05:46.876205   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54dd60703518d12cf3cb62102bf8bec31d1e6b0f218f4c26391b234d58111e31"
	I0912 23:05:46.944290   61904 logs.go:123] Gathering logs for storage-provisioner [0e48efc9ba5a488b4984057a9785fbda6fcefcea047948ac3c83f09afd28efdb] ...
	I0912 23:05:46.944336   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e48efc9ba5a488b4984057a9785fbda6fcefcea047948ac3c83f09afd28efdb"
	I0912 23:05:46.991117   61904 logs.go:123] Gathering logs for container status ...
	I0912 23:05:46.991154   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:47.041776   61904 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:47.041805   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:47.125682   61904 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:47.125720   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:47.141463   61904 logs.go:123] Gathering logs for etcd [e099ac110cb9ec18f36a0fbbdb769c4bb0c2642bf081442d3054c280c663730f] ...
	I0912 23:05:47.141505   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e099ac110cb9ec18f36a0fbbdb769c4bb0c2642bf081442d3054c280c663730f"
	I0912 23:05:47.193432   61904 logs.go:123] Gathering logs for kube-scheduler [dc8c605cca940c298fda4fb0d3a0e55234ab799e5f4d684817c1c33aa6752880] ...
	I0912 23:05:47.193477   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dc8c605cca940c298fda4fb0d3a0e55234ab799e5f4d684817c1c33aa6752880"
	I0912 23:05:47.238975   61904 logs.go:123] Gathering logs for kube-proxy [0b058233860f229334437280ee5ccf5d203eaf352bae41a6af7d4602b55a3d64] ...
	I0912 23:05:47.239000   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b058233860f229334437280ee5ccf5d203eaf352bae41a6af7d4602b55a3d64"
	I0912 23:05:47.282299   61904 logs.go:123] Gathering logs for storage-provisioner [fdb0e5ac691d286cca5fe46e3435ece952c4b9cdc7df3481fec68adf1403689f] ...
	I0912 23:05:47.282340   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fdb0e5ac691d286cca5fe46e3435ece952c4b9cdc7df3481fec68adf1403689f"
	I0912 23:05:47.322575   61904 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:47.322605   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:47.783079   61904 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:47.783116   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 23:05:47.909961   61904 logs.go:123] Gathering logs for kube-apiserver [115e1e7911747aa5930ece5298af89f0209bf07e0336cd598b8901e2a2af2e09] ...
	I0912 23:05:47.909994   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 115e1e7911747aa5930ece5298af89f0209bf07e0336cd598b8901e2a2af2e09"
	I0912 23:05:50.466816   61904 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:50.483164   61904 api_server.go:72] duration metric: took 4m15.815867821s to wait for apiserver process to appear ...
	I0912 23:05:50.483189   61904 api_server.go:88] waiting for apiserver healthz status ...
	I0912 23:05:50.483219   61904 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:50.483265   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:50.521905   61904 cri.go:89] found id: "115e1e7911747aa5930ece5298af89f0209bf07e0336cd598b8901e2a2af2e09"
	I0912 23:05:50.521932   61904 cri.go:89] found id: ""
	I0912 23:05:50.521942   61904 logs.go:276] 1 containers: [115e1e7911747aa5930ece5298af89f0209bf07e0336cd598b8901e2a2af2e09]
	I0912 23:05:50.522001   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:50.526289   61904 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:50.526355   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:50.565340   61904 cri.go:89] found id: "e099ac110cb9ec18f36a0fbbdb769c4bb0c2642bf081442d3054c280c663730f"
	I0912 23:05:50.565367   61904 cri.go:89] found id: ""
	I0912 23:05:50.565376   61904 logs.go:276] 1 containers: [e099ac110cb9ec18f36a0fbbdb769c4bb0c2642bf081442d3054c280c663730f]
	I0912 23:05:50.565434   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:50.569231   61904 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:50.569310   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:50.607696   61904 cri.go:89] found id: "7841230606dafeab68b501c35a871c84f440d0d9f4cde95a302770671152f168"
	I0912 23:05:50.607721   61904 cri.go:89] found id: ""
	I0912 23:05:50.607729   61904 logs.go:276] 1 containers: [7841230606dafeab68b501c35a871c84f440d0d9f4cde95a302770671152f168]
	I0912 23:05:50.607771   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:50.611696   61904 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:50.611753   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:50.647554   61904 cri.go:89] found id: "dc8c605cca940c298fda4fb0d3a0e55234ab799e5f4d684817c1c33aa6752880"
	I0912 23:05:50.647580   61904 cri.go:89] found id: ""
	I0912 23:05:50.647590   61904 logs.go:276] 1 containers: [dc8c605cca940c298fda4fb0d3a0e55234ab799e5f4d684817c1c33aa6752880]
	I0912 23:05:50.647649   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:50.652065   61904 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:50.652128   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:50.691276   61904 cri.go:89] found id: "0b058233860f229334437280ee5ccf5d203eaf352bae41a6af7d4602b55a3d64"
	I0912 23:05:50.691300   61904 cri.go:89] found id: ""
	I0912 23:05:50.691307   61904 logs.go:276] 1 containers: [0b058233860f229334437280ee5ccf5d203eaf352bae41a6af7d4602b55a3d64]
	I0912 23:05:50.691348   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:50.696475   61904 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:50.696537   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:50.732677   61904 cri.go:89] found id: "54dd60703518d12cf3cb62102bf8bec31d1e6b0f218f4c26391b234d58111e31"
	I0912 23:05:50.732704   61904 cri.go:89] found id: ""
	I0912 23:05:50.732714   61904 logs.go:276] 1 containers: [54dd60703518d12cf3cb62102bf8bec31d1e6b0f218f4c26391b234d58111e31]
	I0912 23:05:50.732771   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:50.737450   61904 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:50.737503   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:50.770732   61904 cri.go:89] found id: ""
	I0912 23:05:50.770762   61904 logs.go:276] 0 containers: []
	W0912 23:05:50.770773   61904 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:50.770781   61904 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0912 23:05:50.770830   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0912 23:05:49.376457   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:51.378141   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:49.732832   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:51.734674   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:49.841253   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:49.854221   62386 kubeadm.go:597] duration metric: took 4m1.913192999s to restartPrimaryControlPlane
	W0912 23:05:49.854297   62386 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0912 23:05:49.854335   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0912 23:05:51.221029   62386 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.366663525s)
	I0912 23:05:51.221129   62386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 23:05:51.238493   62386 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0912 23:05:51.250943   62386 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0912 23:05:51.264325   62386 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0912 23:05:51.264348   62386 kubeadm.go:157] found existing configuration files:
	
	I0912 23:05:51.264393   62386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0912 23:05:51.273514   62386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0912 23:05:51.273570   62386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0912 23:05:51.282967   62386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0912 23:05:51.291978   62386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0912 23:05:51.292037   62386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0912 23:05:51.301520   62386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0912 23:05:51.310439   62386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0912 23:05:51.310530   62386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0912 23:05:51.319803   62386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0912 23:05:51.328881   62386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0912 23:05:51.328956   62386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0912 23:05:51.337946   62386 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0912 23:05:51.565945   62386 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0912 23:05:50.804311   61904 cri.go:89] found id: "0e48efc9ba5a488b4984057a9785fbda6fcefcea047948ac3c83f09afd28efdb"
	I0912 23:05:50.804337   61904 cri.go:89] found id: "fdb0e5ac691d286cca5fe46e3435ece952c4b9cdc7df3481fec68adf1403689f"
	I0912 23:05:50.804342   61904 cri.go:89] found id: ""
	I0912 23:05:50.804349   61904 logs.go:276] 2 containers: [0e48efc9ba5a488b4984057a9785fbda6fcefcea047948ac3c83f09afd28efdb fdb0e5ac691d286cca5fe46e3435ece952c4b9cdc7df3481fec68adf1403689f]
	I0912 23:05:50.804396   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:50.808236   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:50.812298   61904 logs.go:123] Gathering logs for storage-provisioner [fdb0e5ac691d286cca5fe46e3435ece952c4b9cdc7df3481fec68adf1403689f] ...
	I0912 23:05:50.812316   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fdb0e5ac691d286cca5fe46e3435ece952c4b9cdc7df3481fec68adf1403689f"
	I0912 23:05:50.846429   61904 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:50.846457   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:50.917118   61904 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:50.917152   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:50.931954   61904 logs.go:123] Gathering logs for kube-apiserver [115e1e7911747aa5930ece5298af89f0209bf07e0336cd598b8901e2a2af2e09] ...
	I0912 23:05:50.931992   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 115e1e7911747aa5930ece5298af89f0209bf07e0336cd598b8901e2a2af2e09"
	I0912 23:05:50.979688   61904 logs.go:123] Gathering logs for etcd [e099ac110cb9ec18f36a0fbbdb769c4bb0c2642bf081442d3054c280c663730f] ...
	I0912 23:05:50.979727   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e099ac110cb9ec18f36a0fbbdb769c4bb0c2642bf081442d3054c280c663730f"
	I0912 23:05:51.026392   61904 logs.go:123] Gathering logs for coredns [7841230606dafeab68b501c35a871c84f440d0d9f4cde95a302770671152f168] ...
	I0912 23:05:51.026421   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7841230606dafeab68b501c35a871c84f440d0d9f4cde95a302770671152f168"
	I0912 23:05:51.063302   61904 logs.go:123] Gathering logs for storage-provisioner [0e48efc9ba5a488b4984057a9785fbda6fcefcea047948ac3c83f09afd28efdb] ...
	I0912 23:05:51.063330   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e48efc9ba5a488b4984057a9785fbda6fcefcea047948ac3c83f09afd28efdb"
	I0912 23:05:51.096593   61904 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:51.096626   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 23:05:51.198824   61904 logs.go:123] Gathering logs for kube-scheduler [dc8c605cca940c298fda4fb0d3a0e55234ab799e5f4d684817c1c33aa6752880] ...
	I0912 23:05:51.198856   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dc8c605cca940c298fda4fb0d3a0e55234ab799e5f4d684817c1c33aa6752880"
	I0912 23:05:51.244247   61904 logs.go:123] Gathering logs for kube-proxy [0b058233860f229334437280ee5ccf5d203eaf352bae41a6af7d4602b55a3d64] ...
	I0912 23:05:51.244271   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b058233860f229334437280ee5ccf5d203eaf352bae41a6af7d4602b55a3d64"
	I0912 23:05:51.284694   61904 logs.go:123] Gathering logs for kube-controller-manager [54dd60703518d12cf3cb62102bf8bec31d1e6b0f218f4c26391b234d58111e31] ...
	I0912 23:05:51.284717   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54dd60703518d12cf3cb62102bf8bec31d1e6b0f218f4c26391b234d58111e31"
	I0912 23:05:51.340541   61904 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:51.340569   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:51.754823   61904 logs.go:123] Gathering logs for container status ...
	I0912 23:05:51.754864   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:54.294987   61904 api_server.go:253] Checking apiserver healthz at https://192.168.72.96:8443/healthz ...
	I0912 23:05:54.300314   61904 api_server.go:279] https://192.168.72.96:8443/healthz returned 200:
	ok
	I0912 23:05:54.301385   61904 api_server.go:141] control plane version: v1.31.1
	I0912 23:05:54.301413   61904 api_server.go:131] duration metric: took 3.818216539s to wait for apiserver health ...
	I0912 23:05:54.301421   61904 system_pods.go:43] waiting for kube-system pods to appear ...
	I0912 23:05:54.301441   61904 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:54.301491   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:54.342980   61904 cri.go:89] found id: "115e1e7911747aa5930ece5298af89f0209bf07e0336cd598b8901e2a2af2e09"
	I0912 23:05:54.343001   61904 cri.go:89] found id: ""
	I0912 23:05:54.343010   61904 logs.go:276] 1 containers: [115e1e7911747aa5930ece5298af89f0209bf07e0336cd598b8901e2a2af2e09]
	I0912 23:05:54.343063   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:54.347269   61904 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:54.347352   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:54.386656   61904 cri.go:89] found id: "e099ac110cb9ec18f36a0fbbdb769c4bb0c2642bf081442d3054c280c663730f"
	I0912 23:05:54.386674   61904 cri.go:89] found id: ""
	I0912 23:05:54.386681   61904 logs.go:276] 1 containers: [e099ac110cb9ec18f36a0fbbdb769c4bb0c2642bf081442d3054c280c663730f]
	I0912 23:05:54.386755   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:54.390707   61904 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:54.390769   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:54.433746   61904 cri.go:89] found id: "7841230606dafeab68b501c35a871c84f440d0d9f4cde95a302770671152f168"
	I0912 23:05:54.433773   61904 cri.go:89] found id: ""
	I0912 23:05:54.433782   61904 logs.go:276] 1 containers: [7841230606dafeab68b501c35a871c84f440d0d9f4cde95a302770671152f168]
	I0912 23:05:54.433844   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:54.438175   61904 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:54.438231   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:54.475067   61904 cri.go:89] found id: "dc8c605cca940c298fda4fb0d3a0e55234ab799e5f4d684817c1c33aa6752880"
	I0912 23:05:54.475095   61904 cri.go:89] found id: ""
	I0912 23:05:54.475105   61904 logs.go:276] 1 containers: [dc8c605cca940c298fda4fb0d3a0e55234ab799e5f4d684817c1c33aa6752880]
	I0912 23:05:54.475178   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:54.479308   61904 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:54.479367   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:54.524489   61904 cri.go:89] found id: "0b058233860f229334437280ee5ccf5d203eaf352bae41a6af7d4602b55a3d64"
	I0912 23:05:54.524513   61904 cri.go:89] found id: ""
	I0912 23:05:54.524521   61904 logs.go:276] 1 containers: [0b058233860f229334437280ee5ccf5d203eaf352bae41a6af7d4602b55a3d64]
	I0912 23:05:54.524583   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:54.528854   61904 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:54.528925   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:54.569776   61904 cri.go:89] found id: "54dd60703518d12cf3cb62102bf8bec31d1e6b0f218f4c26391b234d58111e31"
	I0912 23:05:54.569801   61904 cri.go:89] found id: ""
	I0912 23:05:54.569811   61904 logs.go:276] 1 containers: [54dd60703518d12cf3cb62102bf8bec31d1e6b0f218f4c26391b234d58111e31]
	I0912 23:05:54.569865   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:54.574000   61904 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:54.574070   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:54.613184   61904 cri.go:89] found id: ""
	I0912 23:05:54.613212   61904 logs.go:276] 0 containers: []
	W0912 23:05:54.613222   61904 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:54.613229   61904 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0912 23:05:54.613292   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0912 23:05:54.648971   61904 cri.go:89] found id: "0e48efc9ba5a488b4984057a9785fbda6fcefcea047948ac3c83f09afd28efdb"
	I0912 23:05:54.648992   61904 cri.go:89] found id: "fdb0e5ac691d286cca5fe46e3435ece952c4b9cdc7df3481fec68adf1403689f"
	I0912 23:05:54.648997   61904 cri.go:89] found id: ""
	I0912 23:05:54.649006   61904 logs.go:276] 2 containers: [0e48efc9ba5a488b4984057a9785fbda6fcefcea047948ac3c83f09afd28efdb fdb0e5ac691d286cca5fe46e3435ece952c4b9cdc7df3481fec68adf1403689f]
	I0912 23:05:54.649062   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:54.653671   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:54.657535   61904 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:54.657557   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 23:05:54.781055   61904 logs.go:123] Gathering logs for kube-controller-manager [54dd60703518d12cf3cb62102bf8bec31d1e6b0f218f4c26391b234d58111e31] ...
	I0912 23:05:54.781094   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54dd60703518d12cf3cb62102bf8bec31d1e6b0f218f4c26391b234d58111e31"
	I0912 23:05:54.832441   61904 logs.go:123] Gathering logs for container status ...
	I0912 23:05:54.832477   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:54.887662   61904 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:54.887695   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:54.958381   61904 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:54.958417   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:54.973583   61904 logs.go:123] Gathering logs for coredns [7841230606dafeab68b501c35a871c84f440d0d9f4cde95a302770671152f168] ...
	I0912 23:05:54.973609   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7841230606dafeab68b501c35a871c84f440d0d9f4cde95a302770671152f168"
	I0912 23:05:55.022192   61904 logs.go:123] Gathering logs for kube-scheduler [dc8c605cca940c298fda4fb0d3a0e55234ab799e5f4d684817c1c33aa6752880] ...
	I0912 23:05:55.022217   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dc8c605cca940c298fda4fb0d3a0e55234ab799e5f4d684817c1c33aa6752880"
	I0912 23:05:55.059878   61904 logs.go:123] Gathering logs for kube-proxy [0b058233860f229334437280ee5ccf5d203eaf352bae41a6af7d4602b55a3d64] ...
	I0912 23:05:55.059910   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b058233860f229334437280ee5ccf5d203eaf352bae41a6af7d4602b55a3d64"
	I0912 23:05:55.104371   61904 logs.go:123] Gathering logs for storage-provisioner [0e48efc9ba5a488b4984057a9785fbda6fcefcea047948ac3c83f09afd28efdb] ...
	I0912 23:05:55.104399   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e48efc9ba5a488b4984057a9785fbda6fcefcea047948ac3c83f09afd28efdb"
	I0912 23:05:55.139625   61904 logs.go:123] Gathering logs for storage-provisioner [fdb0e5ac691d286cca5fe46e3435ece952c4b9cdc7df3481fec68adf1403689f] ...
	I0912 23:05:55.139656   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fdb0e5ac691d286cca5fe46e3435ece952c4b9cdc7df3481fec68adf1403689f"
	I0912 23:05:55.172414   61904 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:55.172442   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:55.528482   61904 logs.go:123] Gathering logs for kube-apiserver [115e1e7911747aa5930ece5298af89f0209bf07e0336cd598b8901e2a2af2e09] ...
	I0912 23:05:55.528522   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 115e1e7911747aa5930ece5298af89f0209bf07e0336cd598b8901e2a2af2e09"
	I0912 23:05:55.572399   61904 logs.go:123] Gathering logs for etcd [e099ac110cb9ec18f36a0fbbdb769c4bb0c2642bf081442d3054c280c663730f] ...
	I0912 23:05:55.572433   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e099ac110cb9ec18f36a0fbbdb769c4bb0c2642bf081442d3054c280c663730f"
	I0912 23:05:53.876844   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:55.878108   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:54.235375   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:56.733525   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:58.125405   61904 system_pods.go:59] 8 kube-system pods found
	I0912 23:05:58.125436   61904 system_pods.go:61] "coredns-7c65d6cfc9-m8t6h" [93c63198-ebd2-4e88-9be8-912425b1eb84] Running
	I0912 23:05:58.125441   61904 system_pods.go:61] "etcd-embed-certs-378112" [cc716756-abda-447a-ad36-bfc89c129bdf] Running
	I0912 23:05:58.125445   61904 system_pods.go:61] "kube-apiserver-embed-certs-378112" [039a7348-41bf-481f-9218-3ea0c2ff1373] Running
	I0912 23:05:58.125449   61904 system_pods.go:61] "kube-controller-manager-embed-certs-378112" [9bcb8af0-6e4b-405a-94a1-5be70d737cfa] Running
	I0912 23:05:58.125452   61904 system_pods.go:61] "kube-proxy-fvbbq" [b172754e-bb5a-40ba-a9be-a7632081defc] Running
	I0912 23:05:58.125455   61904 system_pods.go:61] "kube-scheduler-embed-certs-378112" [f7cb022f-6c15-4c70-916f-39313199effe] Running
	I0912 23:05:58.125461   61904 system_pods.go:61] "metrics-server-6867b74b74-kvpqz" [04e47cfd-bada-4cbd-8792-db4edebfb282] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0912 23:05:58.125465   61904 system_pods.go:61] "storage-provisioner" [a1840d2a-8e08-4fa2-9ed5-ac96fb0baf4d] Running
	I0912 23:05:58.125472   61904 system_pods.go:74] duration metric: took 3.824046737s to wait for pod list to return data ...
	I0912 23:05:58.125478   61904 default_sa.go:34] waiting for default service account to be created ...
	I0912 23:05:58.128039   61904 default_sa.go:45] found service account: "default"
	I0912 23:05:58.128060   61904 default_sa.go:55] duration metric: took 2.576708ms for default service account to be created ...
	I0912 23:05:58.128067   61904 system_pods.go:116] waiting for k8s-apps to be running ...
	I0912 23:05:58.132607   61904 system_pods.go:86] 8 kube-system pods found
	I0912 23:05:58.132629   61904 system_pods.go:89] "coredns-7c65d6cfc9-m8t6h" [93c63198-ebd2-4e88-9be8-912425b1eb84] Running
	I0912 23:05:58.132634   61904 system_pods.go:89] "etcd-embed-certs-378112" [cc716756-abda-447a-ad36-bfc89c129bdf] Running
	I0912 23:05:58.132638   61904 system_pods.go:89] "kube-apiserver-embed-certs-378112" [039a7348-41bf-481f-9218-3ea0c2ff1373] Running
	I0912 23:05:58.132642   61904 system_pods.go:89] "kube-controller-manager-embed-certs-378112" [9bcb8af0-6e4b-405a-94a1-5be70d737cfa] Running
	I0912 23:05:58.132647   61904 system_pods.go:89] "kube-proxy-fvbbq" [b172754e-bb5a-40ba-a9be-a7632081defc] Running
	I0912 23:05:58.132652   61904 system_pods.go:89] "kube-scheduler-embed-certs-378112" [f7cb022f-6c15-4c70-916f-39313199effe] Running
	I0912 23:05:58.132661   61904 system_pods.go:89] "metrics-server-6867b74b74-kvpqz" [04e47cfd-bada-4cbd-8792-db4edebfb282] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0912 23:05:58.132671   61904 system_pods.go:89] "storage-provisioner" [a1840d2a-8e08-4fa2-9ed5-ac96fb0baf4d] Running
	I0912 23:05:58.132682   61904 system_pods.go:126] duration metric: took 4.609196ms to wait for k8s-apps to be running ...
	I0912 23:05:58.132694   61904 system_svc.go:44] waiting for kubelet service to be running ....
	I0912 23:05:58.132739   61904 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 23:05:58.149020   61904 system_svc.go:56] duration metric: took 16.317773ms WaitForService to wait for kubelet
	I0912 23:05:58.149048   61904 kubeadm.go:582] duration metric: took 4m23.481755577s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 23:05:58.149073   61904 node_conditions.go:102] verifying NodePressure condition ...
	I0912 23:05:58.152519   61904 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0912 23:05:58.152547   61904 node_conditions.go:123] node cpu capacity is 2
	I0912 23:05:58.152559   61904 node_conditions.go:105] duration metric: took 3.480407ms to run NodePressure ...
	I0912 23:05:58.152570   61904 start.go:241] waiting for startup goroutines ...
	I0912 23:05:58.152576   61904 start.go:246] waiting for cluster config update ...
	I0912 23:05:58.152587   61904 start.go:255] writing updated cluster config ...
	I0912 23:05:58.152833   61904 ssh_runner.go:195] Run: rm -f paused
	I0912 23:05:58.203069   61904 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0912 23:05:58.204904   61904 out.go:177] * Done! kubectl is now configured to use "embed-certs-378112" cluster and "default" namespace by default
	I0912 23:05:58.376646   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:00.377105   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:58.733992   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:01.233920   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:02.877229   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:04.877926   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:03.733400   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:05.733949   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:07.377308   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:09.877459   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:08.234361   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:10.732480   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:12.376661   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:14.877753   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:16.877980   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:12.733231   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:14.734774   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:17.233456   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:19.376959   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:21.878279   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:19.234570   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:21.733406   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:24.376731   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:26.377122   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:23.733543   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:25.734296   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:28.877696   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:31.376778   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:28.232623   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:30.233670   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:32.234123   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:33.377208   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:35.877039   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:34.234158   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:36.234309   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:37.877566   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:40.376636   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:38.733567   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:40.734256   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:42.377148   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:44.377925   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:46.877563   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:42.734926   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:45.233731   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:45.727482   61354 pod_ready.go:82] duration metric: took 4m0.000232225s for pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace to be "Ready" ...
	E0912 23:06:45.727510   61354 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace to be "Ready" (will not retry!)
	I0912 23:06:45.727526   61354 pod_ready.go:39] duration metric: took 4m13.050011701s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 23:06:45.727553   61354 kubeadm.go:597] duration metric: took 4m21.402206535s to restartPrimaryControlPlane
	W0912 23:06:45.727638   61354 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0912 23:06:45.727686   61354 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0912 23:06:49.376346   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:51.376720   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:53.877426   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:56.377076   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:58.876146   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:07:00.876887   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:07:02.877032   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:07:04.877344   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:07:07.376495   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:07:09.377212   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:07:11.878788   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:07:11.920816   61354 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.193093675s)
	I0912 23:07:11.920900   61354 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 23:07:11.939101   61354 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0912 23:07:11.950330   61354 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0912 23:07:11.960727   61354 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0912 23:07:11.960753   61354 kubeadm.go:157] found existing configuration files:
	
	I0912 23:07:11.960802   61354 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0912 23:07:11.970932   61354 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0912 23:07:11.970988   61354 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0912 23:07:11.981111   61354 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0912 23:07:11.990384   61354 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0912 23:07:11.990455   61354 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0912 23:07:12.000218   61354 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0912 23:07:12.009191   61354 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0912 23:07:12.009266   61354 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0912 23:07:12.019270   61354 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0912 23:07:12.028102   61354 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0912 23:07:12.028165   61354 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0912 23:07:12.037512   61354 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0912 23:07:12.083528   61354 kubeadm.go:310] W0912 23:07:12.055244    2491 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0912 23:07:12.084358   61354 kubeadm.go:310] W0912 23:07:12.056267    2491 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0912 23:07:12.190683   61354 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0912 23:07:12.377757   62943 pod_ready.go:82] duration metric: took 4m0.007392806s for pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace to be "Ready" ...
	E0912 23:07:12.377785   62943 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0912 23:07:12.377794   62943 pod_ready.go:39] duration metric: took 4m2.807476708s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 23:07:12.377812   62943 api_server.go:52] waiting for apiserver process to appear ...
	I0912 23:07:12.377843   62943 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:07:12.377898   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:07:12.431934   62943 cri.go:89] found id: "3c73944a510413add414948e1c8fd8d71ef732d91b7ddc96646b7ba376f81416"
	I0912 23:07:12.431964   62943 cri.go:89] found id: "00f124dff0f77cf23377830c9dc5e2a8676d1a40ebf70730705d537ba7a4b8d3"
	I0912 23:07:12.431969   62943 cri.go:89] found id: ""
	I0912 23:07:12.431977   62943 logs.go:276] 2 containers: [3c73944a510413add414948e1c8fd8d71ef732d91b7ddc96646b7ba376f81416 00f124dff0f77cf23377830c9dc5e2a8676d1a40ebf70730705d537ba7a4b8d3]
	I0912 23:07:12.432043   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:12.436742   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:12.440569   62943 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:07:12.440626   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:07:12.476994   62943 cri.go:89] found id: "35282e97473f2f870f5366d221b9a40e0a456b6fd44fffe321c9fe160448cc29"
	I0912 23:07:12.477016   62943 cri.go:89] found id: ""
	I0912 23:07:12.477024   62943 logs.go:276] 1 containers: [35282e97473f2f870f5366d221b9a40e0a456b6fd44fffe321c9fe160448cc29]
	I0912 23:07:12.477076   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:12.481585   62943 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:07:12.481661   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:07:12.524772   62943 cri.go:89] found id: "e59d289c9afefef6dd746277d09550e5c7d708d7c1ba9f74b2dd91300f807189"
	I0912 23:07:12.524797   62943 cri.go:89] found id: ""
	I0912 23:07:12.524808   62943 logs.go:276] 1 containers: [e59d289c9afefef6dd746277d09550e5c7d708d7c1ba9f74b2dd91300f807189]
	I0912 23:07:12.524860   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:12.529988   62943 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:07:12.530052   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:07:12.573298   62943 cri.go:89] found id: "3187fdef2bd318bb335be822a4e75cfd0ac02a49607d8e398ab18102a83437ec"
	I0912 23:07:12.573329   62943 cri.go:89] found id: ""
	I0912 23:07:12.573340   62943 logs.go:276] 1 containers: [3187fdef2bd318bb335be822a4e75cfd0ac02a49607d8e398ab18102a83437ec]
	I0912 23:07:12.573400   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:12.579767   62943 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:07:12.579844   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:07:12.624696   62943 cri.go:89] found id: "4c4807559910184befc0b6a9ccb40b1cb9d4997f6b1405b76ba67049ce730f37"
	I0912 23:07:12.624723   62943 cri.go:89] found id: ""
	I0912 23:07:12.624733   62943 logs.go:276] 1 containers: [4c4807559910184befc0b6a9ccb40b1cb9d4997f6b1405b76ba67049ce730f37]
	I0912 23:07:12.624790   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:12.632367   62943 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:07:12.632430   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:07:12.667385   62943 cri.go:89] found id: "eb473fa0b2d91e95a8716ca3d1bad44b431a3dfbfae11984657c16c7392b58f0"
	I0912 23:07:12.667411   62943 cri.go:89] found id: "635fd2c2a6dd2558d88447c857c1c59b9cf6883a458ab9f62fbcd48fc94048f7"
	I0912 23:07:12.667415   62943 cri.go:89] found id: ""
	I0912 23:07:12.667422   62943 logs.go:276] 2 containers: [eb473fa0b2d91e95a8716ca3d1bad44b431a3dfbfae11984657c16c7392b58f0 635fd2c2a6dd2558d88447c857c1c59b9cf6883a458ab9f62fbcd48fc94048f7]
	I0912 23:07:12.667474   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:12.671688   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:12.675901   62943 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:07:12.675964   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:07:12.712909   62943 cri.go:89] found id: ""
	I0912 23:07:12.712944   62943 logs.go:276] 0 containers: []
	W0912 23:07:12.712955   62943 logs.go:278] No container was found matching "kindnet"
	I0912 23:07:12.712962   62943 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0912 23:07:12.713023   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0912 23:07:12.755865   62943 cri.go:89] found id: "3d117ed77ba5f8ccc22fe496a5f719999fd9e893623ba84686634f7b92c09713"
	I0912 23:07:12.755888   62943 cri.go:89] found id: "d40483dfc659456939d7965c1ecf1113c1c38e26fe985bf19f26a0123206ea3a"
	I0912 23:07:12.755894   62943 cri.go:89] found id: ""
	I0912 23:07:12.755903   62943 logs.go:276] 2 containers: [3d117ed77ba5f8ccc22fe496a5f719999fd9e893623ba84686634f7b92c09713 d40483dfc659456939d7965c1ecf1113c1c38e26fe985bf19f26a0123206ea3a]
	I0912 23:07:12.755958   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:12.760095   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:12.763682   62943 logs.go:123] Gathering logs for kube-apiserver [00f124dff0f77cf23377830c9dc5e2a8676d1a40ebf70730705d537ba7a4b8d3] ...
	I0912 23:07:12.763706   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 00f124dff0f77cf23377830c9dc5e2a8676d1a40ebf70730705d537ba7a4b8d3"
	I0912 23:07:12.811915   62943 logs.go:123] Gathering logs for kube-proxy [4c4807559910184befc0b6a9ccb40b1cb9d4997f6b1405b76ba67049ce730f37] ...
	I0912 23:07:12.811949   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c4807559910184befc0b6a9ccb40b1cb9d4997f6b1405b76ba67049ce730f37"
	I0912 23:07:12.846546   62943 logs.go:123] Gathering logs for kube-controller-manager [eb473fa0b2d91e95a8716ca3d1bad44b431a3dfbfae11984657c16c7392b58f0] ...
	I0912 23:07:12.846582   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb473fa0b2d91e95a8716ca3d1bad44b431a3dfbfae11984657c16c7392b58f0"
	I0912 23:07:12.904475   62943 logs.go:123] Gathering logs for kubelet ...
	I0912 23:07:12.904518   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:07:12.984863   62943 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:07:12.984898   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 23:07:13.116848   62943 logs.go:123] Gathering logs for etcd [35282e97473f2f870f5366d221b9a40e0a456b6fd44fffe321c9fe160448cc29] ...
	I0912 23:07:13.116879   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35282e97473f2f870f5366d221b9a40e0a456b6fd44fffe321c9fe160448cc29"
	I0912 23:07:13.165949   62943 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:07:13.165978   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:07:13.704372   62943 logs.go:123] Gathering logs for container status ...
	I0912 23:07:13.704424   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:07:13.757082   62943 logs.go:123] Gathering logs for kube-apiserver [3c73944a510413add414948e1c8fd8d71ef732d91b7ddc96646b7ba376f81416] ...
	I0912 23:07:13.757123   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c73944a510413add414948e1c8fd8d71ef732d91b7ddc96646b7ba376f81416"
	I0912 23:07:13.802951   62943 logs.go:123] Gathering logs for storage-provisioner [3d117ed77ba5f8ccc22fe496a5f719999fd9e893623ba84686634f7b92c09713] ...
	I0912 23:07:13.802988   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d117ed77ba5f8ccc22fe496a5f719999fd9e893623ba84686634f7b92c09713"
	I0912 23:07:13.838952   62943 logs.go:123] Gathering logs for dmesg ...
	I0912 23:07:13.838989   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:07:13.852983   62943 logs.go:123] Gathering logs for coredns [e59d289c9afefef6dd746277d09550e5c7d708d7c1ba9f74b2dd91300f807189] ...
	I0912 23:07:13.853015   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e59d289c9afefef6dd746277d09550e5c7d708d7c1ba9f74b2dd91300f807189"
	I0912 23:07:13.898651   62943 logs.go:123] Gathering logs for kube-scheduler [3187fdef2bd318bb335be822a4e75cfd0ac02a49607d8e398ab18102a83437ec] ...
	I0912 23:07:13.898679   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3187fdef2bd318bb335be822a4e75cfd0ac02a49607d8e398ab18102a83437ec"
	I0912 23:07:13.943800   62943 logs.go:123] Gathering logs for kube-controller-manager [635fd2c2a6dd2558d88447c857c1c59b9cf6883a458ab9f62fbcd48fc94048f7] ...
	I0912 23:07:13.943838   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 635fd2c2a6dd2558d88447c857c1c59b9cf6883a458ab9f62fbcd48fc94048f7"
	I0912 23:07:13.984960   62943 logs.go:123] Gathering logs for storage-provisioner [d40483dfc659456939d7965c1ecf1113c1c38e26fe985bf19f26a0123206ea3a] ...
	I0912 23:07:13.984996   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d40483dfc659456939d7965c1ecf1113c1c38e26fe985bf19f26a0123206ea3a"
	I0912 23:07:16.526061   62943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:07:16.547018   62943 api_server.go:72] duration metric: took 4m14.74025779s to wait for apiserver process to appear ...
	I0912 23:07:16.547046   62943 api_server.go:88] waiting for apiserver healthz status ...
	I0912 23:07:16.547085   62943 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:07:16.547134   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:07:16.589088   62943 cri.go:89] found id: "3c73944a510413add414948e1c8fd8d71ef732d91b7ddc96646b7ba376f81416"
	I0912 23:07:16.589124   62943 cri.go:89] found id: "00f124dff0f77cf23377830c9dc5e2a8676d1a40ebf70730705d537ba7a4b8d3"
	I0912 23:07:16.589130   62943 cri.go:89] found id: ""
	I0912 23:07:16.589138   62943 logs.go:276] 2 containers: [3c73944a510413add414948e1c8fd8d71ef732d91b7ddc96646b7ba376f81416 00f124dff0f77cf23377830c9dc5e2a8676d1a40ebf70730705d537ba7a4b8d3]
	I0912 23:07:16.589199   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:16.593386   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:16.597107   62943 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:07:16.597166   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:07:16.644456   62943 cri.go:89] found id: "35282e97473f2f870f5366d221b9a40e0a456b6fd44fffe321c9fe160448cc29"
	I0912 23:07:16.644482   62943 cri.go:89] found id: ""
	I0912 23:07:16.644491   62943 logs.go:276] 1 containers: [35282e97473f2f870f5366d221b9a40e0a456b6fd44fffe321c9fe160448cc29]
	I0912 23:07:16.644544   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:16.648617   62943 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:07:16.648693   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:07:16.688003   62943 cri.go:89] found id: "e59d289c9afefef6dd746277d09550e5c7d708d7c1ba9f74b2dd91300f807189"
	I0912 23:07:16.688027   62943 cri.go:89] found id: ""
	I0912 23:07:16.688037   62943 logs.go:276] 1 containers: [e59d289c9afefef6dd746277d09550e5c7d708d7c1ba9f74b2dd91300f807189]
	I0912 23:07:16.688093   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:16.692761   62943 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:07:16.692832   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:07:16.733490   62943 cri.go:89] found id: "3187fdef2bd318bb335be822a4e75cfd0ac02a49607d8e398ab18102a83437ec"
	I0912 23:07:16.733522   62943 cri.go:89] found id: ""
	I0912 23:07:16.733533   62943 logs.go:276] 1 containers: [3187fdef2bd318bb335be822a4e75cfd0ac02a49607d8e398ab18102a83437ec]
	I0912 23:07:16.733596   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:16.738566   62943 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:07:16.738641   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:07:16.785654   62943 cri.go:89] found id: "4c4807559910184befc0b6a9ccb40b1cb9d4997f6b1405b76ba67049ce730f37"
	I0912 23:07:16.785683   62943 cri.go:89] found id: ""
	I0912 23:07:16.785693   62943 logs.go:276] 1 containers: [4c4807559910184befc0b6a9ccb40b1cb9d4997f6b1405b76ba67049ce730f37]
	I0912 23:07:16.785753   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:16.791205   62943 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:07:16.791290   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:07:16.830707   62943 cri.go:89] found id: "eb473fa0b2d91e95a8716ca3d1bad44b431a3dfbfae11984657c16c7392b58f0"
	I0912 23:07:16.830739   62943 cri.go:89] found id: "635fd2c2a6dd2558d88447c857c1c59b9cf6883a458ab9f62fbcd48fc94048f7"
	I0912 23:07:16.830746   62943 cri.go:89] found id: ""
	I0912 23:07:16.830756   62943 logs.go:276] 2 containers: [eb473fa0b2d91e95a8716ca3d1bad44b431a3dfbfae11984657c16c7392b58f0 635fd2c2a6dd2558d88447c857c1c59b9cf6883a458ab9f62fbcd48fc94048f7]
	I0912 23:07:16.830819   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:16.835378   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:16.840600   62943 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:07:16.840670   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:07:20.225940   61354 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0912 23:07:20.226007   61354 kubeadm.go:310] [preflight] Running pre-flight checks
	I0912 23:07:20.226107   61354 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0912 23:07:20.226261   61354 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0912 23:07:20.226412   61354 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0912 23:07:20.226506   61354 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0912 23:07:20.228109   61354 out.go:235]   - Generating certificates and keys ...
	I0912 23:07:20.228211   61354 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0912 23:07:20.228297   61354 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0912 23:07:20.228412   61354 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0912 23:07:20.228493   61354 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0912 23:07:20.228621   61354 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0912 23:07:20.228699   61354 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0912 23:07:20.228788   61354 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0912 23:07:20.228875   61354 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0912 23:07:20.228987   61354 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0912 23:07:20.229123   61354 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0912 23:07:20.229177   61354 kubeadm.go:310] [certs] Using the existing "sa" key
	I0912 23:07:20.229273   61354 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0912 23:07:20.229365   61354 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0912 23:07:20.229454   61354 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0912 23:07:20.229533   61354 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0912 23:07:20.229644   61354 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0912 23:07:20.229723   61354 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0912 23:07:20.229833   61354 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0912 23:07:20.229922   61354 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0912 23:07:20.231172   61354 out.go:235]   - Booting up control plane ...
	I0912 23:07:20.231276   61354 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0912 23:07:20.231371   61354 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0912 23:07:20.231457   61354 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0912 23:07:20.231596   61354 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0912 23:07:20.231706   61354 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0912 23:07:20.231772   61354 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0912 23:07:20.231943   61354 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0912 23:07:20.232041   61354 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0912 23:07:20.232091   61354 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.452461ms
	I0912 23:07:20.232151   61354 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0912 23:07:20.232202   61354 kubeadm.go:310] [api-check] The API server is healthy after 5.00140085s
	I0912 23:07:20.232302   61354 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0912 23:07:20.232437   61354 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0912 23:07:20.232508   61354 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0912 23:07:20.232685   61354 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-702201 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0912 23:07:20.232764   61354 kubeadm.go:310] [bootstrap-token] Using token: uufjzd.0ysmpgh1j6e2l8hs
	I0912 23:07:20.234000   61354 out.go:235]   - Configuring RBAC rules ...
	I0912 23:07:20.234123   61354 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0912 23:07:20.234230   61354 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0912 23:07:20.234438   61354 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0912 23:07:20.234584   61354 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0912 23:07:20.234714   61354 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0912 23:07:20.234818   61354 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0912 23:07:20.234946   61354 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0912 23:07:20.235008   61354 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0912 23:07:20.235081   61354 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0912 23:07:20.235089   61354 kubeadm.go:310] 
	I0912 23:07:20.235152   61354 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0912 23:07:20.235163   61354 kubeadm.go:310] 
	I0912 23:07:20.235231   61354 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0912 23:07:20.235237   61354 kubeadm.go:310] 
	I0912 23:07:20.235258   61354 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0912 23:07:20.235346   61354 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0912 23:07:20.235424   61354 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0912 23:07:20.235433   61354 kubeadm.go:310] 
	I0912 23:07:20.235512   61354 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0912 23:07:20.235523   61354 kubeadm.go:310] 
	I0912 23:07:20.235587   61354 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0912 23:07:20.235596   61354 kubeadm.go:310] 
	I0912 23:07:20.235683   61354 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0912 23:07:20.235781   61354 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0912 23:07:20.235848   61354 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0912 23:07:20.235855   61354 kubeadm.go:310] 
	I0912 23:07:20.235924   61354 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0912 23:07:20.235988   61354 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0912 23:07:20.235994   61354 kubeadm.go:310] 
	I0912 23:07:20.236075   61354 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token uufjzd.0ysmpgh1j6e2l8hs \
	I0912 23:07:20.236168   61354 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e9285e6e7599a58febe9d174fa57ffa69a9b4bf818d01b703e61fc8c784ff29f \
	I0912 23:07:20.236188   61354 kubeadm.go:310] 	--control-plane 
	I0912 23:07:20.236195   61354 kubeadm.go:310] 
	I0912 23:07:20.236267   61354 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0912 23:07:20.236274   61354 kubeadm.go:310] 
	I0912 23:07:20.236345   61354 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token uufjzd.0ysmpgh1j6e2l8hs \
	I0912 23:07:20.236447   61354 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e9285e6e7599a58febe9d174fa57ffa69a9b4bf818d01b703e61fc8c784ff29f 
	I0912 23:07:20.236458   61354 cni.go:84] Creating CNI manager for ""
	I0912 23:07:20.236465   61354 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0912 23:07:20.237667   61354 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0912 23:07:16.892881   62943 cri.go:89] found id: ""
	I0912 23:07:16.892908   62943 logs.go:276] 0 containers: []
	W0912 23:07:16.892918   62943 logs.go:278] No container was found matching "kindnet"
	I0912 23:07:16.892926   62943 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0912 23:07:16.892986   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0912 23:07:16.938816   62943 cri.go:89] found id: "3d117ed77ba5f8ccc22fe496a5f719999fd9e893623ba84686634f7b92c09713"
	I0912 23:07:16.938856   62943 cri.go:89] found id: "d40483dfc659456939d7965c1ecf1113c1c38e26fe985bf19f26a0123206ea3a"
	I0912 23:07:16.938861   62943 cri.go:89] found id: ""
	I0912 23:07:16.938868   62943 logs.go:276] 2 containers: [3d117ed77ba5f8ccc22fe496a5f719999fd9e893623ba84686634f7b92c09713 d40483dfc659456939d7965c1ecf1113c1c38e26fe985bf19f26a0123206ea3a]
	I0912 23:07:16.938924   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:16.944985   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:16.950257   62943 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:07:16.950290   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 23:07:17.071942   62943 logs.go:123] Gathering logs for kube-apiserver [00f124dff0f77cf23377830c9dc5e2a8676d1a40ebf70730705d537ba7a4b8d3] ...
	I0912 23:07:17.071999   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 00f124dff0f77cf23377830c9dc5e2a8676d1a40ebf70730705d537ba7a4b8d3"
	I0912 23:07:17.120765   62943 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:07:17.120797   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:07:17.636341   62943 logs.go:123] Gathering logs for kubelet ...
	I0912 23:07:17.636387   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:07:17.714095   62943 logs.go:123] Gathering logs for kube-apiserver [3c73944a510413add414948e1c8fd8d71ef732d91b7ddc96646b7ba376f81416] ...
	I0912 23:07:17.714133   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c73944a510413add414948e1c8fd8d71ef732d91b7ddc96646b7ba376f81416"
	I0912 23:07:17.765583   62943 logs.go:123] Gathering logs for etcd [35282e97473f2f870f5366d221b9a40e0a456b6fd44fffe321c9fe160448cc29] ...
	I0912 23:07:17.765637   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35282e97473f2f870f5366d221b9a40e0a456b6fd44fffe321c9fe160448cc29"
	I0912 23:07:17.809278   62943 logs.go:123] Gathering logs for kube-proxy [4c4807559910184befc0b6a9ccb40b1cb9d4997f6b1405b76ba67049ce730f37] ...
	I0912 23:07:17.809309   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c4807559910184befc0b6a9ccb40b1cb9d4997f6b1405b76ba67049ce730f37"
	I0912 23:07:17.845960   62943 logs.go:123] Gathering logs for dmesg ...
	I0912 23:07:17.845984   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:07:17.860171   62943 logs.go:123] Gathering logs for kube-controller-manager [eb473fa0b2d91e95a8716ca3d1bad44b431a3dfbfae11984657c16c7392b58f0] ...
	I0912 23:07:17.860201   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb473fa0b2d91e95a8716ca3d1bad44b431a3dfbfae11984657c16c7392b58f0"
	I0912 23:07:17.926666   62943 logs.go:123] Gathering logs for kube-controller-manager [635fd2c2a6dd2558d88447c857c1c59b9cf6883a458ab9f62fbcd48fc94048f7] ...
	I0912 23:07:17.926711   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 635fd2c2a6dd2558d88447c857c1c59b9cf6883a458ab9f62fbcd48fc94048f7"
	I0912 23:07:17.976830   62943 logs.go:123] Gathering logs for storage-provisioner [3d117ed77ba5f8ccc22fe496a5f719999fd9e893623ba84686634f7b92c09713] ...
	I0912 23:07:17.976862   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d117ed77ba5f8ccc22fe496a5f719999fd9e893623ba84686634f7b92c09713"
	I0912 23:07:18.029551   62943 logs.go:123] Gathering logs for storage-provisioner [d40483dfc659456939d7965c1ecf1113c1c38e26fe985bf19f26a0123206ea3a] ...
	I0912 23:07:18.029590   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d40483dfc659456939d7965c1ecf1113c1c38e26fe985bf19f26a0123206ea3a"
	I0912 23:07:18.089974   62943 logs.go:123] Gathering logs for container status ...
	I0912 23:07:18.090007   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:07:18.151149   62943 logs.go:123] Gathering logs for coredns [e59d289c9afefef6dd746277d09550e5c7d708d7c1ba9f74b2dd91300f807189] ...
	I0912 23:07:18.151175   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e59d289c9afefef6dd746277d09550e5c7d708d7c1ba9f74b2dd91300f807189"
	I0912 23:07:18.191616   62943 logs.go:123] Gathering logs for kube-scheduler [3187fdef2bd318bb335be822a4e75cfd0ac02a49607d8e398ab18102a83437ec] ...
	I0912 23:07:18.191645   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3187fdef2bd318bb335be822a4e75cfd0ac02a49607d8e398ab18102a83437ec"
	I0912 23:07:20.735505   62943 api_server.go:253] Checking apiserver healthz at https://192.168.50.253:8443/healthz ...
	I0912 23:07:20.740261   62943 api_server.go:279] https://192.168.50.253:8443/healthz returned 200:
	ok
	I0912 23:07:20.741163   62943 api_server.go:141] control plane version: v1.31.1
	I0912 23:07:20.741184   62943 api_server.go:131] duration metric: took 4.194131154s to wait for apiserver health ...
	I0912 23:07:20.741193   62943 system_pods.go:43] waiting for kube-system pods to appear ...
	I0912 23:07:20.741219   62943 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:07:20.741275   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:07:20.778572   62943 cri.go:89] found id: "3c73944a510413add414948e1c8fd8d71ef732d91b7ddc96646b7ba376f81416"
	I0912 23:07:20.778596   62943 cri.go:89] found id: "00f124dff0f77cf23377830c9dc5e2a8676d1a40ebf70730705d537ba7a4b8d3"
	I0912 23:07:20.778600   62943 cri.go:89] found id: ""
	I0912 23:07:20.778613   62943 logs.go:276] 2 containers: [3c73944a510413add414948e1c8fd8d71ef732d91b7ddc96646b7ba376f81416 00f124dff0f77cf23377830c9dc5e2a8676d1a40ebf70730705d537ba7a4b8d3]
	I0912 23:07:20.778656   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:20.782575   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:20.786177   62943 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:07:20.786235   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:07:20.822848   62943 cri.go:89] found id: "35282e97473f2f870f5366d221b9a40e0a456b6fd44fffe321c9fe160448cc29"
	I0912 23:07:20.822869   62943 cri.go:89] found id: ""
	I0912 23:07:20.822877   62943 logs.go:276] 1 containers: [35282e97473f2f870f5366d221b9a40e0a456b6fd44fffe321c9fe160448cc29]
	I0912 23:07:20.822930   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:20.827081   62943 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:07:20.827150   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:07:20.862327   62943 cri.go:89] found id: "e59d289c9afefef6dd746277d09550e5c7d708d7c1ba9f74b2dd91300f807189"
	I0912 23:07:20.862358   62943 cri.go:89] found id: ""
	I0912 23:07:20.862369   62943 logs.go:276] 1 containers: [e59d289c9afefef6dd746277d09550e5c7d708d7c1ba9f74b2dd91300f807189]
	I0912 23:07:20.862437   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:20.866899   62943 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:07:20.866974   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:07:20.903397   62943 cri.go:89] found id: "3187fdef2bd318bb335be822a4e75cfd0ac02a49607d8e398ab18102a83437ec"
	I0912 23:07:20.903423   62943 cri.go:89] found id: ""
	I0912 23:07:20.903433   62943 logs.go:276] 1 containers: [3187fdef2bd318bb335be822a4e75cfd0ac02a49607d8e398ab18102a83437ec]
	I0912 23:07:20.903497   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:20.908223   62943 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:07:20.908322   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:07:20.961886   62943 cri.go:89] found id: "4c4807559910184befc0b6a9ccb40b1cb9d4997f6b1405b76ba67049ce730f37"
	I0912 23:07:20.961912   62943 cri.go:89] found id: ""
	I0912 23:07:20.961923   62943 logs.go:276] 1 containers: [4c4807559910184befc0b6a9ccb40b1cb9d4997f6b1405b76ba67049ce730f37]
	I0912 23:07:20.961983   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:20.965943   62943 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:07:20.966005   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:07:21.003792   62943 cri.go:89] found id: "eb473fa0b2d91e95a8716ca3d1bad44b431a3dfbfae11984657c16c7392b58f0"
	I0912 23:07:21.003818   62943 cri.go:89] found id: "635fd2c2a6dd2558d88447c857c1c59b9cf6883a458ab9f62fbcd48fc94048f7"
	I0912 23:07:21.003825   62943 cri.go:89] found id: ""
	I0912 23:07:21.003835   62943 logs.go:276] 2 containers: [eb473fa0b2d91e95a8716ca3d1bad44b431a3dfbfae11984657c16c7392b58f0 635fd2c2a6dd2558d88447c857c1c59b9cf6883a458ab9f62fbcd48fc94048f7]
	I0912 23:07:21.003892   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:21.008651   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:21.012614   62943 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:07:21.012675   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:07:21.051013   62943 cri.go:89] found id: ""
	I0912 23:07:21.051044   62943 logs.go:276] 0 containers: []
	W0912 23:07:21.051055   62943 logs.go:278] No container was found matching "kindnet"
	I0912 23:07:21.051063   62943 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0912 23:07:21.051121   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0912 23:07:21.091038   62943 cri.go:89] found id: "3d117ed77ba5f8ccc22fe496a5f719999fd9e893623ba84686634f7b92c09713"
	I0912 23:07:21.091060   62943 cri.go:89] found id: "d40483dfc659456939d7965c1ecf1113c1c38e26fe985bf19f26a0123206ea3a"
	I0912 23:07:21.091065   62943 cri.go:89] found id: ""
	I0912 23:07:21.091072   62943 logs.go:276] 2 containers: [3d117ed77ba5f8ccc22fe496a5f719999fd9e893623ba84686634f7b92c09713 d40483dfc659456939d7965c1ecf1113c1c38e26fe985bf19f26a0123206ea3a]
	I0912 23:07:21.091126   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:21.095923   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:21.100100   62943 logs.go:123] Gathering logs for dmesg ...
	I0912 23:07:21.100125   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:07:21.113873   62943 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:07:21.113906   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 23:07:21.215199   62943 logs.go:123] Gathering logs for kube-apiserver [3c73944a510413add414948e1c8fd8d71ef732d91b7ddc96646b7ba376f81416] ...
	I0912 23:07:21.215228   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c73944a510413add414948e1c8fd8d71ef732d91b7ddc96646b7ba376f81416"
	I0912 23:07:21.266873   62943 logs.go:123] Gathering logs for kube-apiserver [00f124dff0f77cf23377830c9dc5e2a8676d1a40ebf70730705d537ba7a4b8d3] ...
	I0912 23:07:21.266903   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 00f124dff0f77cf23377830c9dc5e2a8676d1a40ebf70730705d537ba7a4b8d3"
	I0912 23:07:21.307509   62943 logs.go:123] Gathering logs for storage-provisioner [3d117ed77ba5f8ccc22fe496a5f719999fd9e893623ba84686634f7b92c09713] ...
	I0912 23:07:21.307537   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d117ed77ba5f8ccc22fe496a5f719999fd9e893623ba84686634f7b92c09713"
	I0912 23:07:21.349480   62943 logs.go:123] Gathering logs for kubelet ...
	I0912 23:07:21.349505   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:07:21.428721   62943 logs.go:123] Gathering logs for kube-scheduler [3187fdef2bd318bb335be822a4e75cfd0ac02a49607d8e398ab18102a83437ec] ...
	I0912 23:07:21.428754   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3187fdef2bd318bb335be822a4e75cfd0ac02a49607d8e398ab18102a83437ec"
	I0912 23:07:21.469645   62943 logs.go:123] Gathering logs for kube-proxy [4c4807559910184befc0b6a9ccb40b1cb9d4997f6b1405b76ba67049ce730f37] ...
	I0912 23:07:21.469677   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c4807559910184befc0b6a9ccb40b1cb9d4997f6b1405b76ba67049ce730f37"
	I0912 23:07:21.517502   62943 logs.go:123] Gathering logs for kube-controller-manager [eb473fa0b2d91e95a8716ca3d1bad44b431a3dfbfae11984657c16c7392b58f0] ...
	I0912 23:07:21.517529   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb473fa0b2d91e95a8716ca3d1bad44b431a3dfbfae11984657c16c7392b58f0"
	I0912 23:07:21.582523   62943 logs.go:123] Gathering logs for coredns [e59d289c9afefef6dd746277d09550e5c7d708d7c1ba9f74b2dd91300f807189] ...
	I0912 23:07:21.582556   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e59d289c9afefef6dd746277d09550e5c7d708d7c1ba9f74b2dd91300f807189"
	I0912 23:07:21.623846   62943 logs.go:123] Gathering logs for storage-provisioner [d40483dfc659456939d7965c1ecf1113c1c38e26fe985bf19f26a0123206ea3a] ...
	I0912 23:07:21.623885   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d40483dfc659456939d7965c1ecf1113c1c38e26fe985bf19f26a0123206ea3a"
	I0912 23:07:21.670643   62943 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:07:21.670675   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:07:20.238639   61354 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0912 23:07:20.248752   61354 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0912 23:07:20.269785   61354 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0912 23:07:20.269853   61354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 23:07:20.269874   61354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-702201 minikube.k8s.io/updated_at=2024_09_12T23_07_20_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f6bc674a17941874d4e5b792b09c1791d30622b8 minikube.k8s.io/name=default-k8s-diff-port-702201 minikube.k8s.io/primary=true
	I0912 23:07:20.296361   61354 ops.go:34] apiserver oom_adj: -16
	I0912 23:07:20.492168   61354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 23:07:20.992549   61354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 23:07:21.492765   61354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 23:07:21.992850   61354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 23:07:22.492720   61354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 23:07:22.993154   61354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 23:07:23.493116   61354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 23:07:23.992629   61354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 23:07:24.077486   61354 kubeadm.go:1113] duration metric: took 3.807690368s to wait for elevateKubeSystemPrivileges
	I0912 23:07:24.077525   61354 kubeadm.go:394] duration metric: took 4m59.803121736s to StartCluster
	I0912 23:07:24.077547   61354 settings.go:142] acquiring lock: {Name:mk9c957feafb8d7ccd833ad0c106ef81ecfe5ba8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 23:07:24.077652   61354 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19616-5891/kubeconfig
	I0912 23:07:24.080127   61354 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/kubeconfig: {Name:mkffb46c3e9d2b8baebc7237b48bf41bccf1a52c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 23:07:24.080453   61354 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.214 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0912 23:07:24.080486   61354 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0912 23:07:24.080582   61354 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-702201"
	I0912 23:07:24.080556   61354 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-702201"
	I0912 23:07:24.080594   61354 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-702201"
	I0912 23:07:24.080627   61354 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-702201"
	I0912 23:07:24.080650   61354 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-702201"
	W0912 23:07:24.080659   61354 addons.go:243] addon metrics-server should already be in state true
	I0912 23:07:24.080664   61354 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-702201"
	I0912 23:07:24.080691   61354 host.go:66] Checking if "default-k8s-diff-port-702201" exists ...
	I0912 23:07:24.080668   61354 config.go:182] Loaded profile config "default-k8s-diff-port-702201": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	W0912 23:07:24.080691   61354 addons.go:243] addon storage-provisioner should already be in state true
	I0912 23:07:24.080830   61354 host.go:66] Checking if "default-k8s-diff-port-702201" exists ...
	I0912 23:07:24.081061   61354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:07:24.081060   61354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:07:24.081101   61354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:07:24.081144   61354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:07:24.081188   61354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:07:24.081214   61354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:07:24.081973   61354 out.go:177] * Verifying Kubernetes components...
	I0912 23:07:24.083133   61354 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 23:07:24.097005   61354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46703
	I0912 23:07:24.097025   61354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36033
	I0912 23:07:24.097096   61354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41949
	I0912 23:07:24.097438   61354 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:07:24.097464   61354 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:07:24.097525   61354 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:07:24.097994   61354 main.go:141] libmachine: Using API Version  1
	I0912 23:07:24.098015   61354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:07:24.098141   61354 main.go:141] libmachine: Using API Version  1
	I0912 23:07:24.098165   61354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:07:24.098290   61354 main.go:141] libmachine: Using API Version  1
	I0912 23:07:24.098309   61354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:07:24.098399   61354 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:07:24.098545   61354 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:07:24.098726   61354 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:07:24.098731   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetState
	I0912 23:07:24.098994   61354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:07:24.099040   61354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:07:24.099251   61354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:07:24.099283   61354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:07:24.102412   61354 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-702201"
	W0912 23:07:24.102432   61354 addons.go:243] addon default-storageclass should already be in state true
	I0912 23:07:24.102459   61354 host.go:66] Checking if "default-k8s-diff-port-702201" exists ...
	I0912 23:07:24.102797   61354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:07:24.102835   61354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:07:24.117429   61354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46351
	I0912 23:07:24.117980   61354 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:07:24.118513   61354 main.go:141] libmachine: Using API Version  1
	I0912 23:07:24.118533   61354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:07:24.119059   61354 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:07:24.119577   61354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35337
	I0912 23:07:24.119621   61354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:07:24.119656   61354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:07:24.119717   61354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41229
	I0912 23:07:24.120047   61354 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:07:24.120129   61354 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:07:24.120532   61354 main.go:141] libmachine: Using API Version  1
	I0912 23:07:24.120553   61354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:07:24.120810   61354 main.go:141] libmachine: Using API Version  1
	I0912 23:07:24.120834   61354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:07:24.121017   61354 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:07:24.121201   61354 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:07:24.121216   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetState
	I0912 23:07:24.121347   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetState
	I0912 23:07:24.123069   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .DriverName
	I0912 23:07:24.123254   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .DriverName
	I0912 23:07:24.125055   61354 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 23:07:24.125065   61354 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0912 23:07:22.059555   62943 logs.go:123] Gathering logs for container status ...
	I0912 23:07:22.059602   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:07:22.104001   62943 logs.go:123] Gathering logs for etcd [35282e97473f2f870f5366d221b9a40e0a456b6fd44fffe321c9fe160448cc29] ...
	I0912 23:07:22.104039   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35282e97473f2f870f5366d221b9a40e0a456b6fd44fffe321c9fe160448cc29"
	I0912 23:07:22.146304   62943 logs.go:123] Gathering logs for kube-controller-manager [635fd2c2a6dd2558d88447c857c1c59b9cf6883a458ab9f62fbcd48fc94048f7] ...
	I0912 23:07:22.146342   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 635fd2c2a6dd2558d88447c857c1c59b9cf6883a458ab9f62fbcd48fc94048f7"
	I0912 23:07:24.689925   62943 system_pods.go:59] 8 kube-system pods found
	I0912 23:07:24.689959   62943 system_pods.go:61] "coredns-7c65d6cfc9-twck7" [2fb00aff-8a30-4634-a804-1419eabfe727] Running
	I0912 23:07:24.689967   62943 system_pods.go:61] "etcd-no-preload-380092" [69b6be54-dd29-47c7-b990-a64335dd6d7b] Running
	I0912 23:07:24.689974   62943 system_pods.go:61] "kube-apiserver-no-preload-380092" [10ff70db-3c74-42ad-841d-d2241de4b98e] Running
	I0912 23:07:24.689980   62943 system_pods.go:61] "kube-controller-manager-no-preload-380092" [6e91c5b2-36fc-404e-9f09-c1bc9da46774] Running
	I0912 23:07:24.689987   62943 system_pods.go:61] "kube-proxy-z4rcx" [d17caa2e-d0fe-45e8-a96c-d1cc1b55e665] Running
	I0912 23:07:24.689992   62943 system_pods.go:61] "kube-scheduler-no-preload-380092" [5c634cac-6b28-4757-ba85-891c4c2fa34e] Running
	I0912 23:07:24.690002   62943 system_pods.go:61] "metrics-server-6867b74b74-4v7f5" [10c8c536-9ca6-4e75-96f2-7324f3d3d379] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0912 23:07:24.690009   62943 system_pods.go:61] "storage-provisioner" [f173a1f6-3772-4f08-8e40-2215cc9d2878] Running
	I0912 23:07:24.690020   62943 system_pods.go:74] duration metric: took 3.948819191s to wait for pod list to return data ...
	I0912 23:07:24.690031   62943 default_sa.go:34] waiting for default service account to be created ...
	I0912 23:07:24.692936   62943 default_sa.go:45] found service account: "default"
	I0912 23:07:24.692964   62943 default_sa.go:55] duration metric: took 2.925808ms for default service account to be created ...
	I0912 23:07:24.692975   62943 system_pods.go:116] waiting for k8s-apps to be running ...
	I0912 23:07:24.699123   62943 system_pods.go:86] 8 kube-system pods found
	I0912 23:07:24.699155   62943 system_pods.go:89] "coredns-7c65d6cfc9-twck7" [2fb00aff-8a30-4634-a804-1419eabfe727] Running
	I0912 23:07:24.699164   62943 system_pods.go:89] "etcd-no-preload-380092" [69b6be54-dd29-47c7-b990-a64335dd6d7b] Running
	I0912 23:07:24.699170   62943 system_pods.go:89] "kube-apiserver-no-preload-380092" [10ff70db-3c74-42ad-841d-d2241de4b98e] Running
	I0912 23:07:24.699176   62943 system_pods.go:89] "kube-controller-manager-no-preload-380092" [6e91c5b2-36fc-404e-9f09-c1bc9da46774] Running
	I0912 23:07:24.699182   62943 system_pods.go:89] "kube-proxy-z4rcx" [d17caa2e-d0fe-45e8-a96c-d1cc1b55e665] Running
	I0912 23:07:24.699187   62943 system_pods.go:89] "kube-scheduler-no-preload-380092" [5c634cac-6b28-4757-ba85-891c4c2fa34e] Running
	I0912 23:07:24.699197   62943 system_pods.go:89] "metrics-server-6867b74b74-4v7f5" [10c8c536-9ca6-4e75-96f2-7324f3d3d379] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0912 23:07:24.699206   62943 system_pods.go:89] "storage-provisioner" [f173a1f6-3772-4f08-8e40-2215cc9d2878] Running
	I0912 23:07:24.699220   62943 system_pods.go:126] duration metric: took 6.23727ms to wait for k8s-apps to be running ...
	I0912 23:07:24.699232   62943 system_svc.go:44] waiting for kubelet service to be running ....
	I0912 23:07:24.699281   62943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 23:07:24.716425   62943 system_svc.go:56] duration metric: took 17.184595ms WaitForService to wait for kubelet
	I0912 23:07:24.716456   62943 kubeadm.go:582] duration metric: took 4m22.909700986s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 23:07:24.716480   62943 node_conditions.go:102] verifying NodePressure condition ...
	I0912 23:07:24.719606   62943 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0912 23:07:24.719632   62943 node_conditions.go:123] node cpu capacity is 2
	I0912 23:07:24.719645   62943 node_conditions.go:105] duration metric: took 3.158655ms to run NodePressure ...
	I0912 23:07:24.719660   62943 start.go:241] waiting for startup goroutines ...
	I0912 23:07:24.719669   62943 start.go:246] waiting for cluster config update ...
	I0912 23:07:24.719683   62943 start.go:255] writing updated cluster config ...
	I0912 23:07:24.719959   62943 ssh_runner.go:195] Run: rm -f paused
	I0912 23:07:24.782144   62943 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0912 23:07:24.783614   62943 out.go:177] * Done! kubectl is now configured to use "no-preload-380092" cluster and "default" namespace by default
	I0912 23:07:24.126360   61354 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0912 23:07:24.126378   61354 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0912 23:07:24.126401   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHHostname
	I0912 23:07:24.126445   61354 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0912 23:07:24.126458   61354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0912 23:07:24.126472   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHHostname
	I0912 23:07:24.130177   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:07:24.130678   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:07:24.130719   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:07:24.130730   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:07:24.130919   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:07:24.130949   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:07:24.131134   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHPort
	I0912 23:07:24.131203   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHPort
	I0912 23:07:24.131447   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHKeyPath
	I0912 23:07:24.131494   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHKeyPath
	I0912 23:07:24.131659   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHUsername
	I0912 23:07:24.131677   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHUsername
	I0912 23:07:24.131817   61354 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/default-k8s-diff-port-702201/id_rsa Username:docker}
	I0912 23:07:24.131857   61354 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/default-k8s-diff-port-702201/id_rsa Username:docker}
	I0912 23:07:24.139030   61354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35007
	I0912 23:07:24.139501   61354 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:07:24.139949   61354 main.go:141] libmachine: Using API Version  1
	I0912 23:07:24.139973   61354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:07:24.140287   61354 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:07:24.140441   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetState
	I0912 23:07:24.141751   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .DriverName
	I0912 23:07:24.141942   61354 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0912 23:07:24.141957   61354 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0912 23:07:24.141977   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHHostname
	I0912 23:07:24.144033   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:07:24.144415   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:07:24.144563   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHPort
	I0912 23:07:24.144623   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:07:24.144723   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHKeyPath
	I0912 23:07:24.145002   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHUsername
	I0912 23:07:24.145132   61354 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/default-k8s-diff-port-702201/id_rsa Username:docker}
	I0912 23:07:24.279582   61354 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0912 23:07:24.294072   61354 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-702201" to be "Ready" ...
	I0912 23:07:24.304565   61354 node_ready.go:49] node "default-k8s-diff-port-702201" has status "Ready":"True"
	I0912 23:07:24.304588   61354 node_ready.go:38] duration metric: took 10.479351ms for node "default-k8s-diff-port-702201" to be "Ready" ...
	I0912 23:07:24.304599   61354 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 23:07:24.310618   61354 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-702201" in "kube-system" namespace to be "Ready" ...
	I0912 23:07:24.359086   61354 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0912 23:07:24.390490   61354 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0912 23:07:24.409964   61354 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0912 23:07:24.409990   61354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0912 23:07:24.445852   61354 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0912 23:07:24.445880   61354 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0912 23:07:24.502567   61354 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0912 23:07:24.502591   61354 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0912 23:07:24.578857   61354 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0912 23:07:25.348387   61354 main.go:141] libmachine: Making call to close driver server
	I0912 23:07:25.348415   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .Close
	I0912 23:07:25.348715   61354 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:07:25.348732   61354 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:07:25.348740   61354 main.go:141] libmachine: Making call to close driver server
	I0912 23:07:25.348748   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .Close
	I0912 23:07:25.348766   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | Closing plugin on server side
	I0912 23:07:25.348869   61354 main.go:141] libmachine: Making call to close driver server
	I0912 23:07:25.348880   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .Close
	I0912 23:07:25.349007   61354 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:07:25.349022   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | Closing plugin on server side
	I0912 23:07:25.349026   61354 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:07:25.349181   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | Closing plugin on server side
	I0912 23:07:25.349209   61354 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:07:25.349216   61354 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:07:25.349224   61354 main.go:141] libmachine: Making call to close driver server
	I0912 23:07:25.349231   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .Close
	I0912 23:07:25.349497   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | Closing plugin on server side
	I0912 23:07:25.349513   61354 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:07:25.349520   61354 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:07:25.377320   61354 main.go:141] libmachine: Making call to close driver server
	I0912 23:07:25.377345   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .Close
	I0912 23:07:25.377662   61354 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:07:25.377683   61354 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:07:25.377685   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | Closing plugin on server side
	I0912 23:07:25.851960   61354 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.273059994s)
	I0912 23:07:25.852019   61354 main.go:141] libmachine: Making call to close driver server
	I0912 23:07:25.852037   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .Close
	I0912 23:07:25.852373   61354 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:07:25.852398   61354 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:07:25.852408   61354 main.go:141] libmachine: Making call to close driver server
	I0912 23:07:25.852417   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .Close
	I0912 23:07:25.852671   61354 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:07:25.852690   61354 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:07:25.852701   61354 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-702201"
	I0912 23:07:25.854523   61354 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0912 23:07:25.855764   61354 addons.go:510] duration metric: took 1.775274823s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0912 23:07:26.343219   61354 pod_ready.go:103] pod "etcd-default-k8s-diff-port-702201" in "kube-system" namespace has status "Ready":"False"
	I0912 23:07:26.817338   61354 pod_ready.go:93] pod "etcd-default-k8s-diff-port-702201" in "kube-system" namespace has status "Ready":"True"
	I0912 23:07:26.817361   61354 pod_ready.go:82] duration metric: took 2.506720235s for pod "etcd-default-k8s-diff-port-702201" in "kube-system" namespace to be "Ready" ...
	I0912 23:07:26.817371   61354 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-702201" in "kube-system" namespace to be "Ready" ...
	I0912 23:07:28.823968   61354 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-702201" in "kube-system" namespace has status "Ready":"False"
	I0912 23:07:31.324504   61354 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-702201" in "kube-system" namespace has status "Ready":"False"
	I0912 23:07:33.824198   61354 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-702201" in "kube-system" namespace has status "Ready":"True"
	I0912 23:07:33.824218   61354 pod_ready.go:82] duration metric: took 7.006841754s for pod "kube-apiserver-default-k8s-diff-port-702201" in "kube-system" namespace to be "Ready" ...
	I0912 23:07:33.824228   61354 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-702201" in "kube-system" namespace to be "Ready" ...
	I0912 23:07:33.829882   61354 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-702201" in "kube-system" namespace has status "Ready":"True"
	I0912 23:07:33.829903   61354 pod_ready.go:82] duration metric: took 5.668963ms for pod "kube-controller-manager-default-k8s-diff-port-702201" in "kube-system" namespace to be "Ready" ...
	I0912 23:07:33.829912   61354 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-702201" in "kube-system" namespace to be "Ready" ...
	I0912 23:07:33.834773   61354 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-702201" in "kube-system" namespace has status "Ready":"True"
	I0912 23:07:33.834796   61354 pod_ready.go:82] duration metric: took 4.8776ms for pod "kube-scheduler-default-k8s-diff-port-702201" in "kube-system" namespace to be "Ready" ...
	I0912 23:07:33.834805   61354 pod_ready.go:39] duration metric: took 9.530195098s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 23:07:33.834819   61354 api_server.go:52] waiting for apiserver process to appear ...
	I0912 23:07:33.834864   61354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:07:33.850650   61354 api_server.go:72] duration metric: took 9.770155376s to wait for apiserver process to appear ...
	I0912 23:07:33.850671   61354 api_server.go:88] waiting for apiserver healthz status ...
	I0912 23:07:33.850686   61354 api_server.go:253] Checking apiserver healthz at https://192.168.39.214:8444/healthz ...
	I0912 23:07:33.855112   61354 api_server.go:279] https://192.168.39.214:8444/healthz returned 200:
	ok
	I0912 23:07:33.856195   61354 api_server.go:141] control plane version: v1.31.1
	I0912 23:07:33.856213   61354 api_server.go:131] duration metric: took 5.535983ms to wait for apiserver health ...
	I0912 23:07:33.856220   61354 system_pods.go:43] waiting for kube-system pods to appear ...
	I0912 23:07:33.861385   61354 system_pods.go:59] 9 kube-system pods found
	I0912 23:07:33.861415   61354 system_pods.go:61] "coredns-7c65d6cfc9-f5spz" [6a0f69e9-66eb-4e59-a173-1d6f638e2211] Running
	I0912 23:07:33.861422   61354 system_pods.go:61] "coredns-7c65d6cfc9-qhbgf" [0af4199f-b09c-4ab8-8170-b8941d3ece7a] Running
	I0912 23:07:33.861429   61354 system_pods.go:61] "etcd-default-k8s-diff-port-702201" [d8d2e9bb-c8de-4aac-9373-ac9b6d3ec96a] Running
	I0912 23:07:33.861435   61354 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-702201" [7c26cd67-e192-4e8c-a3e1-e7e76a87fae4] Running
	I0912 23:07:33.861440   61354 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-702201" [53553f06-02d5-4603-8418-6bf2ff7b6a25] Running
	I0912 23:07:33.861451   61354 system_pods.go:61] "kube-proxy-mv8ws" [51cb20c3-8445-4ce9-8484-5138f3d0ed57] Running
	I0912 23:07:33.861457   61354 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-702201" [cc25c635-37f2-4186-b5ea-958e95fc4ab2] Running
	I0912 23:07:33.861466   61354 system_pods.go:61] "metrics-server-6867b74b74-w2dvn" [778a4742-5b80-4485-956e-8f169e6dcf8f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0912 23:07:33.861476   61354 system_pods.go:61] "storage-provisioner" [66bc6f77-b774-4478-80d0-a1027802e179] Running
	I0912 23:07:33.861486   61354 system_pods.go:74] duration metric: took 5.260046ms to wait for pod list to return data ...
	I0912 23:07:33.861497   61354 default_sa.go:34] waiting for default service account to be created ...
	I0912 23:07:33.864254   61354 default_sa.go:45] found service account: "default"
	I0912 23:07:33.864272   61354 default_sa.go:55] duration metric: took 2.766344ms for default service account to be created ...
	I0912 23:07:33.864280   61354 system_pods.go:116] waiting for k8s-apps to be running ...
	I0912 23:07:33.869281   61354 system_pods.go:86] 9 kube-system pods found
	I0912 23:07:33.869310   61354 system_pods.go:89] "coredns-7c65d6cfc9-f5spz" [6a0f69e9-66eb-4e59-a173-1d6f638e2211] Running
	I0912 23:07:33.869315   61354 system_pods.go:89] "coredns-7c65d6cfc9-qhbgf" [0af4199f-b09c-4ab8-8170-b8941d3ece7a] Running
	I0912 23:07:33.869320   61354 system_pods.go:89] "etcd-default-k8s-diff-port-702201" [d8d2e9bb-c8de-4aac-9373-ac9b6d3ec96a] Running
	I0912 23:07:33.869324   61354 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-702201" [7c26cd67-e192-4e8c-a3e1-e7e76a87fae4] Running
	I0912 23:07:33.869328   61354 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-702201" [53553f06-02d5-4603-8418-6bf2ff7b6a25] Running
	I0912 23:07:33.869332   61354 system_pods.go:89] "kube-proxy-mv8ws" [51cb20c3-8445-4ce9-8484-5138f3d0ed57] Running
	I0912 23:07:33.869335   61354 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-702201" [cc25c635-37f2-4186-b5ea-958e95fc4ab2] Running
	I0912 23:07:33.869341   61354 system_pods.go:89] "metrics-server-6867b74b74-w2dvn" [778a4742-5b80-4485-956e-8f169e6dcf8f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0912 23:07:33.869349   61354 system_pods.go:89] "storage-provisioner" [66bc6f77-b774-4478-80d0-a1027802e179] Running
	I0912 23:07:33.869362   61354 system_pods.go:126] duration metric: took 5.073128ms to wait for k8s-apps to be running ...
	I0912 23:07:33.869371   61354 system_svc.go:44] waiting for kubelet service to be running ....
	I0912 23:07:33.869410   61354 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 23:07:33.885244   61354 system_svc.go:56] duration metric: took 15.863852ms WaitForService to wait for kubelet
	I0912 23:07:33.885284   61354 kubeadm.go:582] duration metric: took 9.804792247s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 23:07:33.885302   61354 node_conditions.go:102] verifying NodePressure condition ...
	I0912 23:07:33.889009   61354 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0912 23:07:33.889041   61354 node_conditions.go:123] node cpu capacity is 2
	I0912 23:07:33.889054   61354 node_conditions.go:105] duration metric: took 3.746289ms to run NodePressure ...
	I0912 23:07:33.889069   61354 start.go:241] waiting for startup goroutines ...
	I0912 23:07:33.889079   61354 start.go:246] waiting for cluster config update ...
	I0912 23:07:33.889092   61354 start.go:255] writing updated cluster config ...
	I0912 23:07:33.889427   61354 ssh_runner.go:195] Run: rm -f paused
	I0912 23:07:33.940577   61354 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0912 23:07:33.942471   61354 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-702201" cluster and "default" namespace by default
	I0912 23:07:47.603025   62386 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0912 23:07:47.603235   62386 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0912 23:07:47.604779   62386 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0912 23:07:47.604883   62386 kubeadm.go:310] [preflight] Running pre-flight checks
	I0912 23:07:47.605084   62386 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0912 23:07:47.605337   62386 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0912 23:07:47.605566   62386 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0912 23:07:47.605831   62386 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0912 23:07:47.607788   62386 out.go:235]   - Generating certificates and keys ...
	I0912 23:07:47.607900   62386 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0912 23:07:47.608013   62386 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0912 23:07:47.608164   62386 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0912 23:07:47.608343   62386 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0912 23:07:47.608510   62386 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0912 23:07:47.608593   62386 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0912 23:07:47.608669   62386 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0912 23:07:47.608742   62386 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0912 23:07:47.608833   62386 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0912 23:07:47.608899   62386 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0912 23:07:47.608932   62386 kubeadm.go:310] [certs] Using the existing "sa" key
	I0912 23:07:47.608991   62386 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0912 23:07:47.609042   62386 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0912 23:07:47.609118   62386 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0912 23:07:47.609216   62386 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0912 23:07:47.609310   62386 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0912 23:07:47.609448   62386 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0912 23:07:47.609540   62386 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0912 23:07:47.609604   62386 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0912 23:07:47.609731   62386 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0912 23:07:47.611516   62386 out.go:235]   - Booting up control plane ...
	I0912 23:07:47.611622   62386 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0912 23:07:47.611724   62386 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0912 23:07:47.611811   62386 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0912 23:07:47.611912   62386 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0912 23:07:47.612092   62386 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0912 23:07:47.612156   62386 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0912 23:07:47.612234   62386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0912 23:07:47.612485   62386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0912 23:07:47.612557   62386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0912 23:07:47.612746   62386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0912 23:07:47.612836   62386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0912 23:07:47.613060   62386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0912 23:07:47.613145   62386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0912 23:07:47.613347   62386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0912 23:07:47.613406   62386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0912 23:07:47.613573   62386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0912 23:07:47.613583   62386 kubeadm.go:310] 
	I0912 23:07:47.613646   62386 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0912 23:07:47.613700   62386 kubeadm.go:310] 		timed out waiting for the condition
	I0912 23:07:47.613712   62386 kubeadm.go:310] 
	I0912 23:07:47.613756   62386 kubeadm.go:310] 	This error is likely caused by:
	I0912 23:07:47.613804   62386 kubeadm.go:310] 		- The kubelet is not running
	I0912 23:07:47.613912   62386 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0912 23:07:47.613924   62386 kubeadm.go:310] 
	I0912 23:07:47.614027   62386 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0912 23:07:47.614062   62386 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0912 23:07:47.614110   62386 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0912 23:07:47.614123   62386 kubeadm.go:310] 
	I0912 23:07:47.614256   62386 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0912 23:07:47.614381   62386 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0912 23:07:47.614393   62386 kubeadm.go:310] 
	I0912 23:07:47.614480   62386 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0912 23:07:47.614626   62386 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0912 23:07:47.614724   62386 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0912 23:07:47.614825   62386 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0912 23:07:47.614854   62386 kubeadm.go:310] 
	W0912 23:07:47.614957   62386 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0912 23:07:47.615000   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0912 23:07:48.085695   62386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 23:07:48.100416   62386 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0912 23:07:48.109607   62386 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0912 23:07:48.109635   62386 kubeadm.go:157] found existing configuration files:
	
	I0912 23:07:48.109686   62386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0912 23:07:48.118174   62386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0912 23:07:48.118235   62386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0912 23:07:48.127100   62386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0912 23:07:48.135945   62386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0912 23:07:48.136006   62386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0912 23:07:48.145057   62386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0912 23:07:48.153832   62386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0912 23:07:48.153899   62386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0912 23:07:48.163261   62386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0912 23:07:48.172155   62386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0912 23:07:48.172208   62386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0912 23:07:48.181592   62386 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0912 23:07:48.253671   62386 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0912 23:07:48.253728   62386 kubeadm.go:310] [preflight] Running pre-flight checks
	I0912 23:07:48.394463   62386 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0912 23:07:48.394622   62386 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0912 23:07:48.394773   62386 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0912 23:07:48.581336   62386 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0912 23:07:48.583286   62386 out.go:235]   - Generating certificates and keys ...
	I0912 23:07:48.583391   62386 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0912 23:07:48.583461   62386 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0912 23:07:48.583576   62386 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0912 23:07:48.583668   62386 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0912 23:07:48.583751   62386 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0912 23:07:48.583830   62386 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0912 23:07:48.583935   62386 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0912 23:07:48.584060   62386 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0912 23:07:48.584176   62386 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0912 23:07:48.584291   62386 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0912 23:07:48.584349   62386 kubeadm.go:310] [certs] Using the existing "sa" key
	I0912 23:07:48.584433   62386 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0912 23:07:48.823726   62386 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0912 23:07:49.148359   62386 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0912 23:07:49.679842   62386 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0912 23:07:50.116403   62386 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0912 23:07:50.137409   62386 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0912 23:07:50.137512   62386 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0912 23:07:50.137586   62386 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0912 23:07:50.279387   62386 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0912 23:07:50.281202   62386 out.go:235]   - Booting up control plane ...
	I0912 23:07:50.281311   62386 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0912 23:07:50.284914   62386 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0912 23:07:50.285938   62386 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0912 23:07:50.286646   62386 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0912 23:07:50.288744   62386 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0912 23:08:30.291301   62386 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0912 23:08:30.291387   62386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0912 23:08:30.291586   62386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0912 23:08:35.292084   62386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0912 23:08:35.292299   62386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0912 23:08:45.293141   62386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0912 23:08:45.293363   62386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0912 23:09:05.293977   62386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0912 23:09:05.294218   62386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0912 23:09:45.292498   62386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0912 23:09:45.292713   62386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0912 23:09:45.292752   62386 kubeadm.go:310] 
	I0912 23:09:45.292839   62386 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0912 23:09:45.292884   62386 kubeadm.go:310] 		timed out waiting for the condition
	I0912 23:09:45.292892   62386 kubeadm.go:310] 
	I0912 23:09:45.292944   62386 kubeadm.go:310] 	This error is likely caused by:
	I0912 23:09:45.292998   62386 kubeadm.go:310] 		- The kubelet is not running
	I0912 23:09:45.293153   62386 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0912 23:09:45.293165   62386 kubeadm.go:310] 
	I0912 23:09:45.293277   62386 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0912 23:09:45.293333   62386 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0912 23:09:45.293361   62386 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0912 23:09:45.293378   62386 kubeadm.go:310] 
	I0912 23:09:45.293528   62386 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0912 23:09:45.293668   62386 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0912 23:09:45.293679   62386 kubeadm.go:310] 
	I0912 23:09:45.293840   62386 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0912 23:09:45.293962   62386 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0912 23:09:45.294033   62386 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0912 23:09:45.294142   62386 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0912 23:09:45.294155   62386 kubeadm.go:310] 
	I0912 23:09:45.294801   62386 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0912 23:09:45.294914   62386 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0912 23:09:45.295004   62386 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0912 23:09:45.295097   62386 kubeadm.go:394] duration metric: took 7m57.408601522s to StartCluster
	I0912 23:09:45.295168   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:09:45.295233   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:09:45.336726   62386 cri.go:89] found id: ""
	I0912 23:09:45.336767   62386 logs.go:276] 0 containers: []
	W0912 23:09:45.336777   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:09:45.336785   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:09:45.336847   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:09:45.374528   62386 cri.go:89] found id: ""
	I0912 23:09:45.374555   62386 logs.go:276] 0 containers: []
	W0912 23:09:45.374576   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:09:45.374584   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:09:45.374649   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:09:45.409321   62386 cri.go:89] found id: ""
	I0912 23:09:45.409462   62386 logs.go:276] 0 containers: []
	W0912 23:09:45.409497   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:09:45.409508   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:09:45.409582   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:09:45.442204   62386 cri.go:89] found id: ""
	I0912 23:09:45.442228   62386 logs.go:276] 0 containers: []
	W0912 23:09:45.442238   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:09:45.442279   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:09:45.442339   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:09:45.478874   62386 cri.go:89] found id: ""
	I0912 23:09:45.478897   62386 logs.go:276] 0 containers: []
	W0912 23:09:45.478904   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:09:45.478909   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:09:45.478961   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:09:45.520162   62386 cri.go:89] found id: ""
	I0912 23:09:45.520191   62386 logs.go:276] 0 containers: []
	W0912 23:09:45.520199   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:09:45.520205   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:09:45.520251   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:09:45.551580   62386 cri.go:89] found id: ""
	I0912 23:09:45.551611   62386 logs.go:276] 0 containers: []
	W0912 23:09:45.551622   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:09:45.551629   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:09:45.551693   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:09:45.585468   62386 cri.go:89] found id: ""
	I0912 23:09:45.585498   62386 logs.go:276] 0 containers: []
	W0912 23:09:45.585505   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:09:45.585514   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:09:45.585525   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:09:45.640731   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:09:45.640782   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:09:45.656797   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:09:45.656833   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:09:45.735064   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:09:45.735083   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:09:45.735100   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:09:45.848695   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:09:45.848739   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0912 23:09:45.907495   62386 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0912 23:09:45.907561   62386 out.go:270] * 
	W0912 23:09:45.907628   62386 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0912 23:09:45.907646   62386 out.go:270] * 
	W0912 23:09:45.908494   62386 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0912 23:09:45.911502   62386 out.go:201] 
	W0912 23:09:45.912387   62386 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0912 23:09:45.912424   62386 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0912 23:09:45.912442   62386 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0912 23:09:45.913632   62386 out.go:201] 
	
	
	==> CRI-O <==
	Sep 12 23:09:47 old-k8s-version-642238 crio[632]: time="2024-09-12 23:09:47.791575179Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726182587791553307,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d966dba5-18bd-47b6-a2cb-28d2eb805719 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 23:09:47 old-k8s-version-642238 crio[632]: time="2024-09-12 23:09:47.792326958Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=246589c0-4fd1-4a20-8725-e573155b762a name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 23:09:47 old-k8s-version-642238 crio[632]: time="2024-09-12 23:09:47.792376054Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=246589c0-4fd1-4a20-8725-e573155b762a name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 23:09:47 old-k8s-version-642238 crio[632]: time="2024-09-12 23:09:47.792407079Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=246589c0-4fd1-4a20-8725-e573155b762a name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 23:09:47 old-k8s-version-642238 crio[632]: time="2024-09-12 23:09:47.822914571Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=18fae5b2-9063-45d0-bd5c-4a9709e976be name=/runtime.v1.RuntimeService/Version
	Sep 12 23:09:47 old-k8s-version-642238 crio[632]: time="2024-09-12 23:09:47.822981982Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=18fae5b2-9063-45d0-bd5c-4a9709e976be name=/runtime.v1.RuntimeService/Version
	Sep 12 23:09:47 old-k8s-version-642238 crio[632]: time="2024-09-12 23:09:47.824060621Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6ceeecc7-b998-44d9-a099-55a2eab7c0fb name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 23:09:47 old-k8s-version-642238 crio[632]: time="2024-09-12 23:09:47.824512599Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726182587824489791,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6ceeecc7-b998-44d9-a099-55a2eab7c0fb name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 23:09:47 old-k8s-version-642238 crio[632]: time="2024-09-12 23:09:47.825120020Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=96f63364-8daf-4a07-a5da-137419c2cb9b name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 23:09:47 old-k8s-version-642238 crio[632]: time="2024-09-12 23:09:47.825211852Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=96f63364-8daf-4a07-a5da-137419c2cb9b name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 23:09:47 old-k8s-version-642238 crio[632]: time="2024-09-12 23:09:47.825246002Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=96f63364-8daf-4a07-a5da-137419c2cb9b name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 23:09:47 old-k8s-version-642238 crio[632]: time="2024-09-12 23:09:47.855133958Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=16a687c5-c394-4af8-bc7e-7c5fb2b28720 name=/runtime.v1.RuntimeService/Version
	Sep 12 23:09:47 old-k8s-version-642238 crio[632]: time="2024-09-12 23:09:47.855256796Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=16a687c5-c394-4af8-bc7e-7c5fb2b28720 name=/runtime.v1.RuntimeService/Version
	Sep 12 23:09:47 old-k8s-version-642238 crio[632]: time="2024-09-12 23:09:47.856303022Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=15ec01de-86f4-4e32-b6e3-fdc25b5838fc name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 23:09:47 old-k8s-version-642238 crio[632]: time="2024-09-12 23:09:47.856656211Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726182587856635788,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=15ec01de-86f4-4e32-b6e3-fdc25b5838fc name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 23:09:47 old-k8s-version-642238 crio[632]: time="2024-09-12 23:09:47.857129427Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=aa07e0b9-0645-4829-966f-9cfd13b50ead name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 23:09:47 old-k8s-version-642238 crio[632]: time="2024-09-12 23:09:47.857254239Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=aa07e0b9-0645-4829-966f-9cfd13b50ead name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 23:09:47 old-k8s-version-642238 crio[632]: time="2024-09-12 23:09:47.857316301Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=aa07e0b9-0645-4829-966f-9cfd13b50ead name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 23:09:47 old-k8s-version-642238 crio[632]: time="2024-09-12 23:09:47.888151312Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=892f77ae-715c-41f1-a00f-97db10739526 name=/runtime.v1.RuntimeService/Version
	Sep 12 23:09:47 old-k8s-version-642238 crio[632]: time="2024-09-12 23:09:47.888279671Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=892f77ae-715c-41f1-a00f-97db10739526 name=/runtime.v1.RuntimeService/Version
	Sep 12 23:09:47 old-k8s-version-642238 crio[632]: time="2024-09-12 23:09:47.889284093Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ab51834a-35cf-4a99-a5f5-5b04a6ed0e2c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 23:09:47 old-k8s-version-642238 crio[632]: time="2024-09-12 23:09:47.889632890Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726182587889613018,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ab51834a-35cf-4a99-a5f5-5b04a6ed0e2c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 23:09:47 old-k8s-version-642238 crio[632]: time="2024-09-12 23:09:47.890039048Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=68451e14-0152-4af3-90d6-a9fd5b7e5bd5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 23:09:47 old-k8s-version-642238 crio[632]: time="2024-09-12 23:09:47.890086896Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=68451e14-0152-4af3-90d6-a9fd5b7e5bd5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 23:09:47 old-k8s-version-642238 crio[632]: time="2024-09-12 23:09:47.890116191Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=68451e14-0152-4af3-90d6-a9fd5b7e5bd5 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Sep12 23:01] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050669] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039909] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.881907] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.909528] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.539678] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.094180] systemd-fstab-generator[560]: Ignoring "noauto" option for root device
	[  +0.073198] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.070849] systemd-fstab-generator[572]: Ignoring "noauto" option for root device
	[  +0.223496] systemd-fstab-generator[586]: Ignoring "noauto" option for root device
	[  +0.134982] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.261562] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +6.482703] systemd-fstab-generator[882]: Ignoring "noauto" option for root device
	[  +0.067645] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.600190] systemd-fstab-generator[1006]: Ignoring "noauto" option for root device
	[Sep12 23:02] kauditd_printk_skb: 46 callbacks suppressed
	[Sep12 23:05] systemd-fstab-generator[5025]: Ignoring "noauto" option for root device
	[Sep12 23:07] systemd-fstab-generator[5303]: Ignoring "noauto" option for root device
	[  +0.064469] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 23:09:48 up 8 min,  0 users,  load average: 0.08, 0.10, 0.07
	Linux old-k8s-version-642238 5.10.207 #1 SMP Thu Sep 12 19:03:33 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Sep 12 23:09:45 old-k8s-version-642238 kubelet[5483]:         /usr/local/go/src/net/lookup.go:299 +0x685
	Sep 12 23:09:45 old-k8s-version-642238 kubelet[5483]: net.(*Resolver).internetAddrList(0x70c5740, 0x4f7fe40, 0xc0001efb60, 0x48ab5d6, 0x3, 0xc00049cd50, 0x24, 0x0, 0x0, 0x0, ...)
	Sep 12 23:09:45 old-k8s-version-642238 kubelet[5483]:         /usr/local/go/src/net/ipsock.go:280 +0x4d4
	Sep 12 23:09:45 old-k8s-version-642238 kubelet[5483]: net.(*Resolver).resolveAddrList(0x70c5740, 0x4f7fe40, 0xc0001efb60, 0x48abf6d, 0x4, 0x48ab5d6, 0x3, 0xc00049cd50, 0x24, 0x0, ...)
	Sep 12 23:09:45 old-k8s-version-642238 kubelet[5483]:         /usr/local/go/src/net/dial.go:221 +0x47d
	Sep 12 23:09:45 old-k8s-version-642238 kubelet[5483]: net.(*Dialer).DialContext(0xc000be3aa0, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc00049cd50, 0x24, 0x0, 0x0, 0x0, ...)
	Sep 12 23:09:45 old-k8s-version-642238 kubelet[5483]:         /usr/local/go/src/net/dial.go:403 +0x22b
	Sep 12 23:09:45 old-k8s-version-642238 kubelet[5483]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc000bf3140, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc00049cd50, 0x24, 0x1000000000060, 0x7f52e1ffdec8, 0x118, ...)
	Sep 12 23:09:45 old-k8s-version-642238 kubelet[5483]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Sep 12 23:09:45 old-k8s-version-642238 kubelet[5483]: net/http.(*Transport).dial(0xc0007d2000, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc00049cd50, 0x24, 0x0, 0xc00087e140, 0xc00082d540, ...)
	Sep 12 23:09:45 old-k8s-version-642238 kubelet[5483]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Sep 12 23:09:45 old-k8s-version-642238 kubelet[5483]: net/http.(*Transport).dialConn(0xc0007d2000, 0x4f7fe00, 0xc000052030, 0x0, 0xc0006dcb40, 0x5, 0xc00049cd50, 0x24, 0x0, 0xc0000c65a0, ...)
	Sep 12 23:09:45 old-k8s-version-642238 kubelet[5483]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Sep 12 23:09:45 old-k8s-version-642238 kubelet[5483]: net/http.(*Transport).dialConnFor(0xc0007d2000, 0xc000748160)
	Sep 12 23:09:45 old-k8s-version-642238 kubelet[5483]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Sep 12 23:09:45 old-k8s-version-642238 kubelet[5483]: created by net/http.(*Transport).queueForDial
	Sep 12 23:09:45 old-k8s-version-642238 kubelet[5483]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Sep 12 23:09:45 old-k8s-version-642238 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Sep 12 23:09:45 old-k8s-version-642238 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Sep 12 23:09:45 old-k8s-version-642238 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Sep 12 23:09:45 old-k8s-version-642238 kubelet[5542]: I0912 23:09:45.940091    5542 server.go:416] Version: v1.20.0
	Sep 12 23:09:45 old-k8s-version-642238 kubelet[5542]: I0912 23:09:45.940564    5542 server.go:837] Client rotation is on, will bootstrap in background
	Sep 12 23:09:45 old-k8s-version-642238 kubelet[5542]: I0912 23:09:45.942695    5542 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Sep 12 23:09:45 old-k8s-version-642238 kubelet[5542]: W0912 23:09:45.943882    5542 manager.go:159] Cannot detect current cgroup on cgroup v2
	Sep 12 23:09:45 old-k8s-version-642238 kubelet[5542]: I0912 23:09:45.943930    5542 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-642238 -n old-k8s-version-642238
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-642238 -n old-k8s-version-642238: exit status 2 (226.933245ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-642238" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (690.20s)
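A note on the failure above: kubeadm never saw the kubelet become healthy (the repeated healthz connection refusals), and minikube's own output suggests inspecting the kubelet unit and retrying with an explicit cgroup driver. The following is only a sketch assembled from commands already printed in the log above, using this run's profile name; it is not a confirmed fix for this failure.

	# check the kubelet unit inside the node (commands taken from the kubeadm output above)
	out/minikube-linux-amd64 ssh -p old-k8s-version-642238 -- "sudo systemctl status kubelet; sudo journalctl -xeu kubelet | tail -n 50"
	# list any control-plane containers CRI-O started, as the kubeadm message recommends
	out/minikube-linux-amd64 ssh -p old-k8s-version-642238 -- "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	# retry the start with the cgroup driver proposed by minikube's Suggestion line
	out/minikube-linux-amd64 start -p old-k8s-version-642238 --extra-config=kubelet.cgroup-driver=systemd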

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-380092 -n no-preload-380092
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-380092 -n no-preload-380092: exit status 3 (3.167817922s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0912 23:00:12.634062   62834 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.253:22: connect: no route to host
	E0912 23:00:12.634083   62834 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.253:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-380092 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-380092 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153626285s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.253:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-380092 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-380092 -n no-preload-380092
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-380092 -n no-preload-380092: exit status 3 (3.062215063s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0912 23:00:21.850113   62898 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.253:22: connect: no route to host
	E0912 23:00:21.850132   62898 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.253:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-380092" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)
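For reference, the two commands this test drives are reproduced below; in this run both failed only because the node's SSH endpoint (192.168.50.253:22) was unreachable ("no route to host"), so the status check is worth repeating before retrying the addon enable. This is a sketch using only commands that already appear in this log, not an assertion about why the route was missing.

	# re-check the host state minikube reports; the test expects "Stopped" after a stop
	out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-380092
	# once the host is reachable again, retry the addon enable with the same image override used by the test
	out/minikube-linux-amd64 addons enable dashboard -p no-preload-380092 --images=MetricsScraper=registry.k8s.io/echoserver:1.4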

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.21s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0912 23:06:28.776512   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/functional-657409/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:07:07.199659   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-378112 -n embed-certs-378112
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-09-12 23:14:58.739567848 +0000 UTC m=+6365.587950777
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-378112 -n embed-certs-378112
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-378112 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-378112 logs -n 25: (2.061298794s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p embed-certs-378112            | embed-certs-378112           | jenkins | v1.34.0 | 12 Sep 24 22:54 UTC | 12 Sep 24 22:54 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-378112                                  | embed-certs-378112           | jenkins | v1.34.0 | 12 Sep 24 22:54 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-837491             | newest-cni-837491            | jenkins | v1.34.0 | 12 Sep 24 22:55 UTC | 12 Sep 24 22:55 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-837491                                   | newest-cni-837491            | jenkins | v1.34.0 | 12 Sep 24 22:55 UTC | 12 Sep 24 22:55 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-837491                  | newest-cni-837491            | jenkins | v1.34.0 | 12 Sep 24 22:55 UTC | 12 Sep 24 22:55 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-837491 --memory=2200 --alsologtostderr   | newest-cni-837491            | jenkins | v1.34.0 | 12 Sep 24 22:55 UTC | 12 Sep 24 22:55 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| image   | newest-cni-837491 image list                           | newest-cni-837491            | jenkins | v1.34.0 | 12 Sep 24 22:55 UTC | 12 Sep 24 22:55 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-837491                                   | newest-cni-837491            | jenkins | v1.34.0 | 12 Sep 24 22:55 UTC | 12 Sep 24 22:55 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-837491                                   | newest-cni-837491            | jenkins | v1.34.0 | 12 Sep 24 22:55 UTC | 12 Sep 24 22:55 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-837491                                   | newest-cni-837491            | jenkins | v1.34.0 | 12 Sep 24 22:55 UTC | 12 Sep 24 22:55 UTC |
	| delete  | -p newest-cni-837491                                   | newest-cni-837491            | jenkins | v1.34.0 | 12 Sep 24 22:55 UTC | 12 Sep 24 22:55 UTC |
	| delete  | -p                                                     | disable-driver-mounts-457722 | jenkins | v1.34.0 | 12 Sep 24 22:55 UTC | 12 Sep 24 22:55 UTC |
	|         | disable-driver-mounts-457722                           |                              |         |         |                     |                     |
	| start   | -p no-preload-380092                                   | no-preload-380092            | jenkins | v1.34.0 | 12 Sep 24 22:55 UTC | 12 Sep 24 22:57 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-702201       | default-k8s-diff-port-702201 | jenkins | v1.34.0 | 12 Sep 24 22:56 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-702201 | jenkins | v1.34.0 | 12 Sep 24 22:56 UTC | 12 Sep 24 23:07 UTC |
	|         | default-k8s-diff-port-702201                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-642238        | old-k8s-version-642238       | jenkins | v1.34.0 | 12 Sep 24 22:56 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-378112                 | embed-certs-378112           | jenkins | v1.34.0 | 12 Sep 24 22:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-378112                                  | embed-certs-378112           | jenkins | v1.34.0 | 12 Sep 24 22:57 UTC | 12 Sep 24 23:05 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-380092             | no-preload-380092            | jenkins | v1.34.0 | 12 Sep 24 22:57 UTC | 12 Sep 24 22:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-380092                                   | no-preload-380092            | jenkins | v1.34.0 | 12 Sep 24 22:57 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-642238                              | old-k8s-version-642238       | jenkins | v1.34.0 | 12 Sep 24 22:58 UTC | 12 Sep 24 22:58 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-642238             | old-k8s-version-642238       | jenkins | v1.34.0 | 12 Sep 24 22:58 UTC | 12 Sep 24 22:58 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-642238                              | old-k8s-version-642238       | jenkins | v1.34.0 | 12 Sep 24 22:58 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-380092                  | no-preload-380092            | jenkins | v1.34.0 | 12 Sep 24 23:00 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-380092                                   | no-preload-380092            | jenkins | v1.34.0 | 12 Sep 24 23:00 UTC | 12 Sep 24 23:07 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/12 23:00:21
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0912 23:00:21.889769   62943 out.go:345] Setting OutFile to fd 1 ...
	I0912 23:00:21.889990   62943 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 23:00:21.889999   62943 out.go:358] Setting ErrFile to fd 2...
	I0912 23:00:21.890003   62943 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 23:00:21.890181   62943 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-5891/.minikube/bin
	I0912 23:00:21.890675   62943 out.go:352] Setting JSON to false
	I0912 23:00:21.891538   62943 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6164,"bootTime":1726175858,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0912 23:00:21.891596   62943 start.go:139] virtualization: kvm guest
	I0912 23:00:21.894002   62943 out.go:177] * [no-preload-380092] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0912 23:00:21.895257   62943 notify.go:220] Checking for updates...
	I0912 23:00:21.895266   62943 out.go:177]   - MINIKUBE_LOCATION=19616
	I0912 23:00:21.896598   62943 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 23:00:21.898297   62943 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19616-5891/kubeconfig
	I0912 23:00:21.899605   62943 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19616-5891/.minikube
	I0912 23:00:21.900705   62943 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0912 23:00:21.901754   62943 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 23:00:21.903264   62943 config.go:182] Loaded profile config "no-preload-380092": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 23:00:21.903642   62943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:00:21.903699   62943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:00:21.918497   62943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39967
	I0912 23:00:21.918953   62943 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:00:21.919516   62943 main.go:141] libmachine: Using API Version  1
	I0912 23:00:21.919536   62943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:00:21.919831   62943 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:00:21.920002   62943 main.go:141] libmachine: (no-preload-380092) Calling .DriverName
	I0912 23:00:21.920213   62943 driver.go:394] Setting default libvirt URI to qemu:///system
	I0912 23:00:21.920527   62943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:00:21.920570   62943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:00:21.935755   62943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39641
	I0912 23:00:21.936135   62943 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:00:21.936625   62943 main.go:141] libmachine: Using API Version  1
	I0912 23:00:21.936643   62943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:00:21.936958   62943 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:00:21.937168   62943 main.go:141] libmachine: (no-preload-380092) Calling .DriverName
	I0912 23:00:21.971089   62943 out.go:177] * Using the kvm2 driver based on existing profile
	I0912 23:00:21.972555   62943 start.go:297] selected driver: kvm2
	I0912 23:00:21.972578   62943 start.go:901] validating driver "kvm2" against &{Name:no-preload-380092 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-380092 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.253 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 23:00:21.972702   62943 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 23:00:21.973408   62943 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 23:00:21.973490   62943 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19616-5891/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0912 23:00:21.988802   62943 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0912 23:00:21.989203   62943 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 23:00:21.989290   62943 cni.go:84] Creating CNI manager for ""
	I0912 23:00:21.989305   62943 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0912 23:00:21.989357   62943 start.go:340] cluster config:
	{Name:no-preload-380092 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-380092 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.253 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 23:00:21.989504   62943 iso.go:125] acquiring lock: {Name:mk3ec3c4afd4210b7425f6425f55e7f581d9a5a9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 23:00:21.991829   62943 out.go:177] * Starting "no-preload-380092" primary control-plane node in "no-preload-380092" cluster
	I0912 23:00:20.185851   61354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.214:22: connect: no route to host
	I0912 23:00:21.993075   62943 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0912 23:00:21.993194   62943 profile.go:143] Saving config to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/no-preload-380092/config.json ...
	I0912 23:00:21.993282   62943 cache.go:107] acquiring lock: {Name:mk132f7515993883658c6f8f8c277c05a18c2bcb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 23:00:21.993282   62943 cache.go:107] acquiring lock: {Name:mkbf0dc68d9098b66db2e6425e6a1c64daedf32d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 23:00:21.993308   62943 cache.go:107] acquiring lock: {Name:mkb2372a7853b8fee762991ee2019645e77be1f6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 23:00:21.993360   62943 cache.go:115] /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 exists
	I0912 23:00:21.993376   62943 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.1" -> "/home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1" took 102.242µs
	I0912 23:00:21.993387   62943 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.1 -> /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 succeeded
	I0912 23:00:21.993346   62943 cache.go:107] acquiring lock: {Name:mkd3ef79aab2589c236ea8b2933d7ed6f90a65ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 23:00:21.993393   62943 cache.go:115] /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 exists
	I0912 23:00:21.993376   62943 cache.go:107] acquiring lock: {Name:mk1d88a2deb95bcad015d500fc00ce4b81f27038 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 23:00:21.993405   62943 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.1" -> "/home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1" took 112.903µs
	I0912 23:00:21.993415   62943 cache.go:115] /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 exists
	I0912 23:00:21.993421   62943 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.1 -> /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 succeeded
	I0912 23:00:21.993424   62943 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.1" -> "/home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1" took 90.812µs
	I0912 23:00:21.993432   62943 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.1 -> /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 succeeded
	I0912 23:00:21.993403   62943 cache.go:107] acquiring lock: {Name:mk9c879437d533fd75b73d75524fea14942316d9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 23:00:21.993435   62943 start.go:360] acquireMachinesLock for no-preload-380092: {Name:mkbb0a9e58b1349e86a63b6069c42d4248d92c3b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 23:00:21.993452   62943 cache.go:115] /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 exists
	I0912 23:00:21.993472   62943 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10" took 97.778µs
	I0912 23:00:21.993486   62943 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 succeeded
	I0912 23:00:21.993474   62943 cache.go:107] acquiring lock: {Name:mkd1cb269a32e304848dd20e7b275430f4a6b15a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 23:00:21.993496   62943 cache.go:115] /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0912 23:00:21.993526   62943 cache.go:115] /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 exists
	I0912 23:00:21.993545   62943 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0" took 179.269µs
	I0912 23:00:21.993568   62943 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0912 23:00:21.993520   62943 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 236.598µs
	I0912 23:00:21.993587   62943 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0912 23:00:21.993522   62943 cache.go:107] acquiring lock: {Name:mka5c76f3028cb928e97cce42a012066ced2727d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 23:00:21.993569   62943 cache.go:115] /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 exists
	I0912 23:00:21.993642   62943 cache.go:115] /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0912 23:00:21.993651   62943 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3" took 162.198µs
	I0912 23:00:21.993648   62943 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.1" -> "/home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1" took 220.493µs
	I0912 23:00:21.993662   62943 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0912 23:00:21.993668   62943 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.1 -> /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 succeeded
	I0912 23:00:21.993687   62943 cache.go:87] Successfully saved all images to host disk.
	I0912 23:00:26.265938   61354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.214:22: connect: no route to host
	I0912 23:00:29.337872   61354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.214:22: connect: no route to host
	I0912 23:00:35.417928   61354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.214:22: connect: no route to host
	I0912 23:00:38.489932   61354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.214:22: connect: no route to host
	I0912 23:00:44.569877   61354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.214:22: connect: no route to host
	I0912 23:00:47.641914   61354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.214:22: connect: no route to host
	I0912 23:00:53.721910   61354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.214:22: connect: no route to host
	I0912 23:00:56.793972   61354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.214:22: connect: no route to host
	I0912 23:00:59.798765   61904 start.go:364] duration metric: took 3m43.915954079s to acquireMachinesLock for "embed-certs-378112"
	I0912 23:00:59.798812   61904 start.go:96] Skipping create...Using existing machine configuration
	I0912 23:00:59.798822   61904 fix.go:54] fixHost starting: 
	I0912 23:00:59.799124   61904 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:00:59.799159   61904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:00:59.814494   61904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41585
	I0912 23:00:59.815035   61904 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:00:59.815500   61904 main.go:141] libmachine: Using API Version  1
	I0912 23:00:59.815519   61904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:00:59.815820   61904 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:00:59.815997   61904 main.go:141] libmachine: (embed-certs-378112) Calling .DriverName
	I0912 23:00:59.816114   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetState
	I0912 23:00:59.817884   61904 fix.go:112] recreateIfNeeded on embed-certs-378112: state=Stopped err=<nil>
	I0912 23:00:59.817912   61904 main.go:141] libmachine: (embed-certs-378112) Calling .DriverName
	W0912 23:00:59.818088   61904 fix.go:138] unexpected machine state, will restart: <nil>
	I0912 23:00:59.820071   61904 out.go:177] * Restarting existing kvm2 VM for "embed-certs-378112" ...
	I0912 23:00:59.821271   61904 main.go:141] libmachine: (embed-certs-378112) Calling .Start
	I0912 23:00:59.821455   61904 main.go:141] libmachine: (embed-certs-378112) Ensuring networks are active...
	I0912 23:00:59.822528   61904 main.go:141] libmachine: (embed-certs-378112) Ensuring network default is active
	I0912 23:00:59.822941   61904 main.go:141] libmachine: (embed-certs-378112) Ensuring network mk-embed-certs-378112 is active
	I0912 23:00:59.823348   61904 main.go:141] libmachine: (embed-certs-378112) Getting domain xml...
	I0912 23:00:59.824031   61904 main.go:141] libmachine: (embed-certs-378112) Creating domain...
	I0912 23:00:59.796296   61354 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0912 23:00:59.796341   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetMachineName
	I0912 23:00:59.796635   61354 buildroot.go:166] provisioning hostname "default-k8s-diff-port-702201"
	I0912 23:00:59.796660   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetMachineName
	I0912 23:00:59.796845   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHHostname
	I0912 23:00:59.798593   61354 machine.go:96] duration metric: took 4m34.624878077s to provisionDockerMachine
	I0912 23:00:59.798633   61354 fix.go:56] duration metric: took 4m34.652510972s for fixHost
	I0912 23:00:59.798640   61354 start.go:83] releasing machines lock for "default-k8s-diff-port-702201", held for 4m34.652554084s
	W0912 23:00:59.798663   61354 start.go:714] error starting host: provision: host is not running
	W0912 23:00:59.798748   61354 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0912 23:00:59.798762   61354 start.go:729] Will try again in 5 seconds ...
	I0912 23:01:01.051149   61904 main.go:141] libmachine: (embed-certs-378112) Waiting to get IP...
	I0912 23:01:01.051945   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:01.052463   61904 main.go:141] libmachine: (embed-certs-378112) DBG | unable to find current IP address of domain embed-certs-378112 in network mk-embed-certs-378112
	I0912 23:01:01.052494   61904 main.go:141] libmachine: (embed-certs-378112) DBG | I0912 23:01:01.052421   63128 retry.go:31] will retry after 247.962572ms: waiting for machine to come up
	I0912 23:01:01.302159   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:01.302677   61904 main.go:141] libmachine: (embed-certs-378112) DBG | unable to find current IP address of domain embed-certs-378112 in network mk-embed-certs-378112
	I0912 23:01:01.302706   61904 main.go:141] libmachine: (embed-certs-378112) DBG | I0912 23:01:01.302624   63128 retry.go:31] will retry after 354.212029ms: waiting for machine to come up
	I0912 23:01:01.658402   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:01.658880   61904 main.go:141] libmachine: (embed-certs-378112) DBG | unable to find current IP address of domain embed-certs-378112 in network mk-embed-certs-378112
	I0912 23:01:01.658923   61904 main.go:141] libmachine: (embed-certs-378112) DBG | I0912 23:01:01.658848   63128 retry.go:31] will retry after 461.984481ms: waiting for machine to come up
	I0912 23:01:02.122592   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:02.122981   61904 main.go:141] libmachine: (embed-certs-378112) DBG | unable to find current IP address of domain embed-certs-378112 in network mk-embed-certs-378112
	I0912 23:01:02.123015   61904 main.go:141] libmachine: (embed-certs-378112) DBG | I0912 23:01:02.122930   63128 retry.go:31] will retry after 404.928951ms: waiting for machine to come up
	I0912 23:01:02.529423   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:02.529906   61904 main.go:141] libmachine: (embed-certs-378112) DBG | unable to find current IP address of domain embed-certs-378112 in network mk-embed-certs-378112
	I0912 23:01:02.529932   61904 main.go:141] libmachine: (embed-certs-378112) DBG | I0912 23:01:02.529856   63128 retry.go:31] will retry after 684.912015ms: waiting for machine to come up
	I0912 23:01:03.216924   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:03.217408   61904 main.go:141] libmachine: (embed-certs-378112) DBG | unable to find current IP address of domain embed-certs-378112 in network mk-embed-certs-378112
	I0912 23:01:03.217433   61904 main.go:141] libmachine: (embed-certs-378112) DBG | I0912 23:01:03.217357   63128 retry.go:31] will retry after 765.507778ms: waiting for machine to come up
	I0912 23:01:03.984272   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:03.984787   61904 main.go:141] libmachine: (embed-certs-378112) DBG | unable to find current IP address of domain embed-certs-378112 in network mk-embed-certs-378112
	I0912 23:01:03.984820   61904 main.go:141] libmachine: (embed-certs-378112) DBG | I0912 23:01:03.984726   63128 retry.go:31] will retry after 1.048709598s: waiting for machine to come up
	I0912 23:01:05.035381   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:05.035885   61904 main.go:141] libmachine: (embed-certs-378112) DBG | unable to find current IP address of domain embed-certs-378112 in network mk-embed-certs-378112
	I0912 23:01:05.035925   61904 main.go:141] libmachine: (embed-certs-378112) DBG | I0912 23:01:05.035809   63128 retry.go:31] will retry after 1.488143245s: waiting for machine to come up
	I0912 23:01:04.800694   61354 start.go:360] acquireMachinesLock for default-k8s-diff-port-702201: {Name:mkbb0a9e58b1349e86a63b6069c42d4248d92c3b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 23:01:06.526483   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:06.526858   61904 main.go:141] libmachine: (embed-certs-378112) DBG | unable to find current IP address of domain embed-certs-378112 in network mk-embed-certs-378112
	I0912 23:01:06.526896   61904 main.go:141] libmachine: (embed-certs-378112) DBG | I0912 23:01:06.526800   63128 retry.go:31] will retry after 1.272485972s: waiting for machine to come up
	I0912 23:01:07.801588   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:07.802071   61904 main.go:141] libmachine: (embed-certs-378112) DBG | unable to find current IP address of domain embed-certs-378112 in network mk-embed-certs-378112
	I0912 23:01:07.802103   61904 main.go:141] libmachine: (embed-certs-378112) DBG | I0912 23:01:07.802022   63128 retry.go:31] will retry after 1.559805672s: waiting for machine to come up
	I0912 23:01:09.363156   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:09.363662   61904 main.go:141] libmachine: (embed-certs-378112) DBG | unable to find current IP address of domain embed-certs-378112 in network mk-embed-certs-378112
	I0912 23:01:09.363683   61904 main.go:141] libmachine: (embed-certs-378112) DBG | I0912 23:01:09.363611   63128 retry.go:31] will retry after 1.893092295s: waiting for machine to come up
	I0912 23:01:11.258694   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:11.259346   61904 main.go:141] libmachine: (embed-certs-378112) DBG | unable to find current IP address of domain embed-certs-378112 in network mk-embed-certs-378112
	I0912 23:01:11.259376   61904 main.go:141] libmachine: (embed-certs-378112) DBG | I0912 23:01:11.259304   63128 retry.go:31] will retry after 3.533141843s: waiting for machine to come up
	I0912 23:01:14.796948   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:14.797444   61904 main.go:141] libmachine: (embed-certs-378112) DBG | unable to find current IP address of domain embed-certs-378112 in network mk-embed-certs-378112
	I0912 23:01:14.797468   61904 main.go:141] libmachine: (embed-certs-378112) DBG | I0912 23:01:14.797389   63128 retry.go:31] will retry after 3.889332888s: waiting for machine to come up
	I0912 23:01:19.958932   62386 start.go:364] duration metric: took 3m0.532494588s to acquireMachinesLock for "old-k8s-version-642238"
	I0912 23:01:19.958994   62386 start.go:96] Skipping create...Using existing machine configuration
	I0912 23:01:19.959005   62386 fix.go:54] fixHost starting: 
	I0912 23:01:19.959383   62386 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:01:19.959418   62386 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:01:19.976721   62386 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46263
	I0912 23:01:19.977134   62386 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:01:19.977648   62386 main.go:141] libmachine: Using API Version  1
	I0912 23:01:19.977673   62386 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:01:19.977988   62386 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:01:19.978166   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .DriverName
	I0912 23:01:19.978325   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetState
	I0912 23:01:19.979909   62386 fix.go:112] recreateIfNeeded on old-k8s-version-642238: state=Stopped err=<nil>
	I0912 23:01:19.979934   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .DriverName
	W0912 23:01:19.980079   62386 fix.go:138] unexpected machine state, will restart: <nil>
	I0912 23:01:19.982289   62386 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-642238" ...
	I0912 23:01:18.690761   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:18.691185   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has current primary IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:18.691206   61904 main.go:141] libmachine: (embed-certs-378112) Found IP for machine: 192.168.72.96
	I0912 23:01:18.691218   61904 main.go:141] libmachine: (embed-certs-378112) Reserving static IP address...
	I0912 23:01:18.691614   61904 main.go:141] libmachine: (embed-certs-378112) Reserved static IP address: 192.168.72.96
	I0912 23:01:18.691642   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "embed-certs-378112", mac: "52:54:00:71:b2:49", ip: "192.168.72.96"} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:18.691654   61904 main.go:141] libmachine: (embed-certs-378112) Waiting for SSH to be available...
	I0912 23:01:18.691678   61904 main.go:141] libmachine: (embed-certs-378112) DBG | skip adding static IP to network mk-embed-certs-378112 - found existing host DHCP lease matching {name: "embed-certs-378112", mac: "52:54:00:71:b2:49", ip: "192.168.72.96"}
	I0912 23:01:18.691690   61904 main.go:141] libmachine: (embed-certs-378112) DBG | Getting to WaitForSSH function...
	I0912 23:01:18.693747   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:18.694054   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:18.694077   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:18.694273   61904 main.go:141] libmachine: (embed-certs-378112) DBG | Using SSH client type: external
	I0912 23:01:18.694300   61904 main.go:141] libmachine: (embed-certs-378112) DBG | Using SSH private key: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/embed-certs-378112/id_rsa (-rw-------)
	I0912 23:01:18.694330   61904 main.go:141] libmachine: (embed-certs-378112) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.96 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19616-5891/.minikube/machines/embed-certs-378112/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0912 23:01:18.694345   61904 main.go:141] libmachine: (embed-certs-378112) DBG | About to run SSH command:
	I0912 23:01:18.694358   61904 main.go:141] libmachine: (embed-certs-378112) DBG | exit 0
	I0912 23:01:18.821647   61904 main.go:141] libmachine: (embed-certs-378112) DBG | SSH cmd err, output: <nil>: 
	I0912 23:01:18.822074   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetConfigRaw
	I0912 23:01:18.822765   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetIP
	I0912 23:01:18.825154   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:18.825481   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:18.825510   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:18.825842   61904 profile.go:143] Saving config to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/embed-certs-378112/config.json ...
	I0912 23:01:18.826026   61904 machine.go:93] provisionDockerMachine start ...
	I0912 23:01:18.826043   61904 main.go:141] libmachine: (embed-certs-378112) Calling .DriverName
	I0912 23:01:18.826248   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHHostname
	I0912 23:01:18.828540   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:18.828878   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:18.828906   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:18.829009   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHPort
	I0912 23:01:18.829224   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHKeyPath
	I0912 23:01:18.829429   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHKeyPath
	I0912 23:01:18.829555   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHUsername
	I0912 23:01:18.829750   61904 main.go:141] libmachine: Using SSH client type: native
	I0912 23:01:18.829926   61904 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.72.96 22 <nil> <nil>}
	I0912 23:01:18.829937   61904 main.go:141] libmachine: About to run SSH command:
	hostname
	I0912 23:01:18.941789   61904 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0912 23:01:18.941824   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetMachineName
	I0912 23:01:18.942076   61904 buildroot.go:166] provisioning hostname "embed-certs-378112"
	I0912 23:01:18.942099   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetMachineName
	I0912 23:01:18.942278   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHHostname
	I0912 23:01:18.944880   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:18.945173   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:18.945221   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:18.945347   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHPort
	I0912 23:01:18.945525   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHKeyPath
	I0912 23:01:18.945733   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHKeyPath
	I0912 23:01:18.945913   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHUsername
	I0912 23:01:18.946125   61904 main.go:141] libmachine: Using SSH client type: native
	I0912 23:01:18.946330   61904 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.72.96 22 <nil> <nil>}
	I0912 23:01:18.946350   61904 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-378112 && echo "embed-certs-378112" | sudo tee /etc/hostname
	I0912 23:01:19.071180   61904 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-378112
	
	I0912 23:01:19.071207   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHHostname
	I0912 23:01:19.074121   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.074553   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:19.074583   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.074803   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHPort
	I0912 23:01:19.075004   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHKeyPath
	I0912 23:01:19.075175   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHKeyPath
	I0912 23:01:19.075319   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHUsername
	I0912 23:01:19.075472   61904 main.go:141] libmachine: Using SSH client type: native
	I0912 23:01:19.075691   61904 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.72.96 22 <nil> <nil>}
	I0912 23:01:19.075710   61904 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-378112' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-378112/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-378112' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0912 23:01:19.198049   61904 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0912 23:01:19.198081   61904 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19616-5891/.minikube CaCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19616-5891/.minikube}
	I0912 23:01:19.198131   61904 buildroot.go:174] setting up certificates
	I0912 23:01:19.198140   61904 provision.go:84] configureAuth start
	I0912 23:01:19.198153   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetMachineName
	I0912 23:01:19.198461   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetIP
	I0912 23:01:19.201194   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.201504   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:19.201532   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.201729   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHHostname
	I0912 23:01:19.204100   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.204538   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:19.204562   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.204706   61904 provision.go:143] copyHostCerts
	I0912 23:01:19.204767   61904 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem, removing ...
	I0912 23:01:19.204782   61904 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem
	I0912 23:01:19.204851   61904 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem (1082 bytes)
	I0912 23:01:19.204951   61904 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem, removing ...
	I0912 23:01:19.204960   61904 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem
	I0912 23:01:19.204985   61904 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem (1123 bytes)
	I0912 23:01:19.205045   61904 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem, removing ...
	I0912 23:01:19.205053   61904 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem
	I0912 23:01:19.205076   61904 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem (1679 bytes)
	I0912 23:01:19.205132   61904 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem org=jenkins.embed-certs-378112 san=[127.0.0.1 192.168.72.96 embed-certs-378112 localhost minikube]
	I0912 23:01:19.311879   61904 provision.go:177] copyRemoteCerts
	I0912 23:01:19.311937   61904 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0912 23:01:19.311962   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHHostname
	I0912 23:01:19.314423   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.314821   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:19.314858   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.315029   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHPort
	I0912 23:01:19.315191   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHKeyPath
	I0912 23:01:19.315357   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHUsername
	I0912 23:01:19.315485   61904 sshutil.go:53] new ssh client: &{IP:192.168.72.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/embed-certs-378112/id_rsa Username:docker}
	I0912 23:01:19.399171   61904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0912 23:01:19.423218   61904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0912 23:01:19.446073   61904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0912 23:01:19.468351   61904 provision.go:87] duration metric: took 270.179029ms to configureAuth
	I0912 23:01:19.468380   61904 buildroot.go:189] setting minikube options for container-runtime
	I0912 23:01:19.468543   61904 config.go:182] Loaded profile config "embed-certs-378112": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 23:01:19.468609   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHHostname
	I0912 23:01:19.471457   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.471829   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:19.471857   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.472057   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHPort
	I0912 23:01:19.472257   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHKeyPath
	I0912 23:01:19.472438   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHKeyPath
	I0912 23:01:19.472614   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHUsername
	I0912 23:01:19.472756   61904 main.go:141] libmachine: Using SSH client type: native
	I0912 23:01:19.472915   61904 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.72.96 22 <nil> <nil>}
	I0912 23:01:19.472928   61904 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0912 23:01:19.710250   61904 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0912 23:01:19.710278   61904 machine.go:96] duration metric: took 884.238347ms to provisionDockerMachine
	I0912 23:01:19.710298   61904 start.go:293] postStartSetup for "embed-certs-378112" (driver="kvm2")
	I0912 23:01:19.710310   61904 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0912 23:01:19.710324   61904 main.go:141] libmachine: (embed-certs-378112) Calling .DriverName
	I0912 23:01:19.710640   61904 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0912 23:01:19.710668   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHHostname
	I0912 23:01:19.713442   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.713731   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:19.713759   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.713948   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHPort
	I0912 23:01:19.714180   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHKeyPath
	I0912 23:01:19.714347   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHUsername
	I0912 23:01:19.714491   61904 sshutil.go:53] new ssh client: &{IP:192.168.72.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/embed-certs-378112/id_rsa Username:docker}
	I0912 23:01:19.800949   61904 ssh_runner.go:195] Run: cat /etc/os-release
	I0912 23:01:19.805072   61904 info.go:137] Remote host: Buildroot 2023.02.9
	I0912 23:01:19.805103   61904 filesync.go:126] Scanning /home/jenkins/minikube-integration/19616-5891/.minikube/addons for local assets ...
	I0912 23:01:19.805212   61904 filesync.go:126] Scanning /home/jenkins/minikube-integration/19616-5891/.minikube/files for local assets ...
	I0912 23:01:19.805309   61904 filesync.go:149] local asset: /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem -> 130832.pem in /etc/ssl/certs
	I0912 23:01:19.805449   61904 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0912 23:01:19.815070   61904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem --> /etc/ssl/certs/130832.pem (1708 bytes)
	I0912 23:01:19.839585   61904 start.go:296] duration metric: took 129.271232ms for postStartSetup
	I0912 23:01:19.839634   61904 fix.go:56] duration metric: took 20.040811123s for fixHost
	I0912 23:01:19.839656   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHHostname
	I0912 23:01:19.843048   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.843354   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:19.843385   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.843547   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHPort
	I0912 23:01:19.843755   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHKeyPath
	I0912 23:01:19.843933   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHKeyPath
	I0912 23:01:19.844078   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHUsername
	I0912 23:01:19.844257   61904 main.go:141] libmachine: Using SSH client type: native
	I0912 23:01:19.844432   61904 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.72.96 22 <nil> <nil>}
	I0912 23:01:19.844443   61904 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0912 23:01:19.958747   61904 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726182079.929826480
	
	I0912 23:01:19.958771   61904 fix.go:216] guest clock: 1726182079.929826480
	I0912 23:01:19.958779   61904 fix.go:229] Guest: 2024-09-12 23:01:19.92982648 +0000 UTC Remote: 2024-09-12 23:01:19.839638734 +0000 UTC m=+244.095238395 (delta=90.187746ms)
	I0912 23:01:19.958826   61904 fix.go:200] guest clock delta is within tolerance: 90.187746ms
	I0912 23:01:19.958832   61904 start.go:83] releasing machines lock for "embed-certs-378112", held for 20.160038696s
	I0912 23:01:19.958866   61904 main.go:141] libmachine: (embed-certs-378112) Calling .DriverName
	I0912 23:01:19.959202   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetIP
	I0912 23:01:19.962158   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.962528   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:19.962562   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.962743   61904 main.go:141] libmachine: (embed-certs-378112) Calling .DriverName
	I0912 23:01:19.963246   61904 main.go:141] libmachine: (embed-certs-378112) Calling .DriverName
	I0912 23:01:19.963421   61904 main.go:141] libmachine: (embed-certs-378112) Calling .DriverName
	I0912 23:01:19.963518   61904 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0912 23:01:19.963564   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHHostname
	I0912 23:01:19.963703   61904 ssh_runner.go:195] Run: cat /version.json
	I0912 23:01:19.963766   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHHostname
	I0912 23:01:19.966317   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.966517   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.966692   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:19.966723   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.966921   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHPort
	I0912 23:01:19.966977   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:19.967023   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.967100   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHKeyPath
	I0912 23:01:19.967191   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHPort
	I0912 23:01:19.967268   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHUsername
	I0912 23:01:19.967332   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHKeyPath
	I0912 23:01:19.967395   61904 sshutil.go:53] new ssh client: &{IP:192.168.72.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/embed-certs-378112/id_rsa Username:docker}
	I0912 23:01:19.967439   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHUsername
	I0912 23:01:19.967594   61904 sshutil.go:53] new ssh client: &{IP:192.168.72.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/embed-certs-378112/id_rsa Username:docker}
	I0912 23:01:20.054413   61904 ssh_runner.go:195] Run: systemctl --version
	I0912 23:01:20.087300   61904 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0912 23:01:20.235085   61904 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0912 23:01:20.240843   61904 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0912 23:01:20.240922   61904 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0912 23:01:20.256317   61904 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0912 23:01:20.256341   61904 start.go:495] detecting cgroup driver to use...
	I0912 23:01:20.256411   61904 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0912 23:01:20.271684   61904 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0912 23:01:20.285491   61904 docker.go:217] disabling cri-docker service (if available) ...
	I0912 23:01:20.285562   61904 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0912 23:01:20.298889   61904 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0912 23:01:20.314455   61904 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0912 23:01:20.438483   61904 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0912 23:01:20.594684   61904 docker.go:233] disabling docker service ...
	I0912 23:01:20.594761   61904 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0912 23:01:20.609090   61904 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0912 23:01:20.624440   61904 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0912 23:01:20.747699   61904 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0912 23:01:20.899726   61904 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0912 23:01:20.914107   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0912 23:01:20.933523   61904 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0912 23:01:20.933599   61904 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:20.946067   61904 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0912 23:01:20.946129   61904 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:20.957575   61904 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:20.968759   61904 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:20.980280   61904 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0912 23:01:20.991281   61904 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:21.002926   61904 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:21.021743   61904 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:21.032256   61904 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0912 23:01:21.041783   61904 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0912 23:01:21.041853   61904 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0912 23:01:21.054605   61904 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0912 23:01:21.064411   61904 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 23:01:21.198195   61904 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0912 23:01:21.289923   61904 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0912 23:01:21.290018   61904 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0912 23:01:21.294505   61904 start.go:563] Will wait 60s for crictl version
	I0912 23:01:21.294572   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:01:21.297928   61904 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0912 23:01:21.335650   61904 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0912 23:01:21.335734   61904 ssh_runner.go:195] Run: crio --version
	I0912 23:01:21.364876   61904 ssh_runner.go:195] Run: crio --version
	I0912 23:01:21.395463   61904 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0912 23:01:19.983746   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .Start
	I0912 23:01:19.983971   62386 main.go:141] libmachine: (old-k8s-version-642238) Ensuring networks are active...
	I0912 23:01:19.984890   62386 main.go:141] libmachine: (old-k8s-version-642238) Ensuring network default is active
	I0912 23:01:19.985345   62386 main.go:141] libmachine: (old-k8s-version-642238) Ensuring network mk-old-k8s-version-642238 is active
	I0912 23:01:19.985788   62386 main.go:141] libmachine: (old-k8s-version-642238) Getting domain xml...
	I0912 23:01:19.986827   62386 main.go:141] libmachine: (old-k8s-version-642238) Creating domain...
	I0912 23:01:21.258792   62386 main.go:141] libmachine: (old-k8s-version-642238) Waiting to get IP...
	I0912 23:01:21.259838   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:21.260300   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 23:01:21.260434   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 23:01:21.260300   63267 retry.go:31] will retry after 272.429869ms: waiting for machine to come up
	I0912 23:01:21.534713   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:21.535102   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 23:01:21.535131   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 23:01:21.535060   63267 retry.go:31] will retry after 352.031053ms: waiting for machine to come up
	I0912 23:01:21.888724   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:21.889235   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 23:01:21.889260   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 23:01:21.889212   63267 retry.go:31] will retry after 405.51409ms: waiting for machine to come up
	I0912 23:01:22.296746   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:22.297242   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 23:01:22.297286   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 23:01:22.297190   63267 retry.go:31] will retry after 607.76308ms: waiting for machine to come up
	I0912 23:01:22.907030   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:22.907784   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 23:01:22.907824   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 23:01:22.907659   63267 retry.go:31] will retry after 692.773261ms: waiting for machine to come up
	I0912 23:01:23.602242   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:23.602679   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 23:01:23.602701   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 23:01:23.602642   63267 retry.go:31] will retry after 591.018151ms: waiting for machine to come up
	I0912 23:01:24.195571   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:24.196100   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 23:01:24.196130   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 23:01:24.196046   63267 retry.go:31] will retry after 1.185264475s: waiting for machine to come up
	I0912 23:01:21.396852   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetIP
	I0912 23:01:21.400018   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:21.400456   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:21.400488   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:21.400730   61904 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0912 23:01:21.404606   61904 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0912 23:01:21.416408   61904 kubeadm.go:883] updating cluster {Name:embed-certs-378112 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-378112 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.96 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0912 23:01:21.416529   61904 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0912 23:01:21.416571   61904 ssh_runner.go:195] Run: sudo crictl images --output json
	I0912 23:01:21.449799   61904 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0912 23:01:21.449860   61904 ssh_runner.go:195] Run: which lz4
	I0912 23:01:21.453658   61904 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0912 23:01:21.457641   61904 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0912 23:01:21.457676   61904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0912 23:01:22.735022   61904 crio.go:462] duration metric: took 1.281408113s to copy over tarball
	I0912 23:01:22.735128   61904 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0912 23:01:24.783893   61904 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.048732092s)
	I0912 23:01:24.783935   61904 crio.go:469] duration metric: took 2.048876223s to extract the tarball
	I0912 23:01:24.783945   61904 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0912 23:01:24.820170   61904 ssh_runner.go:195] Run: sudo crictl images --output json
	I0912 23:01:24.866833   61904 crio.go:514] all images are preloaded for cri-o runtime.
	I0912 23:01:24.866861   61904 cache_images.go:84] Images are preloaded, skipping loading
	I0912 23:01:24.866870   61904 kubeadm.go:934] updating node { 192.168.72.96 8443 v1.31.1 crio true true} ...
	I0912 23:01:24.866990   61904 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-378112 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.96
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-378112 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0912 23:01:24.867073   61904 ssh_runner.go:195] Run: crio config
	I0912 23:01:24.912893   61904 cni.go:84] Creating CNI manager for ""
	I0912 23:01:24.912924   61904 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0912 23:01:24.912940   61904 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0912 23:01:24.912967   61904 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.96 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-378112 NodeName:embed-certs-378112 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.96"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.96 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0912 23:01:24.913155   61904 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.96
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-378112"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.96
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.96"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0912 23:01:24.913230   61904 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0912 23:01:24.922946   61904 binaries.go:44] Found k8s binaries, skipping transfer
	I0912 23:01:24.923013   61904 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0912 23:01:24.932931   61904 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0912 23:01:24.949482   61904 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0912 23:01:24.965877   61904 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0912 23:01:24.983125   61904 ssh_runner.go:195] Run: grep 192.168.72.96	control-plane.minikube.internal$ /etc/hosts
	I0912 23:01:24.987056   61904 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.96	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0912 23:01:24.998939   61904 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 23:01:25.113496   61904 ssh_runner.go:195] Run: sudo systemctl start kubelet
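For reference, the kubelet unit written a few lines earlier (kubelet.service plus the 10-kubeadm.conf drop-in) can be inspected directly on the node once the service is started; a couple of illustrative commands, not part of the test run:

    # show the unit together with its drop-in, as assembled by systemd
    systemctl cat kubelet
    # tail the kubelet log to confirm it picked up --hostname-override and --node-ip
    sudo journalctl -u kubelet -n 50 --no-pager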
	I0912 23:01:25.129703   61904 certs.go:68] Setting up /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/embed-certs-378112 for IP: 192.168.72.96
	I0912 23:01:25.129726   61904 certs.go:194] generating shared ca certs ...
	I0912 23:01:25.129741   61904 certs.go:226] acquiring lock for ca certs: {Name:mk5e19f2c2757edc3fcb6b43a39efee0885b349c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 23:01:25.129971   61904 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.key
	I0912 23:01:25.130086   61904 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.key
	I0912 23:01:25.130110   61904 certs.go:256] generating profile certs ...
	I0912 23:01:25.130237   61904 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/embed-certs-378112/client.key
	I0912 23:01:25.130340   61904 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/embed-certs-378112/apiserver.key.dbbe0c1f
	I0912 23:01:25.130407   61904 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/embed-certs-378112/proxy-client.key
	I0912 23:01:25.130579   61904 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083.pem (1338 bytes)
	W0912 23:01:25.130626   61904 certs.go:480] ignoring /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083_empty.pem, impossibly tiny 0 bytes
	I0912 23:01:25.130651   61904 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem (1679 bytes)
	I0912 23:01:25.130703   61904 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem (1082 bytes)
	I0912 23:01:25.130745   61904 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem (1123 bytes)
	I0912 23:01:25.130792   61904 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem (1679 bytes)
	I0912 23:01:25.130860   61904 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem (1708 bytes)
	I0912 23:01:25.131603   61904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0912 23:01:25.176163   61904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0912 23:01:25.220174   61904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0912 23:01:25.265831   61904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0912 23:01:25.296965   61904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/embed-certs-378112/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0912 23:01:25.321038   61904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/embed-certs-378112/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0912 23:01:25.345231   61904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/embed-certs-378112/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0912 23:01:25.369171   61904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/embed-certs-378112/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0912 23:01:25.394204   61904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083.pem --> /usr/share/ca-certificates/13083.pem (1338 bytes)
	I0912 23:01:25.417915   61904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem --> /usr/share/ca-certificates/130832.pem (1708 bytes)
	I0912 23:01:25.442303   61904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0912 23:01:25.465565   61904 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0912 23:01:25.482722   61904 ssh_runner.go:195] Run: openssl version
	I0912 23:01:25.488448   61904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130832.pem && ln -fs /usr/share/ca-certificates/130832.pem /etc/ssl/certs/130832.pem"
	I0912 23:01:25.499394   61904 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130832.pem
	I0912 23:01:25.503818   61904 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 12 21:47 /usr/share/ca-certificates/130832.pem
	I0912 23:01:25.503891   61904 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130832.pem
	I0912 23:01:25.509382   61904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130832.pem /etc/ssl/certs/3ec20f2e.0"
	I0912 23:01:25.519646   61904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0912 23:01:25.530205   61904 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0912 23:01:25.534926   61904 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 12 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0912 23:01:25.534995   61904 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0912 23:01:25.540498   61904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0912 23:01:25.551236   61904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13083.pem && ln -fs /usr/share/ca-certificates/13083.pem /etc/ssl/certs/13083.pem"
	I0912 23:01:25.561851   61904 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13083.pem
	I0912 23:01:25.566492   61904 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 12 21:47 /usr/share/ca-certificates/13083.pem
	I0912 23:01:25.566560   61904 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13083.pem
	I0912 23:01:25.572221   61904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13083.pem /etc/ssl/certs/51391683.0"
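The ln -fs steps above follow OpenSSL's subject-hash naming convention: each symlink name (3ec20f2e.0, b5213941.0, 51391683.0) is the certificate's subject hash with a ".0" suffix, which is what the preceding "openssl x509 -hash" calls compute. Reproducing one of the names by hand, using paths from this run (illustrative only):

    # prints the subject hash used as the link name; b5213941 for minikubeCA.pem in this run
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # the corresponding symlink created above
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0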
	I0912 23:01:25.582775   61904 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0912 23:01:25.587274   61904 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0912 23:01:25.593126   61904 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0912 23:01:25.598929   61904 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0912 23:01:25.604590   61904 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0912 23:01:25.610344   61904 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0912 23:01:25.615931   61904 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
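The -checkend 86400 probes above ask whether each control-plane certificate will still be valid 86400 seconds (24 hours) from now; a failing check is presumably what would make minikube regenerate that certificate before starting the cluster. By hand (illustrative only):

    # exit 0 means the cert is valid for at least another 24h; exit 1 means it expires within that window
    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo "valid for >24h" || echo "expires within 24h"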
	I0912 23:01:25.621575   61904 kubeadm.go:392] StartCluster: {Name:embed-certs-378112 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.1 ClusterName:embed-certs-378112 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.96 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 23:01:25.621708   61904 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0912 23:01:25.621771   61904 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0912 23:01:25.659165   61904 cri.go:89] found id: ""
	I0912 23:01:25.659225   61904 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0912 23:01:25.670718   61904 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0912 23:01:25.670740   61904 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0912 23:01:25.670812   61904 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0912 23:01:25.680672   61904 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0912 23:01:25.681705   61904 kubeconfig.go:125] found "embed-certs-378112" server: "https://192.168.72.96:8443"
	I0912 23:01:25.683693   61904 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0912 23:01:25.693765   61904 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.96
	I0912 23:01:25.693795   61904 kubeadm.go:1160] stopping kube-system containers ...
	I0912 23:01:25.693805   61904 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0912 23:01:25.693874   61904 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0912 23:01:25.728800   61904 cri.go:89] found id: ""
	I0912 23:01:25.728879   61904 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0912 23:01:25.744949   61904 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0912 23:01:25.754735   61904 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0912 23:01:25.754756   61904 kubeadm.go:157] found existing configuration files:
	
	I0912 23:01:25.754820   61904 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0912 23:01:25.763678   61904 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0912 23:01:25.763740   61904 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0912 23:01:25.772744   61904 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0912 23:01:25.383446   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:25.383892   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 23:01:25.383912   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 23:01:25.383847   63267 retry.go:31] will retry after 1.399744787s: waiting for machine to come up
	I0912 23:01:26.785939   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:26.786489   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 23:01:26.786520   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 23:01:26.786425   63267 retry.go:31] will retry after 1.336566382s: waiting for machine to come up
	I0912 23:01:28.124647   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:28.125141   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 23:01:28.125172   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 23:01:28.125087   63267 retry.go:31] will retry after 1.527292388s: waiting for machine to come up
	I0912 23:01:25.782080   61904 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0912 23:01:25.782143   61904 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0912 23:01:25.791585   61904 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0912 23:01:25.801238   61904 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0912 23:01:25.801315   61904 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0912 23:01:25.810819   61904 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0912 23:01:25.819786   61904 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0912 23:01:25.819888   61904 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0912 23:01:25.829135   61904 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0912 23:01:25.838572   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:01:25.944339   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:01:26.566348   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:01:26.771125   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:01:26.859227   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:01:26.946762   61904 api_server.go:52] waiting for apiserver process to appear ...
	I0912 23:01:26.946884   61904 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:27.447964   61904 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:27.947775   61904 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:28.447415   61904 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:28.947184   61904 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:28.963513   61904 api_server.go:72] duration metric: took 2.016750981s to wait for apiserver process to appear ...
	I0912 23:01:28.963554   61904 api_server.go:88] waiting for apiserver healthz status ...
	I0912 23:01:28.963577   61904 api_server.go:253] Checking apiserver healthz at https://192.168.72.96:8443/healthz ...
	I0912 23:01:28.964155   61904 api_server.go:269] stopped: https://192.168.72.96:8443/healthz: Get "https://192.168.72.96:8443/healthz": dial tcp 192.168.72.96:8443: connect: connection refused
	I0912 23:01:29.463718   61904 api_server.go:253] Checking apiserver healthz at https://192.168.72.96:8443/healthz ...
	I0912 23:01:31.369513   61904 api_server.go:279] https://192.168.72.96:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0912 23:01:31.369555   61904 api_server.go:103] status: https://192.168.72.96:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0912 23:01:31.369571   61904 api_server.go:253] Checking apiserver healthz at https://192.168.72.96:8443/healthz ...
	I0912 23:01:31.423901   61904 api_server.go:279] https://192.168.72.96:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0912 23:01:31.423936   61904 api_server.go:103] status: https://192.168.72.96:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0912 23:01:31.464148   61904 api_server.go:253] Checking apiserver healthz at https://192.168.72.96:8443/healthz ...
	I0912 23:01:31.469495   61904 api_server.go:279] https://192.168.72.96:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0912 23:01:31.469522   61904 api_server.go:103] status: https://192.168.72.96:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0912 23:01:31.963894   61904 api_server.go:253] Checking apiserver healthz at https://192.168.72.96:8443/healthz ...
	I0912 23:01:31.972640   61904 api_server.go:279] https://192.168.72.96:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0912 23:01:31.972671   61904 api_server.go:103] status: https://192.168.72.96:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0912 23:01:32.463809   61904 api_server.go:253] Checking apiserver healthz at https://192.168.72.96:8443/healthz ...
	I0912 23:01:32.475603   61904 api_server.go:279] https://192.168.72.96:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0912 23:01:32.475640   61904 api_server.go:103] status: https://192.168.72.96:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0912 23:01:32.964250   61904 api_server.go:253] Checking apiserver healthz at https://192.168.72.96:8443/healthz ...
	I0912 23:01:32.968710   61904 api_server.go:279] https://192.168.72.96:8443/healthz returned 200:
	ok
	I0912 23:01:32.975414   61904 api_server.go:141] control plane version: v1.31.1
	I0912 23:01:32.975442   61904 api_server.go:131] duration metric: took 4.011879751s to wait for apiserver health ...
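The 403 -> 500 -> 200 progression above is the apiserver finishing startup: anonymous requests to /healthz are first rejected, then the endpoint answers but reports the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks as still failing, and finally it returns ok. The same probe can be reproduced from the host with curl (illustrative, endpoint taken from this run):

    # -k skips TLS verification like the anonymous probe in the log; append ?verbose to get the per-check [+]/[-] breakdown
    curl -k https://192.168.72.96:8443/healthz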
	I0912 23:01:32.975451   61904 cni.go:84] Creating CNI manager for ""
	I0912 23:01:32.975456   61904 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0912 23:01:32.977249   61904 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0912 23:01:29.654841   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:29.655236   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 23:01:29.655264   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 23:01:29.655183   63267 retry.go:31] will retry after 2.34568858s: waiting for machine to come up
	I0912 23:01:32.002617   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:32.003211   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 23:01:32.003242   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 23:01:32.003150   63267 retry.go:31] will retry after 2.273120763s: waiting for machine to come up
	I0912 23:01:34.279665   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:34.280098   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 23:01:34.280122   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 23:01:34.280064   63267 retry.go:31] will retry after 3.937702941s: waiting for machine to come up
	I0912 23:01:32.978610   61904 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0912 23:01:32.994079   61904 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0912 23:01:33.042253   61904 system_pods.go:43] waiting for kube-system pods to appear ...
	I0912 23:01:33.052323   61904 system_pods.go:59] 8 kube-system pods found
	I0912 23:01:33.052361   61904 system_pods.go:61] "coredns-7c65d6cfc9-m8t6h" [93c63198-ebd2-4e88-9be8-912425b1eb84] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0912 23:01:33.052369   61904 system_pods.go:61] "etcd-embed-certs-378112" [cc716756-abda-447a-ad36-bfc89c129bdf] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0912 23:01:33.052376   61904 system_pods.go:61] "kube-apiserver-embed-certs-378112" [039a7348-41bf-481f-9218-3ea0c2ff1373] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0912 23:01:33.052387   61904 system_pods.go:61] "kube-controller-manager-embed-certs-378112" [9bcb8af0-6e4b-405a-94a1-5be70d737cfa] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0912 23:01:33.052396   61904 system_pods.go:61] "kube-proxy-fvbbq" [b172754e-bb5a-40ba-a9be-a7632081defc] Running
	I0912 23:01:33.052406   61904 system_pods.go:61] "kube-scheduler-embed-certs-378112" [f7cb022f-6c15-4c70-916f-39313199effe] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0912 23:01:33.052418   61904 system_pods.go:61] "metrics-server-6867b74b74-kvpqz" [04e47cfd-bada-4cbd-8792-db4edebfb282] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0912 23:01:33.052426   61904 system_pods.go:61] "storage-provisioner" [a1840d2a-8e08-4fa2-9ed5-ac96fb0baf4d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0912 23:01:33.052438   61904 system_pods.go:74] duration metric: took 10.162234ms to wait for pod list to return data ...
	I0912 23:01:33.052448   61904 node_conditions.go:102] verifying NodePressure condition ...
	I0912 23:01:33.060217   61904 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0912 23:01:33.060263   61904 node_conditions.go:123] node cpu capacity is 2
	I0912 23:01:33.060284   61904 node_conditions.go:105] duration metric: took 7.831444ms to run NodePressure ...
	I0912 23:01:33.060338   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:01:33.331554   61904 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0912 23:01:33.337181   61904 kubeadm.go:739] kubelet initialised
	I0912 23:01:33.337202   61904 kubeadm.go:740] duration metric: took 5.622367ms waiting for restarted kubelet to initialise ...
	I0912 23:01:33.337209   61904 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 23:01:33.342427   61904 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-m8t6h" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:33.346602   61904 pod_ready.go:98] node "embed-certs-378112" hosting pod "coredns-7c65d6cfc9-m8t6h" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-378112" has status "Ready":"False"
	I0912 23:01:33.346624   61904 pod_ready.go:82] duration metric: took 4.167981ms for pod "coredns-7c65d6cfc9-m8t6h" in "kube-system" namespace to be "Ready" ...
	E0912 23:01:33.346635   61904 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-378112" hosting pod "coredns-7c65d6cfc9-m8t6h" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-378112" has status "Ready":"False"
	I0912 23:01:33.346643   61904 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-378112" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:33.350240   61904 pod_ready.go:98] node "embed-certs-378112" hosting pod "etcd-embed-certs-378112" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-378112" has status "Ready":"False"
	I0912 23:01:33.350258   61904 pod_ready.go:82] duration metric: took 3.605305ms for pod "etcd-embed-certs-378112" in "kube-system" namespace to be "Ready" ...
	E0912 23:01:33.350267   61904 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-378112" hosting pod "etcd-embed-certs-378112" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-378112" has status "Ready":"False"
	I0912 23:01:33.350274   61904 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-378112" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:33.353756   61904 pod_ready.go:98] node "embed-certs-378112" hosting pod "kube-apiserver-embed-certs-378112" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-378112" has status "Ready":"False"
	I0912 23:01:33.353775   61904 pod_ready.go:82] duration metric: took 3.492388ms for pod "kube-apiserver-embed-certs-378112" in "kube-system" namespace to be "Ready" ...
	E0912 23:01:33.353785   61904 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-378112" hosting pod "kube-apiserver-embed-certs-378112" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-378112" has status "Ready":"False"
	I0912 23:01:33.353792   61904 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-378112" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:33.445529   61904 pod_ready.go:98] node "embed-certs-378112" hosting pod "kube-controller-manager-embed-certs-378112" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-378112" has status "Ready":"False"
	I0912 23:01:33.445574   61904 pod_ready.go:82] duration metric: took 91.770466ms for pod "kube-controller-manager-embed-certs-378112" in "kube-system" namespace to be "Ready" ...
	E0912 23:01:33.445588   61904 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-378112" hosting pod "kube-controller-manager-embed-certs-378112" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-378112" has status "Ready":"False"
	I0912 23:01:33.445597   61904 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-fvbbq" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:33.845443   61904 pod_ready.go:98] node "embed-certs-378112" hosting pod "kube-proxy-fvbbq" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-378112" has status "Ready":"False"
	I0912 23:01:33.845470   61904 pod_ready.go:82] duration metric: took 399.864816ms for pod "kube-proxy-fvbbq" in "kube-system" namespace to be "Ready" ...
	E0912 23:01:33.845479   61904 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-378112" hosting pod "kube-proxy-fvbbq" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-378112" has status "Ready":"False"
	I0912 23:01:33.845484   61904 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-378112" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:34.245943   61904 pod_ready.go:98] node "embed-certs-378112" hosting pod "kube-scheduler-embed-certs-378112" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-378112" has status "Ready":"False"
	I0912 23:01:34.245969   61904 pod_ready.go:82] duration metric: took 400.478543ms for pod "kube-scheduler-embed-certs-378112" in "kube-system" namespace to be "Ready" ...
	E0912 23:01:34.245979   61904 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-378112" hosting pod "kube-scheduler-embed-certs-378112" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-378112" has status "Ready":"False"
	I0912 23:01:34.245985   61904 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:34.651801   61904 pod_ready.go:98] node "embed-certs-378112" hosting pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-378112" has status "Ready":"False"
	I0912 23:01:34.651826   61904 pod_ready.go:82] duration metric: took 405.832705ms for pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace to be "Ready" ...
	E0912 23:01:34.651836   61904 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-378112" hosting pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-378112" has status "Ready":"False"
	I0912 23:01:34.651843   61904 pod_ready.go:39] duration metric: took 1.314625851s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 23:01:34.651859   61904 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0912 23:01:34.665332   61904 ops.go:34] apiserver oom_adj: -16
	I0912 23:01:34.665357   61904 kubeadm.go:597] duration metric: took 8.994610882s to restartPrimaryControlPlane
	I0912 23:01:34.665366   61904 kubeadm.go:394] duration metric: took 9.043796768s to StartCluster
	I0912 23:01:34.665381   61904 settings.go:142] acquiring lock: {Name:mk9c957feafb8d7ccd833ad0c106ef81ecfe5ba8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 23:01:34.665454   61904 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19616-5891/kubeconfig
	I0912 23:01:34.667036   61904 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/kubeconfig: {Name:mkffb46c3e9d2b8baebc7237b48bf41bccf1a52c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 23:01:34.667262   61904 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.96 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0912 23:01:34.667363   61904 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0912 23:01:34.667450   61904 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-378112"
	I0912 23:01:34.667468   61904 config.go:182] Loaded profile config "embed-certs-378112": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 23:01:34.667476   61904 addons.go:69] Setting default-storageclass=true in profile "embed-certs-378112"
	I0912 23:01:34.667543   61904 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-378112"
	I0912 23:01:34.667520   61904 addons.go:69] Setting metrics-server=true in profile "embed-certs-378112"
	I0912 23:01:34.667609   61904 addons.go:234] Setting addon metrics-server=true in "embed-certs-378112"
	W0912 23:01:34.667624   61904 addons.go:243] addon metrics-server should already be in state true
	I0912 23:01:34.667661   61904 host.go:66] Checking if "embed-certs-378112" exists ...
	I0912 23:01:34.667490   61904 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-378112"
	W0912 23:01:34.667710   61904 addons.go:243] addon storage-provisioner should already be in state true
	I0912 23:01:34.667778   61904 host.go:66] Checking if "embed-certs-378112" exists ...
	I0912 23:01:34.667994   61904 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:01:34.668049   61904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:01:34.668138   61904 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:01:34.668155   61904 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:01:34.668171   61904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:01:34.668180   61904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:01:34.670091   61904 out.go:177] * Verifying Kubernetes components...
	I0912 23:01:34.671777   61904 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 23:01:34.683876   61904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37413
	I0912 23:01:34.684025   61904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37371
	I0912 23:01:34.684434   61904 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:01:34.684541   61904 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:01:34.684995   61904 main.go:141] libmachine: Using API Version  1
	I0912 23:01:34.685014   61904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:01:34.685118   61904 main.go:141] libmachine: Using API Version  1
	I0912 23:01:34.685140   61904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:01:34.685468   61904 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:01:34.685468   61904 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:01:34.685668   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetState
	I0912 23:01:34.686104   61904 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:01:34.686156   61904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:01:34.688211   61904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39067
	I0912 23:01:34.688607   61904 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:01:34.689047   61904 addons.go:234] Setting addon default-storageclass=true in "embed-certs-378112"
	W0912 23:01:34.689066   61904 addons.go:243] addon default-storageclass should already be in state true
	I0912 23:01:34.689091   61904 host.go:66] Checking if "embed-certs-378112" exists ...
	I0912 23:01:34.689116   61904 main.go:141] libmachine: Using API Version  1
	I0912 23:01:34.689146   61904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:01:34.689478   61904 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:01:34.689501   61904 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:01:34.689511   61904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:01:34.690057   61904 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:01:34.690083   61904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:01:34.702965   61904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40825
	I0912 23:01:34.703535   61904 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:01:34.704131   61904 main.go:141] libmachine: Using API Version  1
	I0912 23:01:34.704151   61904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:01:34.704178   61904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39229
	I0912 23:01:34.704481   61904 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:01:34.704684   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetState
	I0912 23:01:34.704684   61904 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:01:34.705101   61904 main.go:141] libmachine: Using API Version  1
	I0912 23:01:34.705122   61904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:01:34.705413   61904 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:01:34.705561   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetState
	I0912 23:01:34.706872   61904 main.go:141] libmachine: (embed-certs-378112) Calling .DriverName
	I0912 23:01:34.707279   61904 main.go:141] libmachine: (embed-certs-378112) Calling .DriverName
	I0912 23:01:34.708583   61904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36665
	I0912 23:01:34.708752   61904 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 23:01:34.708828   61904 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0912 23:01:34.708966   61904 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:01:34.709420   61904 main.go:141] libmachine: Using API Version  1
	I0912 23:01:34.709442   61904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:01:34.709901   61904 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:01:34.710348   61904 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:01:34.710352   61904 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0912 23:01:34.710368   61904 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0912 23:01:34.710382   61904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:01:34.710397   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHHostname
	I0912 23:01:34.710705   61904 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0912 23:01:34.713777   61904 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0912 23:01:34.713809   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHHostname
	I0912 23:01:34.717857   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:34.718160   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:34.718335   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:34.718358   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:34.718442   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:34.718473   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:34.718651   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHPort
	I0912 23:01:34.718727   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHPort
	I0912 23:01:34.718812   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHKeyPath
	I0912 23:01:34.718866   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHKeyPath
	I0912 23:01:34.718988   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHUsername
	I0912 23:01:34.719039   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHUsername
	I0912 23:01:34.719144   61904 sshutil.go:53] new ssh client: &{IP:192.168.72.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/embed-certs-378112/id_rsa Username:docker}
	I0912 23:01:34.719169   61904 sshutil.go:53] new ssh client: &{IP:192.168.72.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/embed-certs-378112/id_rsa Username:docker}
	I0912 23:01:34.730675   61904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39163
	I0912 23:01:34.731210   61904 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:01:34.731901   61904 main.go:141] libmachine: Using API Version  1
	I0912 23:01:34.731934   61904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:01:34.732317   61904 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:01:34.732493   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetState
	I0912 23:01:34.734338   61904 main.go:141] libmachine: (embed-certs-378112) Calling .DriverName
	I0912 23:01:34.734601   61904 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0912 23:01:34.734615   61904 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0912 23:01:34.734637   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHHostname
	I0912 23:01:34.737958   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:34.738401   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:34.738429   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:34.738637   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHPort
	I0912 23:01:34.738823   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHKeyPath
	I0912 23:01:34.739015   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHUsername
	I0912 23:01:34.739166   61904 sshutil.go:53] new ssh client: &{IP:192.168.72.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/embed-certs-378112/id_rsa Username:docker}
	I0912 23:01:34.873510   61904 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0912 23:01:34.891329   61904 node_ready.go:35] waiting up to 6m0s for node "embed-certs-378112" to be "Ready" ...
	I0912 23:01:34.991135   61904 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0912 23:01:34.991169   61904 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0912 23:01:35.007241   61904 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0912 23:01:35.018684   61904 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0912 23:01:35.018712   61904 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0912 23:01:35.028842   61904 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0912 23:01:35.047693   61904 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0912 23:01:35.047720   61904 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0912 23:01:35.101399   61904 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0912 23:01:36.046822   61904 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.03953394s)
	I0912 23:01:36.046851   61904 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.017977641s)
	I0912 23:01:36.046882   61904 main.go:141] libmachine: Making call to close driver server
	I0912 23:01:36.046889   61904 main.go:141] libmachine: Making call to close driver server
	I0912 23:01:36.046900   61904 main.go:141] libmachine: (embed-certs-378112) Calling .Close
	I0912 23:01:36.046901   61904 main.go:141] libmachine: (embed-certs-378112) Calling .Close
	I0912 23:01:36.047207   61904 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:01:36.047221   61904 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:01:36.047230   61904 main.go:141] libmachine: Making call to close driver server
	I0912 23:01:36.047237   61904 main.go:141] libmachine: (embed-certs-378112) Calling .Close
	I0912 23:01:36.047269   61904 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:01:36.047280   61904 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:01:36.047312   61904 main.go:141] libmachine: Making call to close driver server
	I0912 23:01:36.047378   61904 main.go:141] libmachine: (embed-certs-378112) Calling .Close
	I0912 23:01:36.047577   61904 main.go:141] libmachine: (embed-certs-378112) DBG | Closing plugin on server side
	I0912 23:01:36.047624   61904 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:01:36.047637   61904 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:01:36.047639   61904 main.go:141] libmachine: (embed-certs-378112) DBG | Closing plugin on server side
	I0912 23:01:36.047691   61904 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:01:36.047705   61904 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:01:36.055732   61904 main.go:141] libmachine: Making call to close driver server
	I0912 23:01:36.055751   61904 main.go:141] libmachine: (embed-certs-378112) Calling .Close
	I0912 23:01:36.056018   61904 main.go:141] libmachine: (embed-certs-378112) DBG | Closing plugin on server side
	I0912 23:01:36.056072   61904 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:01:36.056085   61904 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:01:36.062586   61904 main.go:141] libmachine: Making call to close driver server
	I0912 23:01:36.062612   61904 main.go:141] libmachine: (embed-certs-378112) Calling .Close
	I0912 23:01:36.062906   61904 main.go:141] libmachine: (embed-certs-378112) DBG | Closing plugin on server side
	I0912 23:01:36.062920   61904 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:01:36.062936   61904 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:01:36.062955   61904 main.go:141] libmachine: Making call to close driver server
	I0912 23:01:36.062979   61904 main.go:141] libmachine: (embed-certs-378112) Calling .Close
	I0912 23:01:36.063225   61904 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:01:36.063243   61904 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:01:36.063254   61904 addons.go:475] Verifying addon metrics-server=true in "embed-certs-378112"
	I0912 23:01:36.065321   61904 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0912 23:01:38.221947   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.222408   62386 main.go:141] libmachine: (old-k8s-version-642238) Found IP for machine: 192.168.61.69
	I0912 23:01:38.222437   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has current primary IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.222447   62386 main.go:141] libmachine: (old-k8s-version-642238) Reserving static IP address...
	I0912 23:01:38.222943   62386 main.go:141] libmachine: (old-k8s-version-642238) Reserved static IP address: 192.168.61.69
	I0912 23:01:38.222983   62386 main.go:141] libmachine: (old-k8s-version-642238) Waiting for SSH to be available...
	I0912 23:01:38.223007   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "old-k8s-version-642238", mac: "52:54:00:75:cb:57", ip: "192.168.61.69"} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:38.223057   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | skip adding static IP to network mk-old-k8s-version-642238 - found existing host DHCP lease matching {name: "old-k8s-version-642238", mac: "52:54:00:75:cb:57", ip: "192.168.61.69"}
	I0912 23:01:38.223079   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | Getting to WaitForSSH function...
	I0912 23:01:38.225720   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.226121   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:38.226155   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.226286   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | Using SSH client type: external
	I0912 23:01:38.226308   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | Using SSH private key: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/old-k8s-version-642238/id_rsa (-rw-------)
	I0912 23:01:38.226341   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.69 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19616-5891/.minikube/machines/old-k8s-version-642238/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0912 23:01:38.226357   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | About to run SSH command:
	I0912 23:01:38.226368   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | exit 0
	I0912 23:01:38.357945   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | SSH cmd err, output: <nil>: 
	I0912 23:01:38.358320   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetConfigRaw
	I0912 23:01:38.358887   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetIP
	I0912 23:01:38.361728   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.362098   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:38.362133   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.362372   62386 profile.go:143] Saving config to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238/config.json ...
	I0912 23:01:38.362640   62386 machine.go:93] provisionDockerMachine start ...
	I0912 23:01:38.362663   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .DriverName
	I0912 23:01:38.362897   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHHostname
	I0912 23:01:38.365251   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.365627   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:38.365656   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.365798   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHPort
	I0912 23:01:38.365969   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 23:01:38.366123   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 23:01:38.366251   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHUsername
	I0912 23:01:38.366468   62386 main.go:141] libmachine: Using SSH client type: native
	I0912 23:01:38.366691   62386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.61.69 22 <nil> <nil>}
	I0912 23:01:38.366707   62386 main.go:141] libmachine: About to run SSH command:
	hostname
	I0912 23:01:38.477548   62386 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0912 23:01:38.477575   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetMachineName
	I0912 23:01:38.477818   62386 buildroot.go:166] provisioning hostname "old-k8s-version-642238"
	I0912 23:01:38.477843   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetMachineName
	I0912 23:01:38.478029   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHHostname
	I0912 23:01:38.480368   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.480660   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:38.480683   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.480802   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHPort
	I0912 23:01:38.480981   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 23:01:38.481142   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 23:01:38.481287   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHUsername
	I0912 23:01:38.481630   62386 main.go:141] libmachine: Using SSH client type: native
	I0912 23:01:38.481846   62386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.61.69 22 <nil> <nil>}
	I0912 23:01:38.481864   62386 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-642238 && echo "old-k8s-version-642238" | sudo tee /etc/hostname
	I0912 23:01:38.606686   62386 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-642238
	
	I0912 23:01:38.606721   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHHostname
	I0912 23:01:38.609331   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.609682   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:38.609705   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.609867   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHPort
	I0912 23:01:38.610071   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 23:01:38.610297   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 23:01:38.610463   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHUsername
	I0912 23:01:38.610792   62386 main.go:141] libmachine: Using SSH client type: native
	I0912 23:01:38.610974   62386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.61.69 22 <nil> <nil>}
	I0912 23:01:38.610991   62386 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-642238' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-642238/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-642238' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0912 23:01:38.729561   62386 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0912 23:01:38.729588   62386 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19616-5891/.minikube CaCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19616-5891/.minikube}
	I0912 23:01:38.729664   62386 buildroot.go:174] setting up certificates
	I0912 23:01:38.729674   62386 provision.go:84] configureAuth start
	I0912 23:01:38.729686   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetMachineName
	I0912 23:01:38.729945   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetIP
	I0912 23:01:38.732718   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.733269   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:38.733302   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.733481   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHHostname
	I0912 23:01:38.735610   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.735925   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:38.735950   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.736074   62386 provision.go:143] copyHostCerts
	I0912 23:01:38.736129   62386 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem, removing ...
	I0912 23:01:38.736142   62386 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem
	I0912 23:01:38.736197   62386 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem (1082 bytes)
	I0912 23:01:38.736293   62386 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem, removing ...
	I0912 23:01:38.736306   62386 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem
	I0912 23:01:38.736330   62386 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem (1123 bytes)
	I0912 23:01:38.736390   62386 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem, removing ...
	I0912 23:01:38.736397   62386 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem
	I0912 23:01:38.736413   62386 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem (1679 bytes)
	I0912 23:01:38.736460   62386 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-642238 san=[127.0.0.1 192.168.61.69 localhost minikube old-k8s-version-642238]
	I0912 23:01:38.940760   62386 provision.go:177] copyRemoteCerts
	I0912 23:01:38.940819   62386 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0912 23:01:38.940846   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHHostname
	I0912 23:01:38.943954   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.944274   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:38.944304   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.944479   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHPort
	I0912 23:01:38.944688   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 23:01:38.944884   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHUsername
	I0912 23:01:38.945023   62386 sshutil.go:53] new ssh client: &{IP:192.168.61.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/old-k8s-version-642238/id_rsa Username:docker}
	I0912 23:01:39.032396   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0912 23:01:39.055559   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0912 23:01:39.081979   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0912 23:01:39.108245   62386 provision.go:87] duration metric: took 378.558125ms to configureAuth
	I0912 23:01:39.108276   62386 buildroot.go:189] setting minikube options for container-runtime
	I0912 23:01:39.108456   62386 config.go:182] Loaded profile config "old-k8s-version-642238": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0912 23:01:39.108515   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHHostname
	I0912 23:01:39.111321   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:39.111737   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:39.111759   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:39.111956   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHPort
	I0912 23:01:39.112175   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 23:01:39.112399   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 23:01:39.112552   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHUsername
	I0912 23:01:39.112721   62386 main.go:141] libmachine: Using SSH client type: native
	I0912 23:01:39.112939   62386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.61.69 22 <nil> <nil>}
	I0912 23:01:39.112955   62386 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0912 23:01:39.582214   62943 start.go:364] duration metric: took 1m17.588760987s to acquireMachinesLock for "no-preload-380092"
	I0912 23:01:39.582282   62943 start.go:96] Skipping create...Using existing machine configuration
	I0912 23:01:39.582290   62943 fix.go:54] fixHost starting: 
	I0912 23:01:39.582684   62943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:01:39.582733   62943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:01:39.598752   62943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39263
	I0912 23:01:39.599113   62943 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:01:39.599558   62943 main.go:141] libmachine: Using API Version  1
	I0912 23:01:39.599578   62943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:01:39.599939   62943 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:01:39.600128   62943 main.go:141] libmachine: (no-preload-380092) Calling .DriverName
	I0912 23:01:39.600299   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetState
	I0912 23:01:39.601919   62943 fix.go:112] recreateIfNeeded on no-preload-380092: state=Stopped err=<nil>
	I0912 23:01:39.601948   62943 main.go:141] libmachine: (no-preload-380092) Calling .DriverName
	W0912 23:01:39.602105   62943 fix.go:138] unexpected machine state, will restart: <nil>
	I0912 23:01:39.604113   62943 out.go:177] * Restarting existing kvm2 VM for "no-preload-380092" ...
	I0912 23:01:36.066914   61904 addons.go:510] duration metric: took 1.399549943s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0912 23:01:36.894531   61904 node_ready.go:53] node "embed-certs-378112" has status "Ready":"False"
	I0912 23:01:38.895084   61904 node_ready.go:53] node "embed-certs-378112" has status "Ready":"False"
	I0912 23:01:39.333662   62386 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0912 23:01:39.333695   62386 machine.go:96] duration metric: took 971.039233ms to provisionDockerMachine
	I0912 23:01:39.333712   62386 start.go:293] postStartSetup for "old-k8s-version-642238" (driver="kvm2")
	I0912 23:01:39.333728   62386 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0912 23:01:39.333755   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .DriverName
	I0912 23:01:39.334078   62386 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0912 23:01:39.334110   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHHostname
	I0912 23:01:39.336759   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:39.337144   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:39.337185   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:39.337326   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHPort
	I0912 23:01:39.337492   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 23:01:39.337649   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHUsername
	I0912 23:01:39.337757   62386 sshutil.go:53] new ssh client: &{IP:192.168.61.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/old-k8s-version-642238/id_rsa Username:docker}
	I0912 23:01:39.424344   62386 ssh_runner.go:195] Run: cat /etc/os-release
	I0912 23:01:39.428560   62386 info.go:137] Remote host: Buildroot 2023.02.9
	I0912 23:01:39.428586   62386 filesync.go:126] Scanning /home/jenkins/minikube-integration/19616-5891/.minikube/addons for local assets ...
	I0912 23:01:39.428651   62386 filesync.go:126] Scanning /home/jenkins/minikube-integration/19616-5891/.minikube/files for local assets ...
	I0912 23:01:39.428720   62386 filesync.go:149] local asset: /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem -> 130832.pem in /etc/ssl/certs
	I0912 23:01:39.428822   62386 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0912 23:01:39.438578   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem --> /etc/ssl/certs/130832.pem (1708 bytes)
	I0912 23:01:39.466955   62386 start.go:296] duration metric: took 133.228748ms for postStartSetup
	I0912 23:01:39.466993   62386 fix.go:56] duration metric: took 19.507989112s for fixHost
	I0912 23:01:39.467011   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHHostname
	I0912 23:01:39.469732   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:39.470141   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:39.470177   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:39.470446   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHPort
	I0912 23:01:39.470662   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 23:01:39.470820   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 23:01:39.470952   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHUsername
	I0912 23:01:39.471079   62386 main.go:141] libmachine: Using SSH client type: native
	I0912 23:01:39.471234   62386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.61.69 22 <nil> <nil>}
	I0912 23:01:39.471243   62386 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0912 23:01:39.582078   62386 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726182099.559242358
	
	I0912 23:01:39.582101   62386 fix.go:216] guest clock: 1726182099.559242358
	I0912 23:01:39.582108   62386 fix.go:229] Guest: 2024-09-12 23:01:39.559242358 +0000 UTC Remote: 2024-09-12 23:01:39.466996536 +0000 UTC m=+200.180679357 (delta=92.245822ms)
	I0912 23:01:39.582148   62386 fix.go:200] guest clock delta is within tolerance: 92.245822ms
	I0912 23:01:39.582153   62386 start.go:83] releasing machines lock for "old-k8s-version-642238", held for 19.623187273s
	I0912 23:01:39.582177   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .DriverName
	I0912 23:01:39.582449   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetIP
	I0912 23:01:39.585170   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:39.585556   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:39.585595   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:39.585770   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .DriverName
	I0912 23:01:39.586282   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .DriverName
	I0912 23:01:39.586471   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .DriverName
	I0912 23:01:39.586548   62386 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0912 23:01:39.586590   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHHostname
	I0912 23:01:39.586706   62386 ssh_runner.go:195] Run: cat /version.json
	I0912 23:01:39.586734   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHHostname
	I0912 23:01:39.589355   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:39.589769   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:39.589802   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:39.589824   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:39.589990   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHPort
	I0912 23:01:39.590163   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 23:01:39.590229   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:39.590258   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:39.590331   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHUsername
	I0912 23:01:39.590413   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHPort
	I0912 23:01:39.590491   62386 sshutil.go:53] new ssh client: &{IP:192.168.61.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/old-k8s-version-642238/id_rsa Username:docker}
	I0912 23:01:39.590525   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 23:01:39.590621   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHUsername
	I0912 23:01:39.590717   62386 sshutil.go:53] new ssh client: &{IP:192.168.61.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/old-k8s-version-642238/id_rsa Username:docker}
	I0912 23:01:39.709188   62386 ssh_runner.go:195] Run: systemctl --version
	I0912 23:01:39.714703   62386 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0912 23:01:39.867112   62386 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0912 23:01:39.874818   62386 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0912 23:01:39.874897   62386 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0912 23:01:39.894532   62386 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0912 23:01:39.894558   62386 start.go:495] detecting cgroup driver to use...
	I0912 23:01:39.894611   62386 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0912 23:01:39.911715   62386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0912 23:01:39.927113   62386 docker.go:217] disabling cri-docker service (if available) ...
	I0912 23:01:39.927181   62386 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0912 23:01:39.946720   62386 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0912 23:01:39.966602   62386 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0912 23:01:40.132813   62386 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0912 23:01:40.318613   62386 docker.go:233] disabling docker service ...
	I0912 23:01:40.318764   62386 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0912 23:01:40.337557   62386 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0912 23:01:40.355312   62386 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0912 23:01:40.507081   62386 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0912 23:01:40.623129   62386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0912 23:01:40.637980   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0912 23:01:40.658137   62386 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0912 23:01:40.658197   62386 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:40.672985   62386 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0912 23:01:40.673041   62386 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:40.687684   62386 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:40.699586   62386 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:40.711468   62386 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0912 23:01:40.722380   62386 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0912 23:01:40.733057   62386 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0912 23:01:40.733126   62386 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0912 23:01:40.748577   62386 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0912 23:01:40.758735   62386 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 23:01:40.883686   62386 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0912 23:01:40.977996   62386 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0912 23:01:40.978065   62386 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0912 23:01:40.984192   62386 start.go:563] Will wait 60s for crictl version
	I0912 23:01:40.984257   62386 ssh_runner.go:195] Run: which crictl
	I0912 23:01:40.988379   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0912 23:01:41.027758   62386 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0912 23:01:41.027855   62386 ssh_runner.go:195] Run: crio --version
	I0912 23:01:41.057198   62386 ssh_runner.go:195] Run: crio --version
	I0912 23:01:41.091414   62386 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0912 23:01:39.605199   62943 main.go:141] libmachine: (no-preload-380092) Calling .Start
	I0912 23:01:39.605356   62943 main.go:141] libmachine: (no-preload-380092) Ensuring networks are active...
	I0912 23:01:39.606295   62943 main.go:141] libmachine: (no-preload-380092) Ensuring network default is active
	I0912 23:01:39.606540   62943 main.go:141] libmachine: (no-preload-380092) Ensuring network mk-no-preload-380092 is active
	I0912 23:01:39.606902   62943 main.go:141] libmachine: (no-preload-380092) Getting domain xml...
	I0912 23:01:39.607582   62943 main.go:141] libmachine: (no-preload-380092) Creating domain...
	I0912 23:01:40.958156   62943 main.go:141] libmachine: (no-preload-380092) Waiting to get IP...
	I0912 23:01:40.959304   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:40.959775   62943 main.go:141] libmachine: (no-preload-380092) DBG | unable to find current IP address of domain no-preload-380092 in network mk-no-preload-380092
	I0912 23:01:40.959848   62943 main.go:141] libmachine: (no-preload-380092) DBG | I0912 23:01:40.959761   63470 retry.go:31] will retry after 260.507819ms: waiting for machine to come up
	I0912 23:01:41.222360   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:41.222860   62943 main.go:141] libmachine: (no-preload-380092) DBG | unable to find current IP address of domain no-preload-380092 in network mk-no-preload-380092
	I0912 23:01:41.222897   62943 main.go:141] libmachine: (no-preload-380092) DBG | I0912 23:01:41.222793   63470 retry.go:31] will retry after 325.875384ms: waiting for machine to come up
	I0912 23:01:41.550174   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:41.550617   62943 main.go:141] libmachine: (no-preload-380092) DBG | unable to find current IP address of domain no-preload-380092 in network mk-no-preload-380092
	I0912 23:01:41.550642   62943 main.go:141] libmachine: (no-preload-380092) DBG | I0912 23:01:41.550563   63470 retry.go:31] will retry after 466.239328ms: waiting for machine to come up
	I0912 23:01:41.092686   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetIP
	I0912 23:01:41.096196   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:41.096806   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:41.096843   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:41.097167   62386 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0912 23:01:41.101509   62386 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0912 23:01:41.115914   62386 kubeadm.go:883] updating cluster {Name:old-k8s-version-642238 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-642238 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.69 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0912 23:01:41.116230   62386 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0912 23:01:41.116327   62386 ssh_runner.go:195] Run: sudo crictl images --output json
	I0912 23:01:41.164309   62386 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0912 23:01:41.164389   62386 ssh_runner.go:195] Run: which lz4
	I0912 23:01:41.168669   62386 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0912 23:01:41.172973   62386 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0912 23:01:41.173008   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0912 23:01:42.662843   62386 crio.go:462] duration metric: took 1.494204864s to copy over tarball
	I0912 23:01:42.662921   62386 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0912 23:01:40.895957   61904 node_ready.go:53] node "embed-certs-378112" has status "Ready":"False"
	I0912 23:01:41.896265   61904 node_ready.go:49] node "embed-certs-378112" has status "Ready":"True"
	I0912 23:01:41.896293   61904 node_ready.go:38] duration metric: took 7.004932553s for node "embed-certs-378112" to be "Ready" ...
	I0912 23:01:41.896304   61904 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 23:01:41.903665   61904 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-m8t6h" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:41.911837   61904 pod_ready.go:93] pod "coredns-7c65d6cfc9-m8t6h" in "kube-system" namespace has status "Ready":"True"
	I0912 23:01:41.911862   61904 pod_ready.go:82] duration metric: took 8.168974ms for pod "coredns-7c65d6cfc9-m8t6h" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:41.911875   61904 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-378112" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:41.920007   61904 pod_ready.go:93] pod "etcd-embed-certs-378112" in "kube-system" namespace has status "Ready":"True"
	I0912 23:01:41.920032   61904 pod_ready.go:82] duration metric: took 8.150491ms for pod "etcd-embed-certs-378112" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:41.920044   61904 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-378112" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:43.928585   61904 pod_ready.go:103] pod "kube-apiserver-embed-certs-378112" in "kube-system" namespace has status "Ready":"False"
	I0912 23:01:42.018082   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:42.018505   62943 main.go:141] libmachine: (no-preload-380092) DBG | unable to find current IP address of domain no-preload-380092 in network mk-no-preload-380092
	I0912 23:01:42.018534   62943 main.go:141] libmachine: (no-preload-380092) DBG | I0912 23:01:42.018465   63470 retry.go:31] will retry after 538.2428ms: waiting for machine to come up
	I0912 23:01:42.558175   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:42.558612   62943 main.go:141] libmachine: (no-preload-380092) DBG | unable to find current IP address of domain no-preload-380092 in network mk-no-preload-380092
	I0912 23:01:42.558649   62943 main.go:141] libmachine: (no-preload-380092) DBG | I0912 23:01:42.558579   63470 retry.go:31] will retry after 653.024741ms: waiting for machine to come up
	I0912 23:01:43.213349   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:43.213963   62943 main.go:141] libmachine: (no-preload-380092) DBG | unable to find current IP address of domain no-preload-380092 in network mk-no-preload-380092
	I0912 23:01:43.213991   62943 main.go:141] libmachine: (no-preload-380092) DBG | I0912 23:01:43.213926   63470 retry.go:31] will retry after 936.091256ms: waiting for machine to come up
	I0912 23:01:44.152459   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:44.152892   62943 main.go:141] libmachine: (no-preload-380092) DBG | unable to find current IP address of domain no-preload-380092 in network mk-no-preload-380092
	I0912 23:01:44.152931   62943 main.go:141] libmachine: (no-preload-380092) DBG | I0912 23:01:44.152841   63470 retry.go:31] will retry after 947.677491ms: waiting for machine to come up
	I0912 23:01:45.102330   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:45.102777   62943 main.go:141] libmachine: (no-preload-380092) DBG | unable to find current IP address of domain no-preload-380092 in network mk-no-preload-380092
	I0912 23:01:45.102803   62943 main.go:141] libmachine: (no-preload-380092) DBG | I0912 23:01:45.102730   63470 retry.go:31] will retry after 1.076341568s: waiting for machine to come up
	I0912 23:01:46.181138   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:46.181600   62943 main.go:141] libmachine: (no-preload-380092) DBG | unable to find current IP address of domain no-preload-380092 in network mk-no-preload-380092
	I0912 23:01:46.181659   62943 main.go:141] libmachine: (no-preload-380092) DBG | I0912 23:01:46.181529   63470 retry.go:31] will retry after 1.256599307s: waiting for machine to come up
	I0912 23:01:45.728604   62386 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.065648968s)
	I0912 23:01:45.728636   62386 crio.go:469] duration metric: took 3.065759694s to extract the tarball
	I0912 23:01:45.728646   62386 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0912 23:01:45.770020   62386 ssh_runner.go:195] Run: sudo crictl images --output json
	I0912 23:01:45.803238   62386 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0912 23:01:45.803263   62386 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0912 23:01:45.803356   62386 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0912 23:01:45.803393   62386 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0912 23:01:45.803411   62386 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0912 23:01:45.803433   62386 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0912 23:01:45.803482   62386 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0912 23:01:45.803487   62386 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0912 23:01:45.803358   62386 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 23:01:45.803456   62386 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0912 23:01:45.805495   62386 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0912 23:01:45.805522   62386 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0912 23:01:45.805549   62386 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0912 23:01:45.805538   62386 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0912 23:01:45.805583   62386 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0912 23:01:45.805500   62386 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0912 23:01:45.805498   62386 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 23:01:45.805503   62386 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0912 23:01:46.036001   62386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0912 23:01:46.053248   62386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0912 23:01:46.053339   62386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0912 23:01:46.055973   62386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0912 23:01:46.070206   62386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0912 23:01:46.079999   62386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0912 23:01:46.109937   62386 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0912 23:01:46.109989   62386 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0912 23:01:46.110039   62386 ssh_runner.go:195] Run: which crictl
	I0912 23:01:46.162798   62386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0912 23:01:46.224302   62386 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0912 23:01:46.224345   62386 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0912 23:01:46.224375   62386 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0912 23:01:46.224392   62386 ssh_runner.go:195] Run: which crictl
	I0912 23:01:46.224413   62386 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0912 23:01:46.224418   62386 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0912 23:01:46.224452   62386 ssh_runner.go:195] Run: which crictl
	I0912 23:01:46.224451   62386 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0912 23:01:46.224495   62386 ssh_runner.go:195] Run: which crictl
	I0912 23:01:46.224510   62386 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0912 23:01:46.224529   62386 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0912 23:01:46.224551   62386 ssh_runner.go:195] Run: which crictl
	I0912 23:01:46.243459   62386 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0912 23:01:46.243561   62386 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0912 23:01:46.243584   62386 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0912 23:01:46.243596   62386 ssh_runner.go:195] Run: which crictl
	I0912 23:01:46.243619   62386 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0912 23:01:46.243648   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0912 23:01:46.243658   62386 ssh_runner.go:195] Run: which crictl
	I0912 23:01:46.243619   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0912 23:01:46.243504   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0912 23:01:46.243737   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0912 23:01:46.243786   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0912 23:01:46.347085   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0912 23:01:46.347138   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0912 23:01:46.347184   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0912 23:01:46.354548   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0912 23:01:46.354548   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0912 23:01:46.354623   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0912 23:01:46.354658   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0912 23:01:46.490548   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0912 23:01:46.490655   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0912 23:01:46.490664   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0912 23:01:46.519541   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0912 23:01:46.519572   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0912 23:01:46.519583   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0912 23:01:46.519631   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0912 23:01:46.650941   62386 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0912 23:01:46.651102   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0912 23:01:46.651115   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0912 23:01:46.665864   62386 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0912 23:01:46.669346   62386 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0912 23:01:46.669393   62386 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0912 23:01:46.669433   62386 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0912 23:01:46.713909   62386 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0912 23:01:46.713928   62386 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0912 23:01:46.947952   62386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 23:01:47.093308   62386 cache_images.go:92] duration metric: took 1.29002863s to LoadCachedImages
	W0912 23:01:47.093414   62386 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I0912 23:01:47.093432   62386 kubeadm.go:934] updating node { 192.168.61.69 8443 v1.20.0 crio true true} ...
	I0912 23:01:47.093567   62386 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-642238 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.69
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-642238 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0912 23:01:47.093677   62386 ssh_runner.go:195] Run: crio config
	I0912 23:01:47.140625   62386 cni.go:84] Creating CNI manager for ""
	I0912 23:01:47.140651   62386 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0912 23:01:47.140665   62386 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0912 23:01:47.140683   62386 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.69 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-642238 NodeName:old-k8s-version-642238 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.69"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.69 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0912 23:01:47.140848   62386 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.69
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-642238"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.69
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.69"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0912 23:01:47.140918   62386 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0912 23:01:47.151096   62386 binaries.go:44] Found k8s binaries, skipping transfer
	I0912 23:01:47.151174   62386 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0912 23:01:47.161100   62386 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0912 23:01:47.178267   62386 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0912 23:01:47.196468   62386 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0912 23:01:47.215215   62386 ssh_runner.go:195] Run: grep 192.168.61.69	control-plane.minikube.internal$ /etc/hosts
	I0912 23:01:47.219835   62386 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.69	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0912 23:01:47.234386   62386 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 23:01:47.374152   62386 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0912 23:01:47.394130   62386 certs.go:68] Setting up /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238 for IP: 192.168.61.69
	I0912 23:01:47.394155   62386 certs.go:194] generating shared ca certs ...
	I0912 23:01:47.394174   62386 certs.go:226] acquiring lock for ca certs: {Name:mk5e19f2c2757edc3fcb6b43a39efee0885b349c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 23:01:47.394399   62386 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.key
	I0912 23:01:47.394459   62386 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.key
	I0912 23:01:47.394474   62386 certs.go:256] generating profile certs ...
	I0912 23:01:47.394591   62386 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238/client.key
	I0912 23:01:47.394663   62386 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238/apiserver.key.fcb0a37b
	I0912 23:01:47.394713   62386 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238/proxy-client.key
	I0912 23:01:47.394881   62386 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083.pem (1338 bytes)
	W0912 23:01:47.394922   62386 certs.go:480] ignoring /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083_empty.pem, impossibly tiny 0 bytes
	I0912 23:01:47.394936   62386 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem (1679 bytes)
	I0912 23:01:47.394980   62386 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem (1082 bytes)
	I0912 23:01:47.395016   62386 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem (1123 bytes)
	I0912 23:01:47.395050   62386 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem (1679 bytes)
	I0912 23:01:47.395103   62386 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem (1708 bytes)
	I0912 23:01:47.396058   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0912 23:01:47.436356   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0912 23:01:47.470442   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0912 23:01:47.496440   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0912 23:01:47.522541   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0912 23:01:47.547406   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0912 23:01:47.575687   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0912 23:01:47.602110   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0912 23:01:47.628233   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0912 23:01:47.659161   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083.pem --> /usr/share/ca-certificates/13083.pem (1338 bytes)
	I0912 23:01:47.698813   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem --> /usr/share/ca-certificates/130832.pem (1708 bytes)
	I0912 23:01:47.722494   62386 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0912 23:01:47.739479   62386 ssh_runner.go:195] Run: openssl version
	I0912 23:01:47.745476   62386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130832.pem && ln -fs /usr/share/ca-certificates/130832.pem /etc/ssl/certs/130832.pem"
	I0912 23:01:47.756396   62386 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130832.pem
	I0912 23:01:47.760904   62386 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 12 21:47 /usr/share/ca-certificates/130832.pem
	I0912 23:01:47.760983   62386 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130832.pem
	I0912 23:01:47.767122   62386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130832.pem /etc/ssl/certs/3ec20f2e.0"
	I0912 23:01:47.778372   62386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0912 23:01:47.789359   62386 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0912 23:01:47.794138   62386 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 12 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0912 23:01:47.794205   62386 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0912 23:01:47.799780   62386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0912 23:01:47.810735   62386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13083.pem && ln -fs /usr/share/ca-certificates/13083.pem /etc/ssl/certs/13083.pem"
	I0912 23:01:47.821361   62386 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13083.pem
	I0912 23:01:47.825785   62386 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 12 21:47 /usr/share/ca-certificates/13083.pem
	I0912 23:01:47.825848   62386 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13083.pem
	I0912 23:01:47.832591   62386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13083.pem /etc/ssl/certs/51391683.0"
	I0912 23:01:47.844637   62386 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0912 23:01:47.849313   62386 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0912 23:01:47.855337   62386 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0912 23:01:47.861492   62386 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0912 23:01:47.868028   62386 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0912 23:01:47.874215   62386 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0912 23:01:47.880279   62386 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0912 23:01:47.886478   62386 kubeadm.go:392] StartCluster: {Name:old-k8s-version-642238 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-642238 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.69 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 23:01:47.886579   62386 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0912 23:01:47.886665   62386 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0912 23:01:47.929887   62386 cri.go:89] found id: ""
	I0912 23:01:47.929965   62386 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0912 23:01:47.940988   62386 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0912 23:01:47.941014   62386 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0912 23:01:47.941071   62386 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0912 23:01:47.951357   62386 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0912 23:01:47.952314   62386 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-642238" does not appear in /home/jenkins/minikube-integration/19616-5891/kubeconfig
	I0912 23:01:47.952929   62386 kubeconfig.go:62] /home/jenkins/minikube-integration/19616-5891/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-642238" cluster setting kubeconfig missing "old-k8s-version-642238" context setting]
	I0912 23:01:47.953869   62386 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/kubeconfig: {Name:mkffb46c3e9d2b8baebc7237b48bf41bccf1a52c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 23:01:47.961244   62386 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0912 23:01:47.973427   62386 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.69
	I0912 23:01:47.973462   62386 kubeadm.go:1160] stopping kube-system containers ...
	I0912 23:01:47.973476   62386 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0912 23:01:47.973530   62386 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0912 23:01:48.008401   62386 cri.go:89] found id: ""
	I0912 23:01:48.008479   62386 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0912 23:01:48.024605   62386 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0912 23:01:48.034256   62386 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0912 23:01:48.034282   62386 kubeadm.go:157] found existing configuration files:
	
	I0912 23:01:48.034341   62386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0912 23:01:48.043468   62386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0912 23:01:48.043533   62386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0912 23:01:48.053241   62386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0912 23:01:48.062653   62386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0912 23:01:48.062728   62386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0912 23:01:48.073213   62386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0912 23:01:48.085060   62386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0912 23:01:48.085136   62386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0912 23:01:48.095722   62386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0912 23:01:48.105099   62386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0912 23:01:48.105169   62386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0912 23:01:48.114362   62386 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0912 23:01:48.123856   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:01:48.250258   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:01:48.824441   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:01:49.045340   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:01:49.151009   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:01:49.245161   62386 api_server.go:52] waiting for apiserver process to appear ...
	I0912 23:01:49.245239   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:45.927266   61904 pod_ready.go:93] pod "kube-apiserver-embed-certs-378112" in "kube-system" namespace has status "Ready":"True"
	I0912 23:01:45.927293   61904 pod_ready.go:82] duration metric: took 4.007240345s for pod "kube-apiserver-embed-certs-378112" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:45.927307   61904 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-378112" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:46.456083   61904 pod_ready.go:93] pod "kube-controller-manager-embed-certs-378112" in "kube-system" namespace has status "Ready":"True"
	I0912 23:01:46.456111   61904 pod_ready.go:82] duration metric: took 528.7947ms for pod "kube-controller-manager-embed-certs-378112" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:46.456125   61904 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fvbbq" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:46.461632   61904 pod_ready.go:93] pod "kube-proxy-fvbbq" in "kube-system" namespace has status "Ready":"True"
	I0912 23:01:46.461659   61904 pod_ready.go:82] duration metric: took 5.526604ms for pod "kube-proxy-fvbbq" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:46.461673   61904 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-378112" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:46.467128   61904 pod_ready.go:93] pod "kube-scheduler-embed-certs-378112" in "kube-system" namespace has status "Ready":"True"
	I0912 23:01:46.467160   61904 pod_ready.go:82] duration metric: took 5.477201ms for pod "kube-scheduler-embed-certs-378112" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:46.467174   61904 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:48.474736   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:01:50.474846   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:01:47.439687   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:47.440281   62943 main.go:141] libmachine: (no-preload-380092) DBG | unable to find current IP address of domain no-preload-380092 in network mk-no-preload-380092
	I0912 23:01:47.440312   62943 main.go:141] libmachine: (no-preload-380092) DBG | I0912 23:01:47.440140   63470 retry.go:31] will retry after 1.600662248s: waiting for machine to come up
	I0912 23:01:49.042962   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:49.043536   62943 main.go:141] libmachine: (no-preload-380092) DBG | unable to find current IP address of domain no-preload-380092 in network mk-no-preload-380092
	I0912 23:01:49.043569   62943 main.go:141] libmachine: (no-preload-380092) DBG | I0912 23:01:49.043481   63470 retry.go:31] will retry after 2.53148931s: waiting for machine to come up
	I0912 23:01:51.577526   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:51.578022   62943 main.go:141] libmachine: (no-preload-380092) DBG | unable to find current IP address of domain no-preload-380092 in network mk-no-preload-380092
	I0912 23:01:51.578139   62943 main.go:141] libmachine: (no-preload-380092) DBG | I0912 23:01:51.577965   63470 retry.go:31] will retry after 2.603355474s: waiting for machine to come up
	I0912 23:01:49.745632   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:50.245841   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:50.746368   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:51.245741   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:51.745708   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:52.246143   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:52.745402   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:53.245790   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:53.745965   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:54.246368   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:52.973232   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:01:54.974788   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:01:54.183119   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:54.183702   62943 main.go:141] libmachine: (no-preload-380092) DBG | unable to find current IP address of domain no-preload-380092 in network mk-no-preload-380092
	I0912 23:01:54.183745   62943 main.go:141] libmachine: (no-preload-380092) DBG | I0912 23:01:54.183655   63470 retry.go:31] will retry after 2.867321114s: waiting for machine to come up
	I0912 23:01:58.698415   61354 start.go:364] duration metric: took 53.897667909s to acquireMachinesLock for "default-k8s-diff-port-702201"
	I0912 23:01:58.698489   61354 start.go:96] Skipping create...Using existing machine configuration
	I0912 23:01:58.698499   61354 fix.go:54] fixHost starting: 
	I0912 23:01:58.698908   61354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:01:58.698938   61354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:01:58.716203   61354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42739
	I0912 23:01:58.716658   61354 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:01:58.717117   61354 main.go:141] libmachine: Using API Version  1
	I0912 23:01:58.717141   61354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:01:58.717489   61354 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:01:58.717717   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .DriverName
	I0912 23:01:58.717873   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetState
	I0912 23:01:58.719787   61354 fix.go:112] recreateIfNeeded on default-k8s-diff-port-702201: state=Stopped err=<nil>
	I0912 23:01:58.719810   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .DriverName
	W0912 23:01:58.719957   61354 fix.go:138] unexpected machine state, will restart: <nil>
	I0912 23:01:58.723531   61354 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-702201" ...
	I0912 23:01:54.745915   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:55.245740   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:55.745435   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:56.245679   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:56.745309   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:57.246032   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:57.745362   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:58.245409   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:58.745470   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:59.245307   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:57.052229   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:57.052788   62943 main.go:141] libmachine: (no-preload-380092) Found IP for machine: 192.168.50.253
	I0912 23:01:57.052816   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has current primary IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:57.052822   62943 main.go:141] libmachine: (no-preload-380092) Reserving static IP address...
	I0912 23:01:57.053251   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "no-preload-380092", mac: "52:54:00:d6:80:d3", ip: "192.168.50.253"} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:01:57.053275   62943 main.go:141] libmachine: (no-preload-380092) Reserved static IP address: 192.168.50.253
	I0912 23:01:57.053285   62943 main.go:141] libmachine: (no-preload-380092) DBG | skip adding static IP to network mk-no-preload-380092 - found existing host DHCP lease matching {name: "no-preload-380092", mac: "52:54:00:d6:80:d3", ip: "192.168.50.253"}
	I0912 23:01:57.053299   62943 main.go:141] libmachine: (no-preload-380092) DBG | Getting to WaitForSSH function...
	I0912 23:01:57.053330   62943 main.go:141] libmachine: (no-preload-380092) Waiting for SSH to be available...
	I0912 23:01:57.055927   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:57.056326   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:01:57.056407   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:57.056569   62943 main.go:141] libmachine: (no-preload-380092) DBG | Using SSH client type: external
	I0912 23:01:57.056583   62943 main.go:141] libmachine: (no-preload-380092) DBG | Using SSH private key: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/no-preload-380092/id_rsa (-rw-------)
	I0912 23:01:57.056610   62943 main.go:141] libmachine: (no-preload-380092) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.253 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19616-5891/.minikube/machines/no-preload-380092/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0912 23:01:57.056622   62943 main.go:141] libmachine: (no-preload-380092) DBG | About to run SSH command:
	I0912 23:01:57.056631   62943 main.go:141] libmachine: (no-preload-380092) DBG | exit 0
	I0912 23:01:57.181479   62943 main.go:141] libmachine: (no-preload-380092) DBG | SSH cmd err, output: <nil>: 
	I0912 23:01:57.181842   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetConfigRaw
	I0912 23:01:57.182453   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetIP
	I0912 23:01:57.185257   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:57.185670   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:01:57.185709   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:57.185982   62943 profile.go:143] Saving config to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/no-preload-380092/config.json ...
	I0912 23:01:57.186232   62943 machine.go:93] provisionDockerMachine start ...
	I0912 23:01:57.186254   62943 main.go:141] libmachine: (no-preload-380092) Calling .DriverName
	I0912 23:01:57.186468   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHHostname
	I0912 23:01:57.188948   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:57.189336   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:01:57.189385   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:57.189533   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHPort
	I0912 23:01:57.189705   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHKeyPath
	I0912 23:01:57.189834   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHKeyPath
	I0912 23:01:57.189954   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHUsername
	I0912 23:01:57.190111   62943 main.go:141] libmachine: Using SSH client type: native
	I0912 23:01:57.190349   62943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.50.253 22 <nil> <nil>}
	I0912 23:01:57.190367   62943 main.go:141] libmachine: About to run SSH command:
	hostname
	I0912 23:01:57.293765   62943 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0912 23:01:57.293791   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetMachineName
	I0912 23:01:57.294045   62943 buildroot.go:166] provisioning hostname "no-preload-380092"
	I0912 23:01:57.294078   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetMachineName
	I0912 23:01:57.294327   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHHostname
	I0912 23:01:57.297031   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:57.297414   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:01:57.297437   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:57.297661   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHPort
	I0912 23:01:57.297840   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHKeyPath
	I0912 23:01:57.298018   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHKeyPath
	I0912 23:01:57.298210   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHUsername
	I0912 23:01:57.298412   62943 main.go:141] libmachine: Using SSH client type: native
	I0912 23:01:57.298635   62943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.50.253 22 <nil> <nil>}
	I0912 23:01:57.298655   62943 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-380092 && echo "no-preload-380092" | sudo tee /etc/hostname
	I0912 23:01:57.421188   62943 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-380092
	
	I0912 23:01:57.421215   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHHostname
	I0912 23:01:57.424496   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:57.424928   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:01:57.424965   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:57.425156   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHPort
	I0912 23:01:57.425396   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHKeyPath
	I0912 23:01:57.425591   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHKeyPath
	I0912 23:01:57.425761   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHUsername
	I0912 23:01:57.425948   62943 main.go:141] libmachine: Using SSH client type: native
	I0912 23:01:57.426157   62943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.50.253 22 <nil> <nil>}
	I0912 23:01:57.426183   62943 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-380092' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-380092/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-380092' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0912 23:01:57.537580   62943 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0912 23:01:57.537607   62943 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19616-5891/.minikube CaCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19616-5891/.minikube}
	I0912 23:01:57.537674   62943 buildroot.go:174] setting up certificates
	I0912 23:01:57.537683   62943 provision.go:84] configureAuth start
	I0912 23:01:57.537694   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetMachineName
	I0912 23:01:57.537951   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetIP
	I0912 23:01:57.540791   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:57.541288   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:01:57.541315   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:57.541519   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHHostname
	I0912 23:01:57.544027   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:57.544410   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:01:57.544430   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:57.544605   62943 provision.go:143] copyHostCerts
	I0912 23:01:57.544677   62943 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem, removing ...
	I0912 23:01:57.544694   62943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem
	I0912 23:01:57.544757   62943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem (1679 bytes)
	I0912 23:01:57.544880   62943 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem, removing ...
	I0912 23:01:57.544892   62943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem
	I0912 23:01:57.544919   62943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem (1082 bytes)
	I0912 23:01:57.545011   62943 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem, removing ...
	I0912 23:01:57.545020   62943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem
	I0912 23:01:57.545048   62943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem (1123 bytes)
	I0912 23:01:57.545127   62943 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem org=jenkins.no-preload-380092 san=[127.0.0.1 192.168.50.253 localhost minikube no-preload-380092]
	I0912 23:01:58.077226   62943 provision.go:177] copyRemoteCerts
	I0912 23:01:58.077299   62943 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0912 23:01:58.077350   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHHostname
	I0912 23:01:58.080045   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:58.080404   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:01:58.080433   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:58.080691   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHPort
	I0912 23:01:58.080930   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHKeyPath
	I0912 23:01:58.081101   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHUsername
	I0912 23:01:58.081281   62943 sshutil.go:53] new ssh client: &{IP:192.168.50.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/no-preload-380092/id_rsa Username:docker}
	I0912 23:01:58.164075   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0912 23:01:58.188273   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0912 23:01:58.211076   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0912 23:01:58.233745   62943 provision.go:87] duration metric: took 695.915392ms to configureAuth
	I0912 23:01:58.233788   62943 buildroot.go:189] setting minikube options for container-runtime
	I0912 23:01:58.233964   62943 config.go:182] Loaded profile config "no-preload-380092": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 23:01:58.234061   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHHostname
	I0912 23:01:58.236576   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:58.236915   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:01:58.236948   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:58.237165   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHPort
	I0912 23:01:58.237453   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHKeyPath
	I0912 23:01:58.237666   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHKeyPath
	I0912 23:01:58.237848   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHUsername
	I0912 23:01:58.238014   62943 main.go:141] libmachine: Using SSH client type: native
	I0912 23:01:58.238172   62943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.50.253 22 <nil> <nil>}
	I0912 23:01:58.238187   62943 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0912 23:01:58.461160   62943 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0912 23:01:58.461185   62943 machine.go:96] duration metric: took 1.274940476s to provisionDockerMachine
	I0912 23:01:58.461196   62943 start.go:293] postStartSetup for "no-preload-380092" (driver="kvm2")
	I0912 23:01:58.461206   62943 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0912 23:01:58.461220   62943 main.go:141] libmachine: (no-preload-380092) Calling .DriverName
	I0912 23:01:58.461531   62943 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0912 23:01:58.461560   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHHostname
	I0912 23:01:58.464374   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:58.464862   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:01:58.464892   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:58.465044   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHPort
	I0912 23:01:58.465280   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHKeyPath
	I0912 23:01:58.465462   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHUsername
	I0912 23:01:58.465639   62943 sshutil.go:53] new ssh client: &{IP:192.168.50.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/no-preload-380092/id_rsa Username:docker}
	I0912 23:01:58.553080   62943 ssh_runner.go:195] Run: cat /etc/os-release
	I0912 23:01:58.557294   62943 info.go:137] Remote host: Buildroot 2023.02.9
	I0912 23:01:58.557319   62943 filesync.go:126] Scanning /home/jenkins/minikube-integration/19616-5891/.minikube/addons for local assets ...
	I0912 23:01:58.557395   62943 filesync.go:126] Scanning /home/jenkins/minikube-integration/19616-5891/.minikube/files for local assets ...
	I0912 23:01:58.557494   62943 filesync.go:149] local asset: /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem -> 130832.pem in /etc/ssl/certs
	I0912 23:01:58.557647   62943 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0912 23:01:58.566823   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem --> /etc/ssl/certs/130832.pem (1708 bytes)
	I0912 23:01:58.590357   62943 start.go:296] duration metric: took 129.147272ms for postStartSetup
	I0912 23:01:58.590401   62943 fix.go:56] duration metric: took 19.008109979s for fixHost
	I0912 23:01:58.590425   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHHostname
	I0912 23:01:58.593131   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:58.593490   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:01:58.593519   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:58.593693   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHPort
	I0912 23:01:58.593894   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHKeyPath
	I0912 23:01:58.594075   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHKeyPath
	I0912 23:01:58.594242   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHUsername
	I0912 23:01:58.594415   62943 main.go:141] libmachine: Using SSH client type: native
	I0912 23:01:58.594612   62943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.50.253 22 <nil> <nil>}
	I0912 23:01:58.594625   62943 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0912 23:01:58.698233   62943 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726182118.655051061
	
	I0912 23:01:58.698261   62943 fix.go:216] guest clock: 1726182118.655051061
	I0912 23:01:58.698271   62943 fix.go:229] Guest: 2024-09-12 23:01:58.655051061 +0000 UTC Remote: 2024-09-12 23:01:58.590406505 +0000 UTC m=+96.733899188 (delta=64.644556ms)
	I0912 23:01:58.698327   62943 fix.go:200] guest clock delta is within tolerance: 64.644556ms
	I0912 23:01:58.698333   62943 start.go:83] releasing machines lock for "no-preload-380092", held for 19.116080043s
	I0912 23:01:58.698358   62943 main.go:141] libmachine: (no-preload-380092) Calling .DriverName
	I0912 23:01:58.698635   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetIP
	I0912 23:01:58.701676   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:58.702052   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:01:58.702088   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:58.702329   62943 main.go:141] libmachine: (no-preload-380092) Calling .DriverName
	I0912 23:01:58.702865   62943 main.go:141] libmachine: (no-preload-380092) Calling .DriverName
	I0912 23:01:58.703120   62943 main.go:141] libmachine: (no-preload-380092) Calling .DriverName
	I0912 23:01:58.703279   62943 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0912 23:01:58.703337   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHHostname
	I0912 23:01:58.703392   62943 ssh_runner.go:195] Run: cat /version.json
	I0912 23:01:58.703419   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHHostname
	I0912 23:01:58.706149   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:58.706381   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:58.706704   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:01:58.706773   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:58.706785   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHPort
	I0912 23:01:58.706804   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:01:58.706831   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:58.706976   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHKeyPath
	I0912 23:01:58.707009   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHPort
	I0912 23:01:58.707142   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHUsername
	I0912 23:01:58.707308   62943 sshutil.go:53] new ssh client: &{IP:192.168.50.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/no-preload-380092/id_rsa Username:docker}
	I0912 23:01:58.707323   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHKeyPath
	I0912 23:01:58.707505   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHUsername
	I0912 23:01:58.707644   62943 sshutil.go:53] new ssh client: &{IP:192.168.50.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/no-preload-380092/id_rsa Username:docker}
	I0912 23:01:58.822704   62943 ssh_runner.go:195] Run: systemctl --version
	I0912 23:01:58.828592   62943 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0912 23:01:58.970413   62943 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0912 23:01:58.976303   62943 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0912 23:01:58.976384   62943 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0912 23:01:58.991593   62943 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0912 23:01:58.991628   62943 start.go:495] detecting cgroup driver to use...
	I0912 23:01:58.991695   62943 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0912 23:01:59.007839   62943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0912 23:01:59.021107   62943 docker.go:217] disabling cri-docker service (if available) ...
	I0912 23:01:59.021176   62943 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0912 23:01:59.038570   62943 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0912 23:01:59.055392   62943 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0912 23:01:59.183649   62943 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0912 23:01:59.364825   62943 docker.go:233] disabling docker service ...
	I0912 23:01:59.364889   62943 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0912 23:01:59.382320   62943 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0912 23:01:59.397405   62943 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0912 23:01:59.528989   62943 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0912 23:01:59.653994   62943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0912 23:01:59.671437   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0912 23:01:59.693024   62943 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0912 23:01:59.693088   62943 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:59.704385   62943 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0912 23:01:59.704451   62943 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:59.715304   62943 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:59.726058   62943 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:59.736746   62943 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0912 23:01:59.749178   62943 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:59.761776   62943 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:59.779863   62943 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:59.790713   62943 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0912 23:01:59.801023   62943 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0912 23:01:59.801093   62943 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0912 23:01:59.815237   62943 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0912 23:01:59.825967   62943 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 23:01:59.952175   62943 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0912 23:02:00.050201   62943 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0912 23:02:00.050334   62943 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0912 23:02:00.055275   62943 start.go:563] Will wait 60s for crictl version
	I0912 23:02:00.055338   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:02:00.060075   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0912 23:02:00.100842   62943 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0912 23:02:00.100932   62943 ssh_runner.go:195] Run: crio --version
	I0912 23:02:00.127399   62943 ssh_runner.go:195] Run: crio --version
	I0912 23:02:00.161143   62943 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0912 23:01:57.474156   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:01:59.474331   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:00.162519   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetIP
	I0912 23:02:00.165323   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:02:00.165776   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:02:00.165806   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:02:00.166046   62943 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0912 23:02:00.170494   62943 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0912 23:02:00.186142   62943 kubeadm.go:883] updating cluster {Name:no-preload-380092 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-380092 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.253 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0912 23:02:00.186296   62943 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0912 23:02:00.186348   62943 ssh_runner.go:195] Run: sudo crictl images --output json
	I0912 23:02:00.221527   62943 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0912 23:02:00.221550   62943 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0912 23:02:00.221607   62943 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 23:02:00.221619   62943 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0912 23:02:00.221679   62943 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0912 23:02:00.221679   62943 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0912 23:02:00.221699   62943 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0912 23:02:00.221661   62943 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0912 23:02:00.221763   62943 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0912 23:02:00.221763   62943 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0912 23:02:00.223203   62943 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0912 23:02:00.223215   62943 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 23:02:00.223269   62943 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0912 23:02:00.223278   62943 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0912 23:02:00.223286   62943 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0912 23:02:00.223208   62943 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0912 23:02:00.223363   62943 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0912 23:02:00.223381   62943 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0912 23:02:00.451698   62943 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I0912 23:02:00.459278   62943 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I0912 23:02:00.459739   62943 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0912 23:02:00.463935   62943 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I0912 23:02:00.464136   62943 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I0912 23:02:00.468507   62943 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I0912 23:02:00.503388   62943 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0912 23:02:00.536792   62943 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I0912 23:02:00.536840   62943 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I0912 23:02:00.536897   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:02:00.599938   62943 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I0912 23:02:00.599985   62943 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I0912 23:02:00.600030   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:02:00.683783   62943 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0912 23:02:00.683826   62943 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0912 23:02:00.683852   62943 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I0912 23:02:00.683872   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:02:00.683883   62943 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I0912 23:02:00.683908   62943 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I0912 23:02:00.683939   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:02:00.683950   62943 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I0912 23:02:00.683886   62943 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I0912 23:02:00.683984   62943 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0912 23:02:00.684075   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:02:00.684008   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:02:00.736368   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0912 23:02:00.736438   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0912 23:02:00.736522   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0912 23:02:00.736549   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0912 23:02:00.736597   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0912 23:02:00.736620   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0912 23:02:00.864642   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0912 23:02:00.864677   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0912 23:02:00.864802   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0912 23:02:00.864856   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0912 23:02:00.869964   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0912 23:02:00.869998   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0912 23:02:00.996762   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0912 23:02:00.999239   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0912 23:02:00.999239   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0912 23:02:01.000760   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0912 23:02:01.000846   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0912 23:02:01.000895   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0912 23:02:01.101860   62943 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I0912 23:02:01.102057   62943 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0912 23:02:01.132743   62943 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0912 23:02:01.132926   62943 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0912 23:02:01.134809   62943 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I0912 23:02:01.134911   62943 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0912 23:02:01.135089   62943 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I0912 23:02:01.135167   62943 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I0912 23:02:01.143459   62943 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0912 23:02:01.143487   62943 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I0912 23:02:01.143503   62943 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0912 23:02:01.143510   62943 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0912 23:02:01.143549   62943 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0912 23:02:01.143584   62943 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0912 23:02:01.143584   62943 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I0912 23:02:01.147907   62943 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I0912 23:02:01.147935   62943 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I0912 23:02:01.148079   62943 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I0912 23:02:01.312549   62943 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 23:01:58.724795   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .Start
	I0912 23:01:58.724966   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Ensuring networks are active...
	I0912 23:01:58.725864   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Ensuring network default is active
	I0912 23:01:58.726231   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Ensuring network mk-default-k8s-diff-port-702201 is active
	I0912 23:01:58.726766   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Getting domain xml...
	I0912 23:01:58.727695   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Creating domain...
	I0912 23:02:00.060410   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting to get IP...
	I0912 23:02:00.061559   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:00.062006   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | unable to find current IP address of domain default-k8s-diff-port-702201 in network mk-default-k8s-diff-port-702201
	I0912 23:02:00.062101   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | I0912 23:02:00.061997   63646 retry.go:31] will retry after 232.302394ms: waiting for machine to come up
	I0912 23:02:00.295568   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:00.296234   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | unable to find current IP address of domain default-k8s-diff-port-702201 in network mk-default-k8s-diff-port-702201
	I0912 23:02:00.296288   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | I0912 23:02:00.296094   63646 retry.go:31] will retry after 304.721087ms: waiting for machine to come up
	I0912 23:02:00.602956   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:00.603436   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | unable to find current IP address of domain default-k8s-diff-port-702201 in network mk-default-k8s-diff-port-702201
	I0912 23:02:00.603464   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | I0912 23:02:00.603396   63646 retry.go:31] will retry after 370.621505ms: waiting for machine to come up
	I0912 23:02:00.975924   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:00.976418   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | unable to find current IP address of domain default-k8s-diff-port-702201 in network mk-default-k8s-diff-port-702201
	I0912 23:02:00.976452   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | I0912 23:02:00.976376   63646 retry.go:31] will retry after 454.623859ms: waiting for machine to come up
	I0912 23:02:01.433257   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:01.434024   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | unable to find current IP address of domain default-k8s-diff-port-702201 in network mk-default-k8s-diff-port-702201
	I0912 23:02:01.434056   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | I0912 23:02:01.433971   63646 retry.go:31] will retry after 726.658127ms: waiting for machine to come up
	I0912 23:02:02.162016   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:02.162562   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | unable to find current IP address of domain default-k8s-diff-port-702201 in network mk-default-k8s-diff-port-702201
	I0912 23:02:02.162592   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | I0912 23:02:02.162501   63646 retry.go:31] will retry after 756.903624ms: waiting for machine to come up
	I0912 23:01:59.746112   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:00.246227   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:00.745742   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:01.245741   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:01.746355   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:02.245345   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:02.745752   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:03.246089   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:03.745811   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:04.245382   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:01.474545   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:03.975249   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:03.307790   62943 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (2.164213632s)
	I0912 23:02:03.307822   62943 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I0912 23:02:03.307845   62943 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I0912 23:02:03.307869   62943 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3: (2.164220532s)
	I0912 23:02:03.307903   62943 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I0912 23:02:03.307906   62943 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I0912 23:02:03.307944   62943 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0: (2.164339277s)
	I0912 23:02:03.307963   62943 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0912 23:02:03.307999   62943 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.995423487s)
	I0912 23:02:03.308043   62943 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0912 23:02:03.308076   62943 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 23:02:03.308128   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:02:03.312883   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 23:02:05.481118   62943 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (2.173175236s)
	I0912 23:02:05.481159   62943 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I0912 23:02:05.481192   62943 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0912 23:02:05.481239   62943 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.168321222s)
	I0912 23:02:05.481245   62943 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0912 23:02:05.481303   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 23:02:05.516667   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 23:02:02.921557   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:02.922010   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | unable to find current IP address of domain default-k8s-diff-port-702201 in network mk-default-k8s-diff-port-702201
	I0912 23:02:02.922036   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | I0912 23:02:02.921968   63646 retry.go:31] will retry after 850.274218ms: waiting for machine to come up
	I0912 23:02:03.774125   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:03.774603   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | unable to find current IP address of domain default-k8s-diff-port-702201 in network mk-default-k8s-diff-port-702201
	I0912 23:02:03.774637   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | I0912 23:02:03.774549   63646 retry.go:31] will retry after 1.117484339s: waiting for machine to come up
	I0912 23:02:04.893960   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:04.894645   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | unable to find current IP address of domain default-k8s-diff-port-702201 in network mk-default-k8s-diff-port-702201
	I0912 23:02:04.894671   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | I0912 23:02:04.894572   63646 retry.go:31] will retry after 1.705444912s: waiting for machine to come up
	I0912 23:02:06.602765   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:06.603347   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | unable to find current IP address of domain default-k8s-diff-port-702201 in network mk-default-k8s-diff-port-702201
	I0912 23:02:06.603371   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | I0912 23:02:06.603270   63646 retry.go:31] will retry after 2.06008552s: waiting for machine to come up
	I0912 23:02:04.745649   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:05.245909   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:05.745777   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:06.245432   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:06.745472   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:07.245763   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:07.745416   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:08.245886   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:08.745493   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:09.246056   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:06.474009   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:08.474804   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:07.476441   62943 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (1.995147485s)
	I0912 23:02:07.476474   62943 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I0912 23:02:07.476497   62943 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0912 23:02:07.476545   62943 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0912 23:02:07.476556   62943 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.959857575s)
	I0912 23:02:07.476602   62943 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0912 23:02:07.476685   62943 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0912 23:02:09.332759   62943 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.856180957s)
	I0912 23:02:09.332804   62943 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I0912 23:02:09.332853   62943 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I0912 23:02:09.332762   62943 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.856053866s)
	I0912 23:02:09.332909   62943 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I0912 23:02:09.332947   62943 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0912 23:02:11.397888   62943 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.064939833s)
	I0912 23:02:11.397926   62943 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I0912 23:02:11.397954   62943 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0912 23:02:11.397992   62943 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0912 23:02:08.665520   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:08.666071   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | unable to find current IP address of domain default-k8s-diff-port-702201 in network mk-default-k8s-diff-port-702201
	I0912 23:02:08.666102   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | I0912 23:02:08.666014   63646 retry.go:31] will retry after 2.158544571s: waiting for machine to come up
	I0912 23:02:10.826850   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:10.827354   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | unable to find current IP address of domain default-k8s-diff-port-702201 in network mk-default-k8s-diff-port-702201
	I0912 23:02:10.827382   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | I0912 23:02:10.827290   63646 retry.go:31] will retry after 3.518596305s: waiting for machine to come up
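	(Note: the libmachine lines above poll the libvirt domain for a DHCP lease and, while no IP address is assigned yet, schedule another attempt after a randomized, growing delay. A hypothetical Go sketch of that retry-with-backoff loop follows; names and intervals are illustrative, not minikube's.)

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	var errNoIP = errors.New("unable to find current IP address")

	// waitForIP polls lookup until it returns an address or attempts run out,
	// sleeping a jittered, growing delay between tries.
	func waitForIP(lookup func() (string, error), attempts int) (string, error) {
		delay := 2 * time.Second
		for i := 0; i < attempts; i++ {
			ip, err := lookup()
			if err == nil {
				return ip, nil
			}
			wait := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
			time.Sleep(wait)
			delay += time.Second
		}
		return "", errNoIP
	}

	func main() {
		calls := 0
		ip, err := waitForIP(func() (string, error) {
			calls++
			if calls < 3 {
				return "", errNoIP
			}
			return "192.168.39.214", nil
		}, 10)
		fmt.Println(ip, err)
	}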
	I0912 23:02:09.746171   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:10.246283   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:10.745675   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:11.245560   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:11.745384   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:12.245631   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:12.745749   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:13.245487   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:13.745849   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:14.245391   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:10.975044   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:13.473831   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:15.474321   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:14.664970   62943 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.266950326s)
	I0912 23:02:14.665018   62943 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0912 23:02:14.665063   62943 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0912 23:02:14.665138   62943 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0912 23:02:15.516503   62943 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0912 23:02:15.516549   62943 cache_images.go:123] Successfully loaded all cached images
	I0912 23:02:15.516556   62943 cache_images.go:92] duration metric: took 15.294994067s to LoadCachedImages
	I0912 23:02:15.516574   62943 kubeadm.go:934] updating node { 192.168.50.253 8443 v1.31.1 crio true true} ...
	I0912 23:02:15.516716   62943 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-380092 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.253
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-380092 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0912 23:02:15.516811   62943 ssh_runner.go:195] Run: crio config
	I0912 23:02:15.570588   62943 cni.go:84] Creating CNI manager for ""
	I0912 23:02:15.570610   62943 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0912 23:02:15.570621   62943 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0912 23:02:15.570649   62943 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.253 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-380092 NodeName:no-preload-380092 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.253"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.253 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0912 23:02:15.570809   62943 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.253
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-380092"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.253
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.253"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0912 23:02:15.570887   62943 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0912 23:02:15.581208   62943 binaries.go:44] Found k8s binaries, skipping transfer
	I0912 23:02:15.581272   62943 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0912 23:02:15.590463   62943 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0912 23:02:15.606240   62943 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0912 23:02:15.621579   62943 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0912 23:02:15.639566   62943 ssh_runner.go:195] Run: grep 192.168.50.253	control-plane.minikube.internal$ /etc/hosts
	I0912 23:02:15.643207   62943 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.253	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
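	(Note: the two runs above first grep /etc/hosts for the control-plane.minikube.internal entry and then rewrite the file so exactly one current entry remains. A small illustrative Go sketch of the same idempotent host-entry update follows; it is not the actual implementation.)

	package main

	import (
		"fmt"
		"strings"
	)

	// ensureHostEntry drops any existing line for host and appends "ip\thost",
	// mirroring the grep-and-rewrite shell pipeline in the log.
	func ensureHostEntry(hostsFile, ip, host string) string {
		var kept []string
		for _, line := range strings.Split(hostsFile, "\n") {
			if strings.HasSuffix(line, "\t"+host) {
				continue // remove the stale entry
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\t"+host)
		return strings.Join(kept, "\n")
	}

	func main() {
		hosts := "127.0.0.1\tlocalhost\n192.168.50.1\tcontrol-plane.minikube.internal"
		fmt.Println(ensureHostEntry(hosts, "192.168.50.253", "control-plane.minikube.internal"))
	}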
	I0912 23:02:15.654813   62943 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 23:02:15.767367   62943 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0912 23:02:15.784468   62943 certs.go:68] Setting up /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/no-preload-380092 for IP: 192.168.50.253
	I0912 23:02:15.784500   62943 certs.go:194] generating shared ca certs ...
	I0912 23:02:15.784523   62943 certs.go:226] acquiring lock for ca certs: {Name:mk5e19f2c2757edc3fcb6b43a39efee0885b349c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 23:02:15.784717   62943 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.key
	I0912 23:02:15.784811   62943 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.key
	I0912 23:02:15.784828   62943 certs.go:256] generating profile certs ...
	I0912 23:02:15.784946   62943 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/no-preload-380092/client.key
	I0912 23:02:15.785034   62943 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/no-preload-380092/apiserver.key.718f72e7
	I0912 23:02:15.785092   62943 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/no-preload-380092/proxy-client.key
	I0912 23:02:15.785295   62943 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083.pem (1338 bytes)
	W0912 23:02:15.785345   62943 certs.go:480] ignoring /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083_empty.pem, impossibly tiny 0 bytes
	I0912 23:02:15.785362   62943 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem (1679 bytes)
	I0912 23:02:15.785407   62943 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem (1082 bytes)
	I0912 23:02:15.785446   62943 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem (1123 bytes)
	I0912 23:02:15.785485   62943 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem (1679 bytes)
	I0912 23:02:15.785553   62943 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem (1708 bytes)
	I0912 23:02:15.786473   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0912 23:02:15.832614   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0912 23:02:15.867891   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0912 23:02:15.899262   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0912 23:02:15.930427   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/no-preload-380092/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0912 23:02:15.970193   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/no-preload-380092/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0912 23:02:15.995317   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/no-preload-380092/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0912 23:02:16.019282   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/no-preload-380092/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0912 23:02:16.042121   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem --> /usr/share/ca-certificates/130832.pem (1708 bytes)
	I0912 23:02:16.065744   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0912 23:02:16.088894   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083.pem --> /usr/share/ca-certificates/13083.pem (1338 bytes)
	I0912 23:02:16.111041   62943 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0912 23:02:16.127119   62943 ssh_runner.go:195] Run: openssl version
	I0912 23:02:16.132754   62943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130832.pem && ln -fs /usr/share/ca-certificates/130832.pem /etc/ssl/certs/130832.pem"
	I0912 23:02:16.142933   62943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130832.pem
	I0912 23:02:16.147311   62943 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 12 21:47 /usr/share/ca-certificates/130832.pem
	I0912 23:02:16.147367   62943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130832.pem
	I0912 23:02:16.152734   62943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130832.pem /etc/ssl/certs/3ec20f2e.0"
	I0912 23:02:16.163131   62943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0912 23:02:16.173390   62943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0912 23:02:16.177785   62943 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 12 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0912 23:02:16.177842   62943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0912 23:02:16.183047   62943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0912 23:02:16.192890   62943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13083.pem && ln -fs /usr/share/ca-certificates/13083.pem /etc/ssl/certs/13083.pem"
	I0912 23:02:16.202818   62943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13083.pem
	I0912 23:02:16.206815   62943 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 12 21:47 /usr/share/ca-certificates/13083.pem
	I0912 23:02:16.206871   62943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13083.pem
	I0912 23:02:16.212049   62943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13083.pem /etc/ssl/certs/51391683.0"
	I0912 23:02:16.222224   62943 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0912 23:02:16.226504   62943 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0912 23:02:16.232090   62943 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0912 23:02:16.237380   62943 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0912 23:02:16.243024   62943 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0912 23:02:16.248333   62943 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0912 23:02:16.258745   62943 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
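	(Note: each "openssl x509 ... -checkend 86400" run above asks whether a control-plane certificate will remain valid for at least another 24 hours before deciding whether it needs regenerating. The same check expressed in Go with crypto/x509, as a sketch assuming a PEM-encoded certificate on disk:)

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// validFor reports whether the PEM certificate at path is still valid for at
	// least d from now, the same question `openssl x509 -checkend` answers.
	func validFor(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).Before(cert.NotAfter), nil
	}

	func main() {
		ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		fmt.Println(ok, err)
	}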
	I0912 23:02:16.274068   62943 kubeadm.go:392] StartCluster: {Name:no-preload-380092 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-380092 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.253 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 23:02:16.274168   62943 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0912 23:02:16.274216   62943 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0912 23:02:16.323688   62943 cri.go:89] found id: ""
	I0912 23:02:16.323751   62943 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0912 23:02:16.335130   62943 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0912 23:02:16.335152   62943 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0912 23:02:16.335192   62943 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0912 23:02:16.346285   62943 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0912 23:02:16.347271   62943 kubeconfig.go:125] found "no-preload-380092" server: "https://192.168.50.253:8443"
	I0912 23:02:16.349217   62943 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0912 23:02:16.360266   62943 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.253
	I0912 23:02:16.360308   62943 kubeadm.go:1160] stopping kube-system containers ...
	I0912 23:02:16.360319   62943 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0912 23:02:16.360361   62943 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0912 23:02:16.398876   62943 cri.go:89] found id: ""
	I0912 23:02:16.398942   62943 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0912 23:02:16.418893   62943 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0912 23:02:16.430531   62943 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0912 23:02:16.430558   62943 kubeadm.go:157] found existing configuration files:
	
	I0912 23:02:16.430602   62943 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0912 23:02:16.441036   62943 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0912 23:02:16.441093   62943 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0912 23:02:16.452768   62943 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0912 23:02:16.463317   62943 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0912 23:02:16.463394   62943 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0912 23:02:16.473412   62943 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0912 23:02:16.482470   62943 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0912 23:02:16.482530   62943 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0912 23:02:16.494488   62943 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0912 23:02:16.503873   62943 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0912 23:02:16.503955   62943 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0912 23:02:16.513052   62943 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0912 23:02:16.522738   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:02:16.630286   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:02:14.347758   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:14.348342   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | unable to find current IP address of domain default-k8s-diff-port-702201 in network mk-default-k8s-diff-port-702201
	I0912 23:02:14.348365   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | I0912 23:02:14.348276   63646 retry.go:31] will retry after 2.993143621s: waiting for machine to come up
	I0912 23:02:14.745599   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:15.245719   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:15.745787   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:16.245959   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:16.746271   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:17.245414   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:17.745343   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:18.246080   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:18.746025   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:19.245751   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:17.343758   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.344408   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Found IP for machine: 192.168.39.214
	I0912 23:02:17.344443   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has current primary IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.344453   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Reserving static IP address...
	I0912 23:02:17.344817   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Reserved static IP address: 192.168.39.214
	I0912 23:02:17.344848   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-702201", mac: "52:54:00:b4:fd:fb", ip: "192.168.39.214"} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:02:17.344857   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for SSH to be available...
	I0912 23:02:17.344886   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | skip adding static IP to network mk-default-k8s-diff-port-702201 - found existing host DHCP lease matching {name: "default-k8s-diff-port-702201", mac: "52:54:00:b4:fd:fb", ip: "192.168.39.214"}
	I0912 23:02:17.344903   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | Getting to WaitForSSH function...
	I0912 23:02:17.347627   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.348094   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:02:17.348128   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.348236   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | Using SSH client type: external
	I0912 23:02:17.348296   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | Using SSH private key: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/default-k8s-diff-port-702201/id_rsa (-rw-------)
	I0912 23:02:17.348330   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.214 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19616-5891/.minikube/machines/default-k8s-diff-port-702201/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0912 23:02:17.348353   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | About to run SSH command:
	I0912 23:02:17.348363   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | exit 0
	I0912 23:02:17.474375   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | SSH cmd err, output: <nil>: 
	I0912 23:02:17.474757   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetConfigRaw
	I0912 23:02:17.475391   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetIP
	I0912 23:02:17.478041   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.478557   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:02:17.478590   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.478791   61354 profile.go:143] Saving config to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/default-k8s-diff-port-702201/config.json ...
	I0912 23:02:17.479064   61354 machine.go:93] provisionDockerMachine start ...
	I0912 23:02:17.479087   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .DriverName
	I0912 23:02:17.479317   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHHostname
	I0912 23:02:17.482167   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.482584   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:02:17.482616   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.482805   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHPort
	I0912 23:02:17.482996   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHKeyPath
	I0912 23:02:17.483163   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHKeyPath
	I0912 23:02:17.483287   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHUsername
	I0912 23:02:17.483443   61354 main.go:141] libmachine: Using SSH client type: native
	I0912 23:02:17.483653   61354 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0912 23:02:17.483669   61354 main.go:141] libmachine: About to run SSH command:
	hostname
	I0912 23:02:17.590238   61354 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0912 23:02:17.590267   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetMachineName
	I0912 23:02:17.590549   61354 buildroot.go:166] provisioning hostname "default-k8s-diff-port-702201"
	I0912 23:02:17.590588   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetMachineName
	I0912 23:02:17.590766   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHHostname
	I0912 23:02:17.593804   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.594267   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:02:17.594320   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.594542   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHPort
	I0912 23:02:17.594761   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHKeyPath
	I0912 23:02:17.594956   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHKeyPath
	I0912 23:02:17.595111   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHUsername
	I0912 23:02:17.595333   61354 main.go:141] libmachine: Using SSH client type: native
	I0912 23:02:17.595575   61354 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0912 23:02:17.595591   61354 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-702201 && echo "default-k8s-diff-port-702201" | sudo tee /etc/hostname
	I0912 23:02:17.720928   61354 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-702201
	
	I0912 23:02:17.720961   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHHostname
	I0912 23:02:17.724174   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.724499   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:02:17.724522   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.724682   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHPort
	I0912 23:02:17.724847   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHKeyPath
	I0912 23:02:17.725026   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHKeyPath
	I0912 23:02:17.725199   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHUsername
	I0912 23:02:17.725350   61354 main.go:141] libmachine: Using SSH client type: native
	I0912 23:02:17.725528   61354 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0912 23:02:17.725550   61354 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-702201' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-702201/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-702201' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0912 23:02:17.842216   61354 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0912 23:02:17.842250   61354 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19616-5891/.minikube CaCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19616-5891/.minikube}
	I0912 23:02:17.842274   61354 buildroot.go:174] setting up certificates
	I0912 23:02:17.842289   61354 provision.go:84] configureAuth start
	I0912 23:02:17.842306   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetMachineName
	I0912 23:02:17.842597   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetIP
	I0912 23:02:17.845935   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.846372   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:02:17.846401   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.846546   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHHostname
	I0912 23:02:17.849376   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.849937   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:02:17.849971   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.850152   61354 provision.go:143] copyHostCerts
	I0912 23:02:17.850232   61354 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem, removing ...
	I0912 23:02:17.850253   61354 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem
	I0912 23:02:17.850356   61354 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem (1679 bytes)
	I0912 23:02:17.850448   61354 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem, removing ...
	I0912 23:02:17.850457   61354 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem
	I0912 23:02:17.850477   61354 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem (1082 bytes)
	I0912 23:02:17.850529   61354 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem, removing ...
	I0912 23:02:17.850537   61354 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem
	I0912 23:02:17.850555   61354 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem (1123 bytes)
	I0912 23:02:17.850601   61354 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-702201 san=[127.0.0.1 192.168.39.214 default-k8s-diff-port-702201 localhost minikube]
	I0912 23:02:17.911340   61354 provision.go:177] copyRemoteCerts
	I0912 23:02:17.911392   61354 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0912 23:02:17.911413   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHHostname
	I0912 23:02:17.914514   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.914937   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:02:17.914969   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.915250   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHPort
	I0912 23:02:17.915449   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHKeyPath
	I0912 23:02:17.915648   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHUsername
	I0912 23:02:17.915800   61354 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/default-k8s-diff-port-702201/id_rsa Username:docker}
	I0912 23:02:18.003351   61354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0912 23:02:18.032117   61354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0912 23:02:18.057665   61354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0912 23:02:18.084003   61354 provision.go:87] duration metric: took 241.697336ms to configureAuth
	I0912 23:02:18.084043   61354 buildroot.go:189] setting minikube options for container-runtime
	I0912 23:02:18.084256   61354 config.go:182] Loaded profile config "default-k8s-diff-port-702201": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 23:02:18.084379   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHHostname
	I0912 23:02:18.087408   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:18.087786   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:02:18.087813   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:18.088070   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHPort
	I0912 23:02:18.088263   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHKeyPath
	I0912 23:02:18.088441   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHKeyPath
	I0912 23:02:18.088576   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHUsername
	I0912 23:02:18.088706   61354 main.go:141] libmachine: Using SSH client type: native
	I0912 23:02:18.088874   61354 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0912 23:02:18.088893   61354 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0912 23:02:18.308716   61354 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0912 23:02:18.308743   61354 machine.go:96] duration metric: took 829.664034ms to provisionDockerMachine
	I0912 23:02:18.308753   61354 start.go:293] postStartSetup for "default-k8s-diff-port-702201" (driver="kvm2")
	I0912 23:02:18.308765   61354 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0912 23:02:18.308780   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .DriverName
	I0912 23:02:18.309119   61354 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0912 23:02:18.309156   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHHostname
	I0912 23:02:18.311782   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:18.312112   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:02:18.312138   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:18.312258   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHPort
	I0912 23:02:18.312429   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHKeyPath
	I0912 23:02:18.312562   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHUsername
	I0912 23:02:18.312686   61354 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/default-k8s-diff-port-702201/id_rsa Username:docker}
	I0912 23:02:18.400164   61354 ssh_runner.go:195] Run: cat /etc/os-release
	I0912 23:02:18.404437   61354 info.go:137] Remote host: Buildroot 2023.02.9
	I0912 23:02:18.404465   61354 filesync.go:126] Scanning /home/jenkins/minikube-integration/19616-5891/.minikube/addons for local assets ...
	I0912 23:02:18.404539   61354 filesync.go:126] Scanning /home/jenkins/minikube-integration/19616-5891/.minikube/files for local assets ...
	I0912 23:02:18.404634   61354 filesync.go:149] local asset: /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem -> 130832.pem in /etc/ssl/certs
	I0912 23:02:18.404748   61354 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0912 23:02:18.414148   61354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem --> /etc/ssl/certs/130832.pem (1708 bytes)
	I0912 23:02:18.438745   61354 start.go:296] duration metric: took 129.977307ms for postStartSetup
	I0912 23:02:18.438815   61354 fix.go:56] duration metric: took 19.740295621s for fixHost
	I0912 23:02:18.438839   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHHostname
	I0912 23:02:18.441655   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:18.442034   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:02:18.442063   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:18.442229   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHPort
	I0912 23:02:18.442424   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHKeyPath
	I0912 23:02:18.442637   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHKeyPath
	I0912 23:02:18.442782   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHUsername
	I0912 23:02:18.442983   61354 main.go:141] libmachine: Using SSH client type: native
	I0912 23:02:18.443140   61354 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0912 23:02:18.443150   61354 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0912 23:02:18.550399   61354 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726182138.510495585
	
	I0912 23:02:18.550429   61354 fix.go:216] guest clock: 1726182138.510495585
	I0912 23:02:18.550460   61354 fix.go:229] Guest: 2024-09-12 23:02:18.510495585 +0000 UTC Remote: 2024-09-12 23:02:18.438824041 +0000 UTC m=+356.198385709 (delta=71.671544ms)
	I0912 23:02:18.550493   61354 fix.go:200] guest clock delta is within tolerance: 71.671544ms
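	(Note: the fix step above parses the guest's "date +%s.%N" output, computes the delta against the host clock, and leaves the clock alone because ~72ms is within tolerance. A rough Go sketch of that comparison; the 2s tolerance below is an assumption for illustration only.)

	package main

	import (
		"fmt"
		"math"
		"strconv"
		"time"
	)

	// clockDelta parses `date +%s.%N` output from the guest and returns how far
	// its clock is from the given host time (approximate; float parsing keeps
	// roughly microsecond precision at this epoch).
	func clockDelta(guestStamp string, host time.Time) (time.Duration, error) {
		secs, err := strconv.ParseFloat(guestStamp, 64)
		if err != nil {
			return 0, err
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		return guest.Sub(host), nil
	}

	func main() {
		host := time.Unix(0, int64(1726182138.438824041*float64(time.Second)))
		delta, err := clockDelta("1726182138.510495585", host)
		if err != nil {
			panic(err)
		}
		const tolerance = 2 * time.Second // assumed tolerance, for illustration
		within := math.Abs(float64(delta)) <= float64(tolerance)
		fmt.Printf("delta=%v within tolerance=%v\n", delta, within)
	}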
	I0912 23:02:18.550501   61354 start.go:83] releasing machines lock for "default-k8s-diff-port-702201", held for 19.852037366s
	I0912 23:02:18.550549   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .DriverName
	I0912 23:02:18.550842   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetIP
	I0912 23:02:18.553957   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:18.554416   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:02:18.554450   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:18.554624   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .DriverName
	I0912 23:02:18.555224   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .DriverName
	I0912 23:02:18.555446   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .DriverName
	I0912 23:02:18.555554   61354 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0912 23:02:18.555597   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHHostname
	I0912 23:02:18.555718   61354 ssh_runner.go:195] Run: cat /version.json
	I0912 23:02:18.555753   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHHostname
	I0912 23:02:18.558797   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:18.558822   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:18.559205   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:02:18.559236   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:18.559283   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:02:18.559300   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:18.559532   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHPort
	I0912 23:02:18.559538   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHPort
	I0912 23:02:18.559735   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHKeyPath
	I0912 23:02:18.559736   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHKeyPath
	I0912 23:02:18.559921   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHUsername
	I0912 23:02:18.560042   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHUsername
	I0912 23:02:18.560109   61354 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/default-k8s-diff-port-702201/id_rsa Username:docker}
	I0912 23:02:18.560199   61354 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/default-k8s-diff-port-702201/id_rsa Username:docker}
	I0912 23:02:18.672716   61354 ssh_runner.go:195] Run: systemctl --version
	I0912 23:02:18.681305   61354 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0912 23:02:18.833032   61354 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0912 23:02:18.838723   61354 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0912 23:02:18.838800   61354 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0912 23:02:18.854769   61354 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0912 23:02:18.854796   61354 start.go:495] detecting cgroup driver to use...
	I0912 23:02:18.854867   61354 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0912 23:02:18.872157   61354 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0912 23:02:18.887144   61354 docker.go:217] disabling cri-docker service (if available) ...
	I0912 23:02:18.887199   61354 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0912 23:02:18.901811   61354 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0912 23:02:18.920495   61354 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0912 23:02:19.060252   61354 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0912 23:02:19.211418   61354 docker.go:233] disabling docker service ...
	I0912 23:02:19.211492   61354 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0912 23:02:19.226829   61354 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0912 23:02:19.240390   61354 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0912 23:02:19.398676   61354 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0912 23:02:19.539078   61354 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0912 23:02:19.552847   61354 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0912 23:02:19.574121   61354 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0912 23:02:19.574198   61354 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:02:19.585231   61354 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0912 23:02:19.585298   61354 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:02:19.596560   61354 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:02:19.606732   61354 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:02:19.620125   61354 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0912 23:02:19.635153   61354 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:02:19.648779   61354 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:02:19.666387   61354 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:02:19.680339   61354 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0912 23:02:19.693115   61354 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0912 23:02:19.693193   61354 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0912 23:02:19.710075   61354 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0912 23:02:19.722305   61354 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 23:02:19.855658   61354 ssh_runner.go:195] Run: sudo systemctl restart crio
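The br_netfilter probe above fails on a fresh VM (status 255), so the flow falls back to loading the module and then enables IPv4 forwarding before restarting CRI-O. Below is a minimal Go sketch of that fallback, not minikube's own helper; the command names are taken from the log and error handling is simplified.

    // A sketch only: mirrors the "verify netfilter, fall back to modprobe" step in the log.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func ensureBridgeNetfilter() error {
        // On a fresh VM the br_netfilter module may not be loaded, so this sysctl can fail.
        if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err == nil {
            return nil
        }
        // Fall back to loading the kernel module, as the log does.
        if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
            return fmt.Errorf("modprobe br_netfilter: %w", err)
        }
        // Enable IPv4 forwarding the same way the log shows.
        return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
    }

    func main() {
        if err := ensureBridgeNetfilter(); err != nil {
            fmt.Println("netfilter setup failed:", err)
        }
    }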
	I0912 23:02:19.958871   61354 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0912 23:02:19.958934   61354 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0912 23:02:19.964103   61354 start.go:563] Will wait 60s for crictl version
	I0912 23:02:19.964174   61354 ssh_runner.go:195] Run: which crictl
	I0912 23:02:19.968265   61354 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0912 23:02:20.006530   61354 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0912 23:02:20.006608   61354 ssh_runner.go:195] Run: crio --version
	I0912 23:02:20.034570   61354 ssh_runner.go:195] Run: crio --version
	I0912 23:02:20.065312   61354 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0912 23:02:17.474542   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:19.975107   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:17.616860   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:02:17.845456   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:02:17.916359   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:02:18.000828   62943 api_server.go:52] waiting for apiserver process to appear ...
	I0912 23:02:18.000924   62943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:18.501381   62943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:19.001136   62943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:19.017346   62943 api_server.go:72] duration metric: took 1.016512434s to wait for apiserver process to appear ...
	I0912 23:02:19.017382   62943 api_server.go:88] waiting for apiserver healthz status ...
	I0912 23:02:19.017453   62943 api_server.go:253] Checking apiserver healthz at https://192.168.50.253:8443/healthz ...
	I0912 23:02:20.066529   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetIP
	I0912 23:02:20.069310   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:20.069719   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:02:20.069748   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:20.070001   61354 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0912 23:02:20.074059   61354 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0912 23:02:20.085892   61354 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-702201 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-702201 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.214 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0912 23:02:20.086016   61354 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0912 23:02:20.086054   61354 ssh_runner.go:195] Run: sudo crictl images --output json
	I0912 23:02:20.130495   61354 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0912 23:02:20.130570   61354 ssh_runner.go:195] Run: which lz4
	I0912 23:02:20.134677   61354 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0912 23:02:20.138918   61354 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0912 23:02:20.138956   61354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0912 23:02:21.380259   61354 crio.go:462] duration metric: took 1.245620408s to copy over tarball
	I0912 23:02:21.380357   61354 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0912 23:02:19.745707   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:20.246273   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:20.746109   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:21.246160   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:21.745863   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:22.245390   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:22.745716   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:23.245475   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:23.746069   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:24.245487   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:22.474250   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:24.974136   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:24.018305   62943 api_server.go:269] stopped: https://192.168.50.253:8443/healthz: Get "https://192.168.50.253:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 23:02:24.018354   62943 api_server.go:253] Checking apiserver healthz at https://192.168.50.253:8443/healthz ...
	I0912 23:02:23.453059   61354 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.072658804s)
	I0912 23:02:23.453094   61354 crio.go:469] duration metric: took 2.072807363s to extract the tarball
	I0912 23:02:23.453102   61354 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0912 23:02:23.492566   61354 ssh_runner.go:195] Run: sudo crictl images --output json
	I0912 23:02:23.535129   61354 crio.go:514] all images are preloaded for cri-o runtime.
	I0912 23:02:23.535152   61354 cache_images.go:84] Images are preloaded, skipping loading
	I0912 23:02:23.535160   61354 kubeadm.go:934] updating node { 192.168.39.214 8444 v1.31.1 crio true true} ...
	I0912 23:02:23.535251   61354 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-702201 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.214
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-702201 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0912 23:02:23.535311   61354 ssh_runner.go:195] Run: crio config
	I0912 23:02:23.586110   61354 cni.go:84] Creating CNI manager for ""
	I0912 23:02:23.586128   61354 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0912 23:02:23.586137   61354 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0912 23:02:23.586156   61354 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.214 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-702201 NodeName:default-k8s-diff-port-702201 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.214"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.214 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0912 23:02:23.586280   61354 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.214
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-702201"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.214
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.214"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0912 23:02:23.586337   61354 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0912 23:02:23.595675   61354 binaries.go:44] Found k8s binaries, skipping transfer
	I0912 23:02:23.595744   61354 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0912 23:02:23.605126   61354 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0912 23:02:23.621542   61354 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0912 23:02:23.637919   61354 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0912 23:02:23.654869   61354 ssh_runner.go:195] Run: grep 192.168.39.214	control-plane.minikube.internal$ /etc/hosts
	I0912 23:02:23.658860   61354 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.214	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0912 23:02:23.670648   61354 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 23:02:23.787949   61354 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0912 23:02:23.804668   61354 certs.go:68] Setting up /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/default-k8s-diff-port-702201 for IP: 192.168.39.214
	I0912 23:02:23.804697   61354 certs.go:194] generating shared ca certs ...
	I0912 23:02:23.804718   61354 certs.go:226] acquiring lock for ca certs: {Name:mk5e19f2c2757edc3fcb6b43a39efee0885b349c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 23:02:23.804937   61354 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.key
	I0912 23:02:23.804998   61354 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.key
	I0912 23:02:23.805012   61354 certs.go:256] generating profile certs ...
	I0912 23:02:23.805110   61354 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/default-k8s-diff-port-702201/client.key
	I0912 23:02:23.805184   61354 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/default-k8s-diff-port-702201/apiserver.key.9ca3177b
	I0912 23:02:23.805231   61354 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/default-k8s-diff-port-702201/proxy-client.key
	I0912 23:02:23.805379   61354 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083.pem (1338 bytes)
	W0912 23:02:23.805411   61354 certs.go:480] ignoring /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083_empty.pem, impossibly tiny 0 bytes
	I0912 23:02:23.805420   61354 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem (1679 bytes)
	I0912 23:02:23.805449   61354 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem (1082 bytes)
	I0912 23:02:23.805480   61354 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem (1123 bytes)
	I0912 23:02:23.805519   61354 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem (1679 bytes)
	I0912 23:02:23.805574   61354 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem (1708 bytes)
	I0912 23:02:23.806196   61354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0912 23:02:23.834789   61354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0912 23:02:23.863030   61354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0912 23:02:23.890538   61354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0912 23:02:23.923946   61354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/default-k8s-diff-port-702201/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0912 23:02:23.952990   61354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/default-k8s-diff-port-702201/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0912 23:02:23.984025   61354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/default-k8s-diff-port-702201/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0912 23:02:24.013727   61354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/default-k8s-diff-port-702201/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0912 23:02:24.038060   61354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem --> /usr/share/ca-certificates/130832.pem (1708 bytes)
	I0912 23:02:24.061285   61354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0912 23:02:24.085128   61354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083.pem --> /usr/share/ca-certificates/13083.pem (1338 bytes)
	I0912 23:02:24.110174   61354 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0912 23:02:24.127185   61354 ssh_runner.go:195] Run: openssl version
	I0912 23:02:24.133215   61354 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0912 23:02:24.144390   61354 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0912 23:02:24.149357   61354 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 12 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0912 23:02:24.149432   61354 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0912 23:02:24.155228   61354 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0912 23:02:24.167254   61354 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13083.pem && ln -fs /usr/share/ca-certificates/13083.pem /etc/ssl/certs/13083.pem"
	I0912 23:02:24.178264   61354 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13083.pem
	I0912 23:02:24.183163   61354 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 12 21:47 /usr/share/ca-certificates/13083.pem
	I0912 23:02:24.183216   61354 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13083.pem
	I0912 23:02:24.188891   61354 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13083.pem /etc/ssl/certs/51391683.0"
	I0912 23:02:24.199682   61354 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130832.pem && ln -fs /usr/share/ca-certificates/130832.pem /etc/ssl/certs/130832.pem"
	I0912 23:02:24.210810   61354 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130832.pem
	I0912 23:02:24.215244   61354 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 12 21:47 /usr/share/ca-certificates/130832.pem
	I0912 23:02:24.215321   61354 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130832.pem
	I0912 23:02:24.221160   61354 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130832.pem /etc/ssl/certs/3ec20f2e.0"
	I0912 23:02:24.232246   61354 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0912 23:02:24.236796   61354 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0912 23:02:24.243930   61354 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0912 23:02:24.250402   61354 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0912 23:02:24.256470   61354 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0912 23:02:24.262495   61354 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0912 23:02:24.268433   61354 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
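Each `openssl x509 ... -checkend 86400` run above exits 0 only if the certificate will still be valid 86400 seconds (24 hours) from now. The same check, sketched in Go with crypto/x509 rather than by shelling out to openssl:

    // A sketch only: the Go equivalent of "openssl x509 -noout -checkend 86400".
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func validFor(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM data in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        // True when the cert is still valid d from now, i.e. what -checkend tests.
        return time.Now().Add(d).Before(cert.NotAfter), nil
    }

    func main() {
        ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        fmt.Println(ok, err)
    }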
	I0912 23:02:24.274410   61354 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-702201 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-702201 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.214 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 23:02:24.274499   61354 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0912 23:02:24.274574   61354 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0912 23:02:24.315011   61354 cri.go:89] found id: ""
	I0912 23:02:24.315073   61354 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0912 23:02:24.325319   61354 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0912 23:02:24.325341   61354 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0912 23:02:24.325384   61354 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0912 23:02:24.335529   61354 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0912 23:02:24.336936   61354 kubeconfig.go:125] found "default-k8s-diff-port-702201" server: "https://192.168.39.214:8444"
	I0912 23:02:24.340116   61354 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0912 23:02:24.350831   61354 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.214
	I0912 23:02:24.350869   61354 kubeadm.go:1160] stopping kube-system containers ...
	I0912 23:02:24.350883   61354 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0912 23:02:24.350974   61354 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0912 23:02:24.393329   61354 cri.go:89] found id: ""
	I0912 23:02:24.393405   61354 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0912 23:02:24.410979   61354 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0912 23:02:24.423185   61354 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0912 23:02:24.423201   61354 kubeadm.go:157] found existing configuration files:
	
	I0912 23:02:24.423243   61354 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0912 23:02:24.434365   61354 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0912 23:02:24.434424   61354 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0912 23:02:24.444193   61354 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0912 23:02:24.453990   61354 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0912 23:02:24.454047   61354 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0912 23:02:24.464493   61354 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0912 23:02:24.475213   61354 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0912 23:02:24.475290   61354 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0912 23:02:24.484665   61354 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0912 23:02:24.493882   61354 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0912 23:02:24.493943   61354 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0912 23:02:24.503337   61354 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0912 23:02:24.513303   61354 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:02:24.620334   61354 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:02:25.379199   61354 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:02:25.605374   61354 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:02:25.689838   61354 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:02:25.787873   61354 api_server.go:52] waiting for apiserver process to appear ...
	I0912 23:02:25.787952   61354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:26.288869   61354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:26.788863   61354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:24.746085   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:25.245836   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:25.745805   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:26.246312   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:26.745772   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:27.245309   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:27.745530   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:28.245792   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:28.745917   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:29.245542   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:27.474741   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:29.974093   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:29.019453   62943 api_server.go:269] stopped: https://192.168.50.253:8443/healthz: Get "https://192.168.50.253:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 23:02:29.019501   62943 api_server.go:253] Checking apiserver healthz at https://192.168.50.253:8443/healthz ...
	I0912 23:02:27.288650   61354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:27.788577   61354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:27.803146   61354 api_server.go:72] duration metric: took 2.015269708s to wait for apiserver process to appear ...
	I0912 23:02:27.803175   61354 api_server.go:88] waiting for apiserver healthz status ...
	I0912 23:02:27.803196   61354 api_server.go:253] Checking apiserver healthz at https://192.168.39.214:8444/healthz ...
	I0912 23:02:27.803838   61354 api_server.go:269] stopped: https://192.168.39.214:8444/healthz: Get "https://192.168.39.214:8444/healthz": dial tcp 192.168.39.214:8444: connect: connection refused
	I0912 23:02:28.304001   61354 api_server.go:253] Checking apiserver healthz at https://192.168.39.214:8444/healthz ...
	I0912 23:02:30.918251   61354 api_server.go:279] https://192.168.39.214:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0912 23:02:30.918285   61354 api_server.go:103] status: https://192.168.39.214:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0912 23:02:30.918300   61354 api_server.go:253] Checking apiserver healthz at https://192.168.39.214:8444/healthz ...
	I0912 23:02:30.985245   61354 api_server.go:279] https://192.168.39.214:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0912 23:02:30.985276   61354 api_server.go:103] status: https://192.168.39.214:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0912 23:02:31.303790   61354 api_server.go:253] Checking apiserver healthz at https://192.168.39.214:8444/healthz ...
	I0912 23:02:31.309221   61354 api_server.go:279] https://192.168.39.214:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0912 23:02:31.309255   61354 api_server.go:103] status: https://192.168.39.214:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0912 23:02:31.803907   61354 api_server.go:253] Checking apiserver healthz at https://192.168.39.214:8444/healthz ...
	I0912 23:02:31.808683   61354 api_server.go:279] https://192.168.39.214:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0912 23:02:31.808708   61354 api_server.go:103] status: https://192.168.39.214:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0912 23:02:32.303720   61354 api_server.go:253] Checking apiserver healthz at https://192.168.39.214:8444/healthz ...
	I0912 23:02:32.309378   61354 api_server.go:279] https://192.168.39.214:8444/healthz returned 200:
	ok
	I0912 23:02:32.318177   61354 api_server.go:141] control plane version: v1.31.1
	I0912 23:02:32.318207   61354 api_server.go:131] duration metric: took 4.515025163s to wait for apiserver health ...
	I0912 23:02:32.318217   61354 cni.go:84] Creating CNI manager for ""
	I0912 23:02:32.318225   61354 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0912 23:02:32.319660   61354 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
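The healthz sequence above treats the 403 and 500 responses as "not ready yet" and keeps polling until /healthz answers 200 "ok". A simplified sketch of such a polling loop; this is not minikube's api_server.go, and TLS verification is skipped here purely for illustration:

    // A sketch only: poll an apiserver /healthz URL until it returns 200 or a timeout expires.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Verification skipped only for this sketch; the real check can trust the cluster CA.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // /healthz answered 200 "ok"
                }
            }
            time.Sleep(500 * time.Millisecond) // roughly the retry cadence visible in the log
        }
        return fmt.Errorf("timed out waiting for %s", url)
    }

    func main() {
        fmt.Println(waitForHealthz("https://192.168.39.214:8444/healthz", time.Minute))
    }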
	I0912 23:02:29.746186   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:30.245501   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:30.745636   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:31.245440   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:31.745457   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:32.246318   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:32.745369   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:33.246152   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:33.746183   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:34.245452   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:31.974622   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:34.473549   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:34.019784   62943 api_server.go:269] stopped: https://192.168.50.253:8443/healthz: Get "https://192.168.50.253:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 23:02:34.019838   62943 api_server.go:253] Checking apiserver healthz at https://192.168.50.253:8443/healthz ...
	I0912 23:02:32.320695   61354 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0912 23:02:32.338749   61354 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0912 23:02:32.369921   61354 system_pods.go:43] waiting for kube-system pods to appear ...
	I0912 23:02:32.385934   61354 system_pods.go:59] 8 kube-system pods found
	I0912 23:02:32.385966   61354 system_pods.go:61] "coredns-7c65d6cfc9-ffms7" [d341bfb6-115b-4a9b-8ee5-ac0f6e0cf97a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0912 23:02:32.385986   61354 system_pods.go:61] "etcd-default-k8s-diff-port-702201" [c0c55fa9-3c65-4299-a1bb-59a55585a525] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0912 23:02:32.385996   61354 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-702201" [bf79734c-4cbc-4924-9358-f0196b357303] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0912 23:02:32.386007   61354 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-702201" [92a6ae59-ae75-4c08-a7dc-a77841be564b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0912 23:02:32.386019   61354 system_pods.go:61] "kube-proxy-x8hg2" [ef603b08-213d-4edb-85e6-e8b91f8fbbba] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0912 23:02:32.386027   61354 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-702201" [10021400-9446-46f6-aff0-e3eb3c0be96a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0912 23:02:32.386041   61354 system_pods.go:61] "metrics-server-6867b74b74-q5vlk" [d6719976-8c0c-444f-a1ea-dd3bdb0d5707] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0912 23:02:32.386051   61354 system_pods.go:61] "storage-provisioner" [6fdb298d-7e96-4cbb-b755-d866514e44b9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0912 23:02:32.386063   61354 system_pods.go:74] duration metric: took 16.120876ms to wait for pod list to return data ...
	I0912 23:02:32.386074   61354 node_conditions.go:102] verifying NodePressure condition ...
	I0912 23:02:32.391917   61354 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0912 23:02:32.391949   61354 node_conditions.go:123] node cpu capacity is 2
	I0912 23:02:32.391961   61354 node_conditions.go:105] duration metric: took 5.88075ms to run NodePressure ...
	I0912 23:02:32.391981   61354 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:02:32.671906   61354 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0912 23:02:32.677468   61354 kubeadm.go:739] kubelet initialised
	I0912 23:02:32.677494   61354 kubeadm.go:740] duration metric: took 5.561384ms waiting for restarted kubelet to initialise ...
	I0912 23:02:32.677503   61354 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 23:02:32.682823   61354 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-ffms7" in "kube-system" namespace to be "Ready" ...
	I0912 23:02:34.689536   61354 pod_ready.go:103] pod "coredns-7c65d6cfc9-ffms7" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:36.689748   61354 pod_ready.go:103] pod "coredns-7c65d6cfc9-ffms7" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:34.746241   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:35.246108   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:35.746087   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:36.245732   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:36.745659   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:37.245760   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:37.746137   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:38.245355   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:38.745905   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:39.246196   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:36.976523   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:39.473513   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:39.020907   62943 api_server.go:269] stopped: https://192.168.50.253:8443/healthz: Get "https://192.168.50.253:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 23:02:39.020949   62943 api_server.go:253] Checking apiserver healthz at https://192.168.50.253:8443/healthz ...
	I0912 23:02:39.398775   62943 api_server.go:269] stopped: https://192.168.50.253:8443/healthz: Get "https://192.168.50.253:8443/healthz": read tcp 192.168.50.1:34338->192.168.50.253:8443: read: connection reset by peer
	I0912 23:02:39.518000   62943 api_server.go:253] Checking apiserver healthz at https://192.168.50.253:8443/healthz ...
	I0912 23:02:39.518572   62943 api_server.go:269] stopped: https://192.168.50.253:8443/healthz: Get "https://192.168.50.253:8443/healthz": dial tcp 192.168.50.253:8443: connect: connection refused
	I0912 23:02:40.018526   62943 api_server.go:253] Checking apiserver healthz at https://192.168.50.253:8443/healthz ...
	I0912 23:02:40.019085   62943 api_server.go:269] stopped: https://192.168.50.253:8443/healthz: Get "https://192.168.50.253:8443/healthz": dial tcp 192.168.50.253:8443: connect: connection refused
	I0912 23:02:40.518456   62943 api_server.go:253] Checking apiserver healthz at https://192.168.50.253:8443/healthz ...
	I0912 23:02:37.692070   61354 pod_ready.go:93] pod "coredns-7c65d6cfc9-ffms7" in "kube-system" namespace has status "Ready":"True"
	I0912 23:02:37.692105   61354 pod_ready.go:82] duration metric: took 5.009256797s for pod "coredns-7c65d6cfc9-ffms7" in "kube-system" namespace to be "Ready" ...
	I0912 23:02:37.692119   61354 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-702201" in "kube-system" namespace to be "Ready" ...
	I0912 23:02:39.703004   61354 pod_ready.go:93] pod "etcd-default-k8s-diff-port-702201" in "kube-system" namespace has status "Ready":"True"
	I0912 23:02:39.703029   61354 pod_ready.go:82] duration metric: took 2.010902876s for pod "etcd-default-k8s-diff-port-702201" in "kube-system" namespace to be "Ready" ...
	I0912 23:02:39.703038   61354 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-702201" in "kube-system" namespace to be "Ready" ...
	I0912 23:02:41.709956   61354 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-702201" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:39.745643   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:40.245485   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:40.745582   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:41.245599   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:41.746339   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:42.246155   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:42.746334   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:43.245368   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:43.745371   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:44.246050   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:41.473779   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:43.475011   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:45.519472   62943 api_server.go:269] stopped: https://192.168.50.253:8443/healthz: Get "https://192.168.50.253:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 23:02:45.519513   62943 api_server.go:253] Checking apiserver healthz at https://192.168.50.253:8443/healthz ...
	I0912 23:02:44.210871   61354 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-702201" in "kube-system" namespace has status "Ready":"True"
	I0912 23:02:44.210896   61354 pod_ready.go:82] duration metric: took 4.507851295s for pod "kube-apiserver-default-k8s-diff-port-702201" in "kube-system" namespace to be "Ready" ...
	I0912 23:02:44.210905   61354 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-702201" in "kube-system" namespace to be "Ready" ...
	I0912 23:02:44.216677   61354 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-702201" in "kube-system" namespace has status "Ready":"True"
	I0912 23:02:44.216698   61354 pod_ready.go:82] duration metric: took 5.785493ms for pod "kube-controller-manager-default-k8s-diff-port-702201" in "kube-system" namespace to be "Ready" ...
	I0912 23:02:44.216708   61354 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-x8hg2" in "kube-system" namespace to be "Ready" ...
	I0912 23:02:44.220720   61354 pod_ready.go:93] pod "kube-proxy-x8hg2" in "kube-system" namespace has status "Ready":"True"
	I0912 23:02:44.220744   61354 pod_ready.go:82] duration metric: took 4.031371ms for pod "kube-proxy-x8hg2" in "kube-system" namespace to be "Ready" ...
	I0912 23:02:44.220753   61354 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-702201" in "kube-system" namespace to be "Ready" ...
	I0912 23:02:45.727199   61354 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-702201" in "kube-system" namespace has status "Ready":"True"
	I0912 23:02:45.727226   61354 pod_ready.go:82] duration metric: took 1.506465715s for pod "kube-scheduler-default-k8s-diff-port-702201" in "kube-system" namespace to be "Ready" ...
	I0912 23:02:45.727238   61354 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace to be "Ready" ...
	I0912 23:02:44.746354   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:45.245964   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:45.745631   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:46.246314   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:46.745483   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:47.245554   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:47.746311   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:48.246160   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:48.745999   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:49.246000   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:02:49.246093   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:02:49.286022   62386 cri.go:89] found id: ""
	I0912 23:02:49.286052   62386 logs.go:276] 0 containers: []
	W0912 23:02:49.286063   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:02:49.286070   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:02:49.286121   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:02:49.320469   62386 cri.go:89] found id: ""
	I0912 23:02:49.320508   62386 logs.go:276] 0 containers: []
	W0912 23:02:49.320527   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:02:49.320535   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:02:49.320635   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:02:45.973431   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:47.973882   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:49.974075   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:50.520522   62943 api_server.go:269] stopped: https://192.168.50.253:8443/healthz: Get "https://192.168.50.253:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 23:02:50.520570   62943 api_server.go:253] Checking apiserver healthz at https://192.168.50.253:8443/healthz ...
	I0912 23:02:47.732861   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:49.735642   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:52.232946   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:49.355651   62386 cri.go:89] found id: ""
	I0912 23:02:49.355682   62386 logs.go:276] 0 containers: []
	W0912 23:02:49.355694   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:02:49.355702   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:02:49.355757   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:02:49.387928   62386 cri.go:89] found id: ""
	I0912 23:02:49.387956   62386 logs.go:276] 0 containers: []
	W0912 23:02:49.387966   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:02:49.387980   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:02:49.388042   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:02:49.421154   62386 cri.go:89] found id: ""
	I0912 23:02:49.421184   62386 logs.go:276] 0 containers: []
	W0912 23:02:49.421192   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:02:49.421198   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:02:49.421258   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:02:49.460122   62386 cri.go:89] found id: ""
	I0912 23:02:49.460147   62386 logs.go:276] 0 containers: []
	W0912 23:02:49.460154   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:02:49.460159   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:02:49.460204   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:02:49.493113   62386 cri.go:89] found id: ""
	I0912 23:02:49.493136   62386 logs.go:276] 0 containers: []
	W0912 23:02:49.493144   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:02:49.493150   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:02:49.493196   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:02:49.525750   62386 cri.go:89] found id: ""
	I0912 23:02:49.525773   62386 logs.go:276] 0 containers: []
	W0912 23:02:49.525780   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:02:49.525790   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:02:49.525800   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:02:49.578720   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:02:49.578757   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:02:49.591483   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:02:49.591510   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:02:49.711769   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:02:49.711836   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:02:49.711854   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:02:49.792569   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:02:49.792620   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:02:52.333723   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:52.346359   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:02:52.346428   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:02:52.379990   62386 cri.go:89] found id: ""
	I0912 23:02:52.380017   62386 logs.go:276] 0 containers: []
	W0912 23:02:52.380025   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:02:52.380032   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:02:52.380089   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:02:52.413963   62386 cri.go:89] found id: ""
	I0912 23:02:52.413994   62386 logs.go:276] 0 containers: []
	W0912 23:02:52.414002   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:02:52.414007   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:02:52.414064   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:02:52.463982   62386 cri.go:89] found id: ""
	I0912 23:02:52.464012   62386 logs.go:276] 0 containers: []
	W0912 23:02:52.464024   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:02:52.464031   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:02:52.464119   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:02:52.497797   62386 cri.go:89] found id: ""
	I0912 23:02:52.497830   62386 logs.go:276] 0 containers: []
	W0912 23:02:52.497840   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:02:52.497848   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:02:52.497914   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:02:52.531946   62386 cri.go:89] found id: ""
	I0912 23:02:52.531974   62386 logs.go:276] 0 containers: []
	W0912 23:02:52.531982   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:02:52.531987   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:02:52.532036   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:02:52.563802   62386 cri.go:89] found id: ""
	I0912 23:02:52.563837   62386 logs.go:276] 0 containers: []
	W0912 23:02:52.563846   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:02:52.563859   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:02:52.563914   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:02:52.597408   62386 cri.go:89] found id: ""
	I0912 23:02:52.597437   62386 logs.go:276] 0 containers: []
	W0912 23:02:52.597447   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:02:52.597457   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:02:52.597529   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:02:52.634991   62386 cri.go:89] found id: ""
	I0912 23:02:52.635026   62386 logs.go:276] 0 containers: []
	W0912 23:02:52.635037   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:02:52.635049   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:02:52.635061   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:02:52.711072   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:02:52.711112   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:02:52.755335   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:02:52.755359   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:02:52.806660   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:02:52.806694   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:02:52.819718   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:02:52.819751   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:02:52.897247   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:02:52.474466   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:54.974351   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:55.520831   62943 api_server.go:269] stopped: https://192.168.50.253:8443/healthz: Get "https://192.168.50.253:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 23:02:55.520879   62943 api_server.go:253] Checking apiserver healthz at https://192.168.50.253:8443/healthz ...
	I0912 23:02:54.233244   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:56.234057   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:55.398028   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:55.411839   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:02:55.411920   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:02:55.446367   62386 cri.go:89] found id: ""
	I0912 23:02:55.446402   62386 logs.go:276] 0 containers: []
	W0912 23:02:55.446414   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:02:55.446421   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:02:55.446489   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:02:55.481672   62386 cri.go:89] found id: ""
	I0912 23:02:55.481696   62386 logs.go:276] 0 containers: []
	W0912 23:02:55.481704   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:02:55.481709   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:02:55.481766   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:02:55.517577   62386 cri.go:89] found id: ""
	I0912 23:02:55.517628   62386 logs.go:276] 0 containers: []
	W0912 23:02:55.517640   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:02:55.517651   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:02:55.517724   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:02:55.553526   62386 cri.go:89] found id: ""
	I0912 23:02:55.553554   62386 logs.go:276] 0 containers: []
	W0912 23:02:55.553565   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:02:55.553572   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:02:55.553659   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:02:55.585628   62386 cri.go:89] found id: ""
	I0912 23:02:55.585658   62386 logs.go:276] 0 containers: []
	W0912 23:02:55.585666   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:02:55.585673   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:02:55.585729   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:02:55.619504   62386 cri.go:89] found id: ""
	I0912 23:02:55.619529   62386 logs.go:276] 0 containers: []
	W0912 23:02:55.619537   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:02:55.619543   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:02:55.619612   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:02:55.652478   62386 cri.go:89] found id: ""
	I0912 23:02:55.652505   62386 logs.go:276] 0 containers: []
	W0912 23:02:55.652513   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:02:55.652519   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:02:55.652571   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:02:55.685336   62386 cri.go:89] found id: ""
	I0912 23:02:55.685367   62386 logs.go:276] 0 containers: []
	W0912 23:02:55.685378   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:02:55.685389   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:02:55.685405   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:02:55.766786   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:02:55.766820   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:02:55.805897   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:02:55.805921   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:02:55.858536   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:02:55.858578   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:02:55.872300   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:02:55.872330   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:02:55.940023   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:02:58.440335   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:58.454063   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:02:58.454146   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:02:58.495390   62386 cri.go:89] found id: ""
	I0912 23:02:58.495418   62386 logs.go:276] 0 containers: []
	W0912 23:02:58.495429   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:02:58.495436   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:02:58.495491   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:02:58.533323   62386 cri.go:89] found id: ""
	I0912 23:02:58.533361   62386 logs.go:276] 0 containers: []
	W0912 23:02:58.533369   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:02:58.533374   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:02:58.533426   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:02:58.570749   62386 cri.go:89] found id: ""
	I0912 23:02:58.570772   62386 logs.go:276] 0 containers: []
	W0912 23:02:58.570779   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:02:58.570785   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:02:58.570838   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:02:58.602812   62386 cri.go:89] found id: ""
	I0912 23:02:58.602841   62386 logs.go:276] 0 containers: []
	W0912 23:02:58.602852   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:02:58.602861   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:02:58.602920   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:02:58.641837   62386 cri.go:89] found id: ""
	I0912 23:02:58.641868   62386 logs.go:276] 0 containers: []
	W0912 23:02:58.641875   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:02:58.641881   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:02:58.641951   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:02:58.679411   62386 cri.go:89] found id: ""
	I0912 23:02:58.679437   62386 logs.go:276] 0 containers: []
	W0912 23:02:58.679444   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:02:58.679449   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:02:58.679495   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:02:58.715666   62386 cri.go:89] found id: ""
	I0912 23:02:58.715693   62386 logs.go:276] 0 containers: []
	W0912 23:02:58.715701   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:02:58.715707   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:02:58.715765   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:02:58.750345   62386 cri.go:89] found id: ""
	I0912 23:02:58.750367   62386 logs.go:276] 0 containers: []
	W0912 23:02:58.750375   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:02:58.750383   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:02:58.750395   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:02:58.803683   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:02:58.803722   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:02:58.819479   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:02:58.819512   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:02:58.939708   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:02:58.939733   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:02:58.939752   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:02:59.031209   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:02:59.031241   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:02:58.535050   62943 api_server.go:279] https://192.168.50.253:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0912 23:02:58.535080   62943 api_server.go:103] status: https://192.168.50.253:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0912 23:02:58.535094   62943 api_server.go:253] Checking apiserver healthz at https://192.168.50.253:8443/healthz ...
	I0912 23:02:58.552759   62943 api_server.go:279] https://192.168.50.253:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0912 23:02:58.552792   62943 api_server.go:103] status: https://192.168.50.253:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0912 23:02:59.018401   62943 api_server.go:253] Checking apiserver healthz at https://192.168.50.253:8443/healthz ...
	I0912 23:02:59.026830   62943 api_server.go:279] https://192.168.50.253:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0912 23:02:59.026861   62943 api_server.go:103] status: https://192.168.50.253:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0912 23:02:59.518413   62943 api_server.go:253] Checking apiserver healthz at https://192.168.50.253:8443/healthz ...
	I0912 23:02:59.523435   62943 api_server.go:279] https://192.168.50.253:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0912 23:02:59.523469   62943 api_server.go:103] status: https://192.168.50.253:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0912 23:03:00.018452   62943 api_server.go:253] Checking apiserver healthz at https://192.168.50.253:8443/healthz ...
	I0912 23:03:00.023786   62943 api_server.go:279] https://192.168.50.253:8443/healthz returned 200:
	ok
	I0912 23:03:00.033543   62943 api_server.go:141] control plane version: v1.31.1
	I0912 23:03:00.033575   62943 api_server.go:131] duration metric: took 41.016185943s to wait for apiserver health ...
	I0912 23:03:00.033585   62943 cni.go:84] Creating CNI manager for ""
	I0912 23:03:00.033595   62943 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0912 23:03:00.035383   62943 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0912 23:02:56.975435   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:59.473968   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:00.036655   62943 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0912 23:03:00.051876   62943 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0912 23:03:00.082432   62943 system_pods.go:43] waiting for kube-system pods to appear ...
	I0912 23:03:00.101427   62943 system_pods.go:59] 8 kube-system pods found
	I0912 23:03:00.101465   62943 system_pods.go:61] "coredns-7c65d6cfc9-twck7" [2fb00aff-8a30-4634-a804-1419eabfe727] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0912 23:03:00.101477   62943 system_pods.go:61] "etcd-no-preload-380092" [69b6be54-dd29-47c7-b990-a64335dd6d7b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0912 23:03:00.101488   62943 system_pods.go:61] "kube-apiserver-no-preload-380092" [10ff70db-3c74-42ad-841d-d2241de4b98e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0912 23:03:00.101498   62943 system_pods.go:61] "kube-controller-manager-no-preload-380092" [6e91c5b2-36fc-404e-9f09-c1bc9da46774] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0912 23:03:00.101512   62943 system_pods.go:61] "kube-proxy-z4rcx" [d17caa2e-d0fe-45e8-a96c-d1cc1b55e665] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0912 23:03:00.101518   62943 system_pods.go:61] "kube-scheduler-no-preload-380092" [5c634cac-6b28-4757-ba85-891c4c2fa34e] Running
	I0912 23:03:00.101526   62943 system_pods.go:61] "metrics-server-6867b74b74-4v7f5" [10c8c536-9ca6-4e75-96f2-7324f3d3d379] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0912 23:03:00.101537   62943 system_pods.go:61] "storage-provisioner" [f173a1f6-3772-4f08-8e40-2215cc9d2878] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0912 23:03:00.101554   62943 system_pods.go:74] duration metric: took 19.092541ms to wait for pod list to return data ...
	I0912 23:03:00.101566   62943 node_conditions.go:102] verifying NodePressure condition ...
	I0912 23:03:00.105149   62943 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0912 23:03:00.105183   62943 node_conditions.go:123] node cpu capacity is 2
	I0912 23:03:00.105197   62943 node_conditions.go:105] duration metric: took 3.62458ms to run NodePressure ...
	I0912 23:03:00.105218   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:03:00.583613   62943 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0912 23:03:00.588976   62943 kubeadm.go:739] kubelet initialised
	I0912 23:03:00.589000   62943 kubeadm.go:740] duration metric: took 5.359605ms waiting for restarted kubelet to initialise ...
	I0912 23:03:00.589010   62943 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 23:03:00.598717   62943 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-twck7" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:00.619126   62943 pod_ready.go:98] node "no-preload-380092" hosting pod "coredns-7c65d6cfc9-twck7" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-380092" has status "Ready":"False"
	I0912 23:03:00.619153   62943 pod_ready.go:82] duration metric: took 20.405609ms for pod "coredns-7c65d6cfc9-twck7" in "kube-system" namespace to be "Ready" ...
	E0912 23:03:00.619162   62943 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-380092" hosting pod "coredns-7c65d6cfc9-twck7" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-380092" has status "Ready":"False"
	I0912 23:03:00.619169   62943 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-380092" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:00.628727   62943 pod_ready.go:98] node "no-preload-380092" hosting pod "etcd-no-preload-380092" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-380092" has status "Ready":"False"
	I0912 23:03:00.628766   62943 pod_ready.go:82] duration metric: took 9.588722ms for pod "etcd-no-preload-380092" in "kube-system" namespace to be "Ready" ...
	E0912 23:03:00.628778   62943 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-380092" hosting pod "etcd-no-preload-380092" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-380092" has status "Ready":"False"
	I0912 23:03:00.628786   62943 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-380092" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:00.638502   62943 pod_ready.go:98] node "no-preload-380092" hosting pod "kube-apiserver-no-preload-380092" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-380092" has status "Ready":"False"
	I0912 23:03:00.638531   62943 pod_ready.go:82] duration metric: took 9.737333ms for pod "kube-apiserver-no-preload-380092" in "kube-system" namespace to be "Ready" ...
	E0912 23:03:00.638545   62943 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-380092" hosting pod "kube-apiserver-no-preload-380092" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-380092" has status "Ready":"False"
	I0912 23:03:00.638554   62943 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-380092" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:00.644886   62943 pod_ready.go:98] node "no-preload-380092" hosting pod "kube-controller-manager-no-preload-380092" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-380092" has status "Ready":"False"
	I0912 23:03:00.644917   62943 pod_ready.go:82] duration metric: took 6.353295ms for pod "kube-controller-manager-no-preload-380092" in "kube-system" namespace to be "Ready" ...
	E0912 23:03:00.644928   62943 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-380092" hosting pod "kube-controller-manager-no-preload-380092" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-380092" has status "Ready":"False"
	I0912 23:03:00.644936   62943 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-z4rcx" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:00.987565   62943 pod_ready.go:98] node "no-preload-380092" hosting pod "kube-proxy-z4rcx" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-380092" has status "Ready":"False"
	I0912 23:03:00.987592   62943 pod_ready.go:82] duration metric: took 342.646574ms for pod "kube-proxy-z4rcx" in "kube-system" namespace to be "Ready" ...
	E0912 23:03:00.987605   62943 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-380092" hosting pod "kube-proxy-z4rcx" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-380092" has status "Ready":"False"
	I0912 23:03:00.987614   62943 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-380092" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:01.386942   62943 pod_ready.go:98] node "no-preload-380092" hosting pod "kube-scheduler-no-preload-380092" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-380092" has status "Ready":"False"
	I0912 23:03:01.386970   62943 pod_ready.go:82] duration metric: took 399.349066ms for pod "kube-scheduler-no-preload-380092" in "kube-system" namespace to be "Ready" ...
	E0912 23:03:01.386983   62943 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-380092" hosting pod "kube-scheduler-no-preload-380092" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-380092" has status "Ready":"False"
	I0912 23:03:01.386991   62943 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:01.787866   62943 pod_ready.go:98] node "no-preload-380092" hosting pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-380092" has status "Ready":"False"
	I0912 23:03:01.787897   62943 pod_ready.go:82] duration metric: took 400.896489ms for pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace to be "Ready" ...
	E0912 23:03:01.787906   62943 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-380092" hosting pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-380092" has status "Ready":"False"
	I0912 23:03:01.787913   62943 pod_ready.go:39] duration metric: took 1.198893167s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 23:03:01.787929   62943 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0912 23:03:01.803486   62943 ops.go:34] apiserver oom_adj: -16
	I0912 23:03:01.803507   62943 kubeadm.go:597] duration metric: took 45.468348317s to restartPrimaryControlPlane
	I0912 23:03:01.803518   62943 kubeadm.go:394] duration metric: took 45.529458545s to StartCluster
	I0912 23:03:01.803533   62943 settings.go:142] acquiring lock: {Name:mk9c957feafb8d7ccd833ad0c106ef81ecfe5ba8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 23:03:01.803615   62943 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19616-5891/kubeconfig
	I0912 23:03:01.806430   62943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/kubeconfig: {Name:mkffb46c3e9d2b8baebc7237b48bf41bccf1a52c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 23:03:01.806730   62943 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.253 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0912 23:03:01.806804   62943 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0912 23:03:01.806874   62943 addons.go:69] Setting storage-provisioner=true in profile "no-preload-380092"
	I0912 23:03:01.806898   62943 addons.go:69] Setting default-storageclass=true in profile "no-preload-380092"
	I0912 23:03:01.806914   62943 addons.go:69] Setting metrics-server=true in profile "no-preload-380092"
	I0912 23:03:01.806932   62943 addons.go:234] Setting addon metrics-server=true in "no-preload-380092"
	I0912 23:03:01.806937   62943 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-380092"
	W0912 23:03:01.806944   62943 addons.go:243] addon metrics-server should already be in state true
	I0912 23:03:01.806948   62943 config.go:182] Loaded profile config "no-preload-380092": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 23:03:01.806978   62943 host.go:66] Checking if "no-preload-380092" exists ...
	I0912 23:03:01.806909   62943 addons.go:234] Setting addon storage-provisioner=true in "no-preload-380092"
	W0912 23:03:01.806995   62943 addons.go:243] addon storage-provisioner should already be in state true
	I0912 23:03:01.807018   62943 host.go:66] Checking if "no-preload-380092" exists ...
	I0912 23:03:01.807284   62943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:03:01.807301   62943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:03:01.807309   62943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:03:01.807349   62943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:03:01.807363   62943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:03:01.807373   62943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:03:01.809540   62943 out.go:177] * Verifying Kubernetes components...
	I0912 23:03:01.810843   62943 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 23:03:01.824985   62943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32987
	I0912 23:03:01.825219   62943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45739
	I0912 23:03:01.825700   62943 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:03:01.826207   62943 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:03:01.826562   62943 main.go:141] libmachine: Using API Version  1
	I0912 23:03:01.826586   62943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:03:01.826737   62943 main.go:141] libmachine: Using API Version  1
	I0912 23:03:01.826759   62943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:03:01.826970   62943 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:03:01.827047   62943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35143
	I0912 23:03:01.827219   62943 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:03:01.827623   62943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:03:01.827668   62943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:03:01.827724   62943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:03:01.827752   62943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:03:01.827946   62943 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:03:01.828629   62943 main.go:141] libmachine: Using API Version  1
	I0912 23:03:01.828652   62943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:03:01.829143   62943 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:03:01.829336   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetState
	I0912 23:03:01.833298   62943 addons.go:234] Setting addon default-storageclass=true in "no-preload-380092"
	W0912 23:03:01.833320   62943 addons.go:243] addon default-storageclass should already be in state true
	I0912 23:03:01.833348   62943 host.go:66] Checking if "no-preload-380092" exists ...
	I0912 23:03:01.833737   62943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:03:01.833768   62943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:03:01.847465   62943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40485
	I0912 23:03:01.848132   62943 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:03:01.848218   62943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46487
	I0912 23:03:01.848635   62943 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:03:01.849006   62943 main.go:141] libmachine: Using API Version  1
	I0912 23:03:01.849024   62943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:03:01.849185   62943 main.go:141] libmachine: Using API Version  1
	I0912 23:03:01.849197   62943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:03:01.849589   62943 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:03:01.849756   62943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41723
	I0912 23:03:01.849909   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetState
	I0912 23:03:01.850287   62943 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:03:01.850375   62943 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:03:01.850446   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetState
	I0912 23:03:01.851043   62943 main.go:141] libmachine: Using API Version  1
	I0912 23:03:01.851061   62943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:03:01.851397   62943 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:03:01.851935   62943 main.go:141] libmachine: (no-preload-380092) Calling .DriverName
	I0912 23:03:01.852036   62943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:03:01.852082   62943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:03:01.852907   62943 main.go:141] libmachine: (no-preload-380092) Calling .DriverName
	I0912 23:03:01.854324   62943 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0912 23:03:01.855272   62943 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 23:03:01.856071   62943 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0912 23:03:01.856092   62943 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0912 23:03:01.856115   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHHostname
	I0912 23:03:01.857163   62943 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0912 23:03:01.857184   62943 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0912 23:03:01.857206   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHHostname
	I0912 23:03:01.861326   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:03:01.861344   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:03:01.861874   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:03:01.861894   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:03:01.862197   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHPort
	I0912 23:03:01.862292   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:03:01.862588   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:03:01.862627   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHKeyPath
	I0912 23:03:01.862668   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHPort
	I0912 23:03:01.862751   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHUsername
	I0912 23:03:01.862900   62943 sshutil.go:53] new ssh client: &{IP:192.168.50.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/no-preload-380092/id_rsa Username:docker}
	I0912 23:03:01.862917   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHKeyPath
	I0912 23:03:01.863057   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHUsername
	I0912 23:03:01.863161   62943 sshutil.go:53] new ssh client: &{IP:192.168.50.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/no-preload-380092/id_rsa Username:docker}
	I0912 23:03:01.872673   62943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42483
	I0912 23:03:01.873156   62943 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:03:01.873848   62943 main.go:141] libmachine: Using API Version  1
	I0912 23:03:01.873924   62943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:03:01.874438   62943 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:03:01.874719   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetState
	I0912 23:03:01.876928   62943 main.go:141] libmachine: (no-preload-380092) Calling .DriverName
	I0912 23:03:01.877226   62943 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0912 23:03:01.877252   62943 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0912 23:03:01.877268   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHHostname
	I0912 23:03:01.880966   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:03:01.881372   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:03:01.881399   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:03:01.881915   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHPort
	I0912 23:03:01.885353   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHKeyPath
	I0912 23:03:01.885585   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHUsername
	I0912 23:03:01.885765   62943 sshutil.go:53] new ssh client: &{IP:192.168.50.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/no-preload-380092/id_rsa Username:docker}
	I0912 23:02:58.234446   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:00.235816   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:02.035632   62943 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0912 23:03:02.065690   62943 node_ready.go:35] waiting up to 6m0s for node "no-preload-380092" to be "Ready" ...
	I0912 23:03:02.132250   62943 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0912 23:03:02.148150   62943 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0912 23:03:02.270629   62943 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0912 23:03:02.270652   62943 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0912 23:03:02.346093   62943 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0912 23:03:02.346119   62943 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0912 23:03:02.371110   62943 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0912 23:03:02.371133   62943 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0912 23:03:02.415856   62943 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0912 23:03:03.287692   62943 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.13950787s)
	I0912 23:03:03.287695   62943 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.155412179s)
	I0912 23:03:03.287752   62943 main.go:141] libmachine: Making call to close driver server
	I0912 23:03:03.287756   62943 main.go:141] libmachine: Making call to close driver server
	I0912 23:03:03.287764   62943 main.go:141] libmachine: (no-preload-380092) Calling .Close
	I0912 23:03:03.287769   62943 main.go:141] libmachine: (no-preload-380092) Calling .Close
	I0912 23:03:03.288100   62943 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:03:03.288115   62943 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:03:03.288124   62943 main.go:141] libmachine: Making call to close driver server
	I0912 23:03:03.288130   62943 main.go:141] libmachine: (no-preload-380092) Calling .Close
	I0912 23:03:03.288252   62943 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:03:03.288270   62943 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:03:03.288293   62943 main.go:141] libmachine: (no-preload-380092) DBG | Closing plugin on server side
	I0912 23:03:03.288297   62943 main.go:141] libmachine: Making call to close driver server
	I0912 23:03:03.288454   62943 main.go:141] libmachine: (no-preload-380092) Calling .Close
	I0912 23:03:03.288321   62943 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:03:03.288507   62943 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:03:03.288346   62943 main.go:141] libmachine: (no-preload-380092) DBG | Closing plugin on server side
	I0912 23:03:03.288671   62943 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:03:03.288682   62943 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:03:03.294958   62943 main.go:141] libmachine: Making call to close driver server
	I0912 23:03:03.294982   62943 main.go:141] libmachine: (no-preload-380092) Calling .Close
	I0912 23:03:03.295233   62943 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:03:03.295252   62943 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:03:03.295254   62943 main.go:141] libmachine: (no-preload-380092) DBG | Closing plugin on server side
	I0912 23:03:03.492450   62943 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.076542284s)
	I0912 23:03:03.492503   62943 main.go:141] libmachine: Making call to close driver server
	I0912 23:03:03.492516   62943 main.go:141] libmachine: (no-preload-380092) Calling .Close
	I0912 23:03:03.492830   62943 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:03:03.492855   62943 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:03:03.492866   62943 main.go:141] libmachine: Making call to close driver server
	I0912 23:03:03.492885   62943 main.go:141] libmachine: (no-preload-380092) Calling .Close
	I0912 23:03:03.493108   62943 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:03:03.493121   62943 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:03:03.493132   62943 addons.go:475] Verifying addon metrics-server=true in "no-preload-380092"
	I0912 23:03:03.495865   62943 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0912 23:03:01.578409   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:01.591929   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:01.592004   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:01.626295   62386 cri.go:89] found id: ""
	I0912 23:03:01.626327   62386 logs.go:276] 0 containers: []
	W0912 23:03:01.626339   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:01.626346   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:01.626406   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:01.660489   62386 cri.go:89] found id: ""
	I0912 23:03:01.660520   62386 logs.go:276] 0 containers: []
	W0912 23:03:01.660543   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:01.660563   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:01.660618   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:01.694378   62386 cri.go:89] found id: ""
	I0912 23:03:01.694401   62386 logs.go:276] 0 containers: []
	W0912 23:03:01.694408   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:01.694414   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:01.694467   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:01.733170   62386 cri.go:89] found id: ""
	I0912 23:03:01.733202   62386 logs.go:276] 0 containers: []
	W0912 23:03:01.733211   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:01.733237   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:01.733307   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:01.766419   62386 cri.go:89] found id: ""
	I0912 23:03:01.766449   62386 logs.go:276] 0 containers: []
	W0912 23:03:01.766457   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:01.766467   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:01.766530   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:01.802964   62386 cri.go:89] found id: ""
	I0912 23:03:01.802988   62386 logs.go:276] 0 containers: []
	W0912 23:03:01.802995   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:01.803001   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:01.803047   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:01.846231   62386 cri.go:89] found id: ""
	I0912 23:03:01.846257   62386 logs.go:276] 0 containers: []
	W0912 23:03:01.846268   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:01.846276   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:01.846340   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:01.889353   62386 cri.go:89] found id: ""
	I0912 23:03:01.889379   62386 logs.go:276] 0 containers: []
	W0912 23:03:01.889387   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:01.889396   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:01.889407   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:01.904850   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:01.904876   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:01.986288   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:01.986311   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:01.986328   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:02.070616   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:02.070646   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:02.111931   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:02.111959   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:01.474395   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:03.974266   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:03.497285   62943 addons.go:510] duration metric: took 1.690482366s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0912 23:03:04.069715   62943 node_ready.go:53] node "no-preload-380092" has status "Ready":"False"
	I0912 23:03:06.070086   62943 node_ready.go:53] node "no-preload-380092" has status "Ready":"False"
	I0912 23:03:02.734363   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:04.735355   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:07.235634   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:04.676429   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:04.689177   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:04.689240   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:04.721393   62386 cri.go:89] found id: ""
	I0912 23:03:04.721420   62386 logs.go:276] 0 containers: []
	W0912 23:03:04.721431   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:04.721437   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:04.721494   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:04.754239   62386 cri.go:89] found id: ""
	I0912 23:03:04.754270   62386 logs.go:276] 0 containers: []
	W0912 23:03:04.754281   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:04.754288   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:04.754340   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:04.787546   62386 cri.go:89] found id: ""
	I0912 23:03:04.787576   62386 logs.go:276] 0 containers: []
	W0912 23:03:04.787590   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:04.787597   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:04.787657   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:04.821051   62386 cri.go:89] found id: ""
	I0912 23:03:04.821141   62386 logs.go:276] 0 containers: []
	W0912 23:03:04.821151   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:04.821157   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:04.821210   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:04.853893   62386 cri.go:89] found id: ""
	I0912 23:03:04.853918   62386 logs.go:276] 0 containers: []
	W0912 23:03:04.853928   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:04.853935   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:04.854013   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:04.887798   62386 cri.go:89] found id: ""
	I0912 23:03:04.887832   62386 logs.go:276] 0 containers: []
	W0912 23:03:04.887843   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:04.887850   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:04.887911   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:04.921562   62386 cri.go:89] found id: ""
	I0912 23:03:04.921587   62386 logs.go:276] 0 containers: []
	W0912 23:03:04.921595   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:04.921600   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:04.921667   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:04.956794   62386 cri.go:89] found id: ""
	I0912 23:03:04.956828   62386 logs.go:276] 0 containers: []
	W0912 23:03:04.956836   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:04.956845   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:04.956856   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:04.993926   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:04.993956   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:05.045381   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:05.045425   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:05.058626   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:05.058665   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:05.128158   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:05.128187   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:05.128205   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:07.707336   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:07.720573   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:07.720646   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:07.756694   62386 cri.go:89] found id: ""
	I0912 23:03:07.756716   62386 logs.go:276] 0 containers: []
	W0912 23:03:07.756724   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:07.756730   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:07.756777   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:07.789255   62386 cri.go:89] found id: ""
	I0912 23:03:07.789286   62386 logs.go:276] 0 containers: []
	W0912 23:03:07.789295   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:07.789318   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:07.789405   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:07.822472   62386 cri.go:89] found id: ""
	I0912 23:03:07.822510   62386 logs.go:276] 0 containers: []
	W0912 23:03:07.822525   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:07.822534   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:07.822594   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:07.859070   62386 cri.go:89] found id: ""
	I0912 23:03:07.859102   62386 logs.go:276] 0 containers: []
	W0912 23:03:07.859114   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:07.859122   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:07.859190   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:07.895128   62386 cri.go:89] found id: ""
	I0912 23:03:07.895155   62386 logs.go:276] 0 containers: []
	W0912 23:03:07.895163   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:07.895169   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:07.895225   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:07.927397   62386 cri.go:89] found id: ""
	I0912 23:03:07.927425   62386 logs.go:276] 0 containers: []
	W0912 23:03:07.927435   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:07.927442   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:07.927506   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:07.965500   62386 cri.go:89] found id: ""
	I0912 23:03:07.965534   62386 logs.go:276] 0 containers: []
	W0912 23:03:07.965546   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:07.965555   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:07.965635   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:08.002921   62386 cri.go:89] found id: ""
	I0912 23:03:08.002952   62386 logs.go:276] 0 containers: []
	W0912 23:03:08.002964   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:08.002974   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:08.002989   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:08.054610   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:08.054646   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:08.071096   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:08.071127   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:08.145573   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:08.145603   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:08.145641   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:08.232606   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:08.232639   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:05.974395   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:08.473180   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:10.474725   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:08.076176   62943 node_ready.go:53] node "no-preload-380092" has status "Ready":"False"
	I0912 23:03:09.570274   62943 node_ready.go:49] node "no-preload-380092" has status "Ready":"True"
	I0912 23:03:09.570298   62943 node_ready.go:38] duration metric: took 7.504574956s for node "no-preload-380092" to be "Ready" ...
	I0912 23:03:09.570308   62943 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 23:03:09.576111   62943 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-twck7" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:09.581239   62943 pod_ready.go:93] pod "coredns-7c65d6cfc9-twck7" in "kube-system" namespace has status "Ready":"True"
	I0912 23:03:09.581261   62943 pod_ready.go:82] duration metric: took 5.122813ms for pod "coredns-7c65d6cfc9-twck7" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:09.581277   62943 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-380092" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:09.585918   62943 pod_ready.go:93] pod "etcd-no-preload-380092" in "kube-system" namespace has status "Ready":"True"
	I0912 23:03:09.585942   62943 pod_ready.go:82] duration metric: took 4.657444ms for pod "etcd-no-preload-380092" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:09.585951   62943 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-380092" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:09.591114   62943 pod_ready.go:93] pod "kube-apiserver-no-preload-380092" in "kube-system" namespace has status "Ready":"True"
	I0912 23:03:09.591136   62943 pod_ready.go:82] duration metric: took 5.179585ms for pod "kube-apiserver-no-preload-380092" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:09.591145   62943 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-380092" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:11.598000   62943 pod_ready.go:103] pod "kube-controller-manager-no-preload-380092" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:09.734628   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:12.233572   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:10.770737   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:10.783728   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:10.783803   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:10.818792   62386 cri.go:89] found id: ""
	I0912 23:03:10.818827   62386 logs.go:276] 0 containers: []
	W0912 23:03:10.818839   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:10.818847   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:10.818913   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:10.851711   62386 cri.go:89] found id: ""
	I0912 23:03:10.851738   62386 logs.go:276] 0 containers: []
	W0912 23:03:10.851750   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:10.851757   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:10.851817   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:10.886935   62386 cri.go:89] found id: ""
	I0912 23:03:10.886963   62386 logs.go:276] 0 containers: []
	W0912 23:03:10.886973   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:10.886979   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:10.887033   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:10.923175   62386 cri.go:89] found id: ""
	I0912 23:03:10.923201   62386 logs.go:276] 0 containers: []
	W0912 23:03:10.923208   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:10.923214   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:10.923261   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:10.959865   62386 cri.go:89] found id: ""
	I0912 23:03:10.959890   62386 logs.go:276] 0 containers: []
	W0912 23:03:10.959897   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:10.959902   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:10.959952   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:10.995049   62386 cri.go:89] found id: ""
	I0912 23:03:10.995079   62386 logs.go:276] 0 containers: []
	W0912 23:03:10.995090   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:10.995097   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:10.995156   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:11.030132   62386 cri.go:89] found id: ""
	I0912 23:03:11.030157   62386 logs.go:276] 0 containers: []
	W0912 23:03:11.030166   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:11.030173   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:11.030242   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:11.062899   62386 cri.go:89] found id: ""
	I0912 23:03:11.062928   62386 logs.go:276] 0 containers: []
	W0912 23:03:11.062936   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:11.062945   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:11.062956   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:11.116511   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:11.116546   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:11.131472   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:11.131504   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:11.202744   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:11.202765   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:11.202781   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:11.293973   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:11.294011   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:13.833125   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:13.846624   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:13.846737   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:13.881744   62386 cri.go:89] found id: ""
	I0912 23:03:13.881784   62386 logs.go:276] 0 containers: []
	W0912 23:03:13.881794   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:13.881802   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:13.881861   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:13.921678   62386 cri.go:89] found id: ""
	I0912 23:03:13.921703   62386 logs.go:276] 0 containers: []
	W0912 23:03:13.921713   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:13.921719   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:13.921778   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:13.960039   62386 cri.go:89] found id: ""
	I0912 23:03:13.960067   62386 logs.go:276] 0 containers: []
	W0912 23:03:13.960077   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:13.960084   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:13.960150   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:14.001255   62386 cri.go:89] found id: ""
	I0912 23:03:14.001281   62386 logs.go:276] 0 containers: []
	W0912 23:03:14.001293   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:14.001318   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:14.001374   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:14.037212   62386 cri.go:89] found id: ""
	I0912 23:03:14.037241   62386 logs.go:276] 0 containers: []
	W0912 23:03:14.037252   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:14.037259   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:14.037319   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:14.071538   62386 cri.go:89] found id: ""
	I0912 23:03:14.071574   62386 logs.go:276] 0 containers: []
	W0912 23:03:14.071582   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:14.071588   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:14.071639   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:14.105561   62386 cri.go:89] found id: ""
	I0912 23:03:14.105590   62386 logs.go:276] 0 containers: []
	W0912 23:03:14.105598   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:14.105604   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:14.105682   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:14.139407   62386 cri.go:89] found id: ""
	I0912 23:03:14.139432   62386 logs.go:276] 0 containers: []
	W0912 23:03:14.139440   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:14.139449   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:14.139463   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:14.195367   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:14.195402   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:14.208632   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:14.208656   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:14.283274   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:14.283292   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:14.283306   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:12.973716   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:15.473265   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:12.097813   62943 pod_ready.go:93] pod "kube-controller-manager-no-preload-380092" in "kube-system" namespace has status "Ready":"True"
	I0912 23:03:12.097844   62943 pod_ready.go:82] duration metric: took 2.506691651s for pod "kube-controller-manager-no-preload-380092" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:12.097858   62943 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-z4rcx" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:12.102303   62943 pod_ready.go:93] pod "kube-proxy-z4rcx" in "kube-system" namespace has status "Ready":"True"
	I0912 23:03:12.102332   62943 pod_ready.go:82] duration metric: took 4.465993ms for pod "kube-proxy-z4rcx" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:12.102344   62943 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-380092" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:12.370318   62943 pod_ready.go:93] pod "kube-scheduler-no-preload-380092" in "kube-system" namespace has status "Ready":"True"
	I0912 23:03:12.370342   62943 pod_ready.go:82] duration metric: took 267.990034ms for pod "kube-scheduler-no-preload-380092" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:12.370351   62943 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:14.377234   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:16.378403   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:14.234341   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:16.733799   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:14.361800   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:14.361839   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:16.900725   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:16.913987   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:16.914047   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:16.950481   62386 cri.go:89] found id: ""
	I0912 23:03:16.950505   62386 logs.go:276] 0 containers: []
	W0912 23:03:16.950513   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:16.950518   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:16.950574   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:16.985928   62386 cri.go:89] found id: ""
	I0912 23:03:16.985955   62386 logs.go:276] 0 containers: []
	W0912 23:03:16.985964   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:16.985969   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:16.986019   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:17.022383   62386 cri.go:89] found id: ""
	I0912 23:03:17.022408   62386 logs.go:276] 0 containers: []
	W0912 23:03:17.022419   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:17.022425   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:17.022483   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:17.060621   62386 cri.go:89] found id: ""
	I0912 23:03:17.060646   62386 logs.go:276] 0 containers: []
	W0912 23:03:17.060655   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:17.060661   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:17.060714   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:17.093465   62386 cri.go:89] found id: ""
	I0912 23:03:17.093496   62386 logs.go:276] 0 containers: []
	W0912 23:03:17.093507   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:17.093513   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:17.093562   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:17.127750   62386 cri.go:89] found id: ""
	I0912 23:03:17.127780   62386 logs.go:276] 0 containers: []
	W0912 23:03:17.127790   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:17.127796   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:17.127850   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:17.167000   62386 cri.go:89] found id: ""
	I0912 23:03:17.167033   62386 logs.go:276] 0 containers: []
	W0912 23:03:17.167042   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:17.167051   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:17.167114   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:17.201116   62386 cri.go:89] found id: ""
	I0912 23:03:17.201140   62386 logs.go:276] 0 containers: []
	W0912 23:03:17.201149   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:17.201160   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:17.201175   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:17.279890   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:17.279917   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:17.279930   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:17.362638   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:17.362682   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:17.402507   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:17.402538   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:17.456039   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:17.456072   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:17.473792   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:19.973369   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:18.877668   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:20.879319   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:19.233574   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:21.233847   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:19.970539   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:19.984338   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:19.984442   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:20.019006   62386 cri.go:89] found id: ""
	I0912 23:03:20.019036   62386 logs.go:276] 0 containers: []
	W0912 23:03:20.019047   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:20.019055   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:20.019115   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:20.051600   62386 cri.go:89] found id: ""
	I0912 23:03:20.051626   62386 logs.go:276] 0 containers: []
	W0912 23:03:20.051634   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:20.051640   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:20.051691   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:20.085770   62386 cri.go:89] found id: ""
	I0912 23:03:20.085792   62386 logs.go:276] 0 containers: []
	W0912 23:03:20.085799   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:20.085804   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:20.085852   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:20.118453   62386 cri.go:89] found id: ""
	I0912 23:03:20.118482   62386 logs.go:276] 0 containers: []
	W0912 23:03:20.118493   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:20.118501   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:20.118570   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:20.149794   62386 cri.go:89] found id: ""
	I0912 23:03:20.149824   62386 logs.go:276] 0 containers: []
	W0912 23:03:20.149835   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:20.149842   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:20.149889   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:20.187189   62386 cri.go:89] found id: ""
	I0912 23:03:20.187222   62386 logs.go:276] 0 containers: []
	W0912 23:03:20.187233   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:20.187239   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:20.187308   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:20.225488   62386 cri.go:89] found id: ""
	I0912 23:03:20.225517   62386 logs.go:276] 0 containers: []
	W0912 23:03:20.225525   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:20.225531   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:20.225593   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:20.263430   62386 cri.go:89] found id: ""
	I0912 23:03:20.263599   62386 logs.go:276] 0 containers: []
	W0912 23:03:20.263618   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:20.263633   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:20.263651   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:20.317633   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:20.317669   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:20.331121   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:20.331146   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:20.409078   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:20.409102   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:20.409114   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:20.485192   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:20.485226   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:23.024366   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:23.036837   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:23.036919   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:23.072034   62386 cri.go:89] found id: ""
	I0912 23:03:23.072068   62386 logs.go:276] 0 containers: []
	W0912 23:03:23.072080   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:23.072087   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:23.072151   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:23.105917   62386 cri.go:89] found id: ""
	I0912 23:03:23.105942   62386 logs.go:276] 0 containers: []
	W0912 23:03:23.105950   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:23.105956   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:23.106001   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:23.138601   62386 cri.go:89] found id: ""
	I0912 23:03:23.138631   62386 logs.go:276] 0 containers: []
	W0912 23:03:23.138643   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:23.138650   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:23.138700   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:23.173543   62386 cri.go:89] found id: ""
	I0912 23:03:23.173584   62386 logs.go:276] 0 containers: []
	W0912 23:03:23.173596   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:23.173606   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:23.173686   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:23.206143   62386 cri.go:89] found id: ""
	I0912 23:03:23.206171   62386 logs.go:276] 0 containers: []
	W0912 23:03:23.206182   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:23.206189   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:23.206258   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:23.241893   62386 cri.go:89] found id: ""
	I0912 23:03:23.241914   62386 logs.go:276] 0 containers: []
	W0912 23:03:23.241921   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:23.241927   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:23.241985   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:23.276885   62386 cri.go:89] found id: ""
	I0912 23:03:23.276937   62386 logs.go:276] 0 containers: []
	W0912 23:03:23.276946   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:23.276953   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:23.277004   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:23.311719   62386 cri.go:89] found id: ""
	I0912 23:03:23.311744   62386 logs.go:276] 0 containers: []
	W0912 23:03:23.311752   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:23.311759   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:23.311772   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:23.351581   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:23.351614   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:23.406831   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:23.406868   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:23.420716   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:23.420748   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:23.491298   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:23.491332   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:23.491347   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:22.474320   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:24.974016   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:23.377977   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:25.876937   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:23.235471   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:25.733684   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:26.075754   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:26.088671   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:26.088746   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:26.123263   62386 cri.go:89] found id: ""
	I0912 23:03:26.123289   62386 logs.go:276] 0 containers: []
	W0912 23:03:26.123298   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:26.123320   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:26.123380   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:26.156957   62386 cri.go:89] found id: ""
	I0912 23:03:26.156986   62386 logs.go:276] 0 containers: []
	W0912 23:03:26.156997   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:26.157004   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:26.157063   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:26.191697   62386 cri.go:89] found id: ""
	I0912 23:03:26.191749   62386 logs.go:276] 0 containers: []
	W0912 23:03:26.191774   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:26.191782   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:26.191841   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:26.223915   62386 cri.go:89] found id: ""
	I0912 23:03:26.223938   62386 logs.go:276] 0 containers: []
	W0912 23:03:26.223945   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:26.223951   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:26.224011   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:26.256467   62386 cri.go:89] found id: ""
	I0912 23:03:26.256494   62386 logs.go:276] 0 containers: []
	W0912 23:03:26.256505   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:26.256511   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:26.256587   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:26.288778   62386 cri.go:89] found id: ""
	I0912 23:03:26.288803   62386 logs.go:276] 0 containers: []
	W0912 23:03:26.288811   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:26.288816   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:26.288889   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:26.325717   62386 cri.go:89] found id: ""
	I0912 23:03:26.325745   62386 logs.go:276] 0 containers: []
	W0912 23:03:26.325755   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:26.325762   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:26.325829   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:26.359729   62386 cri.go:89] found id: ""
	I0912 23:03:26.359758   62386 logs.go:276] 0 containers: []
	W0912 23:03:26.359767   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:26.359780   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:26.359799   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:26.416414   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:26.416455   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:26.430440   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:26.430478   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:26.506980   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:26.507012   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:26.507043   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:26.583797   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:26.583846   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:29.122222   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:29.135287   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:29.135367   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:29.169020   62386 cri.go:89] found id: ""
	I0912 23:03:29.169043   62386 logs.go:276] 0 containers: []
	W0912 23:03:29.169051   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:29.169061   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:29.169114   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:29.201789   62386 cri.go:89] found id: ""
	I0912 23:03:29.201816   62386 logs.go:276] 0 containers: []
	W0912 23:03:29.201825   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:29.201831   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:29.201886   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:29.237011   62386 cri.go:89] found id: ""
	I0912 23:03:29.237031   62386 logs.go:276] 0 containers: []
	W0912 23:03:29.237038   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:29.237044   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:29.237100   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:29.275292   62386 cri.go:89] found id: ""
	I0912 23:03:29.275315   62386 logs.go:276] 0 containers: []
	W0912 23:03:29.275322   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:29.275328   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:29.275391   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:29.311927   62386 cri.go:89] found id: ""
	I0912 23:03:29.311954   62386 logs.go:276] 0 containers: []
	W0912 23:03:29.311961   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:29.311967   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:29.312020   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:26.974332   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:29.473816   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:27.877800   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:30.378675   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:27.735811   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:30.233647   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:32.233706   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:29.351411   62386 cri.go:89] found id: ""
	I0912 23:03:29.351441   62386 logs.go:276] 0 containers: []
	W0912 23:03:29.351452   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:29.351460   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:29.351520   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:29.386655   62386 cri.go:89] found id: ""
	I0912 23:03:29.386683   62386 logs.go:276] 0 containers: []
	W0912 23:03:29.386693   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:29.386700   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:29.386753   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:29.419722   62386 cri.go:89] found id: ""
	I0912 23:03:29.419752   62386 logs.go:276] 0 containers: []
	W0912 23:03:29.419762   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:29.419775   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:29.419789   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:29.474358   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:29.474396   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:29.488410   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:29.488437   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:29.554675   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:29.554701   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:29.554715   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:29.630647   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:29.630681   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:32.167614   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:32.180592   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:32.180669   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:32.213596   62386 cri.go:89] found id: ""
	I0912 23:03:32.213643   62386 logs.go:276] 0 containers: []
	W0912 23:03:32.213655   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:32.213663   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:32.213723   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:32.246790   62386 cri.go:89] found id: ""
	I0912 23:03:32.246824   62386 logs.go:276] 0 containers: []
	W0912 23:03:32.246836   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:32.246846   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:32.246910   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:32.289423   62386 cri.go:89] found id: ""
	I0912 23:03:32.289446   62386 logs.go:276] 0 containers: []
	W0912 23:03:32.289454   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:32.289459   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:32.289515   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:32.321515   62386 cri.go:89] found id: ""
	I0912 23:03:32.321542   62386 logs.go:276] 0 containers: []
	W0912 23:03:32.321555   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:32.321561   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:32.321637   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:32.354633   62386 cri.go:89] found id: ""
	I0912 23:03:32.354660   62386 logs.go:276] 0 containers: []
	W0912 23:03:32.354670   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:32.354675   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:32.354734   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:32.389692   62386 cri.go:89] found id: ""
	I0912 23:03:32.389717   62386 logs.go:276] 0 containers: []
	W0912 23:03:32.389725   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:32.389730   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:32.389782   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:32.423086   62386 cri.go:89] found id: ""
	I0912 23:03:32.423109   62386 logs.go:276] 0 containers: []
	W0912 23:03:32.423115   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:32.423121   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:32.423167   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:32.456145   62386 cri.go:89] found id: ""
	I0912 23:03:32.456173   62386 logs.go:276] 0 containers: []
	W0912 23:03:32.456184   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:32.456194   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:32.456213   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:32.468329   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:32.468354   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:32.535454   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:32.535480   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:32.535495   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:32.615219   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:32.615256   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:32.655380   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:32.655407   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:31.473904   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:33.474104   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:32.876734   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:34.876831   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:36.877698   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:34.732792   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:36.733997   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:35.209155   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:35.223993   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:35.224074   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:35.260226   62386 cri.go:89] found id: ""
	I0912 23:03:35.260257   62386 logs.go:276] 0 containers: []
	W0912 23:03:35.260268   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:35.260275   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:35.260346   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:35.295762   62386 cri.go:89] found id: ""
	I0912 23:03:35.295790   62386 logs.go:276] 0 containers: []
	W0912 23:03:35.295801   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:35.295808   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:35.295873   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:35.329749   62386 cri.go:89] found id: ""
	I0912 23:03:35.329778   62386 logs.go:276] 0 containers: []
	W0912 23:03:35.329789   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:35.329796   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:35.329855   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:35.363051   62386 cri.go:89] found id: ""
	I0912 23:03:35.363082   62386 logs.go:276] 0 containers: []
	W0912 23:03:35.363091   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:35.363098   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:35.363156   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:35.399777   62386 cri.go:89] found id: ""
	I0912 23:03:35.399805   62386 logs.go:276] 0 containers: []
	W0912 23:03:35.399816   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:35.399823   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:35.399882   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:35.436380   62386 cri.go:89] found id: ""
	I0912 23:03:35.436409   62386 logs.go:276] 0 containers: []
	W0912 23:03:35.436419   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:35.436427   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:35.436489   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:35.474014   62386 cri.go:89] found id: ""
	I0912 23:03:35.474040   62386 logs.go:276] 0 containers: []
	W0912 23:03:35.474050   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:35.474057   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:35.474115   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:35.514579   62386 cri.go:89] found id: ""
	I0912 23:03:35.514606   62386 logs.go:276] 0 containers: []
	W0912 23:03:35.514615   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:35.514625   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:35.514636   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:35.566626   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:35.566665   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:35.581394   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:35.581421   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:35.653434   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:35.653465   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:35.653477   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:35.732486   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:35.732525   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:38.268409   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:38.281766   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:38.281833   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:38.315951   62386 cri.go:89] found id: ""
	I0912 23:03:38.315977   62386 logs.go:276] 0 containers: []
	W0912 23:03:38.315987   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:38.315994   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:38.316053   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:38.355249   62386 cri.go:89] found id: ""
	I0912 23:03:38.355279   62386 logs.go:276] 0 containers: []
	W0912 23:03:38.355289   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:38.355296   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:38.355365   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:38.392754   62386 cri.go:89] found id: ""
	I0912 23:03:38.392777   62386 logs.go:276] 0 containers: []
	W0912 23:03:38.392784   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:38.392790   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:38.392836   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:38.427406   62386 cri.go:89] found id: ""
	I0912 23:03:38.427434   62386 logs.go:276] 0 containers: []
	W0912 23:03:38.427442   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:38.427447   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:38.427497   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:38.473523   62386 cri.go:89] found id: ""
	I0912 23:03:38.473551   62386 logs.go:276] 0 containers: []
	W0912 23:03:38.473567   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:38.473575   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:38.473660   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:38.507184   62386 cri.go:89] found id: ""
	I0912 23:03:38.507217   62386 logs.go:276] 0 containers: []
	W0912 23:03:38.507228   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:38.507235   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:38.507297   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:38.541325   62386 cri.go:89] found id: ""
	I0912 23:03:38.541357   62386 logs.go:276] 0 containers: []
	W0912 23:03:38.541367   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:38.541374   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:38.541435   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:38.576839   62386 cri.go:89] found id: ""
	I0912 23:03:38.576866   62386 logs.go:276] 0 containers: []
	W0912 23:03:38.576877   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:38.576889   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:38.576906   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:38.613107   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:38.613138   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:38.667256   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:38.667300   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:38.681179   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:38.681210   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:38.750560   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:38.750584   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:38.750600   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:35.974072   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:37.974920   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:40.473150   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:39.376361   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:41.378062   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:38.734402   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:41.233881   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:41.327862   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:41.340904   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:41.340967   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:41.379282   62386 cri.go:89] found id: ""
	I0912 23:03:41.379301   62386 logs.go:276] 0 containers: []
	W0912 23:03:41.379309   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:41.379316   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:41.379366   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:41.412915   62386 cri.go:89] found id: ""
	I0912 23:03:41.412940   62386 logs.go:276] 0 containers: []
	W0912 23:03:41.412947   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:41.412954   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:41.413003   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:41.446824   62386 cri.go:89] found id: ""
	I0912 23:03:41.446851   62386 logs.go:276] 0 containers: []
	W0912 23:03:41.446861   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:41.446868   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:41.446929   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:41.483157   62386 cri.go:89] found id: ""
	I0912 23:03:41.483186   62386 logs.go:276] 0 containers: []
	W0912 23:03:41.483194   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:41.483200   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:41.483258   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:41.517751   62386 cri.go:89] found id: ""
	I0912 23:03:41.517783   62386 logs.go:276] 0 containers: []
	W0912 23:03:41.517794   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:41.517801   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:41.517865   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:41.551665   62386 cri.go:89] found id: ""
	I0912 23:03:41.551692   62386 logs.go:276] 0 containers: []
	W0912 23:03:41.551700   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:41.551706   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:41.551756   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:41.586401   62386 cri.go:89] found id: ""
	I0912 23:03:41.586437   62386 logs.go:276] 0 containers: []
	W0912 23:03:41.586447   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:41.586455   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:41.586518   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:41.621764   62386 cri.go:89] found id: ""
	I0912 23:03:41.621788   62386 logs.go:276] 0 containers: []
	W0912 23:03:41.621796   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:41.621806   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:41.621821   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:41.703663   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:41.703708   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:41.741813   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:41.741838   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:41.794237   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:41.794276   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:41.807194   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:41.807219   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:41.874328   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:42.973710   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:44.973792   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:43.877009   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:46.376468   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:43.234202   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:45.733192   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:44.374745   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:44.389334   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:44.389414   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:44.427163   62386 cri.go:89] found id: ""
	I0912 23:03:44.427193   62386 logs.go:276] 0 containers: []
	W0912 23:03:44.427204   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:44.427214   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:44.427261   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:44.461483   62386 cri.go:89] found id: ""
	I0912 23:03:44.461516   62386 logs.go:276] 0 containers: []
	W0912 23:03:44.461526   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:44.461539   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:44.461603   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:44.499529   62386 cri.go:89] found id: ""
	I0912 23:03:44.499557   62386 logs.go:276] 0 containers: []
	W0912 23:03:44.499569   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:44.499576   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:44.499640   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:44.536827   62386 cri.go:89] found id: ""
	I0912 23:03:44.536859   62386 logs.go:276] 0 containers: []
	W0912 23:03:44.536871   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:44.536878   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:44.536927   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:44.574764   62386 cri.go:89] found id: ""
	I0912 23:03:44.574794   62386 logs.go:276] 0 containers: []
	W0912 23:03:44.574802   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:44.574808   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:44.574866   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:44.612491   62386 cri.go:89] found id: ""
	I0912 23:03:44.612524   62386 logs.go:276] 0 containers: []
	W0912 23:03:44.612537   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:44.612545   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:44.612618   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:44.651419   62386 cri.go:89] found id: ""
	I0912 23:03:44.651449   62386 logs.go:276] 0 containers: []
	W0912 23:03:44.651459   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:44.651466   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:44.651516   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:44.686635   62386 cri.go:89] found id: ""
	I0912 23:03:44.686665   62386 logs.go:276] 0 containers: []
	W0912 23:03:44.686674   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:44.686681   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:44.686693   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:44.738906   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:44.738938   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:44.752485   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:44.752512   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:44.831175   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:44.831205   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:44.831222   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:44.917405   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:44.917442   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:47.466262   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:47.479701   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:47.479758   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:47.514737   62386 cri.go:89] found id: ""
	I0912 23:03:47.514763   62386 logs.go:276] 0 containers: []
	W0912 23:03:47.514770   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:47.514776   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:47.514828   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:47.551163   62386 cri.go:89] found id: ""
	I0912 23:03:47.551195   62386 logs.go:276] 0 containers: []
	W0912 23:03:47.551207   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:47.551215   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:47.551276   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:47.585189   62386 cri.go:89] found id: ""
	I0912 23:03:47.585213   62386 logs.go:276] 0 containers: []
	W0912 23:03:47.585221   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:47.585226   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:47.585284   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:47.619831   62386 cri.go:89] found id: ""
	I0912 23:03:47.619855   62386 logs.go:276] 0 containers: []
	W0912 23:03:47.619863   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:47.619869   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:47.619914   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:47.652364   62386 cri.go:89] found id: ""
	I0912 23:03:47.652398   62386 logs.go:276] 0 containers: []
	W0912 23:03:47.652409   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:47.652417   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:47.652478   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:47.686796   62386 cri.go:89] found id: ""
	I0912 23:03:47.686828   62386 logs.go:276] 0 containers: []
	W0912 23:03:47.686837   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:47.686844   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:47.686902   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:47.718735   62386 cri.go:89] found id: ""
	I0912 23:03:47.718758   62386 logs.go:276] 0 containers: []
	W0912 23:03:47.718768   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:47.718776   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:47.718838   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:47.751880   62386 cri.go:89] found id: ""
	I0912 23:03:47.751917   62386 logs.go:276] 0 containers: []
	W0912 23:03:47.751929   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:47.751940   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:47.751972   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:47.821972   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:47.821995   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:47.822011   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:47.914569   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:47.914606   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:47.952931   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:47.952959   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:48.006294   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:48.006336   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:47.472805   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:49.474941   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:48.377557   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:50.877244   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:47.734734   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:50.233681   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:50.521664   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:50.535244   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:50.535319   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:50.572459   62386 cri.go:89] found id: ""
	I0912 23:03:50.572489   62386 logs.go:276] 0 containers: []
	W0912 23:03:50.572497   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:50.572504   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:50.572560   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:50.613752   62386 cri.go:89] found id: ""
	I0912 23:03:50.613784   62386 logs.go:276] 0 containers: []
	W0912 23:03:50.613793   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:50.613800   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:50.613859   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:50.669798   62386 cri.go:89] found id: ""
	I0912 23:03:50.669829   62386 logs.go:276] 0 containers: []
	W0912 23:03:50.669840   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:50.669845   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:50.669970   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:50.703629   62386 cri.go:89] found id: ""
	I0912 23:03:50.703669   62386 logs.go:276] 0 containers: []
	W0912 23:03:50.703682   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:50.703691   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:50.703752   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:50.743683   62386 cri.go:89] found id: ""
	I0912 23:03:50.743710   62386 logs.go:276] 0 containers: []
	W0912 23:03:50.743720   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:50.743728   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:50.743784   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:50.776387   62386 cri.go:89] found id: ""
	I0912 23:03:50.776416   62386 logs.go:276] 0 containers: []
	W0912 23:03:50.776428   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:50.776437   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:50.776494   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:50.810778   62386 cri.go:89] found id: ""
	I0912 23:03:50.810805   62386 logs.go:276] 0 containers: []
	W0912 23:03:50.810817   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:50.810825   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:50.810892   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:50.842488   62386 cri.go:89] found id: ""
	I0912 23:03:50.842510   62386 logs.go:276] 0 containers: []
	W0912 23:03:50.842518   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:50.842526   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:50.842542   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:50.895086   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:50.895124   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:50.908540   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:50.908586   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:50.976108   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:50.976138   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:50.976153   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:51.052291   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:51.052327   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:53.594005   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:53.606622   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:53.606706   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:53.641109   62386 cri.go:89] found id: ""
	I0912 23:03:53.641140   62386 logs.go:276] 0 containers: []
	W0912 23:03:53.641151   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:53.641159   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:53.641214   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:53.673336   62386 cri.go:89] found id: ""
	I0912 23:03:53.673358   62386 logs.go:276] 0 containers: []
	W0912 23:03:53.673366   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:53.673371   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:53.673417   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:53.707931   62386 cri.go:89] found id: ""
	I0912 23:03:53.707965   62386 logs.go:276] 0 containers: []
	W0912 23:03:53.707975   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:53.707982   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:53.708032   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:53.741801   62386 cri.go:89] found id: ""
	I0912 23:03:53.741832   62386 logs.go:276] 0 containers: []
	W0912 23:03:53.741840   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:53.741847   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:53.741898   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:53.775491   62386 cri.go:89] found id: ""
	I0912 23:03:53.775517   62386 logs.go:276] 0 containers: []
	W0912 23:03:53.775526   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:53.775533   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:53.775596   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:53.811802   62386 cri.go:89] found id: ""
	I0912 23:03:53.811832   62386 logs.go:276] 0 containers: []
	W0912 23:03:53.811843   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:53.811851   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:53.811916   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:53.844901   62386 cri.go:89] found id: ""
	I0912 23:03:53.844926   62386 logs.go:276] 0 containers: []
	W0912 23:03:53.844934   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:53.844939   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:53.844989   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:53.878342   62386 cri.go:89] found id: ""
	I0912 23:03:53.878363   62386 logs.go:276] 0 containers: []
	W0912 23:03:53.878370   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:53.878377   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:53.878387   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:53.935010   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:53.935053   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:53.948443   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:53.948474   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:54.020155   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:54.020178   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:54.020192   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:54.097113   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:54.097154   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:51.974178   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:54.473802   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:53.376802   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:55.377267   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:52.733232   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:54.734448   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:56.734623   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:56.633694   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:56.651731   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:56.651791   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:56.698155   62386 cri.go:89] found id: ""
	I0912 23:03:56.698184   62386 logs.go:276] 0 containers: []
	W0912 23:03:56.698194   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:56.698202   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:56.698263   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:56.730291   62386 cri.go:89] found id: ""
	I0912 23:03:56.730322   62386 logs.go:276] 0 containers: []
	W0912 23:03:56.730332   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:56.730340   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:56.730434   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:56.763099   62386 cri.go:89] found id: ""
	I0912 23:03:56.763123   62386 logs.go:276] 0 containers: []
	W0912 23:03:56.763133   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:56.763140   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:56.763201   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:56.796744   62386 cri.go:89] found id: ""
	I0912 23:03:56.796770   62386 logs.go:276] 0 containers: []
	W0912 23:03:56.796780   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:56.796787   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:56.796846   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:56.831809   62386 cri.go:89] found id: ""
	I0912 23:03:56.831839   62386 logs.go:276] 0 containers: []
	W0912 23:03:56.831851   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:56.831858   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:56.831927   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:56.867213   62386 cri.go:89] found id: ""
	I0912 23:03:56.867239   62386 logs.go:276] 0 containers: []
	W0912 23:03:56.867246   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:56.867252   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:56.867332   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:56.907242   62386 cri.go:89] found id: ""
	I0912 23:03:56.907270   62386 logs.go:276] 0 containers: []
	W0912 23:03:56.907279   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:56.907286   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:56.907399   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:56.941841   62386 cri.go:89] found id: ""
	I0912 23:03:56.941871   62386 logs.go:276] 0 containers: []
	W0912 23:03:56.941879   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:56.941888   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:56.941899   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:56.955468   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:56.955498   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:57.025069   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:57.025089   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:57.025101   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:57.109543   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:57.109579   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:57.150908   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:57.150932   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:56.473964   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:58.974245   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:57.377540   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:59.878300   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:59.233419   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:01.733916   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:59.700564   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:59.713097   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:59.713175   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:59.746662   62386 cri.go:89] found id: ""
	I0912 23:03:59.746684   62386 logs.go:276] 0 containers: []
	W0912 23:03:59.746694   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:59.746702   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:59.746760   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:59.780100   62386 cri.go:89] found id: ""
	I0912 23:03:59.780127   62386 logs.go:276] 0 containers: []
	W0912 23:03:59.780137   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:59.780144   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:59.780205   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:59.814073   62386 cri.go:89] found id: ""
	I0912 23:03:59.814103   62386 logs.go:276] 0 containers: []
	W0912 23:03:59.814115   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:59.814122   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:59.814170   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:59.849832   62386 cri.go:89] found id: ""
	I0912 23:03:59.849860   62386 logs.go:276] 0 containers: []
	W0912 23:03:59.849873   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:59.849881   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:59.849937   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:59.884644   62386 cri.go:89] found id: ""
	I0912 23:03:59.884674   62386 logs.go:276] 0 containers: []
	W0912 23:03:59.884685   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:59.884692   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:59.884757   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:59.922575   62386 cri.go:89] found id: ""
	I0912 23:03:59.922601   62386 logs.go:276] 0 containers: []
	W0912 23:03:59.922609   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:59.922615   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:59.922671   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:59.959405   62386 cri.go:89] found id: ""
	I0912 23:03:59.959454   62386 logs.go:276] 0 containers: []
	W0912 23:03:59.959467   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:59.959503   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:59.959572   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:59.992850   62386 cri.go:89] found id: ""
	I0912 23:03:59.992882   62386 logs.go:276] 0 containers: []
	W0912 23:03:59.992891   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:59.992898   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:59.992910   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:00.007112   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:00.007147   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:00.077737   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:00.077762   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:00.077777   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:00.156823   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:00.156860   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:00.194294   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:00.194388   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:02.746340   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:02.759723   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:02.759780   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:02.795753   62386 cri.go:89] found id: ""
	I0912 23:04:02.795778   62386 logs.go:276] 0 containers: []
	W0912 23:04:02.795787   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:02.795794   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:02.795849   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:02.830757   62386 cri.go:89] found id: ""
	I0912 23:04:02.830781   62386 logs.go:276] 0 containers: []
	W0912 23:04:02.830790   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:02.830797   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:02.830859   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:02.866266   62386 cri.go:89] found id: ""
	I0912 23:04:02.866301   62386 logs.go:276] 0 containers: []
	W0912 23:04:02.866312   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:02.866319   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:02.866373   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:02.900332   62386 cri.go:89] found id: ""
	I0912 23:04:02.900359   62386 logs.go:276] 0 containers: []
	W0912 23:04:02.900370   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:02.900377   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:02.900436   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:02.937687   62386 cri.go:89] found id: ""
	I0912 23:04:02.937718   62386 logs.go:276] 0 containers: []
	W0912 23:04:02.937729   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:02.937736   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:02.937806   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:02.972960   62386 cri.go:89] found id: ""
	I0912 23:04:02.972988   62386 logs.go:276] 0 containers: []
	W0912 23:04:02.972998   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:02.973006   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:02.973067   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:03.006621   62386 cri.go:89] found id: ""
	I0912 23:04:03.006649   62386 logs.go:276] 0 containers: []
	W0912 23:04:03.006658   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:03.006663   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:03.006711   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:03.042450   62386 cri.go:89] found id: ""
	I0912 23:04:03.042475   62386 logs.go:276] 0 containers: []
	W0912 23:04:03.042484   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:03.042501   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:03.042514   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:03.082657   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:03.082688   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:03.136570   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:03.136605   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:03.150359   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:03.150388   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:03.217419   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:03.217440   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:03.217452   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:01.473231   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:03.474382   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:05.475943   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:02.376721   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:04.376797   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:06.377573   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:03.734198   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:06.234489   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:05.795553   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:05.808126   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:05.808197   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:05.841031   62386 cri.go:89] found id: ""
	I0912 23:04:05.841059   62386 logs.go:276] 0 containers: []
	W0912 23:04:05.841071   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:05.841078   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:05.841137   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:05.875865   62386 cri.go:89] found id: ""
	I0912 23:04:05.875891   62386 logs.go:276] 0 containers: []
	W0912 23:04:05.875903   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:05.875910   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:05.875971   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:05.911317   62386 cri.go:89] found id: ""
	I0912 23:04:05.911340   62386 logs.go:276] 0 containers: []
	W0912 23:04:05.911361   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:05.911372   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:05.911433   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:05.946603   62386 cri.go:89] found id: ""
	I0912 23:04:05.946634   62386 logs.go:276] 0 containers: []
	W0912 23:04:05.946645   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:05.946652   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:05.946707   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:05.982041   62386 cri.go:89] found id: ""
	I0912 23:04:05.982077   62386 logs.go:276] 0 containers: []
	W0912 23:04:05.982089   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:05.982099   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:05.982196   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:06.015777   62386 cri.go:89] found id: ""
	I0912 23:04:06.015808   62386 logs.go:276] 0 containers: []
	W0912 23:04:06.015816   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:06.015822   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:06.015870   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:06.047613   62386 cri.go:89] found id: ""
	I0912 23:04:06.047642   62386 logs.go:276] 0 containers: []
	W0912 23:04:06.047650   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:06.047656   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:06.047711   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:06.082817   62386 cri.go:89] found id: ""
	I0912 23:04:06.082855   62386 logs.go:276] 0 containers: []
	W0912 23:04:06.082863   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:06.082874   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:06.082889   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:06.148350   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:06.148370   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:06.148382   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:06.227819   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:06.227861   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:06.267783   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:06.267811   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:06.319531   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:06.319567   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:08.833715   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:08.846391   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:08.846457   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:08.882798   62386 cri.go:89] found id: ""
	I0912 23:04:08.882827   62386 logs.go:276] 0 containers: []
	W0912 23:04:08.882834   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:08.882839   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:08.882885   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:08.919637   62386 cri.go:89] found id: ""
	I0912 23:04:08.919660   62386 logs.go:276] 0 containers: []
	W0912 23:04:08.919669   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:08.919677   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:08.919737   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:08.957181   62386 cri.go:89] found id: ""
	I0912 23:04:08.957226   62386 logs.go:276] 0 containers: []
	W0912 23:04:08.957235   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:08.957241   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:08.957300   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:08.994391   62386 cri.go:89] found id: ""
	I0912 23:04:08.994425   62386 logs.go:276] 0 containers: []
	W0912 23:04:08.994435   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:08.994450   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:08.994517   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:09.026229   62386 cri.go:89] found id: ""
	I0912 23:04:09.026253   62386 logs.go:276] 0 containers: []
	W0912 23:04:09.026261   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:09.026270   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:09.026331   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:09.063522   62386 cri.go:89] found id: ""
	I0912 23:04:09.063552   62386 logs.go:276] 0 containers: []
	W0912 23:04:09.063562   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:09.063570   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:09.063633   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:09.095532   62386 cri.go:89] found id: ""
	I0912 23:04:09.095561   62386 logs.go:276] 0 containers: []
	W0912 23:04:09.095571   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:09.095578   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:09.095638   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:09.129364   62386 cri.go:89] found id: ""
	I0912 23:04:09.129396   62386 logs.go:276] 0 containers: []
	W0912 23:04:09.129405   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:09.129416   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:09.129430   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:09.210628   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:09.210663   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:09.249058   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:09.249086   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:09.301317   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:09.301346   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:09.314691   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:09.314720   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 23:04:07.974160   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:10.473970   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:08.877389   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:11.376421   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:08.733271   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:10.737700   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	W0912 23:04:09.379506   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:11.879682   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:11.892758   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:11.892816   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:11.929514   62386 cri.go:89] found id: ""
	I0912 23:04:11.929560   62386 logs.go:276] 0 containers: []
	W0912 23:04:11.929572   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:11.929580   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:11.929663   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:11.972066   62386 cri.go:89] found id: ""
	I0912 23:04:11.972091   62386 logs.go:276] 0 containers: []
	W0912 23:04:11.972099   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:11.972104   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:11.972153   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:12.005454   62386 cri.go:89] found id: ""
	I0912 23:04:12.005483   62386 logs.go:276] 0 containers: []
	W0912 23:04:12.005493   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:12.005500   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:12.005573   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:12.042189   62386 cri.go:89] found id: ""
	I0912 23:04:12.042221   62386 logs.go:276] 0 containers: []
	W0912 23:04:12.042232   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:12.042239   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:12.042292   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:12.077239   62386 cri.go:89] found id: ""
	I0912 23:04:12.077268   62386 logs.go:276] 0 containers: []
	W0912 23:04:12.077276   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:12.077282   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:12.077340   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:12.112573   62386 cri.go:89] found id: ""
	I0912 23:04:12.112602   62386 logs.go:276] 0 containers: []
	W0912 23:04:12.112610   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:12.112616   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:12.112661   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:12.147124   62386 cri.go:89] found id: ""
	I0912 23:04:12.147149   62386 logs.go:276] 0 containers: []
	W0912 23:04:12.147157   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:12.147163   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:12.147224   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:12.182051   62386 cri.go:89] found id: ""
	I0912 23:04:12.182074   62386 logs.go:276] 0 containers: []
	W0912 23:04:12.182082   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:12.182090   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:12.182103   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:12.238070   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:12.238103   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:12.250913   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:12.250937   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:12.315420   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:12.315448   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:12.315465   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:12.397338   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:12.397379   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:12.974531   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:15.479539   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:13.377855   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:15.379901   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:13.233099   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:15.234506   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:14.936982   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:14.949955   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:14.950019   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:14.993284   62386 cri.go:89] found id: ""
	I0912 23:04:14.993317   62386 logs.go:276] 0 containers: []
	W0912 23:04:14.993327   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:14.993356   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:14.993421   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:15.028310   62386 cri.go:89] found id: ""
	I0912 23:04:15.028338   62386 logs.go:276] 0 containers: []
	W0912 23:04:15.028347   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:15.028352   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:15.028424   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:15.064436   62386 cri.go:89] found id: ""
	I0912 23:04:15.064472   62386 logs.go:276] 0 containers: []
	W0912 23:04:15.064482   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:15.064490   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:15.064552   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:15.101547   62386 cri.go:89] found id: ""
	I0912 23:04:15.101578   62386 logs.go:276] 0 containers: []
	W0912 23:04:15.101587   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:15.101595   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:15.101672   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:15.137534   62386 cri.go:89] found id: ""
	I0912 23:04:15.137559   62386 logs.go:276] 0 containers: []
	W0912 23:04:15.137567   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:15.137575   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:15.137670   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:15.172549   62386 cri.go:89] found id: ""
	I0912 23:04:15.172581   62386 logs.go:276] 0 containers: []
	W0912 23:04:15.172593   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:15.172601   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:15.172661   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:15.207894   62386 cri.go:89] found id: ""
	I0912 23:04:15.207921   62386 logs.go:276] 0 containers: []
	W0912 23:04:15.207931   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:15.207939   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:15.207998   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:15.243684   62386 cri.go:89] found id: ""
	I0912 23:04:15.243713   62386 logs.go:276] 0 containers: []
	W0912 23:04:15.243724   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:15.243733   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:15.243744   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:15.297907   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:15.297948   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:15.312119   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:15.312151   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:15.375781   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:15.375815   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:15.375830   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:15.455792   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:15.455853   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:17.996749   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:18.009868   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:18.009927   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:18.048233   62386 cri.go:89] found id: ""
	I0912 23:04:18.048262   62386 logs.go:276] 0 containers: []
	W0912 23:04:18.048273   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:18.048280   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:18.048340   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:18.082525   62386 cri.go:89] found id: ""
	I0912 23:04:18.082554   62386 logs.go:276] 0 containers: []
	W0912 23:04:18.082565   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:18.082572   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:18.082634   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:18.117691   62386 cri.go:89] found id: ""
	I0912 23:04:18.117721   62386 logs.go:276] 0 containers: []
	W0912 23:04:18.117731   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:18.117738   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:18.117799   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:18.151975   62386 cri.go:89] found id: ""
	I0912 23:04:18.152004   62386 logs.go:276] 0 containers: []
	W0912 23:04:18.152013   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:18.152019   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:18.152073   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:18.187028   62386 cri.go:89] found id: ""
	I0912 23:04:18.187058   62386 logs.go:276] 0 containers: []
	W0912 23:04:18.187069   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:18.187075   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:18.187127   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:18.221292   62386 cri.go:89] found id: ""
	I0912 23:04:18.221324   62386 logs.go:276] 0 containers: []
	W0912 23:04:18.221331   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:18.221337   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:18.221383   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:18.255445   62386 cri.go:89] found id: ""
	I0912 23:04:18.255471   62386 logs.go:276] 0 containers: []
	W0912 23:04:18.255479   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:18.255484   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:18.255533   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:18.289977   62386 cri.go:89] found id: ""
	I0912 23:04:18.290008   62386 logs.go:276] 0 containers: []
	W0912 23:04:18.290019   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:18.290030   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:18.290045   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:18.303351   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:18.303380   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:18.371085   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:18.371114   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:18.371128   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:18.448748   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:18.448791   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:18.490580   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:18.490605   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:17.973604   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:20.473541   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:17.878221   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:20.377651   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:17.733784   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:19.734292   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:22.232832   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:21.043479   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:21.056774   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:21.056834   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:21.089410   62386 cri.go:89] found id: ""
	I0912 23:04:21.089435   62386 logs.go:276] 0 containers: []
	W0912 23:04:21.089449   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:21.089460   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:21.089534   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:21.122922   62386 cri.go:89] found id: ""
	I0912 23:04:21.122954   62386 logs.go:276] 0 containers: []
	W0912 23:04:21.122964   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:21.122971   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:21.123025   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:21.157877   62386 cri.go:89] found id: ""
	I0912 23:04:21.157900   62386 logs.go:276] 0 containers: []
	W0912 23:04:21.157908   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:21.157914   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:21.157959   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:21.190953   62386 cri.go:89] found id: ""
	I0912 23:04:21.190983   62386 logs.go:276] 0 containers: []
	W0912 23:04:21.190994   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:21.191001   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:21.191050   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:21.225211   62386 cri.go:89] found id: ""
	I0912 23:04:21.225241   62386 logs.go:276] 0 containers: []
	W0912 23:04:21.225253   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:21.225260   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:21.225325   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:21.262459   62386 cri.go:89] found id: ""
	I0912 23:04:21.262486   62386 logs.go:276] 0 containers: []
	W0912 23:04:21.262497   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:21.262504   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:21.262578   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:21.296646   62386 cri.go:89] found id: ""
	I0912 23:04:21.296672   62386 logs.go:276] 0 containers: []
	W0912 23:04:21.296682   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:21.296687   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:21.296734   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:21.329911   62386 cri.go:89] found id: ""
	I0912 23:04:21.329933   62386 logs.go:276] 0 containers: []
	W0912 23:04:21.329939   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:21.329947   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:21.329958   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:21.371014   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:21.371043   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:21.419638   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:21.419671   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:21.433502   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:21.433533   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:21.502764   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:21.502787   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:21.502800   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:24.079800   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:24.094021   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:24.094099   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:24.128807   62386 cri.go:89] found id: ""
	I0912 23:04:24.128832   62386 logs.go:276] 0 containers: []
	W0912 23:04:24.128844   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:24.128851   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:24.128915   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:24.166381   62386 cri.go:89] found id: ""
	I0912 23:04:24.166409   62386 logs.go:276] 0 containers: []
	W0912 23:04:24.166416   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:24.166425   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:24.166481   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:24.202656   62386 cri.go:89] found id: ""
	I0912 23:04:24.202684   62386 logs.go:276] 0 containers: []
	W0912 23:04:24.202692   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:24.202699   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:24.202755   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:24.241177   62386 cri.go:89] found id: ""
	I0912 23:04:24.241204   62386 logs.go:276] 0 containers: []
	W0912 23:04:24.241212   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:24.241218   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:24.241274   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:24.278768   62386 cri.go:89] found id: ""
	I0912 23:04:24.278796   62386 logs.go:276] 0 containers: []
	W0912 23:04:24.278806   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:24.278813   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:24.278881   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:24.314429   62386 cri.go:89] found id: ""
	I0912 23:04:24.314456   62386 logs.go:276] 0 containers: []
	W0912 23:04:24.314466   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:24.314474   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:24.314540   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:22.972334   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:24.974435   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:22.877248   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:25.376758   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:24.233814   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:26.733537   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:24.352300   62386 cri.go:89] found id: ""
	I0912 23:04:24.352344   62386 logs.go:276] 0 containers: []
	W0912 23:04:24.352352   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:24.352357   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:24.352415   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:24.387465   62386 cri.go:89] found id: ""
	I0912 23:04:24.387496   62386 logs.go:276] 0 containers: []
	W0912 23:04:24.387503   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:24.387513   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:24.387526   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:24.437029   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:24.437061   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:24.450519   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:24.450555   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:24.516538   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:24.516566   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:24.516583   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:24.594321   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:24.594358   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:27.129976   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:27.142237   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:27.142293   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:27.173687   62386 cri.go:89] found id: ""
	I0912 23:04:27.173709   62386 logs.go:276] 0 containers: []
	W0912 23:04:27.173716   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:27.173721   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:27.173778   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:27.206078   62386 cri.go:89] found id: ""
	I0912 23:04:27.206099   62386 logs.go:276] 0 containers: []
	W0912 23:04:27.206107   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:27.206112   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:27.206156   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:27.238770   62386 cri.go:89] found id: ""
	I0912 23:04:27.238795   62386 logs.go:276] 0 containers: []
	W0912 23:04:27.238803   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:27.238808   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:27.238855   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:27.271230   62386 cri.go:89] found id: ""
	I0912 23:04:27.271262   62386 logs.go:276] 0 containers: []
	W0912 23:04:27.271273   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:27.271281   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:27.271351   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:27.304232   62386 cri.go:89] found id: ""
	I0912 23:04:27.304261   62386 logs.go:276] 0 containers: []
	W0912 23:04:27.304271   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:27.304278   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:27.304345   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:27.337542   62386 cri.go:89] found id: ""
	I0912 23:04:27.337571   62386 logs.go:276] 0 containers: []
	W0912 23:04:27.337586   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:27.337595   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:27.337668   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:27.369971   62386 cri.go:89] found id: ""
	I0912 23:04:27.369997   62386 logs.go:276] 0 containers: []
	W0912 23:04:27.370005   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:27.370012   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:27.370072   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:27.406844   62386 cri.go:89] found id: ""
	I0912 23:04:27.406868   62386 logs.go:276] 0 containers: []
	W0912 23:04:27.406875   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:27.406883   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:27.406894   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:27.493489   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:27.493524   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:27.530448   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:27.530481   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:27.585706   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:27.585744   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:27.599144   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:27.599177   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:27.672585   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:27.473942   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:29.474058   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:27.376867   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:29.377474   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:31.877233   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:29.234068   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:31.733528   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:30.173309   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:30.187957   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:30.188037   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:30.226373   62386 cri.go:89] found id: ""
	I0912 23:04:30.226400   62386 logs.go:276] 0 containers: []
	W0912 23:04:30.226407   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:30.226412   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:30.226469   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:30.257956   62386 cri.go:89] found id: ""
	I0912 23:04:30.257988   62386 logs.go:276] 0 containers: []
	W0912 23:04:30.257997   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:30.258002   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:30.258053   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:30.291091   62386 cri.go:89] found id: ""
	I0912 23:04:30.291119   62386 logs.go:276] 0 containers: []
	W0912 23:04:30.291127   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:30.291132   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:30.291181   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:30.323564   62386 cri.go:89] found id: ""
	I0912 23:04:30.323589   62386 logs.go:276] 0 containers: []
	W0912 23:04:30.323597   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:30.323603   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:30.323652   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:30.361971   62386 cri.go:89] found id: ""
	I0912 23:04:30.361996   62386 logs.go:276] 0 containers: []
	W0912 23:04:30.362005   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:30.362014   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:30.362081   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:30.396952   62386 cri.go:89] found id: ""
	I0912 23:04:30.396986   62386 logs.go:276] 0 containers: []
	W0912 23:04:30.396996   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:30.397001   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:30.397052   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:30.453785   62386 cri.go:89] found id: ""
	I0912 23:04:30.453812   62386 logs.go:276] 0 containers: []
	W0912 23:04:30.453820   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:30.453825   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:30.453870   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:30.494072   62386 cri.go:89] found id: ""
	I0912 23:04:30.494099   62386 logs.go:276] 0 containers: []
	W0912 23:04:30.494108   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:30.494115   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:30.494133   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:30.543153   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:30.543187   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:30.556204   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:30.556242   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:30.630856   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:30.630885   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:30.630902   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:30.710205   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:30.710239   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:33.248218   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:33.261421   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:33.261504   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:33.295691   62386 cri.go:89] found id: ""
	I0912 23:04:33.295718   62386 logs.go:276] 0 containers: []
	W0912 23:04:33.295729   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:33.295736   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:33.295796   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:33.328578   62386 cri.go:89] found id: ""
	I0912 23:04:33.328607   62386 logs.go:276] 0 containers: []
	W0912 23:04:33.328618   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:33.328626   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:33.328743   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:33.367991   62386 cri.go:89] found id: ""
	I0912 23:04:33.368018   62386 logs.go:276] 0 containers: []
	W0912 23:04:33.368034   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:33.368041   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:33.368101   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:33.402537   62386 cri.go:89] found id: ""
	I0912 23:04:33.402566   62386 logs.go:276] 0 containers: []
	W0912 23:04:33.402578   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:33.402588   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:33.402649   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:33.437175   62386 cri.go:89] found id: ""
	I0912 23:04:33.437199   62386 logs.go:276] 0 containers: []
	W0912 23:04:33.437206   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:33.437216   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:33.437275   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:33.475108   62386 cri.go:89] found id: ""
	I0912 23:04:33.475134   62386 logs.go:276] 0 containers: []
	W0912 23:04:33.475144   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:33.475151   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:33.475202   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:33.508612   62386 cri.go:89] found id: ""
	I0912 23:04:33.508649   62386 logs.go:276] 0 containers: []
	W0912 23:04:33.508659   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:33.508664   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:33.508713   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:33.543351   62386 cri.go:89] found id: ""
	I0912 23:04:33.543380   62386 logs.go:276] 0 containers: []
	W0912 23:04:33.543387   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:33.543395   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:33.543406   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:33.595649   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:33.595688   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:33.609181   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:33.609210   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:33.686761   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:33.686782   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:33.686796   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:33.767443   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:33.767478   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:31.474444   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:33.474510   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:34.376900   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:36.377015   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:33.734282   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:36.233730   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:36.310374   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:36.324182   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:36.324260   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:36.359642   62386 cri.go:89] found id: ""
	I0912 23:04:36.359670   62386 logs.go:276] 0 containers: []
	W0912 23:04:36.359677   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:36.359684   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:36.359744   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:36.392841   62386 cri.go:89] found id: ""
	I0912 23:04:36.392865   62386 logs.go:276] 0 containers: []
	W0912 23:04:36.392874   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:36.392887   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:36.392951   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:36.430323   62386 cri.go:89] found id: ""
	I0912 23:04:36.430354   62386 logs.go:276] 0 containers: []
	W0912 23:04:36.430365   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:36.430373   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:36.430436   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:36.466712   62386 cri.go:89] found id: ""
	I0912 23:04:36.466737   62386 logs.go:276] 0 containers: []
	W0912 23:04:36.466745   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:36.466750   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:36.466808   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:36.502506   62386 cri.go:89] found id: ""
	I0912 23:04:36.502537   62386 logs.go:276] 0 containers: []
	W0912 23:04:36.502548   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:36.502555   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:36.502624   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:36.536530   62386 cri.go:89] found id: ""
	I0912 23:04:36.536559   62386 logs.go:276] 0 containers: []
	W0912 23:04:36.536569   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:36.536577   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:36.536648   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:36.570519   62386 cri.go:89] found id: ""
	I0912 23:04:36.570555   62386 logs.go:276] 0 containers: []
	W0912 23:04:36.570565   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:36.570573   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:36.570631   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:36.606107   62386 cri.go:89] found id: ""
	I0912 23:04:36.606136   62386 logs.go:276] 0 containers: []
	W0912 23:04:36.606146   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:36.606157   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:36.606171   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:36.643105   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:36.643138   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:36.690911   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:36.690944   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:36.703970   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:36.703998   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:36.776158   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:36.776183   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:36.776199   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:35.973095   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:37.974153   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:40.473010   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:38.377221   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:40.877439   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:38.732826   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:40.734523   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:39.362032   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:39.375991   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:39.376090   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:39.412497   62386 cri.go:89] found id: ""
	I0912 23:04:39.412521   62386 logs.go:276] 0 containers: []
	W0912 23:04:39.412528   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:39.412534   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:39.412595   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:39.447783   62386 cri.go:89] found id: ""
	I0912 23:04:39.447807   62386 logs.go:276] 0 containers: []
	W0912 23:04:39.447815   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:39.447820   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:39.447886   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:39.483099   62386 cri.go:89] found id: ""
	I0912 23:04:39.483128   62386 logs.go:276] 0 containers: []
	W0912 23:04:39.483135   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:39.483143   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:39.483193   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:39.514898   62386 cri.go:89] found id: ""
	I0912 23:04:39.514932   62386 logs.go:276] 0 containers: []
	W0912 23:04:39.514941   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:39.514952   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:39.515033   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:39.546882   62386 cri.go:89] found id: ""
	I0912 23:04:39.546910   62386 logs.go:276] 0 containers: []
	W0912 23:04:39.546920   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:39.546927   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:39.546990   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:39.577899   62386 cri.go:89] found id: ""
	I0912 23:04:39.577929   62386 logs.go:276] 0 containers: []
	W0912 23:04:39.577939   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:39.577947   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:39.578006   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:39.613419   62386 cri.go:89] found id: ""
	I0912 23:04:39.613446   62386 logs.go:276] 0 containers: []
	W0912 23:04:39.613455   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:39.613461   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:39.613510   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:39.647661   62386 cri.go:89] found id: ""
	I0912 23:04:39.647694   62386 logs.go:276] 0 containers: []
	W0912 23:04:39.647708   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:39.647719   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:39.647733   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:39.696155   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:39.696190   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:39.709312   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:39.709342   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:39.778941   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:39.778968   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:39.778985   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:39.855991   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:39.856028   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:42.395179   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:42.408317   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:42.408449   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:42.441443   62386 cri.go:89] found id: ""
	I0912 23:04:42.441472   62386 logs.go:276] 0 containers: []
	W0912 23:04:42.441482   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:42.441489   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:42.441550   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:42.480655   62386 cri.go:89] found id: ""
	I0912 23:04:42.480678   62386 logs.go:276] 0 containers: []
	W0912 23:04:42.480685   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:42.480690   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:42.480734   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:42.513323   62386 cri.go:89] found id: ""
	I0912 23:04:42.513346   62386 logs.go:276] 0 containers: []
	W0912 23:04:42.513353   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:42.513359   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:42.513405   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:42.545696   62386 cri.go:89] found id: ""
	I0912 23:04:42.545715   62386 logs.go:276] 0 containers: []
	W0912 23:04:42.545723   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:42.545728   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:42.545775   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:42.584950   62386 cri.go:89] found id: ""
	I0912 23:04:42.584981   62386 logs.go:276] 0 containers: []
	W0912 23:04:42.584992   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:42.584999   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:42.585057   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:42.618434   62386 cri.go:89] found id: ""
	I0912 23:04:42.618468   62386 logs.go:276] 0 containers: []
	W0912 23:04:42.618481   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:42.618489   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:42.618557   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:42.665017   62386 cri.go:89] found id: ""
	I0912 23:04:42.665045   62386 logs.go:276] 0 containers: []
	W0912 23:04:42.665056   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:42.665064   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:42.665125   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:42.724365   62386 cri.go:89] found id: ""
	I0912 23:04:42.724389   62386 logs.go:276] 0 containers: []
	W0912 23:04:42.724399   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:42.724409   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:42.724422   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:42.762643   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:42.762671   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:42.815374   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:42.815417   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:42.829340   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:42.829376   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:42.901659   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:42.901690   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:42.901706   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:42.475194   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:44.973902   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:43.376849   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:45.378144   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:42.734908   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:45.234296   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:45.490536   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:45.504127   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:45.504191   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:45.537415   62386 cri.go:89] found id: ""
	I0912 23:04:45.537447   62386 logs.go:276] 0 containers: []
	W0912 23:04:45.537457   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:45.537464   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:45.537527   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:45.571342   62386 cri.go:89] found id: ""
	I0912 23:04:45.571384   62386 logs.go:276] 0 containers: []
	W0912 23:04:45.571404   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:45.571412   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:45.571471   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:45.608965   62386 cri.go:89] found id: ""
	I0912 23:04:45.608989   62386 logs.go:276] 0 containers: []
	W0912 23:04:45.608997   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:45.609002   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:45.609052   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:45.644770   62386 cri.go:89] found id: ""
	I0912 23:04:45.644798   62386 logs.go:276] 0 containers: []
	W0912 23:04:45.644806   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:45.644812   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:45.644859   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:45.678422   62386 cri.go:89] found id: ""
	I0912 23:04:45.678448   62386 logs.go:276] 0 containers: []
	W0912 23:04:45.678456   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:45.678462   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:45.678508   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:45.713808   62386 cri.go:89] found id: ""
	I0912 23:04:45.713831   62386 logs.go:276] 0 containers: []
	W0912 23:04:45.713838   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:45.713844   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:45.713891   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:45.747056   62386 cri.go:89] found id: ""
	I0912 23:04:45.747084   62386 logs.go:276] 0 containers: []
	W0912 23:04:45.747092   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:45.747097   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:45.747149   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:45.779787   62386 cri.go:89] found id: ""
	I0912 23:04:45.779809   62386 logs.go:276] 0 containers: []
	W0912 23:04:45.779817   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:45.779824   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:45.779835   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:45.833204   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:45.833239   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:45.846131   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:45.846159   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:45.923415   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:45.923435   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:45.923446   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:46.003597   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:46.003637   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:48.545043   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:48.560025   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:48.560085   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:48.599916   62386 cri.go:89] found id: ""
	I0912 23:04:48.599950   62386 logs.go:276] 0 containers: []
	W0912 23:04:48.599961   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:48.599969   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:48.600027   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:48.648909   62386 cri.go:89] found id: ""
	I0912 23:04:48.648938   62386 logs.go:276] 0 containers: []
	W0912 23:04:48.648946   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:48.648952   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:48.649010   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:48.693019   62386 cri.go:89] found id: ""
	I0912 23:04:48.693046   62386 logs.go:276] 0 containers: []
	W0912 23:04:48.693062   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:48.693081   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:48.693141   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:48.725778   62386 cri.go:89] found id: ""
	I0912 23:04:48.725811   62386 logs.go:276] 0 containers: []
	W0912 23:04:48.725822   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:48.725830   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:48.725891   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:48.760270   62386 cri.go:89] found id: ""
	I0912 23:04:48.760299   62386 logs.go:276] 0 containers: []
	W0912 23:04:48.760311   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:48.760318   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:48.760379   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:48.797235   62386 cri.go:89] found id: ""
	I0912 23:04:48.797264   62386 logs.go:276] 0 containers: []
	W0912 23:04:48.797275   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:48.797282   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:48.797348   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:48.834039   62386 cri.go:89] found id: ""
	I0912 23:04:48.834081   62386 logs.go:276] 0 containers: []
	W0912 23:04:48.834093   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:48.834100   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:48.834162   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:48.866681   62386 cri.go:89] found id: ""
	I0912 23:04:48.866704   62386 logs.go:276] 0 containers: []
	W0912 23:04:48.866712   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:48.866720   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:48.866731   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:48.917954   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:48.917999   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:48.931554   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:48.931582   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:49.008086   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:49.008115   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:49.008132   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:49.088699   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:49.088736   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:46.974115   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:49.475562   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:47.876644   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:49.877976   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:47.733587   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:50.232852   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:51.628564   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:51.643343   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:51.643445   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:51.680788   62386 cri.go:89] found id: ""
	I0912 23:04:51.680811   62386 logs.go:276] 0 containers: []
	W0912 23:04:51.680818   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:51.680824   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:51.680873   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:51.719793   62386 cri.go:89] found id: ""
	I0912 23:04:51.719822   62386 logs.go:276] 0 containers: []
	W0912 23:04:51.719835   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:51.719843   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:51.719909   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:51.756766   62386 cri.go:89] found id: ""
	I0912 23:04:51.756795   62386 logs.go:276] 0 containers: []
	W0912 23:04:51.756802   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:51.756808   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:51.756857   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:51.797758   62386 cri.go:89] found id: ""
	I0912 23:04:51.797781   62386 logs.go:276] 0 containers: []
	W0912 23:04:51.797789   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:51.797794   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:51.797844   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:51.830790   62386 cri.go:89] found id: ""
	I0912 23:04:51.830820   62386 logs.go:276] 0 containers: []
	W0912 23:04:51.830830   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:51.830837   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:51.830899   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:51.866782   62386 cri.go:89] found id: ""
	I0912 23:04:51.866806   62386 logs.go:276] 0 containers: []
	W0912 23:04:51.866813   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:51.866819   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:51.866874   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:51.902223   62386 cri.go:89] found id: ""
	I0912 23:04:51.902248   62386 logs.go:276] 0 containers: []
	W0912 23:04:51.902276   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:51.902284   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:51.902345   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:51.937029   62386 cri.go:89] found id: ""
	I0912 23:04:51.937057   62386 logs.go:276] 0 containers: []
	W0912 23:04:51.937064   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:51.937073   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:51.937084   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:51.987691   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:51.987727   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:52.001042   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:52.001067   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:52.076285   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:52.076305   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:52.076316   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:52.156087   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:52.156127   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:51.973991   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:53.974657   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:52.377379   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:54.877566   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:56.878413   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:52.734348   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:55.233890   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:54.692355   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:54.705180   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:54.705258   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:54.736125   62386 cri.go:89] found id: ""
	I0912 23:04:54.736150   62386 logs.go:276] 0 containers: []
	W0912 23:04:54.736158   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:54.736164   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:54.736216   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:54.768743   62386 cri.go:89] found id: ""
	I0912 23:04:54.768769   62386 logs.go:276] 0 containers: []
	W0912 23:04:54.768776   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:54.768781   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:54.768827   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:54.802867   62386 cri.go:89] found id: ""
	I0912 23:04:54.802894   62386 logs.go:276] 0 containers: []
	W0912 23:04:54.802902   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:54.802908   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:54.802959   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:54.836774   62386 cri.go:89] found id: ""
	I0912 23:04:54.836800   62386 logs.go:276] 0 containers: []
	W0912 23:04:54.836808   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:54.836813   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:54.836870   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:54.870694   62386 cri.go:89] found id: ""
	I0912 23:04:54.870716   62386 logs.go:276] 0 containers: []
	W0912 23:04:54.870724   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:54.870730   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:54.870785   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:54.903969   62386 cri.go:89] found id: ""
	I0912 23:04:54.904002   62386 logs.go:276] 0 containers: []
	W0912 23:04:54.904012   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:54.904020   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:54.904070   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:54.937720   62386 cri.go:89] found id: ""
	I0912 23:04:54.937744   62386 logs.go:276] 0 containers: []
	W0912 23:04:54.937751   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:54.937756   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:54.937802   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:54.971370   62386 cri.go:89] found id: ""
	I0912 23:04:54.971397   62386 logs.go:276] 0 containers: []
	W0912 23:04:54.971413   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:54.971427   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:54.971441   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:55.021066   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:55.021101   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:55.034026   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:55.034056   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:55.116939   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:55.116966   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:55.116983   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:55.196410   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:55.196445   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:57.733985   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:57.747006   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:57.747068   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:57.784442   62386 cri.go:89] found id: ""
	I0912 23:04:57.784473   62386 logs.go:276] 0 containers: []
	W0912 23:04:57.784486   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:57.784500   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:57.784571   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:57.818314   62386 cri.go:89] found id: ""
	I0912 23:04:57.818341   62386 logs.go:276] 0 containers: []
	W0912 23:04:57.818352   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:57.818359   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:57.818420   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:57.852881   62386 cri.go:89] found id: ""
	I0912 23:04:57.852914   62386 logs.go:276] 0 containers: []
	W0912 23:04:57.852925   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:57.852932   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:57.852993   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:57.894454   62386 cri.go:89] found id: ""
	I0912 23:04:57.894479   62386 logs.go:276] 0 containers: []
	W0912 23:04:57.894487   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:57.894493   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:57.894540   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:57.930013   62386 cri.go:89] found id: ""
	I0912 23:04:57.930041   62386 logs.go:276] 0 containers: []
	W0912 23:04:57.930051   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:57.930059   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:57.930120   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:57.970535   62386 cri.go:89] found id: ""
	I0912 23:04:57.970697   62386 logs.go:276] 0 containers: []
	W0912 23:04:57.970751   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:57.970763   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:57.970829   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:58.008102   62386 cri.go:89] found id: ""
	I0912 23:04:58.008132   62386 logs.go:276] 0 containers: []
	W0912 23:04:58.008145   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:58.008151   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:58.008232   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:58.043507   62386 cri.go:89] found id: ""
	I0912 23:04:58.043541   62386 logs.go:276] 0 containers: []
	W0912 23:04:58.043552   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:58.043563   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:58.043577   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:58.127231   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:58.127291   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:58.164444   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:58.164476   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:58.212622   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:58.212658   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:58.227517   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:58.227546   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:58.291876   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:56.474801   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:58.973083   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:59.378702   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:01.876871   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:57.735810   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:00.234854   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:00.792084   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:00.804976   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:00.805046   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:00.837560   62386 cri.go:89] found id: ""
	I0912 23:05:00.837596   62386 logs.go:276] 0 containers: []
	W0912 23:05:00.837606   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:00.837629   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:00.837692   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:00.871503   62386 cri.go:89] found id: ""
	I0912 23:05:00.871526   62386 logs.go:276] 0 containers: []
	W0912 23:05:00.871534   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:00.871539   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:00.871594   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:00.909215   62386 cri.go:89] found id: ""
	I0912 23:05:00.909245   62386 logs.go:276] 0 containers: []
	W0912 23:05:00.909256   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:00.909263   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:00.909337   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:00.947935   62386 cri.go:89] found id: ""
	I0912 23:05:00.947961   62386 logs.go:276] 0 containers: []
	W0912 23:05:00.947972   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:00.947979   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:00.948043   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:00.989659   62386 cri.go:89] found id: ""
	I0912 23:05:00.989694   62386 logs.go:276] 0 containers: []
	W0912 23:05:00.989707   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:00.989717   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:00.989780   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:01.027073   62386 cri.go:89] found id: ""
	I0912 23:05:01.027103   62386 logs.go:276] 0 containers: []
	W0912 23:05:01.027114   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:01.027129   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:01.027187   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:01.063620   62386 cri.go:89] found id: ""
	I0912 23:05:01.063649   62386 logs.go:276] 0 containers: []
	W0912 23:05:01.063672   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:01.063681   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:01.063751   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:01.102398   62386 cri.go:89] found id: ""
	I0912 23:05:01.102428   62386 logs.go:276] 0 containers: []
	W0912 23:05:01.102438   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:01.102449   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:01.102463   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:01.115558   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:01.115585   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:01.190303   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:01.190324   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:01.190337   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:01.272564   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:01.272611   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:01.311954   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:01.311981   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:03.864507   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:03.878613   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:03.878713   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:03.911466   62386 cri.go:89] found id: ""
	I0912 23:05:03.911495   62386 logs.go:276] 0 containers: []
	W0912 23:05:03.911504   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:03.911513   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:03.911592   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:03.945150   62386 cri.go:89] found id: ""
	I0912 23:05:03.945175   62386 logs.go:276] 0 containers: []
	W0912 23:05:03.945188   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:03.945196   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:03.945256   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:03.984952   62386 cri.go:89] found id: ""
	I0912 23:05:03.984984   62386 logs.go:276] 0 containers: []
	W0912 23:05:03.984994   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:03.985001   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:03.985067   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:04.030708   62386 cri.go:89] found id: ""
	I0912 23:05:04.030732   62386 logs.go:276] 0 containers: []
	W0912 23:05:04.030740   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:04.030746   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:04.030798   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:04.072189   62386 cri.go:89] found id: ""
	I0912 23:05:04.072213   62386 logs.go:276] 0 containers: []
	W0912 23:05:04.072221   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:04.072227   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:04.072273   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:04.105068   62386 cri.go:89] found id: ""
	I0912 23:05:04.105100   62386 logs.go:276] 0 containers: []
	W0912 23:05:04.105108   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:04.105114   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:04.105175   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:04.139063   62386 cri.go:89] found id: ""
	I0912 23:05:04.139094   62386 logs.go:276] 0 containers: []
	W0912 23:05:04.139102   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:04.139109   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:04.139172   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:04.175559   62386 cri.go:89] found id: ""
	I0912 23:05:04.175589   62386 logs.go:276] 0 containers: []
	W0912 23:05:04.175599   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:04.175610   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:04.175626   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:04.252495   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:04.252541   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:04.292236   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:04.292263   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:00.974816   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:03.473566   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:05.474006   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:04.377506   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:06.378058   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:02.733379   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:04.734050   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:07.234892   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:04.347335   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:04.347377   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:04.360641   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:04.360678   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:04.431032   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:06.931904   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:06.946367   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:06.946445   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:06.985760   62386 cri.go:89] found id: ""
	I0912 23:05:06.985788   62386 logs.go:276] 0 containers: []
	W0912 23:05:06.985796   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:06.985802   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:06.985852   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:07.020076   62386 cri.go:89] found id: ""
	I0912 23:05:07.020106   62386 logs.go:276] 0 containers: []
	W0912 23:05:07.020115   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:07.020120   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:07.020165   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:07.056374   62386 cri.go:89] found id: ""
	I0912 23:05:07.056408   62386 logs.go:276] 0 containers: []
	W0912 23:05:07.056417   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:07.056423   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:07.056479   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:07.091022   62386 cri.go:89] found id: ""
	I0912 23:05:07.091049   62386 logs.go:276] 0 containers: []
	W0912 23:05:07.091059   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:07.091067   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:07.091133   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:07.131604   62386 cri.go:89] found id: ""
	I0912 23:05:07.131631   62386 logs.go:276] 0 containers: []
	W0912 23:05:07.131641   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:07.131648   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:07.131708   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:07.164548   62386 cri.go:89] found id: ""
	I0912 23:05:07.164575   62386 logs.go:276] 0 containers: []
	W0912 23:05:07.164586   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:07.164593   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:07.164655   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:07.199147   62386 cri.go:89] found id: ""
	I0912 23:05:07.199169   62386 logs.go:276] 0 containers: []
	W0912 23:05:07.199176   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:07.199182   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:07.199245   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:07.231727   62386 cri.go:89] found id: ""
	I0912 23:05:07.231762   62386 logs.go:276] 0 containers: []
	W0912 23:05:07.231773   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:07.231788   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:07.231802   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:07.285773   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:07.285809   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:07.299926   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:07.299958   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:07.378838   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:07.378862   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:07.378876   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:07.459903   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:07.459939   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:07.475025   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:09.973692   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:08.877117   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:11.377274   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:09.732632   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:11.734119   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:09.999598   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:10.012258   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:10.012328   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:10.047975   62386 cri.go:89] found id: ""
	I0912 23:05:10.048002   62386 logs.go:276] 0 containers: []
	W0912 23:05:10.048011   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:10.048018   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:10.048074   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:10.081827   62386 cri.go:89] found id: ""
	I0912 23:05:10.081856   62386 logs.go:276] 0 containers: []
	W0912 23:05:10.081866   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:10.081872   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:10.081942   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:10.115594   62386 cri.go:89] found id: ""
	I0912 23:05:10.115625   62386 logs.go:276] 0 containers: []
	W0912 23:05:10.115635   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:10.115642   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:10.115692   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:10.147412   62386 cri.go:89] found id: ""
	I0912 23:05:10.147442   62386 logs.go:276] 0 containers: []
	W0912 23:05:10.147452   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:10.147460   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:10.147516   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:10.181118   62386 cri.go:89] found id: ""
	I0912 23:05:10.181147   62386 logs.go:276] 0 containers: []
	W0912 23:05:10.181157   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:10.181164   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:10.181228   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:10.214240   62386 cri.go:89] found id: ""
	I0912 23:05:10.214267   62386 logs.go:276] 0 containers: []
	W0912 23:05:10.214277   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:10.214284   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:10.214352   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:10.248497   62386 cri.go:89] found id: ""
	I0912 23:05:10.248522   62386 logs.go:276] 0 containers: []
	W0912 23:05:10.248530   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:10.248543   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:10.248610   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:10.280864   62386 cri.go:89] found id: ""
	I0912 23:05:10.280892   62386 logs.go:276] 0 containers: []
	W0912 23:05:10.280902   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:10.280913   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:10.280927   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:10.318517   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:10.318542   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:10.370087   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:10.370123   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:10.385213   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:10.385247   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:10.448226   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:10.448246   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:10.448257   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:13.027828   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:13.040546   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:13.040620   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:13.073501   62386 cri.go:89] found id: ""
	I0912 23:05:13.073525   62386 logs.go:276] 0 containers: []
	W0912 23:05:13.073533   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:13.073538   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:13.073584   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:13.105790   62386 cri.go:89] found id: ""
	I0912 23:05:13.105819   62386 logs.go:276] 0 containers: []
	W0912 23:05:13.105830   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:13.105836   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:13.105898   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:13.139307   62386 cri.go:89] found id: ""
	I0912 23:05:13.139331   62386 logs.go:276] 0 containers: []
	W0912 23:05:13.139338   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:13.139344   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:13.139403   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:13.171019   62386 cri.go:89] found id: ""
	I0912 23:05:13.171044   62386 logs.go:276] 0 containers: []
	W0912 23:05:13.171053   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:13.171060   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:13.171119   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:13.202372   62386 cri.go:89] found id: ""
	I0912 23:05:13.202412   62386 logs.go:276] 0 containers: []
	W0912 23:05:13.202423   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:13.202431   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:13.202481   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:13.234046   62386 cri.go:89] found id: ""
	I0912 23:05:13.234069   62386 logs.go:276] 0 containers: []
	W0912 23:05:13.234076   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:13.234083   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:13.234138   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:13.265577   62386 cri.go:89] found id: ""
	I0912 23:05:13.265604   62386 logs.go:276] 0 containers: []
	W0912 23:05:13.265632   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:13.265641   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:13.265696   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:13.303462   62386 cri.go:89] found id: ""
	I0912 23:05:13.303489   62386 logs.go:276] 0 containers: []
	W0912 23:05:13.303499   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:13.303521   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:13.303536   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:13.378844   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:13.378867   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:13.378883   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:13.464768   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:13.464806   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:13.502736   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:13.502764   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:13.553473   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:13.553503   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:12.473027   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:14.973842   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:13.876334   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:15.877134   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:14.234722   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:16.734222   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:16.067463   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:16.081169   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:16.081269   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:16.115663   62386 cri.go:89] found id: ""
	I0912 23:05:16.115688   62386 logs.go:276] 0 containers: []
	W0912 23:05:16.115696   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:16.115705   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:16.115761   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:16.153429   62386 cri.go:89] found id: ""
	I0912 23:05:16.153460   62386 logs.go:276] 0 containers: []
	W0912 23:05:16.153469   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:16.153476   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:16.153535   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:16.187935   62386 cri.go:89] found id: ""
	I0912 23:05:16.187957   62386 logs.go:276] 0 containers: []
	W0912 23:05:16.187965   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:16.187971   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:16.188029   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:16.221249   62386 cri.go:89] found id: ""
	I0912 23:05:16.221273   62386 logs.go:276] 0 containers: []
	W0912 23:05:16.221281   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:16.221287   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:16.221336   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:16.256441   62386 cri.go:89] found id: ""
	I0912 23:05:16.256466   62386 logs.go:276] 0 containers: []
	W0912 23:05:16.256474   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:16.256479   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:16.256546   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:16.290930   62386 cri.go:89] found id: ""
	I0912 23:05:16.290963   62386 logs.go:276] 0 containers: []
	W0912 23:05:16.290976   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:16.290985   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:16.291039   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:16.326665   62386 cri.go:89] found id: ""
	I0912 23:05:16.326689   62386 logs.go:276] 0 containers: []
	W0912 23:05:16.326697   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:16.326702   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:16.326749   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:16.365418   62386 cri.go:89] found id: ""
	I0912 23:05:16.365441   62386 logs.go:276] 0 containers: []
	W0912 23:05:16.365448   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:16.365458   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:16.365469   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:16.420003   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:16.420039   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:16.434561   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:16.434595   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:16.505201   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:16.505224   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:16.505295   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:16.584877   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:16.584914   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:19.121479   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:19.134519   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:19.134586   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:19.170401   62386 cri.go:89] found id: ""
	I0912 23:05:19.170433   62386 logs.go:276] 0 containers: []
	W0912 23:05:19.170444   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:19.170455   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:19.170530   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:19.204750   62386 cri.go:89] found id: ""
	I0912 23:05:19.204779   62386 logs.go:276] 0 containers: []
	W0912 23:05:19.204790   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:19.204797   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:19.204862   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:19.243938   62386 cri.go:89] found id: ""
	I0912 23:05:19.243966   62386 logs.go:276] 0 containers: []
	W0912 23:05:19.243975   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:19.243983   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:19.244041   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:19.284424   62386 cri.go:89] found id: ""
	I0912 23:05:19.284453   62386 logs.go:276] 0 containers: []
	W0912 23:05:19.284463   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:19.284469   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:19.284535   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:19.318962   62386 cri.go:89] found id: ""
	I0912 23:05:19.318990   62386 logs.go:276] 0 containers: []
	W0912 23:05:19.319000   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:19.319011   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:19.319068   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:17.474175   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:19.474829   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:18.376670   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:20.876863   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:19.234144   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:21.734549   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:19.356456   62386 cri.go:89] found id: ""
	I0912 23:05:19.356487   62386 logs.go:276] 0 containers: []
	W0912 23:05:19.356498   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:19.356505   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:19.356587   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:19.390344   62386 cri.go:89] found id: ""
	I0912 23:05:19.390369   62386 logs.go:276] 0 containers: []
	W0912 23:05:19.390377   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:19.390382   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:19.390429   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:19.425481   62386 cri.go:89] found id: ""
	I0912 23:05:19.425507   62386 logs.go:276] 0 containers: []
	W0912 23:05:19.425528   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:19.425536   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:19.425553   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:19.482051   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:19.482081   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:19.495732   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:19.495758   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:19.565385   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:19.565411   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:19.565428   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:19.640053   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:19.640084   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:22.179292   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:22.191905   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:22.191979   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:22.231402   62386 cri.go:89] found id: ""
	I0912 23:05:22.231429   62386 logs.go:276] 0 containers: []
	W0912 23:05:22.231439   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:22.231446   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:22.231501   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:22.265310   62386 cri.go:89] found id: ""
	I0912 23:05:22.265343   62386 logs.go:276] 0 containers: []
	W0912 23:05:22.265351   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:22.265356   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:22.265425   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:22.297487   62386 cri.go:89] found id: ""
	I0912 23:05:22.297516   62386 logs.go:276] 0 containers: []
	W0912 23:05:22.297532   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:22.297540   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:22.297598   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:22.335344   62386 cri.go:89] found id: ""
	I0912 23:05:22.335374   62386 logs.go:276] 0 containers: []
	W0912 23:05:22.335384   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:22.335391   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:22.335449   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:22.376379   62386 cri.go:89] found id: ""
	I0912 23:05:22.376404   62386 logs.go:276] 0 containers: []
	W0912 23:05:22.376413   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:22.376421   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:22.376484   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:22.416121   62386 cri.go:89] found id: ""
	I0912 23:05:22.416147   62386 logs.go:276] 0 containers: []
	W0912 23:05:22.416154   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:22.416160   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:22.416217   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:22.475037   62386 cri.go:89] found id: ""
	I0912 23:05:22.475114   62386 logs.go:276] 0 containers: []
	W0912 23:05:22.475127   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:22.475143   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:22.475207   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:22.509756   62386 cri.go:89] found id: ""
	I0912 23:05:22.509784   62386 logs.go:276] 0 containers: []
	W0912 23:05:22.509794   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:22.509804   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:22.509823   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:22.559071   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:22.559112   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:22.571951   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:22.571980   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:22.643017   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:22.643034   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:22.643045   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:22.728074   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:22.728113   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:21.475126   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:23.975217   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:22.876979   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:24.877525   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:26.879248   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:24.235855   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:26.734384   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:25.268293   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:25.281825   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:25.281906   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:25.315282   62386 cri.go:89] found id: ""
	I0912 23:05:25.315318   62386 logs.go:276] 0 containers: []
	W0912 23:05:25.315328   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:25.315336   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:25.315385   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:25.348647   62386 cri.go:89] found id: ""
	I0912 23:05:25.348679   62386 logs.go:276] 0 containers: []
	W0912 23:05:25.348690   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:25.348697   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:25.348758   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:25.382266   62386 cri.go:89] found id: ""
	I0912 23:05:25.382294   62386 logs.go:276] 0 containers: []
	W0912 23:05:25.382304   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:25.382311   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:25.382378   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:25.420016   62386 cri.go:89] found id: ""
	I0912 23:05:25.420044   62386 logs.go:276] 0 containers: []
	W0912 23:05:25.420056   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:25.420063   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:25.420126   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:25.456435   62386 cri.go:89] found id: ""
	I0912 23:05:25.456457   62386 logs.go:276] 0 containers: []
	W0912 23:05:25.456465   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:25.456470   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:25.456539   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:25.491658   62386 cri.go:89] found id: ""
	I0912 23:05:25.491715   62386 logs.go:276] 0 containers: []
	W0912 23:05:25.491729   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:25.491737   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:25.491790   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:25.526948   62386 cri.go:89] found id: ""
	I0912 23:05:25.526980   62386 logs.go:276] 0 containers: []
	W0912 23:05:25.526991   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:25.526998   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:25.527064   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:25.560291   62386 cri.go:89] found id: ""
	I0912 23:05:25.560323   62386 logs.go:276] 0 containers: []
	W0912 23:05:25.560345   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:25.560357   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:25.560372   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:25.612232   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:25.612276   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:25.626991   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:25.627028   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:25.695005   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:25.695038   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:25.695055   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:25.784310   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:25.784345   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:28.331410   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:28.343903   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:28.343967   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:28.380946   62386 cri.go:89] found id: ""
	I0912 23:05:28.380973   62386 logs.go:276] 0 containers: []
	W0912 23:05:28.380979   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:28.380985   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:28.381039   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:28.415013   62386 cri.go:89] found id: ""
	I0912 23:05:28.415042   62386 logs.go:276] 0 containers: []
	W0912 23:05:28.415052   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:28.415059   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:28.415120   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:28.451060   62386 cri.go:89] found id: ""
	I0912 23:05:28.451093   62386 logs.go:276] 0 containers: []
	W0912 23:05:28.451105   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:28.451113   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:28.451171   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:28.485664   62386 cri.go:89] found id: ""
	I0912 23:05:28.485693   62386 logs.go:276] 0 containers: []
	W0912 23:05:28.485704   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:28.485712   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:28.485774   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:28.520307   62386 cri.go:89] found id: ""
	I0912 23:05:28.520338   62386 logs.go:276] 0 containers: []
	W0912 23:05:28.520349   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:28.520359   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:28.520417   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:28.553111   62386 cri.go:89] found id: ""
	I0912 23:05:28.553139   62386 logs.go:276] 0 containers: []
	W0912 23:05:28.553147   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:28.553152   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:28.553208   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:28.586778   62386 cri.go:89] found id: ""
	I0912 23:05:28.586808   62386 logs.go:276] 0 containers: []
	W0912 23:05:28.586816   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:28.586822   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:28.586874   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:28.620760   62386 cri.go:89] found id: ""
	I0912 23:05:28.620784   62386 logs.go:276] 0 containers: []
	W0912 23:05:28.620791   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:28.620799   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:28.620811   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:28.701431   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:28.701481   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:28.741398   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:28.741431   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:28.793431   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:28.793469   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:28.809572   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:28.809600   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:28.894914   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:26.473222   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:28.474342   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:29.377090   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:31.378238   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:29.234479   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:31.734265   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:31.395663   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:31.408079   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:31.408160   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:31.445176   62386 cri.go:89] found id: ""
	I0912 23:05:31.445207   62386 logs.go:276] 0 containers: []
	W0912 23:05:31.445215   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:31.445221   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:31.445280   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:31.483446   62386 cri.go:89] found id: ""
	I0912 23:05:31.483472   62386 logs.go:276] 0 containers: []
	W0912 23:05:31.483480   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:31.483486   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:31.483544   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:31.519958   62386 cri.go:89] found id: ""
	I0912 23:05:31.519989   62386 logs.go:276] 0 containers: []
	W0912 23:05:31.519997   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:31.520003   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:31.520057   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:31.556719   62386 cri.go:89] found id: ""
	I0912 23:05:31.556748   62386 logs.go:276] 0 containers: []
	W0912 23:05:31.556759   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:31.556771   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:31.556832   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:31.596465   62386 cri.go:89] found id: ""
	I0912 23:05:31.596491   62386 logs.go:276] 0 containers: []
	W0912 23:05:31.596502   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:31.596508   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:31.596572   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:31.634562   62386 cri.go:89] found id: ""
	I0912 23:05:31.634592   62386 logs.go:276] 0 containers: []
	W0912 23:05:31.634601   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:31.634607   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:31.634665   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:31.669305   62386 cri.go:89] found id: ""
	I0912 23:05:31.669337   62386 logs.go:276] 0 containers: []
	W0912 23:05:31.669348   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:31.669356   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:31.669422   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:31.703081   62386 cri.go:89] found id: ""
	I0912 23:05:31.703111   62386 logs.go:276] 0 containers: []
	W0912 23:05:31.703121   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:31.703133   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:31.703148   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:31.742613   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:31.742635   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:31.797827   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:31.797872   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:31.811970   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:31.811999   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:31.888872   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:31.888896   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:31.888910   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:30.974024   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:32.974606   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:35.473280   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:33.876698   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:35.877749   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:33.734760   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:36.233363   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:34.469724   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:34.483511   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:34.483579   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:34.516198   62386 cri.go:89] found id: ""
	I0912 23:05:34.516222   62386 logs.go:276] 0 containers: []
	W0912 23:05:34.516229   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:34.516235   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:34.516301   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:34.550166   62386 cri.go:89] found id: ""
	I0912 23:05:34.550199   62386 logs.go:276] 0 containers: []
	W0912 23:05:34.550210   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:34.550218   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:34.550274   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:34.593361   62386 cri.go:89] found id: ""
	I0912 23:05:34.593401   62386 logs.go:276] 0 containers: []
	W0912 23:05:34.593412   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:34.593420   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:34.593483   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:34.639593   62386 cri.go:89] found id: ""
	I0912 23:05:34.639633   62386 logs.go:276] 0 containers: []
	W0912 23:05:34.639653   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:34.639661   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:34.639729   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:34.690382   62386 cri.go:89] found id: ""
	I0912 23:05:34.690410   62386 logs.go:276] 0 containers: []
	W0912 23:05:34.690417   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:34.690423   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:34.690483   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:34.727943   62386 cri.go:89] found id: ""
	I0912 23:05:34.727970   62386 logs.go:276] 0 containers: []
	W0912 23:05:34.727978   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:34.727983   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:34.728051   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:34.765558   62386 cri.go:89] found id: ""
	I0912 23:05:34.765586   62386 logs.go:276] 0 containers: []
	W0912 23:05:34.765593   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:34.765598   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:34.765663   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:34.801455   62386 cri.go:89] found id: ""
	I0912 23:05:34.801484   62386 logs.go:276] 0 containers: []
	W0912 23:05:34.801492   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:34.801500   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:34.801511   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:34.880260   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:34.880295   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:34.922827   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:34.922855   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:34.974609   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:34.974639   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:34.987945   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:34.987972   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:35.062008   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:37.562965   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:37.575149   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:37.575226   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:37.611980   62386 cri.go:89] found id: ""
	I0912 23:05:37.612014   62386 logs.go:276] 0 containers: []
	W0912 23:05:37.612026   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:37.612035   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:37.612102   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:37.645664   62386 cri.go:89] found id: ""
	I0912 23:05:37.645693   62386 logs.go:276] 0 containers: []
	W0912 23:05:37.645703   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:37.645711   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:37.645771   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:37.685333   62386 cri.go:89] found id: ""
	I0912 23:05:37.685356   62386 logs.go:276] 0 containers: []
	W0912 23:05:37.685364   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:37.685369   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:37.685428   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:37.719017   62386 cri.go:89] found id: ""
	I0912 23:05:37.719052   62386 logs.go:276] 0 containers: []
	W0912 23:05:37.719063   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:37.719071   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:37.719133   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:37.751534   62386 cri.go:89] found id: ""
	I0912 23:05:37.751569   62386 logs.go:276] 0 containers: []
	W0912 23:05:37.751579   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:37.751588   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:37.751647   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:37.785583   62386 cri.go:89] found id: ""
	I0912 23:05:37.785608   62386 logs.go:276] 0 containers: []
	W0912 23:05:37.785635   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:37.785642   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:37.785702   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:37.818396   62386 cri.go:89] found id: ""
	I0912 23:05:37.818428   62386 logs.go:276] 0 containers: []
	W0912 23:05:37.818438   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:37.818445   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:37.818504   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:37.853767   62386 cri.go:89] found id: ""
	I0912 23:05:37.853798   62386 logs.go:276] 0 containers: []
	W0912 23:05:37.853806   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:37.853814   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:37.853830   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:37.926273   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:37.926300   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:37.926315   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:38.014243   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:38.014279   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:38.052431   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:38.052455   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:38.103154   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:38.103188   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:37.972774   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:39.973976   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:37.878631   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:40.378366   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:38.234131   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:40.733727   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:40.617399   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:40.629412   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:40.629483   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:40.666668   62386 cri.go:89] found id: ""
	I0912 23:05:40.666693   62386 logs.go:276] 0 containers: []
	W0912 23:05:40.666700   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:40.666706   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:40.666751   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:40.697548   62386 cri.go:89] found id: ""
	I0912 23:05:40.697573   62386 logs.go:276] 0 containers: []
	W0912 23:05:40.697580   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:40.697585   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:40.697659   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:40.729426   62386 cri.go:89] found id: ""
	I0912 23:05:40.729450   62386 logs.go:276] 0 containers: []
	W0912 23:05:40.729458   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:40.729468   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:40.729517   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:40.766769   62386 cri.go:89] found id: ""
	I0912 23:05:40.766793   62386 logs.go:276] 0 containers: []
	W0912 23:05:40.766800   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:40.766804   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:40.766860   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:40.801523   62386 cri.go:89] found id: ""
	I0912 23:05:40.801550   62386 logs.go:276] 0 containers: []
	W0912 23:05:40.801557   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:40.801563   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:40.801641   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:40.839943   62386 cri.go:89] found id: ""
	I0912 23:05:40.839975   62386 logs.go:276] 0 containers: []
	W0912 23:05:40.839987   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:40.839993   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:40.840055   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:40.873231   62386 cri.go:89] found id: ""
	I0912 23:05:40.873260   62386 logs.go:276] 0 containers: []
	W0912 23:05:40.873268   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:40.873276   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:40.873325   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:40.920007   62386 cri.go:89] found id: ""
	I0912 23:05:40.920040   62386 logs.go:276] 0 containers: []
	W0912 23:05:40.920049   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:40.920057   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:40.920069   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:40.972684   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:40.972716   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:40.986768   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:40.986802   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:41.052454   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:41.052479   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:41.052494   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:41.133810   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:41.133850   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:43.672432   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:43.684493   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:43.684552   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:43.718130   62386 cri.go:89] found id: ""
	I0912 23:05:43.718155   62386 logs.go:276] 0 containers: []
	W0912 23:05:43.718163   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:43.718169   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:43.718228   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:43.751866   62386 cri.go:89] found id: ""
	I0912 23:05:43.751895   62386 logs.go:276] 0 containers: []
	W0912 23:05:43.751905   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:43.751912   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:43.751974   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:43.785544   62386 cri.go:89] found id: ""
	I0912 23:05:43.785571   62386 logs.go:276] 0 containers: []
	W0912 23:05:43.785583   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:43.785589   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:43.785664   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:43.820588   62386 cri.go:89] found id: ""
	I0912 23:05:43.820616   62386 logs.go:276] 0 containers: []
	W0912 23:05:43.820624   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:43.820630   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:43.820677   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:43.853567   62386 cri.go:89] found id: ""
	I0912 23:05:43.853600   62386 logs.go:276] 0 containers: []
	W0912 23:05:43.853631   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:43.853640   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:43.853696   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:43.888646   62386 cri.go:89] found id: ""
	I0912 23:05:43.888671   62386 logs.go:276] 0 containers: []
	W0912 23:05:43.888679   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:43.888684   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:43.888731   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:43.922563   62386 cri.go:89] found id: ""
	I0912 23:05:43.922596   62386 logs.go:276] 0 containers: []
	W0912 23:05:43.922607   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:43.922614   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:43.922667   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:43.956786   62386 cri.go:89] found id: ""
	I0912 23:05:43.956817   62386 logs.go:276] 0 containers: []
	W0912 23:05:43.956825   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:43.956834   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:43.956845   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:44.035351   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:44.035388   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:44.073301   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:44.073338   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:44.124754   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:44.124788   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:44.138899   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:44.138924   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:44.208682   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:42.474139   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:44.974214   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:42.876306   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:44.877310   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:46.878568   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:43.233358   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:45.233823   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:47.234529   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:46.709822   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:46.722782   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:46.722905   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:46.767512   62386 cri.go:89] found id: ""
	I0912 23:05:46.767537   62386 logs.go:276] 0 containers: []
	W0912 23:05:46.767545   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:46.767551   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:46.767603   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:46.812486   62386 cri.go:89] found id: ""
	I0912 23:05:46.812523   62386 logs.go:276] 0 containers: []
	W0912 23:05:46.812533   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:46.812541   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:46.812602   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:46.855093   62386 cri.go:89] found id: ""
	I0912 23:05:46.855125   62386 logs.go:276] 0 containers: []
	W0912 23:05:46.855134   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:46.855141   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:46.855214   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:46.899067   62386 cri.go:89] found id: ""
	I0912 23:05:46.899101   62386 logs.go:276] 0 containers: []
	W0912 23:05:46.899113   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:46.899121   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:46.899184   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:46.939775   62386 cri.go:89] found id: ""
	I0912 23:05:46.939802   62386 logs.go:276] 0 containers: []
	W0912 23:05:46.939810   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:46.939816   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:46.939863   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:46.975288   62386 cri.go:89] found id: ""
	I0912 23:05:46.975319   62386 logs.go:276] 0 containers: []
	W0912 23:05:46.975329   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:46.975343   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:46.975426   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:47.012985   62386 cri.go:89] found id: ""
	I0912 23:05:47.013018   62386 logs.go:276] 0 containers: []
	W0912 23:05:47.013030   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:47.013038   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:47.013104   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:47.052124   62386 cri.go:89] found id: ""
	I0912 23:05:47.052154   62386 logs.go:276] 0 containers: []
	W0912 23:05:47.052164   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:47.052175   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:47.052189   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:47.108769   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:47.108811   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:47.124503   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:47.124530   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:47.195340   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:47.195362   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:47.195380   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:47.297155   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:47.297204   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:46.473252   61904 pod_ready.go:82] duration metric: took 4m0.006064954s for pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace to be "Ready" ...
	E0912 23:05:46.473275   61904 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0912 23:05:46.473282   61904 pod_ready.go:39] duration metric: took 4m4.576962836s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 23:05:46.473309   61904 api_server.go:52] waiting for apiserver process to appear ...
	I0912 23:05:46.473336   61904 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:46.473378   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:46.513731   61904 cri.go:89] found id: "115e1e7911747aa5930ece5298af89f0209bf07e0336cd598b8901e2a2af2e09"
	I0912 23:05:46.513759   61904 cri.go:89] found id: ""
	I0912 23:05:46.513768   61904 logs.go:276] 1 containers: [115e1e7911747aa5930ece5298af89f0209bf07e0336cd598b8901e2a2af2e09]
	I0912 23:05:46.513827   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:46.519031   61904 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:46.519099   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:46.560521   61904 cri.go:89] found id: "e099ac110cb9ec18f36a0fbbdb769c4bb0c2642bf081442d3054c280c663730f"
	I0912 23:05:46.560548   61904 cri.go:89] found id: ""
	I0912 23:05:46.560560   61904 logs.go:276] 1 containers: [e099ac110cb9ec18f36a0fbbdb769c4bb0c2642bf081442d3054c280c663730f]
	I0912 23:05:46.560623   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:46.564340   61904 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:46.564399   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:46.598825   61904 cri.go:89] found id: "7841230606dafeab68b501c35a871c84f440d0d9f4cde95a302770671152f168"
	I0912 23:05:46.598848   61904 cri.go:89] found id: ""
	I0912 23:05:46.598857   61904 logs.go:276] 1 containers: [7841230606dafeab68b501c35a871c84f440d0d9f4cde95a302770671152f168]
	I0912 23:05:46.598909   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:46.602944   61904 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:46.603005   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:46.640315   61904 cri.go:89] found id: "dc8c605cca940c298fda4fb0d3a0e55234ab799e5f4d684817c1c33aa6752880"
	I0912 23:05:46.640335   61904 cri.go:89] found id: ""
	I0912 23:05:46.640343   61904 logs.go:276] 1 containers: [dc8c605cca940c298fda4fb0d3a0e55234ab799e5f4d684817c1c33aa6752880]
	I0912 23:05:46.640395   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:46.644061   61904 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:46.644119   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:46.681114   61904 cri.go:89] found id: "0b058233860f229334437280ee5ccf5d203eaf352bae41a6af7d4602b55a3d64"
	I0912 23:05:46.681143   61904 cri.go:89] found id: ""
	I0912 23:05:46.681153   61904 logs.go:276] 1 containers: [0b058233860f229334437280ee5ccf5d203eaf352bae41a6af7d4602b55a3d64]
	I0912 23:05:46.681214   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:46.685151   61904 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:46.685223   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:46.723129   61904 cri.go:89] found id: "54dd60703518d12cf3cb62102bf8bec31d1e6b0f218f4c26391b234d58111e31"
	I0912 23:05:46.723150   61904 cri.go:89] found id: ""
	I0912 23:05:46.723160   61904 logs.go:276] 1 containers: [54dd60703518d12cf3cb62102bf8bec31d1e6b0f218f4c26391b234d58111e31]
	I0912 23:05:46.723208   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:46.727959   61904 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:46.728021   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:46.770194   61904 cri.go:89] found id: ""
	I0912 23:05:46.770219   61904 logs.go:276] 0 containers: []
	W0912 23:05:46.770229   61904 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:46.770236   61904 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0912 23:05:46.770296   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0912 23:05:46.819004   61904 cri.go:89] found id: "0e48efc9ba5a488b4984057a9785fbda6fcefcea047948ac3c83f09afd28efdb"
	I0912 23:05:46.819031   61904 cri.go:89] found id: "fdb0e5ac691d286cca5fe46e3435ece952c4b9cdc7df3481fec68adf1403689f"
	I0912 23:05:46.819037   61904 cri.go:89] found id: ""
	I0912 23:05:46.819045   61904 logs.go:276] 2 containers: [0e48efc9ba5a488b4984057a9785fbda6fcefcea047948ac3c83f09afd28efdb fdb0e5ac691d286cca5fe46e3435ece952c4b9cdc7df3481fec68adf1403689f]
	I0912 23:05:46.819105   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:46.824442   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:46.829336   61904 logs.go:123] Gathering logs for coredns [7841230606dafeab68b501c35a871c84f440d0d9f4cde95a302770671152f168] ...
	I0912 23:05:46.829367   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7841230606dafeab68b501c35a871c84f440d0d9f4cde95a302770671152f168"
	I0912 23:05:46.876170   61904 logs.go:123] Gathering logs for kube-controller-manager [54dd60703518d12cf3cb62102bf8bec31d1e6b0f218f4c26391b234d58111e31] ...
	I0912 23:05:46.876205   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54dd60703518d12cf3cb62102bf8bec31d1e6b0f218f4c26391b234d58111e31"
	I0912 23:05:46.944290   61904 logs.go:123] Gathering logs for storage-provisioner [0e48efc9ba5a488b4984057a9785fbda6fcefcea047948ac3c83f09afd28efdb] ...
	I0912 23:05:46.944336   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e48efc9ba5a488b4984057a9785fbda6fcefcea047948ac3c83f09afd28efdb"
	I0912 23:05:46.991117   61904 logs.go:123] Gathering logs for container status ...
	I0912 23:05:46.991154   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:47.041776   61904 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:47.041805   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:47.125682   61904 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:47.125720   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:47.141463   61904 logs.go:123] Gathering logs for etcd [e099ac110cb9ec18f36a0fbbdb769c4bb0c2642bf081442d3054c280c663730f] ...
	I0912 23:05:47.141505   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e099ac110cb9ec18f36a0fbbdb769c4bb0c2642bf081442d3054c280c663730f"
	I0912 23:05:47.193432   61904 logs.go:123] Gathering logs for kube-scheduler [dc8c605cca940c298fda4fb0d3a0e55234ab799e5f4d684817c1c33aa6752880] ...
	I0912 23:05:47.193477   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dc8c605cca940c298fda4fb0d3a0e55234ab799e5f4d684817c1c33aa6752880"
	I0912 23:05:47.238975   61904 logs.go:123] Gathering logs for kube-proxy [0b058233860f229334437280ee5ccf5d203eaf352bae41a6af7d4602b55a3d64] ...
	I0912 23:05:47.239000   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b058233860f229334437280ee5ccf5d203eaf352bae41a6af7d4602b55a3d64"
	I0912 23:05:47.282299   61904 logs.go:123] Gathering logs for storage-provisioner [fdb0e5ac691d286cca5fe46e3435ece952c4b9cdc7df3481fec68adf1403689f] ...
	I0912 23:05:47.282340   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fdb0e5ac691d286cca5fe46e3435ece952c4b9cdc7df3481fec68adf1403689f"
	I0912 23:05:47.322575   61904 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:47.322605   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:47.783079   61904 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:47.783116   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 23:05:47.909961   61904 logs.go:123] Gathering logs for kube-apiserver [115e1e7911747aa5930ece5298af89f0209bf07e0336cd598b8901e2a2af2e09] ...
	I0912 23:05:47.909994   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 115e1e7911747aa5930ece5298af89f0209bf07e0336cd598b8901e2a2af2e09"
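	Each "Gathering logs for …" entry above shells out to crictl against a specific container ID with a 400-line cap. A minimal sketch of reproducing one of those collections by hand on the node, reusing the coredns container ID and the exact commands the log shows:

	    # list running and exited containers known to CRI-O
	    sudo crictl ps -a
	    # pull the last 400 lines from one container, as logs.go does
	    sudo /usr/bin/crictl logs --tail 400 7841230606dafeab68b501c35a871c84f440d0d9f4cde95a302770671152f168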
	I0912 23:05:50.466816   61904 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:50.483164   61904 api_server.go:72] duration metric: took 4m15.815867821s to wait for apiserver process to appear ...
	I0912 23:05:50.483189   61904 api_server.go:88] waiting for apiserver healthz status ...
	I0912 23:05:50.483219   61904 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:50.483265   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:50.521905   61904 cri.go:89] found id: "115e1e7911747aa5930ece5298af89f0209bf07e0336cd598b8901e2a2af2e09"
	I0912 23:05:50.521932   61904 cri.go:89] found id: ""
	I0912 23:05:50.521942   61904 logs.go:276] 1 containers: [115e1e7911747aa5930ece5298af89f0209bf07e0336cd598b8901e2a2af2e09]
	I0912 23:05:50.522001   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:50.526289   61904 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:50.526355   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:50.565340   61904 cri.go:89] found id: "e099ac110cb9ec18f36a0fbbdb769c4bb0c2642bf081442d3054c280c663730f"
	I0912 23:05:50.565367   61904 cri.go:89] found id: ""
	I0912 23:05:50.565376   61904 logs.go:276] 1 containers: [e099ac110cb9ec18f36a0fbbdb769c4bb0c2642bf081442d3054c280c663730f]
	I0912 23:05:50.565434   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:50.569231   61904 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:50.569310   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:50.607696   61904 cri.go:89] found id: "7841230606dafeab68b501c35a871c84f440d0d9f4cde95a302770671152f168"
	I0912 23:05:50.607721   61904 cri.go:89] found id: ""
	I0912 23:05:50.607729   61904 logs.go:276] 1 containers: [7841230606dafeab68b501c35a871c84f440d0d9f4cde95a302770671152f168]
	I0912 23:05:50.607771   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:50.611696   61904 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:50.611753   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:50.647554   61904 cri.go:89] found id: "dc8c605cca940c298fda4fb0d3a0e55234ab799e5f4d684817c1c33aa6752880"
	I0912 23:05:50.647580   61904 cri.go:89] found id: ""
	I0912 23:05:50.647590   61904 logs.go:276] 1 containers: [dc8c605cca940c298fda4fb0d3a0e55234ab799e5f4d684817c1c33aa6752880]
	I0912 23:05:50.647649   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:50.652065   61904 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:50.652128   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:50.691276   61904 cri.go:89] found id: "0b058233860f229334437280ee5ccf5d203eaf352bae41a6af7d4602b55a3d64"
	I0912 23:05:50.691300   61904 cri.go:89] found id: ""
	I0912 23:05:50.691307   61904 logs.go:276] 1 containers: [0b058233860f229334437280ee5ccf5d203eaf352bae41a6af7d4602b55a3d64]
	I0912 23:05:50.691348   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:50.696475   61904 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:50.696537   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:50.732677   61904 cri.go:89] found id: "54dd60703518d12cf3cb62102bf8bec31d1e6b0f218f4c26391b234d58111e31"
	I0912 23:05:50.732704   61904 cri.go:89] found id: ""
	I0912 23:05:50.732714   61904 logs.go:276] 1 containers: [54dd60703518d12cf3cb62102bf8bec31d1e6b0f218f4c26391b234d58111e31]
	I0912 23:05:50.732771   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:50.737450   61904 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:50.737503   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:50.770732   61904 cri.go:89] found id: ""
	I0912 23:05:50.770762   61904 logs.go:276] 0 containers: []
	W0912 23:05:50.770773   61904 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:50.770781   61904 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0912 23:05:50.770830   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0912 23:05:49.376457   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:51.378141   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:49.732832   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:51.734674   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:49.841253   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:49.854221   62386 kubeadm.go:597] duration metric: took 4m1.913192999s to restartPrimaryControlPlane
	W0912 23:05:49.854297   62386 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0912 23:05:49.854335   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0912 23:05:51.221029   62386 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.366663525s)
	I0912 23:05:51.221129   62386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 23:05:51.238493   62386 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0912 23:05:51.250943   62386 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0912 23:05:51.264325   62386 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0912 23:05:51.264348   62386 kubeadm.go:157] found existing configuration files:
	
	I0912 23:05:51.264393   62386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0912 23:05:51.273514   62386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0912 23:05:51.273570   62386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0912 23:05:51.282967   62386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0912 23:05:51.291978   62386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0912 23:05:51.292037   62386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0912 23:05:51.301520   62386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0912 23:05:51.310439   62386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0912 23:05:51.310530   62386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0912 23:05:51.319803   62386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0912 23:05:51.328881   62386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0912 23:05:51.328956   62386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
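	The four grep/rm pairs above are minikube's stale-config cleanup: any kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint is removed before kubeadm init runs. A hedged shell sketch of the same loop, using the endpoint and file list shown in the log:

	    ENDPOINT="https://control-plane.minikube.internal:8443"
	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	        # keep the file only if it already points at the expected endpoint
	        sudo grep -q "$ENDPOINT" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	    done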
	I0912 23:05:51.337946   62386 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0912 23:05:51.565945   62386 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
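	The preflight warning above is kubeadm's standard notice that the kubelet systemd unit is not enabled; on a node where persistence across reboots matters it can be cleared with the command the warning itself names (shown here as a sketch, not something this test run performs):

	    sudo systemctl enable kubelet.service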
	I0912 23:05:50.804311   61904 cri.go:89] found id: "0e48efc9ba5a488b4984057a9785fbda6fcefcea047948ac3c83f09afd28efdb"
	I0912 23:05:50.804337   61904 cri.go:89] found id: "fdb0e5ac691d286cca5fe46e3435ece952c4b9cdc7df3481fec68adf1403689f"
	I0912 23:05:50.804342   61904 cri.go:89] found id: ""
	I0912 23:05:50.804349   61904 logs.go:276] 2 containers: [0e48efc9ba5a488b4984057a9785fbda6fcefcea047948ac3c83f09afd28efdb fdb0e5ac691d286cca5fe46e3435ece952c4b9cdc7df3481fec68adf1403689f]
	I0912 23:05:50.804396   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:50.808236   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:50.812298   61904 logs.go:123] Gathering logs for storage-provisioner [fdb0e5ac691d286cca5fe46e3435ece952c4b9cdc7df3481fec68adf1403689f] ...
	I0912 23:05:50.812316   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fdb0e5ac691d286cca5fe46e3435ece952c4b9cdc7df3481fec68adf1403689f"
	I0912 23:05:50.846429   61904 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:50.846457   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:50.917118   61904 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:50.917152   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:50.931954   61904 logs.go:123] Gathering logs for kube-apiserver [115e1e7911747aa5930ece5298af89f0209bf07e0336cd598b8901e2a2af2e09] ...
	I0912 23:05:50.931992   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 115e1e7911747aa5930ece5298af89f0209bf07e0336cd598b8901e2a2af2e09"
	I0912 23:05:50.979688   61904 logs.go:123] Gathering logs for etcd [e099ac110cb9ec18f36a0fbbdb769c4bb0c2642bf081442d3054c280c663730f] ...
	I0912 23:05:50.979727   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e099ac110cb9ec18f36a0fbbdb769c4bb0c2642bf081442d3054c280c663730f"
	I0912 23:05:51.026392   61904 logs.go:123] Gathering logs for coredns [7841230606dafeab68b501c35a871c84f440d0d9f4cde95a302770671152f168] ...
	I0912 23:05:51.026421   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7841230606dafeab68b501c35a871c84f440d0d9f4cde95a302770671152f168"
	I0912 23:05:51.063302   61904 logs.go:123] Gathering logs for storage-provisioner [0e48efc9ba5a488b4984057a9785fbda6fcefcea047948ac3c83f09afd28efdb] ...
	I0912 23:05:51.063330   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e48efc9ba5a488b4984057a9785fbda6fcefcea047948ac3c83f09afd28efdb"
	I0912 23:05:51.096593   61904 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:51.096626   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 23:05:51.198824   61904 logs.go:123] Gathering logs for kube-scheduler [dc8c605cca940c298fda4fb0d3a0e55234ab799e5f4d684817c1c33aa6752880] ...
	I0912 23:05:51.198856   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dc8c605cca940c298fda4fb0d3a0e55234ab799e5f4d684817c1c33aa6752880"
	I0912 23:05:51.244247   61904 logs.go:123] Gathering logs for kube-proxy [0b058233860f229334437280ee5ccf5d203eaf352bae41a6af7d4602b55a3d64] ...
	I0912 23:05:51.244271   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b058233860f229334437280ee5ccf5d203eaf352bae41a6af7d4602b55a3d64"
	I0912 23:05:51.284694   61904 logs.go:123] Gathering logs for kube-controller-manager [54dd60703518d12cf3cb62102bf8bec31d1e6b0f218f4c26391b234d58111e31] ...
	I0912 23:05:51.284717   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54dd60703518d12cf3cb62102bf8bec31d1e6b0f218f4c26391b234d58111e31"
	I0912 23:05:51.340541   61904 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:51.340569   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:51.754823   61904 logs.go:123] Gathering logs for container status ...
	I0912 23:05:51.754864   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:54.294987   61904 api_server.go:253] Checking apiserver healthz at https://192.168.72.96:8443/healthz ...
	I0912 23:05:54.300314   61904 api_server.go:279] https://192.168.72.96:8443/healthz returned 200:
	ok
	I0912 23:05:54.301385   61904 api_server.go:141] control plane version: v1.31.1
	I0912 23:05:54.301413   61904 api_server.go:131] duration metric: took 3.818216539s to wait for apiserver health ...
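	The healthz polling above hits the API server directly over HTTPS at the address shown in the log. A minimal equivalent probe from the node, assuming anonymous access to /healthz is allowed (the default for that path):

	    # -k skips certificate verification for the minikube-issued cert
	    curl -k https://192.168.72.96:8443/healthz
	    # a healthy control plane answers with the body: ok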
	I0912 23:05:54.301421   61904 system_pods.go:43] waiting for kube-system pods to appear ...
	I0912 23:05:54.301441   61904 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:54.301491   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:54.342980   61904 cri.go:89] found id: "115e1e7911747aa5930ece5298af89f0209bf07e0336cd598b8901e2a2af2e09"
	I0912 23:05:54.343001   61904 cri.go:89] found id: ""
	I0912 23:05:54.343010   61904 logs.go:276] 1 containers: [115e1e7911747aa5930ece5298af89f0209bf07e0336cd598b8901e2a2af2e09]
	I0912 23:05:54.343063   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:54.347269   61904 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:54.347352   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:54.386656   61904 cri.go:89] found id: "e099ac110cb9ec18f36a0fbbdb769c4bb0c2642bf081442d3054c280c663730f"
	I0912 23:05:54.386674   61904 cri.go:89] found id: ""
	I0912 23:05:54.386681   61904 logs.go:276] 1 containers: [e099ac110cb9ec18f36a0fbbdb769c4bb0c2642bf081442d3054c280c663730f]
	I0912 23:05:54.386755   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:54.390707   61904 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:54.390769   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:54.433746   61904 cri.go:89] found id: "7841230606dafeab68b501c35a871c84f440d0d9f4cde95a302770671152f168"
	I0912 23:05:54.433773   61904 cri.go:89] found id: ""
	I0912 23:05:54.433782   61904 logs.go:276] 1 containers: [7841230606dafeab68b501c35a871c84f440d0d9f4cde95a302770671152f168]
	I0912 23:05:54.433844   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:54.438175   61904 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:54.438231   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:54.475067   61904 cri.go:89] found id: "dc8c605cca940c298fda4fb0d3a0e55234ab799e5f4d684817c1c33aa6752880"
	I0912 23:05:54.475095   61904 cri.go:89] found id: ""
	I0912 23:05:54.475105   61904 logs.go:276] 1 containers: [dc8c605cca940c298fda4fb0d3a0e55234ab799e5f4d684817c1c33aa6752880]
	I0912 23:05:54.475178   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:54.479308   61904 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:54.479367   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:54.524489   61904 cri.go:89] found id: "0b058233860f229334437280ee5ccf5d203eaf352bae41a6af7d4602b55a3d64"
	I0912 23:05:54.524513   61904 cri.go:89] found id: ""
	I0912 23:05:54.524521   61904 logs.go:276] 1 containers: [0b058233860f229334437280ee5ccf5d203eaf352bae41a6af7d4602b55a3d64]
	I0912 23:05:54.524583   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:54.528854   61904 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:54.528925   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:54.569776   61904 cri.go:89] found id: "54dd60703518d12cf3cb62102bf8bec31d1e6b0f218f4c26391b234d58111e31"
	I0912 23:05:54.569801   61904 cri.go:89] found id: ""
	I0912 23:05:54.569811   61904 logs.go:276] 1 containers: [54dd60703518d12cf3cb62102bf8bec31d1e6b0f218f4c26391b234d58111e31]
	I0912 23:05:54.569865   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:54.574000   61904 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:54.574070   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:54.613184   61904 cri.go:89] found id: ""
	I0912 23:05:54.613212   61904 logs.go:276] 0 containers: []
	W0912 23:05:54.613222   61904 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:54.613229   61904 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0912 23:05:54.613292   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0912 23:05:54.648971   61904 cri.go:89] found id: "0e48efc9ba5a488b4984057a9785fbda6fcefcea047948ac3c83f09afd28efdb"
	I0912 23:05:54.648992   61904 cri.go:89] found id: "fdb0e5ac691d286cca5fe46e3435ece952c4b9cdc7df3481fec68adf1403689f"
	I0912 23:05:54.648997   61904 cri.go:89] found id: ""
	I0912 23:05:54.649006   61904 logs.go:276] 2 containers: [0e48efc9ba5a488b4984057a9785fbda6fcefcea047948ac3c83f09afd28efdb fdb0e5ac691d286cca5fe46e3435ece952c4b9cdc7df3481fec68adf1403689f]
	I0912 23:05:54.649062   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:54.653671   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:54.657535   61904 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:54.657557   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 23:05:54.781055   61904 logs.go:123] Gathering logs for kube-controller-manager [54dd60703518d12cf3cb62102bf8bec31d1e6b0f218f4c26391b234d58111e31] ...
	I0912 23:05:54.781094   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54dd60703518d12cf3cb62102bf8bec31d1e6b0f218f4c26391b234d58111e31"
	I0912 23:05:54.832441   61904 logs.go:123] Gathering logs for container status ...
	I0912 23:05:54.832477   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:54.887662   61904 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:54.887695   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:54.958381   61904 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:54.958417   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:54.973583   61904 logs.go:123] Gathering logs for coredns [7841230606dafeab68b501c35a871c84f440d0d9f4cde95a302770671152f168] ...
	I0912 23:05:54.973609   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7841230606dafeab68b501c35a871c84f440d0d9f4cde95a302770671152f168"
	I0912 23:05:55.022192   61904 logs.go:123] Gathering logs for kube-scheduler [dc8c605cca940c298fda4fb0d3a0e55234ab799e5f4d684817c1c33aa6752880] ...
	I0912 23:05:55.022217   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dc8c605cca940c298fda4fb0d3a0e55234ab799e5f4d684817c1c33aa6752880"
	I0912 23:05:55.059878   61904 logs.go:123] Gathering logs for kube-proxy [0b058233860f229334437280ee5ccf5d203eaf352bae41a6af7d4602b55a3d64] ...
	I0912 23:05:55.059910   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b058233860f229334437280ee5ccf5d203eaf352bae41a6af7d4602b55a3d64"
	I0912 23:05:55.104371   61904 logs.go:123] Gathering logs for storage-provisioner [0e48efc9ba5a488b4984057a9785fbda6fcefcea047948ac3c83f09afd28efdb] ...
	I0912 23:05:55.104399   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e48efc9ba5a488b4984057a9785fbda6fcefcea047948ac3c83f09afd28efdb"
	I0912 23:05:55.139625   61904 logs.go:123] Gathering logs for storage-provisioner [fdb0e5ac691d286cca5fe46e3435ece952c4b9cdc7df3481fec68adf1403689f] ...
	I0912 23:05:55.139656   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fdb0e5ac691d286cca5fe46e3435ece952c4b9cdc7df3481fec68adf1403689f"
	I0912 23:05:55.172414   61904 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:55.172442   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:55.528482   61904 logs.go:123] Gathering logs for kube-apiserver [115e1e7911747aa5930ece5298af89f0209bf07e0336cd598b8901e2a2af2e09] ...
	I0912 23:05:55.528522   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 115e1e7911747aa5930ece5298af89f0209bf07e0336cd598b8901e2a2af2e09"
	I0912 23:05:55.572399   61904 logs.go:123] Gathering logs for etcd [e099ac110cb9ec18f36a0fbbdb769c4bb0c2642bf081442d3054c280c663730f] ...
	I0912 23:05:55.572433   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e099ac110cb9ec18f36a0fbbdb769c4bb0c2642bf081442d3054c280c663730f"
	I0912 23:05:53.876844   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:55.878108   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:54.235375   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:56.733525   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:58.125405   61904 system_pods.go:59] 8 kube-system pods found
	I0912 23:05:58.125436   61904 system_pods.go:61] "coredns-7c65d6cfc9-m8t6h" [93c63198-ebd2-4e88-9be8-912425b1eb84] Running
	I0912 23:05:58.125441   61904 system_pods.go:61] "etcd-embed-certs-378112" [cc716756-abda-447a-ad36-bfc89c129bdf] Running
	I0912 23:05:58.125445   61904 system_pods.go:61] "kube-apiserver-embed-certs-378112" [039a7348-41bf-481f-9218-3ea0c2ff1373] Running
	I0912 23:05:58.125449   61904 system_pods.go:61] "kube-controller-manager-embed-certs-378112" [9bcb8af0-6e4b-405a-94a1-5be70d737cfa] Running
	I0912 23:05:58.125452   61904 system_pods.go:61] "kube-proxy-fvbbq" [b172754e-bb5a-40ba-a9be-a7632081defc] Running
	I0912 23:05:58.125455   61904 system_pods.go:61] "kube-scheduler-embed-certs-378112" [f7cb022f-6c15-4c70-916f-39313199effe] Running
	I0912 23:05:58.125461   61904 system_pods.go:61] "metrics-server-6867b74b74-kvpqz" [04e47cfd-bada-4cbd-8792-db4edebfb282] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0912 23:05:58.125465   61904 system_pods.go:61] "storage-provisioner" [a1840d2a-8e08-4fa2-9ed5-ac96fb0baf4d] Running
	I0912 23:05:58.125472   61904 system_pods.go:74] duration metric: took 3.824046737s to wait for pod list to return data ...
	I0912 23:05:58.125478   61904 default_sa.go:34] waiting for default service account to be created ...
	I0912 23:05:58.128039   61904 default_sa.go:45] found service account: "default"
	I0912 23:05:58.128060   61904 default_sa.go:55] duration metric: took 2.576708ms for default service account to be created ...
	I0912 23:05:58.128067   61904 system_pods.go:116] waiting for k8s-apps to be running ...
	I0912 23:05:58.132607   61904 system_pods.go:86] 8 kube-system pods found
	I0912 23:05:58.132629   61904 system_pods.go:89] "coredns-7c65d6cfc9-m8t6h" [93c63198-ebd2-4e88-9be8-912425b1eb84] Running
	I0912 23:05:58.132634   61904 system_pods.go:89] "etcd-embed-certs-378112" [cc716756-abda-447a-ad36-bfc89c129bdf] Running
	I0912 23:05:58.132638   61904 system_pods.go:89] "kube-apiserver-embed-certs-378112" [039a7348-41bf-481f-9218-3ea0c2ff1373] Running
	I0912 23:05:58.132642   61904 system_pods.go:89] "kube-controller-manager-embed-certs-378112" [9bcb8af0-6e4b-405a-94a1-5be70d737cfa] Running
	I0912 23:05:58.132647   61904 system_pods.go:89] "kube-proxy-fvbbq" [b172754e-bb5a-40ba-a9be-a7632081defc] Running
	I0912 23:05:58.132652   61904 system_pods.go:89] "kube-scheduler-embed-certs-378112" [f7cb022f-6c15-4c70-916f-39313199effe] Running
	I0912 23:05:58.132661   61904 system_pods.go:89] "metrics-server-6867b74b74-kvpqz" [04e47cfd-bada-4cbd-8792-db4edebfb282] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0912 23:05:58.132671   61904 system_pods.go:89] "storage-provisioner" [a1840d2a-8e08-4fa2-9ed5-ac96fb0baf4d] Running
	I0912 23:05:58.132682   61904 system_pods.go:126] duration metric: took 4.609196ms to wait for k8s-apps to be running ...
	I0912 23:05:58.132694   61904 system_svc.go:44] waiting for kubelet service to be running ....
	I0912 23:05:58.132739   61904 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 23:05:58.149020   61904 system_svc.go:56] duration metric: took 16.317773ms WaitForService to wait for kubelet
	I0912 23:05:58.149048   61904 kubeadm.go:582] duration metric: took 4m23.481755577s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 23:05:58.149073   61904 node_conditions.go:102] verifying NodePressure condition ...
	I0912 23:05:58.152519   61904 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0912 23:05:58.152547   61904 node_conditions.go:123] node cpu capacity is 2
	I0912 23:05:58.152559   61904 node_conditions.go:105] duration metric: took 3.480407ms to run NodePressure ...
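	The NodePressure check above reads the node's reported capacity (17734596Ki ephemeral storage, 2 CPUs). The same figures can be pulled with kubectl; the node name here is assumed to match the profile, as the static pod names above suggest:

	    kubectl get node embed-certs-378112 -o jsonpath='{.status.capacity}'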
	I0912 23:05:58.152570   61904 start.go:241] waiting for startup goroutines ...
	I0912 23:05:58.152576   61904 start.go:246] waiting for cluster config update ...
	I0912 23:05:58.152587   61904 start.go:255] writing updated cluster config ...
	I0912 23:05:58.152833   61904 ssh_runner.go:195] Run: rm -f paused
	I0912 23:05:58.203069   61904 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0912 23:05:58.204904   61904 out.go:177] * Done! kubectl is now configured to use "embed-certs-378112" cluster and "default" namespace by default
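	At this point the embed-certs-378112 profile is up and kubectl is pointed at it. A quick way to confirm the state the log describes (all kube-system pods Running except the pending metrics-server) would be, for example:

	    kubectl --context embed-certs-378112 -n kube-system get pods -o wide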
	I0912 23:05:58.376646   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:00.377105   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:58.733992   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:01.233920   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:02.877229   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:04.877926   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:03.733400   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:05.733949   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:07.377308   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:09.877459   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:08.234361   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:10.732480   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:12.376661   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:14.877753   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:16.877980   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:12.733231   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:14.734774   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:17.233456   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:19.376959   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:21.878279   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:19.234570   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:21.733406   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:24.376731   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:26.377122   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:23.733543   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:25.734296   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:28.877696   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:31.376778   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:28.232623   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:30.233670   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:32.234123   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:33.377208   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:35.877039   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:34.234158   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:36.234309   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:37.877566   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:40.376636   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:38.733567   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:40.734256   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:42.377148   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:44.377925   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:46.877563   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:42.734926   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:45.233731   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:45.727482   61354 pod_ready.go:82] duration metric: took 4m0.000232225s for pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace to be "Ready" ...
	E0912 23:06:45.727510   61354 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace to be "Ready" (will not retry!)
	I0912 23:06:45.727526   61354 pod_ready.go:39] duration metric: took 4m13.050011701s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 23:06:45.727553   61354 kubeadm.go:597] duration metric: took 4m21.402206535s to restartPrimaryControlPlane
	W0912 23:06:45.727638   61354 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0912 23:06:45.727686   61354 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
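	The 4m0s timeout above is what ultimately surfaces as the metrics-server failures listed in this report. When reproducing, the usual first look is at the pod's events and container logs; a hedged sketch, reusing the pod name from the log:

	    kubectl -n kube-system describe pod metrics-server-6867b74b74-q5vlk
	    kubectl -n kube-system logs metrics-server-6867b74b74-q5vlk --all-containers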
	I0912 23:06:49.376346   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:51.376720   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:53.877426   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:56.377076   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:58.876146   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:07:00.876887   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:07:02.877032   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:07:04.877344   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:07:07.376495   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:07:09.377212   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:07:11.878788   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:07:11.920816   61354 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.193093675s)
	I0912 23:07:11.920900   61354 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 23:07:11.939101   61354 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0912 23:07:11.950330   61354 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0912 23:07:11.960727   61354 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0912 23:07:11.960753   61354 kubeadm.go:157] found existing configuration files:
	
	I0912 23:07:11.960802   61354 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0912 23:07:11.970932   61354 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0912 23:07:11.970988   61354 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0912 23:07:11.981111   61354 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0912 23:07:11.990384   61354 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0912 23:07:11.990455   61354 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0912 23:07:12.000218   61354 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0912 23:07:12.009191   61354 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0912 23:07:12.009266   61354 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0912 23:07:12.019270   61354 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0912 23:07:12.028102   61354 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0912 23:07:12.028165   61354 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0912 23:07:12.037512   61354 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0912 23:07:12.083528   61354 kubeadm.go:310] W0912 23:07:12.055244    2491 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0912 23:07:12.084358   61354 kubeadm.go:310] W0912 23:07:12.056267    2491 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0912 23:07:12.190683   61354 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
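	The two deprecation warnings above carry their own remediation: migrating the v1beta3 kubeadm config to the current API version with the command the warning names. A sketch against minikube's generated config (the output path is illustrative, not one the run uses):

	    sudo kubeadm config migrate --old-config /var/tmp/minikube/kubeadm.yaml --new-config /var/tmp/minikube/kubeadm.new.yaml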
	I0912 23:07:12.377757   62943 pod_ready.go:82] duration metric: took 4m0.007392806s for pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace to be "Ready" ...
	E0912 23:07:12.377785   62943 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0912 23:07:12.377794   62943 pod_ready.go:39] duration metric: took 4m2.807476708s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 23:07:12.377812   62943 api_server.go:52] waiting for apiserver process to appear ...
	I0912 23:07:12.377843   62943 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:07:12.377898   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:07:12.431934   62943 cri.go:89] found id: "3c73944a510413add414948e1c8fd8d71ef732d91b7ddc96646b7ba376f81416"
	I0912 23:07:12.431964   62943 cri.go:89] found id: "00f124dff0f77cf23377830c9dc5e2a8676d1a40ebf70730705d537ba7a4b8d3"
	I0912 23:07:12.431969   62943 cri.go:89] found id: ""
	I0912 23:07:12.431977   62943 logs.go:276] 2 containers: [3c73944a510413add414948e1c8fd8d71ef732d91b7ddc96646b7ba376f81416 00f124dff0f77cf23377830c9dc5e2a8676d1a40ebf70730705d537ba7a4b8d3]
	I0912 23:07:12.432043   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:12.436742   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:12.440569   62943 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:07:12.440626   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:07:12.476994   62943 cri.go:89] found id: "35282e97473f2f870f5366d221b9a40e0a456b6fd44fffe321c9fe160448cc29"
	I0912 23:07:12.477016   62943 cri.go:89] found id: ""
	I0912 23:07:12.477024   62943 logs.go:276] 1 containers: [35282e97473f2f870f5366d221b9a40e0a456b6fd44fffe321c9fe160448cc29]
	I0912 23:07:12.477076   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:12.481585   62943 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:07:12.481661   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:07:12.524772   62943 cri.go:89] found id: "e59d289c9afefef6dd746277d09550e5c7d708d7c1ba9f74b2dd91300f807189"
	I0912 23:07:12.524797   62943 cri.go:89] found id: ""
	I0912 23:07:12.524808   62943 logs.go:276] 1 containers: [e59d289c9afefef6dd746277d09550e5c7d708d7c1ba9f74b2dd91300f807189]
	I0912 23:07:12.524860   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:12.529988   62943 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:07:12.530052   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:07:12.573298   62943 cri.go:89] found id: "3187fdef2bd318bb335be822a4e75cfd0ac02a49607d8e398ab18102a83437ec"
	I0912 23:07:12.573329   62943 cri.go:89] found id: ""
	I0912 23:07:12.573340   62943 logs.go:276] 1 containers: [3187fdef2bd318bb335be822a4e75cfd0ac02a49607d8e398ab18102a83437ec]
	I0912 23:07:12.573400   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:12.579767   62943 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:07:12.579844   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:07:12.624696   62943 cri.go:89] found id: "4c4807559910184befc0b6a9ccb40b1cb9d4997f6b1405b76ba67049ce730f37"
	I0912 23:07:12.624723   62943 cri.go:89] found id: ""
	I0912 23:07:12.624733   62943 logs.go:276] 1 containers: [4c4807559910184befc0b6a9ccb40b1cb9d4997f6b1405b76ba67049ce730f37]
	I0912 23:07:12.624790   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:12.632367   62943 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:07:12.632430   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:07:12.667385   62943 cri.go:89] found id: "eb473fa0b2d91e95a8716ca3d1bad44b431a3dfbfae11984657c16c7392b58f0"
	I0912 23:07:12.667411   62943 cri.go:89] found id: "635fd2c2a6dd2558d88447c857c1c59b9cf6883a458ab9f62fbcd48fc94048f7"
	I0912 23:07:12.667415   62943 cri.go:89] found id: ""
	I0912 23:07:12.667422   62943 logs.go:276] 2 containers: [eb473fa0b2d91e95a8716ca3d1bad44b431a3dfbfae11984657c16c7392b58f0 635fd2c2a6dd2558d88447c857c1c59b9cf6883a458ab9f62fbcd48fc94048f7]
	I0912 23:07:12.667474   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:12.671688   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:12.675901   62943 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:07:12.675964   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:07:12.712909   62943 cri.go:89] found id: ""
	I0912 23:07:12.712944   62943 logs.go:276] 0 containers: []
	W0912 23:07:12.712955   62943 logs.go:278] No container was found matching "kindnet"
	I0912 23:07:12.712962   62943 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0912 23:07:12.713023   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0912 23:07:12.755865   62943 cri.go:89] found id: "3d117ed77ba5f8ccc22fe496a5f719999fd9e893623ba84686634f7b92c09713"
	I0912 23:07:12.755888   62943 cri.go:89] found id: "d40483dfc659456939d7965c1ecf1113c1c38e26fe985bf19f26a0123206ea3a"
	I0912 23:07:12.755894   62943 cri.go:89] found id: ""
	I0912 23:07:12.755903   62943 logs.go:276] 2 containers: [3d117ed77ba5f8ccc22fe496a5f719999fd9e893623ba84686634f7b92c09713 d40483dfc659456939d7965c1ecf1113c1c38e26fe985bf19f26a0123206ea3a]
	I0912 23:07:12.755958   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:12.760095   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:12.763682   62943 logs.go:123] Gathering logs for kube-apiserver [00f124dff0f77cf23377830c9dc5e2a8676d1a40ebf70730705d537ba7a4b8d3] ...
	I0912 23:07:12.763706   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 00f124dff0f77cf23377830c9dc5e2a8676d1a40ebf70730705d537ba7a4b8d3"
	I0912 23:07:12.811915   62943 logs.go:123] Gathering logs for kube-proxy [4c4807559910184befc0b6a9ccb40b1cb9d4997f6b1405b76ba67049ce730f37] ...
	I0912 23:07:12.811949   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c4807559910184befc0b6a9ccb40b1cb9d4997f6b1405b76ba67049ce730f37"
	I0912 23:07:12.846546   62943 logs.go:123] Gathering logs for kube-controller-manager [eb473fa0b2d91e95a8716ca3d1bad44b431a3dfbfae11984657c16c7392b58f0] ...
	I0912 23:07:12.846582   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb473fa0b2d91e95a8716ca3d1bad44b431a3dfbfae11984657c16c7392b58f0"
	I0912 23:07:12.904475   62943 logs.go:123] Gathering logs for kubelet ...
	I0912 23:07:12.904518   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:07:12.984863   62943 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:07:12.984898   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 23:07:13.116848   62943 logs.go:123] Gathering logs for etcd [35282e97473f2f870f5366d221b9a40e0a456b6fd44fffe321c9fe160448cc29] ...
	I0912 23:07:13.116879   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35282e97473f2f870f5366d221b9a40e0a456b6fd44fffe321c9fe160448cc29"
	I0912 23:07:13.165949   62943 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:07:13.165978   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:07:13.704372   62943 logs.go:123] Gathering logs for container status ...
	I0912 23:07:13.704424   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:07:13.757082   62943 logs.go:123] Gathering logs for kube-apiserver [3c73944a510413add414948e1c8fd8d71ef732d91b7ddc96646b7ba376f81416] ...
	I0912 23:07:13.757123   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c73944a510413add414948e1c8fd8d71ef732d91b7ddc96646b7ba376f81416"
	I0912 23:07:13.802951   62943 logs.go:123] Gathering logs for storage-provisioner [3d117ed77ba5f8ccc22fe496a5f719999fd9e893623ba84686634f7b92c09713] ...
	I0912 23:07:13.802988   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d117ed77ba5f8ccc22fe496a5f719999fd9e893623ba84686634f7b92c09713"
	I0912 23:07:13.838952   62943 logs.go:123] Gathering logs for dmesg ...
	I0912 23:07:13.838989   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:07:13.852983   62943 logs.go:123] Gathering logs for coredns [e59d289c9afefef6dd746277d09550e5c7d708d7c1ba9f74b2dd91300f807189] ...
	I0912 23:07:13.853015   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e59d289c9afefef6dd746277d09550e5c7d708d7c1ba9f74b2dd91300f807189"
	I0912 23:07:13.898651   62943 logs.go:123] Gathering logs for kube-scheduler [3187fdef2bd318bb335be822a4e75cfd0ac02a49607d8e398ab18102a83437ec] ...
	I0912 23:07:13.898679   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3187fdef2bd318bb335be822a4e75cfd0ac02a49607d8e398ab18102a83437ec"
	I0912 23:07:13.943800   62943 logs.go:123] Gathering logs for kube-controller-manager [635fd2c2a6dd2558d88447c857c1c59b9cf6883a458ab9f62fbcd48fc94048f7] ...
	I0912 23:07:13.943838   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 635fd2c2a6dd2558d88447c857c1c59b9cf6883a458ab9f62fbcd48fc94048f7"
	I0912 23:07:13.984960   62943 logs.go:123] Gathering logs for storage-provisioner [d40483dfc659456939d7965c1ecf1113c1c38e26fe985bf19f26a0123206ea3a] ...
	I0912 23:07:13.984996   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d40483dfc659456939d7965c1ecf1113c1c38e26fe985bf19f26a0123206ea3a"
	I0912 23:07:16.526061   62943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:07:16.547018   62943 api_server.go:72] duration metric: took 4m14.74025779s to wait for apiserver process to appear ...
	I0912 23:07:16.547046   62943 api_server.go:88] waiting for apiserver healthz status ...
	I0912 23:07:16.547085   62943 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:07:16.547134   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:07:16.589088   62943 cri.go:89] found id: "3c73944a510413add414948e1c8fd8d71ef732d91b7ddc96646b7ba376f81416"
	I0912 23:07:16.589124   62943 cri.go:89] found id: "00f124dff0f77cf23377830c9dc5e2a8676d1a40ebf70730705d537ba7a4b8d3"
	I0912 23:07:16.589130   62943 cri.go:89] found id: ""
	I0912 23:07:16.589138   62943 logs.go:276] 2 containers: [3c73944a510413add414948e1c8fd8d71ef732d91b7ddc96646b7ba376f81416 00f124dff0f77cf23377830c9dc5e2a8676d1a40ebf70730705d537ba7a4b8d3]
	I0912 23:07:16.589199   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:16.593386   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:16.597107   62943 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:07:16.597166   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:07:16.644456   62943 cri.go:89] found id: "35282e97473f2f870f5366d221b9a40e0a456b6fd44fffe321c9fe160448cc29"
	I0912 23:07:16.644482   62943 cri.go:89] found id: ""
	I0912 23:07:16.644491   62943 logs.go:276] 1 containers: [35282e97473f2f870f5366d221b9a40e0a456b6fd44fffe321c9fe160448cc29]
	I0912 23:07:16.644544   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:16.648617   62943 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:07:16.648693   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:07:16.688003   62943 cri.go:89] found id: "e59d289c9afefef6dd746277d09550e5c7d708d7c1ba9f74b2dd91300f807189"
	I0912 23:07:16.688027   62943 cri.go:89] found id: ""
	I0912 23:07:16.688037   62943 logs.go:276] 1 containers: [e59d289c9afefef6dd746277d09550e5c7d708d7c1ba9f74b2dd91300f807189]
	I0912 23:07:16.688093   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:16.692761   62943 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:07:16.692832   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:07:16.733490   62943 cri.go:89] found id: "3187fdef2bd318bb335be822a4e75cfd0ac02a49607d8e398ab18102a83437ec"
	I0912 23:07:16.733522   62943 cri.go:89] found id: ""
	I0912 23:07:16.733533   62943 logs.go:276] 1 containers: [3187fdef2bd318bb335be822a4e75cfd0ac02a49607d8e398ab18102a83437ec]
	I0912 23:07:16.733596   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:16.738566   62943 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:07:16.738641   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:07:16.785654   62943 cri.go:89] found id: "4c4807559910184befc0b6a9ccb40b1cb9d4997f6b1405b76ba67049ce730f37"
	I0912 23:07:16.785683   62943 cri.go:89] found id: ""
	I0912 23:07:16.785693   62943 logs.go:276] 1 containers: [4c4807559910184befc0b6a9ccb40b1cb9d4997f6b1405b76ba67049ce730f37]
	I0912 23:07:16.785753   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:16.791205   62943 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:07:16.791290   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:07:16.830707   62943 cri.go:89] found id: "eb473fa0b2d91e95a8716ca3d1bad44b431a3dfbfae11984657c16c7392b58f0"
	I0912 23:07:16.830739   62943 cri.go:89] found id: "635fd2c2a6dd2558d88447c857c1c59b9cf6883a458ab9f62fbcd48fc94048f7"
	I0912 23:07:16.830746   62943 cri.go:89] found id: ""
	I0912 23:07:16.830756   62943 logs.go:276] 2 containers: [eb473fa0b2d91e95a8716ca3d1bad44b431a3dfbfae11984657c16c7392b58f0 635fd2c2a6dd2558d88447c857c1c59b9cf6883a458ab9f62fbcd48fc94048f7]
	I0912 23:07:16.830819   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:16.835378   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:16.840600   62943 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:07:16.840670   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:07:20.225940   61354 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0912 23:07:20.226007   61354 kubeadm.go:310] [preflight] Running pre-flight checks
	I0912 23:07:20.226107   61354 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0912 23:07:20.226261   61354 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0912 23:07:20.226412   61354 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0912 23:07:20.226506   61354 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0912 23:07:20.228109   61354 out.go:235]   - Generating certificates and keys ...
	I0912 23:07:20.228211   61354 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0912 23:07:20.228297   61354 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0912 23:07:20.228412   61354 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0912 23:07:20.228493   61354 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0912 23:07:20.228621   61354 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0912 23:07:20.228699   61354 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0912 23:07:20.228788   61354 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0912 23:07:20.228875   61354 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0912 23:07:20.228987   61354 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0912 23:07:20.229123   61354 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0912 23:07:20.229177   61354 kubeadm.go:310] [certs] Using the existing "sa" key
	I0912 23:07:20.229273   61354 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0912 23:07:20.229365   61354 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0912 23:07:20.229454   61354 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0912 23:07:20.229533   61354 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0912 23:07:20.229644   61354 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0912 23:07:20.229723   61354 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0912 23:07:20.229833   61354 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0912 23:07:20.229922   61354 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0912 23:07:20.231172   61354 out.go:235]   - Booting up control plane ...
	I0912 23:07:20.231276   61354 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0912 23:07:20.231371   61354 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0912 23:07:20.231457   61354 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0912 23:07:20.231596   61354 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0912 23:07:20.231706   61354 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0912 23:07:20.231772   61354 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0912 23:07:20.231943   61354 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0912 23:07:20.232041   61354 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0912 23:07:20.232091   61354 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.452461ms
	I0912 23:07:20.232151   61354 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0912 23:07:20.232202   61354 kubeadm.go:310] [api-check] The API server is healthy after 5.00140085s
	I0912 23:07:20.232302   61354 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0912 23:07:20.232437   61354 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0912 23:07:20.232508   61354 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0912 23:07:20.232685   61354 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-702201 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0912 23:07:20.232764   61354 kubeadm.go:310] [bootstrap-token] Using token: uufjzd.0ysmpgh1j6e2l8hs
	I0912 23:07:20.234000   61354 out.go:235]   - Configuring RBAC rules ...
	I0912 23:07:20.234123   61354 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0912 23:07:20.234230   61354 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0912 23:07:20.234438   61354 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0912 23:07:20.234584   61354 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0912 23:07:20.234714   61354 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0912 23:07:20.234818   61354 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0912 23:07:20.234946   61354 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0912 23:07:20.235008   61354 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0912 23:07:20.235081   61354 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0912 23:07:20.235089   61354 kubeadm.go:310] 
	I0912 23:07:20.235152   61354 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0912 23:07:20.235163   61354 kubeadm.go:310] 
	I0912 23:07:20.235231   61354 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0912 23:07:20.235237   61354 kubeadm.go:310] 
	I0912 23:07:20.235258   61354 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0912 23:07:20.235346   61354 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0912 23:07:20.235424   61354 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0912 23:07:20.235433   61354 kubeadm.go:310] 
	I0912 23:07:20.235512   61354 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0912 23:07:20.235523   61354 kubeadm.go:310] 
	I0912 23:07:20.235587   61354 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0912 23:07:20.235596   61354 kubeadm.go:310] 
	I0912 23:07:20.235683   61354 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0912 23:07:20.235781   61354 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0912 23:07:20.235848   61354 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0912 23:07:20.235855   61354 kubeadm.go:310] 
	I0912 23:07:20.235924   61354 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0912 23:07:20.235988   61354 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0912 23:07:20.235994   61354 kubeadm.go:310] 
	I0912 23:07:20.236075   61354 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token uufjzd.0ysmpgh1j6e2l8hs \
	I0912 23:07:20.236168   61354 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e9285e6e7599a58febe9d174fa57ffa69a9b4bf818d01b703e61fc8c784ff29f \
	I0912 23:07:20.236188   61354 kubeadm.go:310] 	--control-plane 
	I0912 23:07:20.236195   61354 kubeadm.go:310] 
	I0912 23:07:20.236267   61354 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0912 23:07:20.236274   61354 kubeadm.go:310] 
	I0912 23:07:20.236345   61354 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token uufjzd.0ysmpgh1j6e2l8hs \
	I0912 23:07:20.236447   61354 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e9285e6e7599a58febe9d174fa57ffa69a9b4bf818d01b703e61fc8c784ff29f 
	I0912 23:07:20.236458   61354 cni.go:84] Creating CNI manager for ""
	I0912 23:07:20.236465   61354 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0912 23:07:20.237667   61354 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
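	(For context on the bridge CNI step above: minikube writes a small conflist into /etc/cni/net.d — the 496-byte 1-k8s.conflist scp'd a few lines below. The exact file contents are not reproduced in this log; the following is only an illustrative sketch of what a typical bridge + host-local IPAM conflist looks like, with the subnet value assumed for the example.)

	    # Illustrative only: a typical bridge CNI conflist. The real file written by
	    # minikube (/etc/cni/net.d/1-k8s.conflist) is not shown in this log, and the
	    # subnet below is an assumed placeholder, not taken from this run.
	    sudo tee /etc/cni/net.d/1-k8s.conflist <<'EOF'
	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "isDefaultGateway": true,
	          "ipMasq": true,
	          "hairpinMode": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }
	    EOF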
	I0912 23:07:16.892881   62943 cri.go:89] found id: ""
	I0912 23:07:16.892908   62943 logs.go:276] 0 containers: []
	W0912 23:07:16.892918   62943 logs.go:278] No container was found matching "kindnet"
	I0912 23:07:16.892926   62943 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0912 23:07:16.892986   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0912 23:07:16.938816   62943 cri.go:89] found id: "3d117ed77ba5f8ccc22fe496a5f719999fd9e893623ba84686634f7b92c09713"
	I0912 23:07:16.938856   62943 cri.go:89] found id: "d40483dfc659456939d7965c1ecf1113c1c38e26fe985bf19f26a0123206ea3a"
	I0912 23:07:16.938861   62943 cri.go:89] found id: ""
	I0912 23:07:16.938868   62943 logs.go:276] 2 containers: [3d117ed77ba5f8ccc22fe496a5f719999fd9e893623ba84686634f7b92c09713 d40483dfc659456939d7965c1ecf1113c1c38e26fe985bf19f26a0123206ea3a]
	I0912 23:07:16.938924   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:16.944985   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:16.950257   62943 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:07:16.950290   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 23:07:17.071942   62943 logs.go:123] Gathering logs for kube-apiserver [00f124dff0f77cf23377830c9dc5e2a8676d1a40ebf70730705d537ba7a4b8d3] ...
	I0912 23:07:17.071999   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 00f124dff0f77cf23377830c9dc5e2a8676d1a40ebf70730705d537ba7a4b8d3"
	I0912 23:07:17.120765   62943 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:07:17.120797   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:07:17.636341   62943 logs.go:123] Gathering logs for kubelet ...
	I0912 23:07:17.636387   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:07:17.714095   62943 logs.go:123] Gathering logs for kube-apiserver [3c73944a510413add414948e1c8fd8d71ef732d91b7ddc96646b7ba376f81416] ...
	I0912 23:07:17.714133   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c73944a510413add414948e1c8fd8d71ef732d91b7ddc96646b7ba376f81416"
	I0912 23:07:17.765583   62943 logs.go:123] Gathering logs for etcd [35282e97473f2f870f5366d221b9a40e0a456b6fd44fffe321c9fe160448cc29] ...
	I0912 23:07:17.765637   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35282e97473f2f870f5366d221b9a40e0a456b6fd44fffe321c9fe160448cc29"
	I0912 23:07:17.809278   62943 logs.go:123] Gathering logs for kube-proxy [4c4807559910184befc0b6a9ccb40b1cb9d4997f6b1405b76ba67049ce730f37] ...
	I0912 23:07:17.809309   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c4807559910184befc0b6a9ccb40b1cb9d4997f6b1405b76ba67049ce730f37"
	I0912 23:07:17.845960   62943 logs.go:123] Gathering logs for dmesg ...
	I0912 23:07:17.845984   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:07:17.860171   62943 logs.go:123] Gathering logs for kube-controller-manager [eb473fa0b2d91e95a8716ca3d1bad44b431a3dfbfae11984657c16c7392b58f0] ...
	I0912 23:07:17.860201   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb473fa0b2d91e95a8716ca3d1bad44b431a3dfbfae11984657c16c7392b58f0"
	I0912 23:07:17.926666   62943 logs.go:123] Gathering logs for kube-controller-manager [635fd2c2a6dd2558d88447c857c1c59b9cf6883a458ab9f62fbcd48fc94048f7] ...
	I0912 23:07:17.926711   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 635fd2c2a6dd2558d88447c857c1c59b9cf6883a458ab9f62fbcd48fc94048f7"
	I0912 23:07:17.976830   62943 logs.go:123] Gathering logs for storage-provisioner [3d117ed77ba5f8ccc22fe496a5f719999fd9e893623ba84686634f7b92c09713] ...
	I0912 23:07:17.976862   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d117ed77ba5f8ccc22fe496a5f719999fd9e893623ba84686634f7b92c09713"
	I0912 23:07:18.029551   62943 logs.go:123] Gathering logs for storage-provisioner [d40483dfc659456939d7965c1ecf1113c1c38e26fe985bf19f26a0123206ea3a] ...
	I0912 23:07:18.029590   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d40483dfc659456939d7965c1ecf1113c1c38e26fe985bf19f26a0123206ea3a"
	I0912 23:07:18.089974   62943 logs.go:123] Gathering logs for container status ...
	I0912 23:07:18.090007   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:07:18.151149   62943 logs.go:123] Gathering logs for coredns [e59d289c9afefef6dd746277d09550e5c7d708d7c1ba9f74b2dd91300f807189] ...
	I0912 23:07:18.151175   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e59d289c9afefef6dd746277d09550e5c7d708d7c1ba9f74b2dd91300f807189"
	I0912 23:07:18.191616   62943 logs.go:123] Gathering logs for kube-scheduler [3187fdef2bd318bb335be822a4e75cfd0ac02a49607d8e398ab18102a83437ec] ...
	I0912 23:07:18.191645   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3187fdef2bd318bb335be822a4e75cfd0ac02a49607d8e398ab18102a83437ec"
	I0912 23:07:20.735505   62943 api_server.go:253] Checking apiserver healthz at https://192.168.50.253:8443/healthz ...
	I0912 23:07:20.740261   62943 api_server.go:279] https://192.168.50.253:8443/healthz returned 200:
	ok
	I0912 23:07:20.741163   62943 api_server.go:141] control plane version: v1.31.1
	I0912 23:07:20.741184   62943 api_server.go:131] duration metric: took 4.194131154s to wait for apiserver health ...
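	(The healthz wait logged above amounts to polling the apiserver's /healthz endpoint until it returns 200 with body "ok". A minimal manual equivalent, assuming the node's API endpoint from this log is reachable and skipping TLS verification for brevity — the real client uses the cluster CA instead:)

	    # Minimal sketch of the same health probe; -k skips TLS verification.
	    until curl -fsk https://192.168.50.253:8443/healthz >/dev/null; do
	      sleep 2
	    done
	    curl -sk https://192.168.50.253:8443/healthz   # prints "ok" when healthy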
	I0912 23:07:20.741193   62943 system_pods.go:43] waiting for kube-system pods to appear ...
	I0912 23:07:20.741219   62943 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:07:20.741275   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:07:20.778572   62943 cri.go:89] found id: "3c73944a510413add414948e1c8fd8d71ef732d91b7ddc96646b7ba376f81416"
	I0912 23:07:20.778596   62943 cri.go:89] found id: "00f124dff0f77cf23377830c9dc5e2a8676d1a40ebf70730705d537ba7a4b8d3"
	I0912 23:07:20.778600   62943 cri.go:89] found id: ""
	I0912 23:07:20.778613   62943 logs.go:276] 2 containers: [3c73944a510413add414948e1c8fd8d71ef732d91b7ddc96646b7ba376f81416 00f124dff0f77cf23377830c9dc5e2a8676d1a40ebf70730705d537ba7a4b8d3]
	I0912 23:07:20.778656   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:20.782575   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:20.786177   62943 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:07:20.786235   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:07:20.822848   62943 cri.go:89] found id: "35282e97473f2f870f5366d221b9a40e0a456b6fd44fffe321c9fe160448cc29"
	I0912 23:07:20.822869   62943 cri.go:89] found id: ""
	I0912 23:07:20.822877   62943 logs.go:276] 1 containers: [35282e97473f2f870f5366d221b9a40e0a456b6fd44fffe321c9fe160448cc29]
	I0912 23:07:20.822930   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:20.827081   62943 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:07:20.827150   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:07:20.862327   62943 cri.go:89] found id: "e59d289c9afefef6dd746277d09550e5c7d708d7c1ba9f74b2dd91300f807189"
	I0912 23:07:20.862358   62943 cri.go:89] found id: ""
	I0912 23:07:20.862369   62943 logs.go:276] 1 containers: [e59d289c9afefef6dd746277d09550e5c7d708d7c1ba9f74b2dd91300f807189]
	I0912 23:07:20.862437   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:20.866899   62943 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:07:20.866974   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:07:20.903397   62943 cri.go:89] found id: "3187fdef2bd318bb335be822a4e75cfd0ac02a49607d8e398ab18102a83437ec"
	I0912 23:07:20.903423   62943 cri.go:89] found id: ""
	I0912 23:07:20.903433   62943 logs.go:276] 1 containers: [3187fdef2bd318bb335be822a4e75cfd0ac02a49607d8e398ab18102a83437ec]
	I0912 23:07:20.903497   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:20.908223   62943 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:07:20.908322   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:07:20.961886   62943 cri.go:89] found id: "4c4807559910184befc0b6a9ccb40b1cb9d4997f6b1405b76ba67049ce730f37"
	I0912 23:07:20.961912   62943 cri.go:89] found id: ""
	I0912 23:07:20.961923   62943 logs.go:276] 1 containers: [4c4807559910184befc0b6a9ccb40b1cb9d4997f6b1405b76ba67049ce730f37]
	I0912 23:07:20.961983   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:20.965943   62943 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:07:20.966005   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:07:21.003792   62943 cri.go:89] found id: "eb473fa0b2d91e95a8716ca3d1bad44b431a3dfbfae11984657c16c7392b58f0"
	I0912 23:07:21.003818   62943 cri.go:89] found id: "635fd2c2a6dd2558d88447c857c1c59b9cf6883a458ab9f62fbcd48fc94048f7"
	I0912 23:07:21.003825   62943 cri.go:89] found id: ""
	I0912 23:07:21.003835   62943 logs.go:276] 2 containers: [eb473fa0b2d91e95a8716ca3d1bad44b431a3dfbfae11984657c16c7392b58f0 635fd2c2a6dd2558d88447c857c1c59b9cf6883a458ab9f62fbcd48fc94048f7]
	I0912 23:07:21.003892   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:21.008651   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:21.012614   62943 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:07:21.012675   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:07:21.051013   62943 cri.go:89] found id: ""
	I0912 23:07:21.051044   62943 logs.go:276] 0 containers: []
	W0912 23:07:21.051055   62943 logs.go:278] No container was found matching "kindnet"
	I0912 23:07:21.051063   62943 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0912 23:07:21.051121   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0912 23:07:21.091038   62943 cri.go:89] found id: "3d117ed77ba5f8ccc22fe496a5f719999fd9e893623ba84686634f7b92c09713"
	I0912 23:07:21.091060   62943 cri.go:89] found id: "d40483dfc659456939d7965c1ecf1113c1c38e26fe985bf19f26a0123206ea3a"
	I0912 23:07:21.091065   62943 cri.go:89] found id: ""
	I0912 23:07:21.091072   62943 logs.go:276] 2 containers: [3d117ed77ba5f8ccc22fe496a5f719999fd9e893623ba84686634f7b92c09713 d40483dfc659456939d7965c1ecf1113c1c38e26fe985bf19f26a0123206ea3a]
	I0912 23:07:21.091126   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:21.095923   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:21.100100   62943 logs.go:123] Gathering logs for dmesg ...
	I0912 23:07:21.100125   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:07:21.113873   62943 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:07:21.113906   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 23:07:21.215199   62943 logs.go:123] Gathering logs for kube-apiserver [3c73944a510413add414948e1c8fd8d71ef732d91b7ddc96646b7ba376f81416] ...
	I0912 23:07:21.215228   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c73944a510413add414948e1c8fd8d71ef732d91b7ddc96646b7ba376f81416"
	I0912 23:07:21.266873   62943 logs.go:123] Gathering logs for kube-apiserver [00f124dff0f77cf23377830c9dc5e2a8676d1a40ebf70730705d537ba7a4b8d3] ...
	I0912 23:07:21.266903   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 00f124dff0f77cf23377830c9dc5e2a8676d1a40ebf70730705d537ba7a4b8d3"
	I0912 23:07:21.307509   62943 logs.go:123] Gathering logs for storage-provisioner [3d117ed77ba5f8ccc22fe496a5f719999fd9e893623ba84686634f7b92c09713] ...
	I0912 23:07:21.307537   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d117ed77ba5f8ccc22fe496a5f719999fd9e893623ba84686634f7b92c09713"
	I0912 23:07:21.349480   62943 logs.go:123] Gathering logs for kubelet ...
	I0912 23:07:21.349505   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:07:21.428721   62943 logs.go:123] Gathering logs for kube-scheduler [3187fdef2bd318bb335be822a4e75cfd0ac02a49607d8e398ab18102a83437ec] ...
	I0912 23:07:21.428754   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3187fdef2bd318bb335be822a4e75cfd0ac02a49607d8e398ab18102a83437ec"
	I0912 23:07:21.469645   62943 logs.go:123] Gathering logs for kube-proxy [4c4807559910184befc0b6a9ccb40b1cb9d4997f6b1405b76ba67049ce730f37] ...
	I0912 23:07:21.469677   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c4807559910184befc0b6a9ccb40b1cb9d4997f6b1405b76ba67049ce730f37"
	I0912 23:07:21.517502   62943 logs.go:123] Gathering logs for kube-controller-manager [eb473fa0b2d91e95a8716ca3d1bad44b431a3dfbfae11984657c16c7392b58f0] ...
	I0912 23:07:21.517529   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb473fa0b2d91e95a8716ca3d1bad44b431a3dfbfae11984657c16c7392b58f0"
	I0912 23:07:21.582523   62943 logs.go:123] Gathering logs for coredns [e59d289c9afefef6dd746277d09550e5c7d708d7c1ba9f74b2dd91300f807189] ...
	I0912 23:07:21.582556   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e59d289c9afefef6dd746277d09550e5c7d708d7c1ba9f74b2dd91300f807189"
	I0912 23:07:21.623846   62943 logs.go:123] Gathering logs for storage-provisioner [d40483dfc659456939d7965c1ecf1113c1c38e26fe985bf19f26a0123206ea3a] ...
	I0912 23:07:21.623885   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d40483dfc659456939d7965c1ecf1113c1c38e26fe985bf19f26a0123206ea3a"
	I0912 23:07:21.670643   62943 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:07:21.670675   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:07:20.238639   61354 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0912 23:07:20.248752   61354 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0912 23:07:20.269785   61354 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0912 23:07:20.269853   61354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 23:07:20.269874   61354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-702201 minikube.k8s.io/updated_at=2024_09_12T23_07_20_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f6bc674a17941874d4e5b792b09c1791d30622b8 minikube.k8s.io/name=default-k8s-diff-port-702201 minikube.k8s.io/primary=true
	I0912 23:07:20.296361   61354 ops.go:34] apiserver oom_adj: -16
	I0912 23:07:20.492168   61354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 23:07:20.992549   61354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 23:07:21.492765   61354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 23:07:21.992850   61354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 23:07:22.492720   61354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 23:07:22.993154   61354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 23:07:23.493116   61354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 23:07:23.992629   61354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 23:07:24.077486   61354 kubeadm.go:1113] duration metric: took 3.807690368s to wait for elevateKubeSystemPrivileges
	I0912 23:07:24.077525   61354 kubeadm.go:394] duration metric: took 4m59.803121736s to StartCluster
	I0912 23:07:24.077547   61354 settings.go:142] acquiring lock: {Name:mk9c957feafb8d7ccd833ad0c106ef81ecfe5ba8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 23:07:24.077652   61354 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19616-5891/kubeconfig
	I0912 23:07:24.080127   61354 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/kubeconfig: {Name:mkffb46c3e9d2b8baebc7237b48bf41bccf1a52c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 23:07:24.080453   61354 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.214 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0912 23:07:24.080486   61354 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0912 23:07:24.080582   61354 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-702201"
	I0912 23:07:24.080556   61354 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-702201"
	I0912 23:07:24.080594   61354 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-702201"
	I0912 23:07:24.080627   61354 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-702201"
	I0912 23:07:24.080650   61354 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-702201"
	W0912 23:07:24.080659   61354 addons.go:243] addon metrics-server should already be in state true
	I0912 23:07:24.080664   61354 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-702201"
	I0912 23:07:24.080691   61354 host.go:66] Checking if "default-k8s-diff-port-702201" exists ...
	I0912 23:07:24.080668   61354 config.go:182] Loaded profile config "default-k8s-diff-port-702201": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	W0912 23:07:24.080691   61354 addons.go:243] addon storage-provisioner should already be in state true
	I0912 23:07:24.080830   61354 host.go:66] Checking if "default-k8s-diff-port-702201" exists ...
	I0912 23:07:24.081061   61354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:07:24.081060   61354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:07:24.081101   61354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:07:24.081144   61354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:07:24.081188   61354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:07:24.081214   61354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:07:24.081973   61354 out.go:177] * Verifying Kubernetes components...
	I0912 23:07:24.083133   61354 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 23:07:24.097005   61354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46703
	I0912 23:07:24.097025   61354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36033
	I0912 23:07:24.097096   61354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41949
	I0912 23:07:24.097438   61354 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:07:24.097464   61354 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:07:24.097525   61354 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:07:24.097994   61354 main.go:141] libmachine: Using API Version  1
	I0912 23:07:24.098015   61354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:07:24.098141   61354 main.go:141] libmachine: Using API Version  1
	I0912 23:07:24.098165   61354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:07:24.098290   61354 main.go:141] libmachine: Using API Version  1
	I0912 23:07:24.098309   61354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:07:24.098399   61354 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:07:24.098545   61354 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:07:24.098726   61354 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:07:24.098731   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetState
	I0912 23:07:24.098994   61354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:07:24.099040   61354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:07:24.099251   61354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:07:24.099283   61354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:07:24.102412   61354 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-702201"
	W0912 23:07:24.102432   61354 addons.go:243] addon default-storageclass should already be in state true
	I0912 23:07:24.102459   61354 host.go:66] Checking if "default-k8s-diff-port-702201" exists ...
	I0912 23:07:24.102797   61354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:07:24.102835   61354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:07:24.117429   61354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46351
	I0912 23:07:24.117980   61354 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:07:24.118513   61354 main.go:141] libmachine: Using API Version  1
	I0912 23:07:24.118533   61354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:07:24.119059   61354 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:07:24.119577   61354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35337
	I0912 23:07:24.119621   61354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:07:24.119656   61354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:07:24.119717   61354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41229
	I0912 23:07:24.120047   61354 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:07:24.120129   61354 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:07:24.120532   61354 main.go:141] libmachine: Using API Version  1
	I0912 23:07:24.120553   61354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:07:24.120810   61354 main.go:141] libmachine: Using API Version  1
	I0912 23:07:24.120834   61354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:07:24.121017   61354 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:07:24.121201   61354 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:07:24.121216   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetState
	I0912 23:07:24.121347   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetState
	I0912 23:07:24.123069   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .DriverName
	I0912 23:07:24.123254   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .DriverName
	I0912 23:07:24.125055   61354 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 23:07:24.125065   61354 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0912 23:07:22.059555   62943 logs.go:123] Gathering logs for container status ...
	I0912 23:07:22.059602   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:07:22.104001   62943 logs.go:123] Gathering logs for etcd [35282e97473f2f870f5366d221b9a40e0a456b6fd44fffe321c9fe160448cc29] ...
	I0912 23:07:22.104039   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35282e97473f2f870f5366d221b9a40e0a456b6fd44fffe321c9fe160448cc29"
	I0912 23:07:22.146304   62943 logs.go:123] Gathering logs for kube-controller-manager [635fd2c2a6dd2558d88447c857c1c59b9cf6883a458ab9f62fbcd48fc94048f7] ...
	I0912 23:07:22.146342   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 635fd2c2a6dd2558d88447c857c1c59b9cf6883a458ab9f62fbcd48fc94048f7"
	I0912 23:07:24.689925   62943 system_pods.go:59] 8 kube-system pods found
	I0912 23:07:24.689959   62943 system_pods.go:61] "coredns-7c65d6cfc9-twck7" [2fb00aff-8a30-4634-a804-1419eabfe727] Running
	I0912 23:07:24.689967   62943 system_pods.go:61] "etcd-no-preload-380092" [69b6be54-dd29-47c7-b990-a64335dd6d7b] Running
	I0912 23:07:24.689974   62943 system_pods.go:61] "kube-apiserver-no-preload-380092" [10ff70db-3c74-42ad-841d-d2241de4b98e] Running
	I0912 23:07:24.689980   62943 system_pods.go:61] "kube-controller-manager-no-preload-380092" [6e91c5b2-36fc-404e-9f09-c1bc9da46774] Running
	I0912 23:07:24.689987   62943 system_pods.go:61] "kube-proxy-z4rcx" [d17caa2e-d0fe-45e8-a96c-d1cc1b55e665] Running
	I0912 23:07:24.689992   62943 system_pods.go:61] "kube-scheduler-no-preload-380092" [5c634cac-6b28-4757-ba85-891c4c2fa34e] Running
	I0912 23:07:24.690002   62943 system_pods.go:61] "metrics-server-6867b74b74-4v7f5" [10c8c536-9ca6-4e75-96f2-7324f3d3d379] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0912 23:07:24.690009   62943 system_pods.go:61] "storage-provisioner" [f173a1f6-3772-4f08-8e40-2215cc9d2878] Running
	I0912 23:07:24.690020   62943 system_pods.go:74] duration metric: took 3.948819191s to wait for pod list to return data ...
	I0912 23:07:24.690031   62943 default_sa.go:34] waiting for default service account to be created ...
	I0912 23:07:24.692936   62943 default_sa.go:45] found service account: "default"
	I0912 23:07:24.692964   62943 default_sa.go:55] duration metric: took 2.925808ms for default service account to be created ...
	I0912 23:07:24.692975   62943 system_pods.go:116] waiting for k8s-apps to be running ...
	I0912 23:07:24.699123   62943 system_pods.go:86] 8 kube-system pods found
	I0912 23:07:24.699155   62943 system_pods.go:89] "coredns-7c65d6cfc9-twck7" [2fb00aff-8a30-4634-a804-1419eabfe727] Running
	I0912 23:07:24.699164   62943 system_pods.go:89] "etcd-no-preload-380092" [69b6be54-dd29-47c7-b990-a64335dd6d7b] Running
	I0912 23:07:24.699170   62943 system_pods.go:89] "kube-apiserver-no-preload-380092" [10ff70db-3c74-42ad-841d-d2241de4b98e] Running
	I0912 23:07:24.699176   62943 system_pods.go:89] "kube-controller-manager-no-preload-380092" [6e91c5b2-36fc-404e-9f09-c1bc9da46774] Running
	I0912 23:07:24.699182   62943 system_pods.go:89] "kube-proxy-z4rcx" [d17caa2e-d0fe-45e8-a96c-d1cc1b55e665] Running
	I0912 23:07:24.699187   62943 system_pods.go:89] "kube-scheduler-no-preload-380092" [5c634cac-6b28-4757-ba85-891c4c2fa34e] Running
	I0912 23:07:24.699197   62943 system_pods.go:89] "metrics-server-6867b74b74-4v7f5" [10c8c536-9ca6-4e75-96f2-7324f3d3d379] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0912 23:07:24.699206   62943 system_pods.go:89] "storage-provisioner" [f173a1f6-3772-4f08-8e40-2215cc9d2878] Running
	I0912 23:07:24.699220   62943 system_pods.go:126] duration metric: took 6.23727ms to wait for k8s-apps to be running ...
	I0912 23:07:24.699232   62943 system_svc.go:44] waiting for kubelet service to be running ....
	I0912 23:07:24.699281   62943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 23:07:24.716425   62943 system_svc.go:56] duration metric: took 17.184595ms WaitForService to wait for kubelet
	I0912 23:07:24.716456   62943 kubeadm.go:582] duration metric: took 4m22.909700986s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 23:07:24.716480   62943 node_conditions.go:102] verifying NodePressure condition ...
	I0912 23:07:24.719606   62943 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0912 23:07:24.719632   62943 node_conditions.go:123] node cpu capacity is 2
	I0912 23:07:24.719645   62943 node_conditions.go:105] duration metric: took 3.158655ms to run NodePressure ...
	I0912 23:07:24.719660   62943 start.go:241] waiting for startup goroutines ...
	I0912 23:07:24.719669   62943 start.go:246] waiting for cluster config update ...
	I0912 23:07:24.719683   62943 start.go:255] writing updated cluster config ...
	I0912 23:07:24.719959   62943 ssh_runner.go:195] Run: rm -f paused
	I0912 23:07:24.782144   62943 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0912 23:07:24.783614   62943 out.go:177] * Done! kubectl is now configured to use "no-preload-380092" cluster and "default" namespace by default
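	(A quick post-setup sanity check for the cluster the line above reports as configured — the kubectl context name is assumed to match the minikube profile name, as minikube normally sets it:)

	    # Verify the context and that kube-system pods are up (illustrative check,
	    # not part of the test run above).
	    kubectl config current-context            # expect: no-preload-380092
	    kubectl --context no-preload-380092 get pods -n kube-system -o wide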
	I0912 23:07:24.126360   61354 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0912 23:07:24.126378   61354 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0912 23:07:24.126401   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHHostname
	I0912 23:07:24.126445   61354 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0912 23:07:24.126458   61354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0912 23:07:24.126472   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHHostname
	I0912 23:07:24.130177   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:07:24.130678   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:07:24.130719   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:07:24.130730   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:07:24.130919   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:07:24.130949   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:07:24.131134   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHPort
	I0912 23:07:24.131203   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHPort
	I0912 23:07:24.131447   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHKeyPath
	I0912 23:07:24.131494   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHKeyPath
	I0912 23:07:24.131659   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHUsername
	I0912 23:07:24.131677   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHUsername
	I0912 23:07:24.131817   61354 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/default-k8s-diff-port-702201/id_rsa Username:docker}
	I0912 23:07:24.131857   61354 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/default-k8s-diff-port-702201/id_rsa Username:docker}
	I0912 23:07:24.139030   61354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35007
	I0912 23:07:24.139501   61354 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:07:24.139949   61354 main.go:141] libmachine: Using API Version  1
	I0912 23:07:24.139973   61354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:07:24.140287   61354 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:07:24.140441   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetState
	I0912 23:07:24.141751   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .DriverName
	I0912 23:07:24.141942   61354 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0912 23:07:24.141957   61354 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0912 23:07:24.141977   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHHostname
	I0912 23:07:24.144033   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:07:24.144415   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:07:24.144563   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHPort
	I0912 23:07:24.144623   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:07:24.144723   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHKeyPath
	I0912 23:07:24.145002   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHUsername
	I0912 23:07:24.145132   61354 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/default-k8s-diff-port-702201/id_rsa Username:docker}
	I0912 23:07:24.279582   61354 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0912 23:07:24.294072   61354 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-702201" to be "Ready" ...
	I0912 23:07:24.304565   61354 node_ready.go:49] node "default-k8s-diff-port-702201" has status "Ready":"True"
	I0912 23:07:24.304588   61354 node_ready.go:38] duration metric: took 10.479351ms for node "default-k8s-diff-port-702201" to be "Ready" ...
	I0912 23:07:24.304599   61354 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 23:07:24.310618   61354 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-702201" in "kube-system" namespace to be "Ready" ...
	I0912 23:07:24.359086   61354 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0912 23:07:24.390490   61354 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0912 23:07:24.409964   61354 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0912 23:07:24.409990   61354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0912 23:07:24.445852   61354 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0912 23:07:24.445880   61354 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0912 23:07:24.502567   61354 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0912 23:07:24.502591   61354 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0912 23:07:24.578857   61354 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
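	(The metrics-server addon applied above registers an aggregated API so that `kubectl top` can reach the server. The addon manifests themselves are not reproduced in this log; the APIService that metrics-apiservice.yaml conventionally creates has roughly the following shape — an illustrative sketch with assumed field values, not the literal file applied in this run:)

	    # Sketch of the APIService conventionally registered for metrics-server.
	    cat <<'EOF' | kubectl --context default-k8s-diff-port-702201 apply -f -
	    apiVersion: apiregistration.k8s.io/v1
	    kind: APIService
	    metadata:
	      name: v1beta1.metrics.k8s.io
	    spec:
	      service:
	        name: metrics-server
	        namespace: kube-system
	      group: metrics.k8s.io
	      version: v1beta1
	      insecureSkipTLSVerify: true
	      groupPriorityMinimum: 100
	      versionPriority: 100
	    EOF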
	I0912 23:07:25.348387   61354 main.go:141] libmachine: Making call to close driver server
	I0912 23:07:25.348415   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .Close
	I0912 23:07:25.348715   61354 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:07:25.348732   61354 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:07:25.348740   61354 main.go:141] libmachine: Making call to close driver server
	I0912 23:07:25.348748   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .Close
	I0912 23:07:25.348766   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | Closing plugin on server side
	I0912 23:07:25.348869   61354 main.go:141] libmachine: Making call to close driver server
	I0912 23:07:25.348880   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .Close
	I0912 23:07:25.349007   61354 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:07:25.349022   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | Closing plugin on server side
	I0912 23:07:25.349026   61354 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:07:25.349181   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | Closing plugin on server side
	I0912 23:07:25.349209   61354 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:07:25.349216   61354 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:07:25.349224   61354 main.go:141] libmachine: Making call to close driver server
	I0912 23:07:25.349231   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .Close
	I0912 23:07:25.349497   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | Closing plugin on server side
	I0912 23:07:25.349513   61354 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:07:25.349520   61354 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:07:25.377320   61354 main.go:141] libmachine: Making call to close driver server
	I0912 23:07:25.377345   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .Close
	I0912 23:07:25.377662   61354 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:07:25.377683   61354 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:07:25.377685   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | Closing plugin on server side
	I0912 23:07:25.851960   61354 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.273059994s)
	I0912 23:07:25.852019   61354 main.go:141] libmachine: Making call to close driver server
	I0912 23:07:25.852037   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .Close
	I0912 23:07:25.852373   61354 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:07:25.852398   61354 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:07:25.852408   61354 main.go:141] libmachine: Making call to close driver server
	I0912 23:07:25.852417   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .Close
	I0912 23:07:25.852671   61354 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:07:25.852690   61354 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:07:25.852701   61354 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-702201"
	I0912 23:07:25.854523   61354 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0912 23:07:25.855764   61354 addons.go:510] duration metric: took 1.775274823s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0912 23:07:26.343219   61354 pod_ready.go:103] pod "etcd-default-k8s-diff-port-702201" in "kube-system" namespace has status "Ready":"False"
	I0912 23:07:26.817338   61354 pod_ready.go:93] pod "etcd-default-k8s-diff-port-702201" in "kube-system" namespace has status "Ready":"True"
	I0912 23:07:26.817361   61354 pod_ready.go:82] duration metric: took 2.506720235s for pod "etcd-default-k8s-diff-port-702201" in "kube-system" namespace to be "Ready" ...
	I0912 23:07:26.817371   61354 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-702201" in "kube-system" namespace to be "Ready" ...
	I0912 23:07:28.823968   61354 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-702201" in "kube-system" namespace has status "Ready":"False"
	I0912 23:07:31.324504   61354 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-702201" in "kube-system" namespace has status "Ready":"False"
	I0912 23:07:33.824198   61354 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-702201" in "kube-system" namespace has status "Ready":"True"
	I0912 23:07:33.824218   61354 pod_ready.go:82] duration metric: took 7.006841754s for pod "kube-apiserver-default-k8s-diff-port-702201" in "kube-system" namespace to be "Ready" ...
	I0912 23:07:33.824228   61354 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-702201" in "kube-system" namespace to be "Ready" ...
	I0912 23:07:33.829882   61354 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-702201" in "kube-system" namespace has status "Ready":"True"
	I0912 23:07:33.829903   61354 pod_ready.go:82] duration metric: took 5.668963ms for pod "kube-controller-manager-default-k8s-diff-port-702201" in "kube-system" namespace to be "Ready" ...
	I0912 23:07:33.829912   61354 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-702201" in "kube-system" namespace to be "Ready" ...
	I0912 23:07:33.834773   61354 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-702201" in "kube-system" namespace has status "Ready":"True"
	I0912 23:07:33.834796   61354 pod_ready.go:82] duration metric: took 4.8776ms for pod "kube-scheduler-default-k8s-diff-port-702201" in "kube-system" namespace to be "Ready" ...
	I0912 23:07:33.834805   61354 pod_ready.go:39] duration metric: took 9.530195098s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
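	(The readiness wait summarized above is roughly equivalent to waiting for the Ready condition on pods matching the label selectors listed in that log line. A hand-run approximation with kubectl — context name assumed to match the minikube profile:)

	    # Rough manual equivalent of minikube's system-critical pod wait, using the
	    # component/k8s-app selectors from the log line above.
	    for sel in component=etcd component=kube-apiserver component=kube-controller-manager \
	               component=kube-scheduler k8s-app=kube-proxy k8s-app=kube-dns; do
	      kubectl --context default-k8s-diff-port-702201 -n kube-system \
	        wait --for=condition=Ready pod -l "$sel" --timeout=6m
	    done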
	I0912 23:07:33.834819   61354 api_server.go:52] waiting for apiserver process to appear ...
	I0912 23:07:33.834864   61354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:07:33.850650   61354 api_server.go:72] duration metric: took 9.770155376s to wait for apiserver process to appear ...
	I0912 23:07:33.850671   61354 api_server.go:88] waiting for apiserver healthz status ...
	I0912 23:07:33.850686   61354 api_server.go:253] Checking apiserver healthz at https://192.168.39.214:8444/healthz ...
	I0912 23:07:33.855112   61354 api_server.go:279] https://192.168.39.214:8444/healthz returned 200:
	ok
	I0912 23:07:33.856195   61354 api_server.go:141] control plane version: v1.31.1
	I0912 23:07:33.856213   61354 api_server.go:131] duration metric: took 5.535983ms to wait for apiserver health ...
	I0912 23:07:33.856220   61354 system_pods.go:43] waiting for kube-system pods to appear ...
	I0912 23:07:33.861385   61354 system_pods.go:59] 9 kube-system pods found
	I0912 23:07:33.861415   61354 system_pods.go:61] "coredns-7c65d6cfc9-f5spz" [6a0f69e9-66eb-4e59-a173-1d6f638e2211] Running
	I0912 23:07:33.861422   61354 system_pods.go:61] "coredns-7c65d6cfc9-qhbgf" [0af4199f-b09c-4ab8-8170-b8941d3ece7a] Running
	I0912 23:07:33.861429   61354 system_pods.go:61] "etcd-default-k8s-diff-port-702201" [d8d2e9bb-c8de-4aac-9373-ac9b6d3ec96a] Running
	I0912 23:07:33.861435   61354 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-702201" [7c26cd67-e192-4e8c-a3e1-e7e76a87fae4] Running
	I0912 23:07:33.861440   61354 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-702201" [53553f06-02d5-4603-8418-6bf2ff7b6a25] Running
	I0912 23:07:33.861451   61354 system_pods.go:61] "kube-proxy-mv8ws" [51cb20c3-8445-4ce9-8484-5138f3d0ed57] Running
	I0912 23:07:33.861457   61354 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-702201" [cc25c635-37f2-4186-b5ea-958e95fc4ab2] Running
	I0912 23:07:33.861466   61354 system_pods.go:61] "metrics-server-6867b74b74-w2dvn" [778a4742-5b80-4485-956e-8f169e6dcf8f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0912 23:07:33.861476   61354 system_pods.go:61] "storage-provisioner" [66bc6f77-b774-4478-80d0-a1027802e179] Running
	I0912 23:07:33.861486   61354 system_pods.go:74] duration metric: took 5.260046ms to wait for pod list to return data ...
	I0912 23:07:33.861497   61354 default_sa.go:34] waiting for default service account to be created ...
	I0912 23:07:33.864254   61354 default_sa.go:45] found service account: "default"
	I0912 23:07:33.864272   61354 default_sa.go:55] duration metric: took 2.766344ms for default service account to be created ...
	I0912 23:07:33.864280   61354 system_pods.go:116] waiting for k8s-apps to be running ...
	I0912 23:07:33.869281   61354 system_pods.go:86] 9 kube-system pods found
	I0912 23:07:33.869310   61354 system_pods.go:89] "coredns-7c65d6cfc9-f5spz" [6a0f69e9-66eb-4e59-a173-1d6f638e2211] Running
	I0912 23:07:33.869315   61354 system_pods.go:89] "coredns-7c65d6cfc9-qhbgf" [0af4199f-b09c-4ab8-8170-b8941d3ece7a] Running
	I0912 23:07:33.869320   61354 system_pods.go:89] "etcd-default-k8s-diff-port-702201" [d8d2e9bb-c8de-4aac-9373-ac9b6d3ec96a] Running
	I0912 23:07:33.869324   61354 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-702201" [7c26cd67-e192-4e8c-a3e1-e7e76a87fae4] Running
	I0912 23:07:33.869328   61354 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-702201" [53553f06-02d5-4603-8418-6bf2ff7b6a25] Running
	I0912 23:07:33.869332   61354 system_pods.go:89] "kube-proxy-mv8ws" [51cb20c3-8445-4ce9-8484-5138f3d0ed57] Running
	I0912 23:07:33.869335   61354 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-702201" [cc25c635-37f2-4186-b5ea-958e95fc4ab2] Running
	I0912 23:07:33.869341   61354 system_pods.go:89] "metrics-server-6867b74b74-w2dvn" [778a4742-5b80-4485-956e-8f169e6dcf8f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0912 23:07:33.869349   61354 system_pods.go:89] "storage-provisioner" [66bc6f77-b774-4478-80d0-a1027802e179] Running
	I0912 23:07:33.869362   61354 system_pods.go:126] duration metric: took 5.073128ms to wait for k8s-apps to be running ...
	I0912 23:07:33.869371   61354 system_svc.go:44] waiting for kubelet service to be running ....
	I0912 23:07:33.869410   61354 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 23:07:33.885244   61354 system_svc.go:56] duration metric: took 15.863852ms WaitForService to wait for kubelet
	I0912 23:07:33.885284   61354 kubeadm.go:582] duration metric: took 9.804792247s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 23:07:33.885302   61354 node_conditions.go:102] verifying NodePressure condition ...
	I0912 23:07:33.889009   61354 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0912 23:07:33.889041   61354 node_conditions.go:123] node cpu capacity is 2
	I0912 23:07:33.889054   61354 node_conditions.go:105] duration metric: took 3.746289ms to run NodePressure ...
	I0912 23:07:33.889069   61354 start.go:241] waiting for startup goroutines ...
	I0912 23:07:33.889079   61354 start.go:246] waiting for cluster config update ...
	I0912 23:07:33.889092   61354 start.go:255] writing updated cluster config ...
	I0912 23:07:33.889427   61354 ssh_runner.go:195] Run: rm -f paused
	I0912 23:07:33.940577   61354 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0912 23:07:33.942471   61354 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-702201" cluster and "default" namespace by default
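	The pod_ready.go wait above is minikube polling each control-plane pod in kube-system until its Ready condition reports True, then moving on to the apiserver healthz probe on https://192.168.39.214:8444/healthz. A minimal client-go sketch of that kind of readiness wait is shown below; it is not minikube's actual implementation, and the kubeconfig path, pod name, and 6-minute deadline are illustrative values taken from the log.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod's Ready condition is True.
	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Assumptions: kubeconfig path and pod name are taken from this log run.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(6 * time.Minute)
		for time.Now().Before(deadline) {
			pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(),
				"etcd-default-k8s-diff-port-702201", metav1.GetOptions{})
			if err == nil && podReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for pod to be Ready")
	}

	The same loop pattern applies to the apiserver, controller-manager, and scheduler pods waited on above.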
	I0912 23:07:47.603025   62386 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0912 23:07:47.603235   62386 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0912 23:07:47.604779   62386 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0912 23:07:47.604883   62386 kubeadm.go:310] [preflight] Running pre-flight checks
	I0912 23:07:47.605084   62386 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0912 23:07:47.605337   62386 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0912 23:07:47.605566   62386 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0912 23:07:47.605831   62386 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0912 23:07:47.607788   62386 out.go:235]   - Generating certificates and keys ...
	I0912 23:07:47.607900   62386 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0912 23:07:47.608013   62386 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0912 23:07:47.608164   62386 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0912 23:07:47.608343   62386 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0912 23:07:47.608510   62386 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0912 23:07:47.608593   62386 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0912 23:07:47.608669   62386 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0912 23:07:47.608742   62386 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0912 23:07:47.608833   62386 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0912 23:07:47.608899   62386 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0912 23:07:47.608932   62386 kubeadm.go:310] [certs] Using the existing "sa" key
	I0912 23:07:47.608991   62386 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0912 23:07:47.609042   62386 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0912 23:07:47.609118   62386 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0912 23:07:47.609216   62386 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0912 23:07:47.609310   62386 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0912 23:07:47.609448   62386 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0912 23:07:47.609540   62386 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0912 23:07:47.609604   62386 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0912 23:07:47.609731   62386 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0912 23:07:47.611516   62386 out.go:235]   - Booting up control plane ...
	I0912 23:07:47.611622   62386 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0912 23:07:47.611724   62386 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0912 23:07:47.611811   62386 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0912 23:07:47.611912   62386 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0912 23:07:47.612092   62386 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0912 23:07:47.612156   62386 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0912 23:07:47.612234   62386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0912 23:07:47.612485   62386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0912 23:07:47.612557   62386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0912 23:07:47.612746   62386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0912 23:07:47.612836   62386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0912 23:07:47.613060   62386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0912 23:07:47.613145   62386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0912 23:07:47.613347   62386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0912 23:07:47.613406   62386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0912 23:07:47.613573   62386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0912 23:07:47.613583   62386 kubeadm.go:310] 
	I0912 23:07:47.613646   62386 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0912 23:07:47.613700   62386 kubeadm.go:310] 		timed out waiting for the condition
	I0912 23:07:47.613712   62386 kubeadm.go:310] 
	I0912 23:07:47.613756   62386 kubeadm.go:310] 	This error is likely caused by:
	I0912 23:07:47.613804   62386 kubeadm.go:310] 		- The kubelet is not running
	I0912 23:07:47.613912   62386 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0912 23:07:47.613924   62386 kubeadm.go:310] 
	I0912 23:07:47.614027   62386 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0912 23:07:47.614062   62386 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0912 23:07:47.614110   62386 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0912 23:07:47.614123   62386 kubeadm.go:310] 
	I0912 23:07:47.614256   62386 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0912 23:07:47.614381   62386 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0912 23:07:47.614393   62386 kubeadm.go:310] 
	I0912 23:07:47.614480   62386 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0912 23:07:47.614626   62386 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0912 23:07:47.614724   62386 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0912 23:07:47.614825   62386 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0912 23:07:47.614854   62386 kubeadm.go:310] 
	W0912 23:07:47.614957   62386 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
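	Every [kubelet-check] failure above is kubeadm probing the kubelet health endpoint on port 10248; "connection refused" means no kubelet process is listening at all, which matches the later K8S_KUBELET_NOT_RUNNING exit. A minimal sketch of the same probe, assuming the default http://localhost:10248/healthz endpoint and kubeadm's 40-second initial timeout:

	package main

	import (
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// Poll the kubelet health endpoint the way kubeadm's [kubelet-check] does.
		// A healthy kubelet answers 200 "ok"; a connection-refused error means the
		// kubelet is not running at all.
		client := &http.Client{Timeout: 2 * time.Second}
		deadline := time.Now().Add(40 * time.Second)
		for time.Now().Before(deadline) {
			resp, err := client.Get("http://localhost:10248/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				fmt.Printf("kubelet healthz: %d %s\n", resp.StatusCode, body)
				return
			}
			fmt.Println("kubelet not reachable yet:", err)
			time.Sleep(5 * time.Second)
		}
		fmt.Println("gave up: kubelet never answered on :10248")
	}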
	
	I0912 23:07:47.615000   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0912 23:07:48.085695   62386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 23:07:48.100416   62386 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0912 23:07:48.109607   62386 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0912 23:07:48.109635   62386 kubeadm.go:157] found existing configuration files:
	
	I0912 23:07:48.109686   62386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0912 23:07:48.118174   62386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0912 23:07:48.118235   62386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0912 23:07:48.127100   62386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0912 23:07:48.135945   62386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0912 23:07:48.136006   62386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0912 23:07:48.145057   62386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0912 23:07:48.153832   62386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0912 23:07:48.153899   62386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0912 23:07:48.163261   62386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0912 23:07:48.172155   62386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0912 23:07:48.172208   62386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
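	The grep/rm sequence above is minikube's stale-config cleanup: before retrying kubeadm init, it keeps an existing kubeconfig under /etc/kubernetes only if it already references the expected control-plane endpoint, and removes it otherwise (here every file is simply missing). A rough sketch of that check, with the endpoint string and file list copied from the log rather than from minikube's source:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		// Keep a kubeconfig only if it already points at the expected
		// control-plane endpoint; otherwise delete it so kubeadm rewrites it.
		// Run as root on the node; paths and marker are taken from the log above.
		const marker = "https://control-plane.minikube.internal:8443"
		files := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, f := range files {
			data, err := os.ReadFile(f)
			if err != nil || !strings.Contains(string(data), marker) {
				fmt.Printf("removing %s (missing or stale)\n", f)
				_ = os.Remove(f)
				continue
			}
			fmt.Printf("keeping %s\n", f)
		}
	}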
	I0912 23:07:48.181592   62386 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0912 23:07:48.253671   62386 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0912 23:07:48.253728   62386 kubeadm.go:310] [preflight] Running pre-flight checks
	I0912 23:07:48.394463   62386 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0912 23:07:48.394622   62386 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0912 23:07:48.394773   62386 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0912 23:07:48.581336   62386 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0912 23:07:48.583286   62386 out.go:235]   - Generating certificates and keys ...
	I0912 23:07:48.583391   62386 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0912 23:07:48.583461   62386 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0912 23:07:48.583576   62386 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0912 23:07:48.583668   62386 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0912 23:07:48.583751   62386 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0912 23:07:48.583830   62386 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0912 23:07:48.583935   62386 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0912 23:07:48.584060   62386 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0912 23:07:48.584176   62386 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0912 23:07:48.584291   62386 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0912 23:07:48.584349   62386 kubeadm.go:310] [certs] Using the existing "sa" key
	I0912 23:07:48.584433   62386 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0912 23:07:48.823726   62386 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0912 23:07:49.148359   62386 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0912 23:07:49.679842   62386 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0912 23:07:50.116403   62386 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0912 23:07:50.137409   62386 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0912 23:07:50.137512   62386 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0912 23:07:50.137586   62386 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0912 23:07:50.279387   62386 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0912 23:07:50.281202   62386 out.go:235]   - Booting up control plane ...
	I0912 23:07:50.281311   62386 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0912 23:07:50.284914   62386 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0912 23:07:50.285938   62386 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0912 23:07:50.286646   62386 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0912 23:07:50.288744   62386 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0912 23:08:30.291301   62386 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0912 23:08:30.291387   62386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0912 23:08:30.291586   62386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0912 23:08:35.292084   62386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0912 23:08:35.292299   62386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0912 23:08:45.293141   62386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0912 23:08:45.293363   62386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0912 23:09:05.293977   62386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0912 23:09:05.294218   62386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0912 23:09:45.292498   62386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0912 23:09:45.292713   62386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0912 23:09:45.292752   62386 kubeadm.go:310] 
	I0912 23:09:45.292839   62386 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0912 23:09:45.292884   62386 kubeadm.go:310] 		timed out waiting for the condition
	I0912 23:09:45.292892   62386 kubeadm.go:310] 
	I0912 23:09:45.292944   62386 kubeadm.go:310] 	This error is likely caused by:
	I0912 23:09:45.292998   62386 kubeadm.go:310] 		- The kubelet is not running
	I0912 23:09:45.293153   62386 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0912 23:09:45.293165   62386 kubeadm.go:310] 
	I0912 23:09:45.293277   62386 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0912 23:09:45.293333   62386 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0912 23:09:45.293361   62386 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0912 23:09:45.293378   62386 kubeadm.go:310] 
	I0912 23:09:45.293528   62386 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0912 23:09:45.293668   62386 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0912 23:09:45.293679   62386 kubeadm.go:310] 
	I0912 23:09:45.293840   62386 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0912 23:09:45.293962   62386 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0912 23:09:45.294033   62386 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0912 23:09:45.294142   62386 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0912 23:09:45.294155   62386 kubeadm.go:310] 
	I0912 23:09:45.294801   62386 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0912 23:09:45.294914   62386 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0912 23:09:45.295004   62386 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0912 23:09:45.295097   62386 kubeadm.go:394] duration metric: took 7m57.408601522s to StartCluster
	I0912 23:09:45.295168   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:09:45.295233   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:09:45.336726   62386 cri.go:89] found id: ""
	I0912 23:09:45.336767   62386 logs.go:276] 0 containers: []
	W0912 23:09:45.336777   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:09:45.336785   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:09:45.336847   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:09:45.374528   62386 cri.go:89] found id: ""
	I0912 23:09:45.374555   62386 logs.go:276] 0 containers: []
	W0912 23:09:45.374576   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:09:45.374584   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:09:45.374649   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:09:45.409321   62386 cri.go:89] found id: ""
	I0912 23:09:45.409462   62386 logs.go:276] 0 containers: []
	W0912 23:09:45.409497   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:09:45.409508   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:09:45.409582   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:09:45.442204   62386 cri.go:89] found id: ""
	I0912 23:09:45.442228   62386 logs.go:276] 0 containers: []
	W0912 23:09:45.442238   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:09:45.442279   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:09:45.442339   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:09:45.478874   62386 cri.go:89] found id: ""
	I0912 23:09:45.478897   62386 logs.go:276] 0 containers: []
	W0912 23:09:45.478904   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:09:45.478909   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:09:45.478961   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:09:45.520162   62386 cri.go:89] found id: ""
	I0912 23:09:45.520191   62386 logs.go:276] 0 containers: []
	W0912 23:09:45.520199   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:09:45.520205   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:09:45.520251   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:09:45.551580   62386 cri.go:89] found id: ""
	I0912 23:09:45.551611   62386 logs.go:276] 0 containers: []
	W0912 23:09:45.551622   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:09:45.551629   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:09:45.551693   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:09:45.585468   62386 cri.go:89] found id: ""
	I0912 23:09:45.585498   62386 logs.go:276] 0 containers: []
	W0912 23:09:45.585505   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
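	Each empty found id: "" result above comes from asking CRI-O, by container name, whether a control-plane container was ever created; since the kubelet never came up, there is nothing to inspect. A small sketch that issues the same crictl queries, assuming crictl is on PATH on the node and sudo access is available:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Mirror the log-gathering step above: list all containers (running or
		// exited) for each control-plane component and count the returned IDs.
		for _, name := range []string{"kube-apiserver", "etcd", "kube-scheduler", "kube-controller-manager"} {
			out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			if err != nil {
				fmt.Printf("%s: crictl failed: %v\n", name, err)
				continue
			}
			ids := strings.Fields(string(out))
			fmt.Printf("%s: %d container(s) found\n", name, len(ids))
		}
	}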
	I0912 23:09:45.585514   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:09:45.585525   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:09:45.640731   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:09:45.640782   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:09:45.656797   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:09:45.656833   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:09:45.735064   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:09:45.735083   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:09:45.735100   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:09:45.848695   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:09:45.848739   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0912 23:09:45.907495   62386 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0912 23:09:45.907561   62386 out.go:270] * 
	W0912 23:09:45.907628   62386 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0912 23:09:45.907646   62386 out.go:270] * 
	W0912 23:09:45.908494   62386 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0912 23:09:45.911502   62386 out.go:201] 
	W0912 23:09:45.912387   62386 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0912 23:09:45.912424   62386 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0912 23:09:45.912442   62386 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0912 23:09:45.913632   62386 out.go:201] 
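	When a run ends in K8S_KUBELET_NOT_RUNNING as above, both the kubeadm output and the minikube suggestion point at the kubelet unit itself (including the hint to pass --extra-config=kubelet.cgroup-driver=systemd to minikube start). A tiny, illustrative sketch that collects the two recommended diagnostics on the node; it only shells out to the commands quoted in the log:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Gather the diagnostics kubeadm recommends when the kubelet never
		// becomes healthy: unit status and the last kubelet journal entries.
		cmds := [][]string{
			{"systemctl", "status", "kubelet", "--no-pager"},
			{"journalctl", "-xeu", "kubelet", "--no-pager", "-n", "100"},
		}
		for _, c := range cmds {
			out, err := exec.Command("sudo", c...).CombinedOutput()
			fmt.Printf("$ sudo %v (err=%v)\n%s\n", c, err, out)
		}
	}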
	
	
	==> CRI-O <==
	Sep 12 23:15:00 embed-certs-378112 crio[714]: time="2024-09-12 23:15:00.247518219Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726182900247490734,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4b954e82-f304-4f7e-892c-be623117b2d2 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 23:15:00 embed-certs-378112 crio[714]: time="2024-09-12 23:15:00.248145898Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c9294660-5f4c-4cc3-a16f-cab280d57436 name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 23:15:00 embed-certs-378112 crio[714]: time="2024-09-12 23:15:00.248237114Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c9294660-5f4c-4cc3-a16f-cab280d57436 name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 23:15:00 embed-certs-378112 crio[714]: time="2024-09-12 23:15:00.248507570Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0e48efc9ba5a488b4984057a9785fbda6fcefcea047948ac3c83f09afd28efdb,PodSandboxId:2fb05fcc4e0e9920e2d59727a2cc76564e7d79c6fa20bb4360c55a088b1d3be4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726182123153255983,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1840d2a-8e08-4fa2-9ed5-ac96fb0baf4d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:580f45d8e367ee2eb48f1a7950e3f57eb992f6ed5e039800e7b69459dc172d25,PodSandboxId:01bfe26a78e45f77488fc831b37f2ece2ba5826151a49d77cc85132fa5292880,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1726182103061405869,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 68c26c3e-1c5b-4b9c-8316-020988da7706,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7841230606dafeab68b501c35a871c84f440d0d9f4cde95a302770671152f168,PodSandboxId:8f96256aac3db0033853f6deee9a8ce0e888a33743507d6efd873689491e7a5f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726182100061946712,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-m8t6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93c63198-ebd2-4e88-9be8-912425b1eb84,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b058233860f229334437280ee5ccf5d203eaf352bae41a6af7d4602b55a3d64,PodSandboxId:dbdcc135a5ea52851aaa4633c8f13d8d827a9ec52abf10d66dd1cf255f1327e4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726182092323356857,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fvbbq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b172754e-bb5a-40ba-a
9be-a7632081defc,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdb0e5ac691d286cca5fe46e3435ece952c4b9cdc7df3481fec68adf1403689f,PodSandboxId:2fb05fcc4e0e9920e2d59727a2cc76564e7d79c6fa20bb4360c55a088b1d3be4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726182092301286319,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1840d2a-8e08-4fa2-9ed5-ac96fb0ba
f4d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:115e1e7911747aa5930ece5298af89f0209bf07e0336cd598b8901e2a2af2e09,PodSandboxId:9bcfe02b74318c91cb7753956f427d79a4071e45141830c9959f59e49bb3419c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726182088642330869,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-378112,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9ed9552d16c564610caec50232e36dc,},Annota
tions:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc8c605cca940c298fda4fb0d3a0e55234ab799e5f4d684817c1c33aa6752880,PodSandboxId:2aaeb742345d1afdd923ef084f1923fff9f772f7a9881851bba29c3e952d05bc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726182088638381922,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-378112,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4afeaa41ef3d550a5d04908f01cf2197,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54dd60703518d12cf3cb62102bf8bec31d1e6b0f218f4c26391b234d58111e31,PodSandboxId:c884f0f2f98b0f1784695585a6347618f05884233214587af251a66ba47cfeb3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726182088606320699,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-378112,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16b799bcd1cc9be5e956c3ddd45af143,},Ann
otations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e099ac110cb9ec18f36a0fbbdb769c4bb0c2642bf081442d3054c280c663730f,PodSandboxId:a42aeaf3e710a4ec4209796224494d9e1920866a81e68dee43aee7dcc6871eed,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726182088617828827,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-378112,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc0bb257a34a1f166fb9f89281b2e1d6,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c9294660-5f4c-4cc3-a16f-cab280d57436 name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 23:15:00 embed-certs-378112 crio[714]: time="2024-09-12 23:15:00.285311617Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a3013461-cc31-46b8-b7e6-0cb5905c10ab name=/runtime.v1.RuntimeService/Version
	Sep 12 23:15:00 embed-certs-378112 crio[714]: time="2024-09-12 23:15:00.285402183Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a3013461-cc31-46b8-b7e6-0cb5905c10ab name=/runtime.v1.RuntimeService/Version
	Sep 12 23:15:00 embed-certs-378112 crio[714]: time="2024-09-12 23:15:00.286559502Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7e61040c-9db5-4829-974c-0247869043f0 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 23:15:00 embed-certs-378112 crio[714]: time="2024-09-12 23:15:00.287032401Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726182900287004274,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7e61040c-9db5-4829-974c-0247869043f0 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 23:15:00 embed-certs-378112 crio[714]: time="2024-09-12 23:15:00.287502153Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=74061885-6ded-4b46-bd8e-9d6da7534561 name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 23:15:00 embed-certs-378112 crio[714]: time="2024-09-12 23:15:00.287570218Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=74061885-6ded-4b46-bd8e-9d6da7534561 name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 23:15:00 embed-certs-378112 crio[714]: time="2024-09-12 23:15:00.287817716Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0e48efc9ba5a488b4984057a9785fbda6fcefcea047948ac3c83f09afd28efdb,PodSandboxId:2fb05fcc4e0e9920e2d59727a2cc76564e7d79c6fa20bb4360c55a088b1d3be4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726182123153255983,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1840d2a-8e08-4fa2-9ed5-ac96fb0baf4d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:580f45d8e367ee2eb48f1a7950e3f57eb992f6ed5e039800e7b69459dc172d25,PodSandboxId:01bfe26a78e45f77488fc831b37f2ece2ba5826151a49d77cc85132fa5292880,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1726182103061405869,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 68c26c3e-1c5b-4b9c-8316-020988da7706,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7841230606dafeab68b501c35a871c84f440d0d9f4cde95a302770671152f168,PodSandboxId:8f96256aac3db0033853f6deee9a8ce0e888a33743507d6efd873689491e7a5f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726182100061946712,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-m8t6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93c63198-ebd2-4e88-9be8-912425b1eb84,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b058233860f229334437280ee5ccf5d203eaf352bae41a6af7d4602b55a3d64,PodSandboxId:dbdcc135a5ea52851aaa4633c8f13d8d827a9ec52abf10d66dd1cf255f1327e4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726182092323356857,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fvbbq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b172754e-bb5a-40ba-a
9be-a7632081defc,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdb0e5ac691d286cca5fe46e3435ece952c4b9cdc7df3481fec68adf1403689f,PodSandboxId:2fb05fcc4e0e9920e2d59727a2cc76564e7d79c6fa20bb4360c55a088b1d3be4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726182092301286319,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1840d2a-8e08-4fa2-9ed5-ac96fb0ba
f4d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:115e1e7911747aa5930ece5298af89f0209bf07e0336cd598b8901e2a2af2e09,PodSandboxId:9bcfe02b74318c91cb7753956f427d79a4071e45141830c9959f59e49bb3419c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726182088642330869,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-378112,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9ed9552d16c564610caec50232e36dc,},Annota
tions:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc8c605cca940c298fda4fb0d3a0e55234ab799e5f4d684817c1c33aa6752880,PodSandboxId:2aaeb742345d1afdd923ef084f1923fff9f772f7a9881851bba29c3e952d05bc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726182088638381922,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-378112,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4afeaa41ef3d550a5d04908f01cf2197,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54dd60703518d12cf3cb62102bf8bec31d1e6b0f218f4c26391b234d58111e31,PodSandboxId:c884f0f2f98b0f1784695585a6347618f05884233214587af251a66ba47cfeb3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726182088606320699,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-378112,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16b799bcd1cc9be5e956c3ddd45af143,},Ann
otations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e099ac110cb9ec18f36a0fbbdb769c4bb0c2642bf081442d3054c280c663730f,PodSandboxId:a42aeaf3e710a4ec4209796224494d9e1920866a81e68dee43aee7dcc6871eed,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726182088617828827,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-378112,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc0bb257a34a1f166fb9f89281b2e1d6,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=74061885-6ded-4b46-bd8e-9d6da7534561 name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 23:15:00 embed-certs-378112 crio[714]: time="2024-09-12 23:15:00.324077246Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b2e32cb9-5151-434b-82d6-7ad4d1828656 name=/runtime.v1.RuntimeService/Version
	Sep 12 23:15:00 embed-certs-378112 crio[714]: time="2024-09-12 23:15:00.324158635Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b2e32cb9-5151-434b-82d6-7ad4d1828656 name=/runtime.v1.RuntimeService/Version
	Sep 12 23:15:00 embed-certs-378112 crio[714]: time="2024-09-12 23:15:00.325219948Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=12cc9d3a-63d6-40bb-a996-902d5b562ff7 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 23:15:00 embed-certs-378112 crio[714]: time="2024-09-12 23:15:00.325804101Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726182900325780184,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=12cc9d3a-63d6-40bb-a996-902d5b562ff7 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 23:15:00 embed-certs-378112 crio[714]: time="2024-09-12 23:15:00.326297411Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7330c2b8-b9fd-494e-94ef-c1755898b932 name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 23:15:00 embed-certs-378112 crio[714]: time="2024-09-12 23:15:00.326363519Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7330c2b8-b9fd-494e-94ef-c1755898b932 name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 23:15:00 embed-certs-378112 crio[714]: time="2024-09-12 23:15:00.326571390Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0e48efc9ba5a488b4984057a9785fbda6fcefcea047948ac3c83f09afd28efdb,PodSandboxId:2fb05fcc4e0e9920e2d59727a2cc76564e7d79c6fa20bb4360c55a088b1d3be4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726182123153255983,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1840d2a-8e08-4fa2-9ed5-ac96fb0baf4d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:580f45d8e367ee2eb48f1a7950e3f57eb992f6ed5e039800e7b69459dc172d25,PodSandboxId:01bfe26a78e45f77488fc831b37f2ece2ba5826151a49d77cc85132fa5292880,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1726182103061405869,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 68c26c3e-1c5b-4b9c-8316-020988da7706,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7841230606dafeab68b501c35a871c84f440d0d9f4cde95a302770671152f168,PodSandboxId:8f96256aac3db0033853f6deee9a8ce0e888a33743507d6efd873689491e7a5f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726182100061946712,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-m8t6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93c63198-ebd2-4e88-9be8-912425b1eb84,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b058233860f229334437280ee5ccf5d203eaf352bae41a6af7d4602b55a3d64,PodSandboxId:dbdcc135a5ea52851aaa4633c8f13d8d827a9ec52abf10d66dd1cf255f1327e4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726182092323356857,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fvbbq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b172754e-bb5a-40ba-a
9be-a7632081defc,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdb0e5ac691d286cca5fe46e3435ece952c4b9cdc7df3481fec68adf1403689f,PodSandboxId:2fb05fcc4e0e9920e2d59727a2cc76564e7d79c6fa20bb4360c55a088b1d3be4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726182092301286319,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1840d2a-8e08-4fa2-9ed5-ac96fb0ba
f4d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:115e1e7911747aa5930ece5298af89f0209bf07e0336cd598b8901e2a2af2e09,PodSandboxId:9bcfe02b74318c91cb7753956f427d79a4071e45141830c9959f59e49bb3419c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726182088642330869,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-378112,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9ed9552d16c564610caec50232e36dc,},Annota
tions:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc8c605cca940c298fda4fb0d3a0e55234ab799e5f4d684817c1c33aa6752880,PodSandboxId:2aaeb742345d1afdd923ef084f1923fff9f772f7a9881851bba29c3e952d05bc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726182088638381922,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-378112,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4afeaa41ef3d550a5d04908f01cf2197,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54dd60703518d12cf3cb62102bf8bec31d1e6b0f218f4c26391b234d58111e31,PodSandboxId:c884f0f2f98b0f1784695585a6347618f05884233214587af251a66ba47cfeb3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726182088606320699,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-378112,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16b799bcd1cc9be5e956c3ddd45af143,},Ann
otations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e099ac110cb9ec18f36a0fbbdb769c4bb0c2642bf081442d3054c280c663730f,PodSandboxId:a42aeaf3e710a4ec4209796224494d9e1920866a81e68dee43aee7dcc6871eed,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726182088617828827,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-378112,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc0bb257a34a1f166fb9f89281b2e1d6,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7330c2b8-b9fd-494e-94ef-c1755898b932 name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 23:15:00 embed-certs-378112 crio[714]: time="2024-09-12 23:15:00.359369245Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=074cee23-2386-493c-ad75-30a79f0c00f1 name=/runtime.v1.RuntimeService/Version
	Sep 12 23:15:00 embed-certs-378112 crio[714]: time="2024-09-12 23:15:00.359452558Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=074cee23-2386-493c-ad75-30a79f0c00f1 name=/runtime.v1.RuntimeService/Version
	Sep 12 23:15:00 embed-certs-378112 crio[714]: time="2024-09-12 23:15:00.360664799Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=25a06cf0-24d3-42e0-bbe4-fb4b1bd2afd3 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 23:15:00 embed-certs-378112 crio[714]: time="2024-09-12 23:15:00.361208500Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726182900361179887,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=25a06cf0-24d3-42e0-bbe4-fb4b1bd2afd3 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 23:15:00 embed-certs-378112 crio[714]: time="2024-09-12 23:15:00.361887718Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=64047629-1d9d-4c11-987f-66cf638b1150 name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 23:15:00 embed-certs-378112 crio[714]: time="2024-09-12 23:15:00.361979667Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=64047629-1d9d-4c11-987f-66cf638b1150 name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 23:15:00 embed-certs-378112 crio[714]: time="2024-09-12 23:15:00.362212573Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0e48efc9ba5a488b4984057a9785fbda6fcefcea047948ac3c83f09afd28efdb,PodSandboxId:2fb05fcc4e0e9920e2d59727a2cc76564e7d79c6fa20bb4360c55a088b1d3be4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726182123153255983,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1840d2a-8e08-4fa2-9ed5-ac96fb0baf4d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:580f45d8e367ee2eb48f1a7950e3f57eb992f6ed5e039800e7b69459dc172d25,PodSandboxId:01bfe26a78e45f77488fc831b37f2ece2ba5826151a49d77cc85132fa5292880,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1726182103061405869,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 68c26c3e-1c5b-4b9c-8316-020988da7706,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7841230606dafeab68b501c35a871c84f440d0d9f4cde95a302770671152f168,PodSandboxId:8f96256aac3db0033853f6deee9a8ce0e888a33743507d6efd873689491e7a5f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726182100061946712,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-m8t6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93c63198-ebd2-4e88-9be8-912425b1eb84,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b058233860f229334437280ee5ccf5d203eaf352bae41a6af7d4602b55a3d64,PodSandboxId:dbdcc135a5ea52851aaa4633c8f13d8d827a9ec52abf10d66dd1cf255f1327e4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726182092323356857,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fvbbq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b172754e-bb5a-40ba-a
9be-a7632081defc,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdb0e5ac691d286cca5fe46e3435ece952c4b9cdc7df3481fec68adf1403689f,PodSandboxId:2fb05fcc4e0e9920e2d59727a2cc76564e7d79c6fa20bb4360c55a088b1d3be4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726182092301286319,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1840d2a-8e08-4fa2-9ed5-ac96fb0ba
f4d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:115e1e7911747aa5930ece5298af89f0209bf07e0336cd598b8901e2a2af2e09,PodSandboxId:9bcfe02b74318c91cb7753956f427d79a4071e45141830c9959f59e49bb3419c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726182088642330869,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-378112,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9ed9552d16c564610caec50232e36dc,},Annota
tions:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc8c605cca940c298fda4fb0d3a0e55234ab799e5f4d684817c1c33aa6752880,PodSandboxId:2aaeb742345d1afdd923ef084f1923fff9f772f7a9881851bba29c3e952d05bc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726182088638381922,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-378112,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4afeaa41ef3d550a5d04908f01cf2197,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54dd60703518d12cf3cb62102bf8bec31d1e6b0f218f4c26391b234d58111e31,PodSandboxId:c884f0f2f98b0f1784695585a6347618f05884233214587af251a66ba47cfeb3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726182088606320699,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-378112,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16b799bcd1cc9be5e956c3ddd45af143,},Ann
otations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e099ac110cb9ec18f36a0fbbdb769c4bb0c2642bf081442d3054c280c663730f,PodSandboxId:a42aeaf3e710a4ec4209796224494d9e1920866a81e68dee43aee7dcc6871eed,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726182088617828827,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-378112,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc0bb257a34a1f166fb9f89281b2e1d6,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=64047629-1d9d-4c11-987f-66cf638b1150 name=/runtime.v1.RuntimeService/ListContainers
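
	The identical ListContainers / Version / ImageFsInfo request-response pairs above simply reflect the kubelet polling the CRI-O socket on its regular interval; none of them report an error. If the same CRI data needs to be pulled by hand, a rough sketch (assuming crictl is available inside the minikube guest and CRI-O is listening on its default socket; the profile name is the one shown in the log) would be:

	    # open a shell on the embed-certs node
	    minikube ssh -p embed-certs-378112
	    # same information as the ListContainers / Version / ImageFsInfo responses above
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock imagefsinfo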
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0e48efc9ba5a4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   2fb05fcc4e0e9       storage-provisioner
	580f45d8e367e       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   01bfe26a78e45       busybox
	7841230606daf       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      13 minutes ago      Running             coredns                   1                   8f96256aac3db       coredns-7c65d6cfc9-m8t6h
	0b058233860f2       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      13 minutes ago      Running             kube-proxy                1                   dbdcc135a5ea5       kube-proxy-fvbbq
	fdb0e5ac691d2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   2fb05fcc4e0e9       storage-provisioner
	115e1e7911747       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      13 minutes ago      Running             kube-apiserver            1                   9bcfe02b74318       kube-apiserver-embed-certs-378112
	dc8c605cca940       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      13 minutes ago      Running             kube-scheduler            1                   2aaeb742345d1       kube-scheduler-embed-certs-378112
	e099ac110cb9e       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      13 minutes ago      Running             etcd                      1                   a42aeaf3e710a       etcd-embed-certs-378112
	54dd60703518d       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      13 minutes ago      Running             kube-controller-manager   1                   c884f0f2f98b0       kube-controller-manager-embed-certs-378112
	
	
	==> coredns [7841230606dafeab68b501c35a871c84f440d0d9f4cde95a302770671152f168] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:44653 - 22529 "HINFO IN 3919299564452992292.7808051720423804999. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.016593259s
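
	The single HINFO lookup answered NXDOMAIN is CoreDNS's own startup self-check with a random name, so this section shows a healthy DNS pod rather than a resolution failure. If in-cluster DNS needed to be exercised separately, a throwaway pod along these lines would do it (dns-probe is a hypothetical name; the busybox image is the one already used elsewhere in this run):

	    kubectl --context embed-certs-378112 run dns-probe --rm -it --restart=Never \
	      --image=gcr.io/k8s-minikube/busybox -- nslookup kubernetes.default.svc.cluster.local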
	
	
	==> describe nodes <==
	Name:               embed-certs-378112
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-378112
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f6bc674a17941874d4e5b792b09c1791d30622b8
	                    minikube.k8s.io/name=embed-certs-378112
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_12T22_53_49_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 12 Sep 2024 22:53:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-378112
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 12 Sep 2024 23:14:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 12 Sep 2024 23:12:14 +0000   Thu, 12 Sep 2024 22:53:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 12 Sep 2024 23:12:14 +0000   Thu, 12 Sep 2024 22:53:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 12 Sep 2024 23:12:14 +0000   Thu, 12 Sep 2024 22:53:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 12 Sep 2024 23:12:14 +0000   Thu, 12 Sep 2024 23:01:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.96
	  Hostname:    embed-certs-378112
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9369d9e2546b42da98f24b39f498ebc3
	  System UUID:                9369d9e2-546b-42da-98f2-4b39f498ebc3
	  Boot ID:                    06852740-91cc-48d4-a2c3-758e0899e521
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 coredns-7c65d6cfc9-m8t6h                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-embed-certs-378112                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kube-apiserver-embed-certs-378112             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-embed-certs-378112    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-fvbbq                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-embed-certs-378112             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-6867b74b74-kvpqz               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         20m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node embed-certs-378112 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node embed-certs-378112 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node embed-certs-378112 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    21m                kubelet          Node embed-certs-378112 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21m                kubelet          Node embed-certs-378112 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     21m                kubelet          Node embed-certs-378112 status is now: NodeHasSufficientPID
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeReady                21m                kubelet          Node embed-certs-378112 status is now: NodeReady
	  Normal  RegisteredNode           21m                node-controller  Node embed-certs-378112 event: Registered Node embed-certs-378112 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node embed-certs-378112 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node embed-certs-378112 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node embed-certs-378112 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13m                node-controller  Node embed-certs-378112 event: Registered Node embed-certs-378112 in Controller
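
	This is the kubectl describe view of the node: the Ready condition's LastTransitionTime of 23:01:41 matches the second batch of kubelet 'Starting' events (13m ago) and the ~23:01 boot shown in the dmesg section below, and metrics-server-6867b74b74-kvpqz is scheduled here even though no matching container appears in the crio listing above. The same data can be pulled directly (assuming the kubeconfig context carries the profile name, which is minikube's default):

	    kubectl --context embed-certs-378112 describe node embed-certs-378112
	    kubectl --context embed-certs-378112 get pods -n kube-system -o wide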
	
	
	==> dmesg <==
	[Sep12 23:01] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050893] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037907] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.752233] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.943136] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +1.519348] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.912279] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.059973] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060822] systemd-fstab-generator[647]: Ignoring "noauto" option for root device
	[  +0.191597] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[  +0.146916] systemd-fstab-generator[674]: Ignoring "noauto" option for root device
	[  +0.299291] systemd-fstab-generator[704]: Ignoring "noauto" option for root device
	[  +3.920395] systemd-fstab-generator[794]: Ignoring "noauto" option for root device
	[  +1.643387] systemd-fstab-generator[912]: Ignoring "noauto" option for root device
	[  +0.062512] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.515203] kauditd_printk_skb: 69 callbacks suppressed
	[  +2.494025] systemd-fstab-generator[1549]: Ignoring "noauto" option for root device
	[  +3.325406] kauditd_printk_skb: 80 callbacks suppressed
	[  +5.041834] kauditd_printk_skb: 28 callbacks suppressed
	
	
	==> etcd [e099ac110cb9ec18f36a0fbbdb769c4bb0c2642bf081442d3054c280c663730f] <==
	{"level":"info","ts":"2024-09-12T23:01:30.102663Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-12T23:01:30.103229Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-12T23:01:30.103263Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-12T23:01:30.103545Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.96:2379"}
	{"level":"info","ts":"2024-09-12T23:01:30.104127Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-12T23:01:30.104875Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-12T23:01:45.439460Z","caller":"traceutil/trace.go:171","msg":"trace[1635517596] transaction","detail":"{read_only:false; response_revision:651; number_of_response:1; }","duration":"115.628835ms","start":"2024-09-12T23:01:45.323813Z","end":"2024-09-12T23:01:45.439442Z","steps":["trace[1635517596] 'process raft request'  (duration: 115.347888ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-12T23:01:46.430050Z","caller":"traceutil/trace.go:171","msg":"trace[517608903] linearizableReadLoop","detail":"{readStateIndex:690; appliedIndex:689; }","duration":"240.032926ms","start":"2024-09-12T23:01:46.189999Z","end":"2024-09-12T23:01:46.430032Z","steps":["trace[517608903] 'read index received'  (duration: 239.829559ms)","trace[517608903] 'applied index is now lower than readState.Index'  (duration: 202.647µs)"],"step_count":2}
	{"level":"info","ts":"2024-09-12T23:01:46.430255Z","caller":"traceutil/trace.go:171","msg":"trace[1572406179] transaction","detail":"{read_only:false; response_revision:653; number_of_response:1; }","duration":"288.053964ms","start":"2024-09-12T23:01:46.142191Z","end":"2024-09-12T23:01:46.430245Z","steps":["trace[1572406179] 'process raft request'  (duration: 287.692576ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-12T23:01:46.430441Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"240.425827ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-12T23:01:46.430525Z","caller":"traceutil/trace.go:171","msg":"trace[1748289074] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:653; }","duration":"240.539133ms","start":"2024-09-12T23:01:46.189976Z","end":"2024-09-12T23:01:46.430515Z","steps":["trace[1748289074] 'agreement among raft nodes before linearized reading'  (duration: 240.418352ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-12T23:01:47.056708Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"238.83119ms","expected-duration":"100ms","prefix":"","request":"header:<ID:1888013096436442791 > lease_revoke:<id:1a3391e871484941>","response":"size:27"}
	{"level":"info","ts":"2024-09-12T23:01:47.056782Z","caller":"traceutil/trace.go:171","msg":"trace[1497464155] linearizableReadLoop","detail":"{readStateIndex:691; appliedIndex:690; }","duration":"381.337182ms","start":"2024-09-12T23:01:46.675432Z","end":"2024-09-12T23:01:47.056769Z","steps":["trace[1497464155] 'read index received'  (duration: 142.187766ms)","trace[1497464155] 'applied index is now lower than readState.Index'  (duration: 239.148278ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-12T23:01:47.056966Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"381.488695ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/embed-certs-378112\" ","response":"range_response_count:1 size:5486"}
	{"level":"info","ts":"2024-09-12T23:01:47.056999Z","caller":"traceutil/trace.go:171","msg":"trace[469274308] range","detail":"{range_begin:/registry/minions/embed-certs-378112; range_end:; response_count:1; response_revision:653; }","duration":"381.561469ms","start":"2024-09-12T23:01:46.675428Z","end":"2024-09-12T23:01:47.056989Z","steps":["trace[469274308] 'agreement among raft nodes before linearized reading'  (duration: 381.405277ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-12T23:01:47.057026Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-12T23:01:46.675386Z","time spent":"381.6328ms","remote":"127.0.0.1:43478","response type":"/etcdserverpb.KV/Range","request count":0,"request size":38,"response count":1,"response size":5508,"request content":"key:\"/registry/minions/embed-certs-378112\" "}
	{"level":"warn","ts":"2024-09-12T23:01:47.057228Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"152.836698ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-12T23:01:47.057255Z","caller":"traceutil/trace.go:171","msg":"trace[1108297500] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:653; }","duration":"152.864604ms","start":"2024-09-12T23:01:46.904382Z","end":"2024-09-12T23:01:47.057247Z","steps":["trace[1108297500] 'agreement among raft nodes before linearized reading'  (duration: 152.818115ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-12T23:01:48.214290Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"137.762963ms","expected-duration":"100ms","prefix":"","request":"header:<ID:1888013096436442803 > lease_revoke:<id:1a3391e878612528>","response":"size:27"}
	{"level":"info","ts":"2024-09-12T23:02:26.894868Z","caller":"traceutil/trace.go:171","msg":"trace[2058425542] transaction","detail":"{read_only:false; response_revision:687; number_of_response:1; }","duration":"181.147525ms","start":"2024-09-12T23:02:26.713696Z","end":"2024-09-12T23:02:26.894843Z","steps":["trace[2058425542] 'process raft request'  (duration: 181.033849ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-12T23:02:27.324793Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"135.309613ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-12T23:02:27.324930Z","caller":"traceutil/trace.go:171","msg":"trace[1163119828] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:687; }","duration":"135.499004ms","start":"2024-09-12T23:02:27.189421Z","end":"2024-09-12T23:02:27.324920Z","steps":["trace[1163119828] 'range keys from in-memory index tree'  (duration: 135.253546ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-12T23:11:30.141259Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":897}
	{"level":"info","ts":"2024-09-12T23:11:30.151012Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":897,"took":"9.429136ms","hash":929492379,"current-db-size-bytes":2760704,"current-db-size":"2.8 MB","current-db-size-in-use-bytes":2760704,"current-db-size-in-use":"2.8 MB"}
	{"level":"info","ts":"2024-09-12T23:11:30.151081Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":929492379,"revision":897,"compact-revision":-1}
	
	
	==> kernel <==
	 23:15:00 up 13 min,  0 users,  load average: 0.04, 0.14, 0.14
	Linux embed-certs-378112 5.10.207 #1 SMP Thu Sep 12 19:03:33 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [115e1e7911747aa5930ece5298af89f0209bf07e0336cd598b8901e2a2af2e09] <==
	W0912 23:11:32.417209       1 handler_proxy.go:99] no RequestInfo found in the context
	E0912 23:11:32.417461       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0912 23:11:32.418372       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0912 23:11:32.419452       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0912 23:12:32.418678       1 handler_proxy.go:99] no RequestInfo found in the context
	E0912 23:12:32.418945       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0912 23:12:32.419822       1 handler_proxy.go:99] no RequestInfo found in the context
	E0912 23:12:32.420017       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0912 23:12:32.420075       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0912 23:12:32.422029       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0912 23:14:32.420449       1 handler_proxy.go:99] no RequestInfo found in the context
	E0912 23:14:32.420563       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0912 23:14:32.421773       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0912 23:14:32.422943       1 handler_proxy.go:99] no RequestInfo found in the context
	E0912 23:14:32.423024       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0912 23:14:32.424171       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [54dd60703518d12cf3cb62102bf8bec31d1e6b0f218f4c26391b234d58111e31] <==
	E0912 23:09:35.054066       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0912 23:09:35.534937       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0912 23:10:05.060061       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0912 23:10:05.542097       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0912 23:10:35.067834       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0912 23:10:35.550259       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0912 23:11:05.074276       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0912 23:11:05.558868       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0912 23:11:35.080773       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0912 23:11:35.566175       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0912 23:12:05.086321       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0912 23:12:05.574184       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0912 23:12:14.192906       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-378112"
	I0912 23:12:34.982160       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="231.467µs"
	E0912 23:12:35.094323       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0912 23:12:35.581452       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0912 23:12:48.981662       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="162.875µs"
	E0912 23:13:05.100413       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0912 23:13:05.589093       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0912 23:13:35.107456       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0912 23:13:35.596847       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0912 23:14:05.113179       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0912 23:14:05.605033       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0912 23:14:35.120150       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0912 23:14:35.613806       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [0b058233860f229334437280ee5ccf5d203eaf352bae41a6af7d4602b55a3d64] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0912 23:01:32.663972       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0912 23:01:32.673692       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.72.96"]
	E0912 23:01:32.673789       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0912 23:01:32.702258       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0912 23:01:32.702316       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0912 23:01:32.702339       1 server_linux.go:169] "Using iptables Proxier"
	I0912 23:01:32.704505       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0912 23:01:32.704869       1 server.go:483] "Version info" version="v1.31.1"
	I0912 23:01:32.704890       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0912 23:01:32.706254       1 config.go:199] "Starting service config controller"
	I0912 23:01:32.706299       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0912 23:01:32.706327       1 config.go:105] "Starting endpoint slice config controller"
	I0912 23:01:32.706345       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0912 23:01:32.706898       1 config.go:328] "Starting node config controller"
	I0912 23:01:32.706922       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0912 23:01:32.806375       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0912 23:01:32.806436       1 shared_informer.go:320] Caches are synced for service config
	I0912 23:01:32.807146       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [dc8c605cca940c298fda4fb0d3a0e55234ab799e5f4d684817c1c33aa6752880] <==
	I0912 23:01:29.698093       1 serving.go:386] Generated self-signed cert in-memory
	W0912 23:01:31.359665       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0912 23:01:31.359833       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0912 23:01:31.359863       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0912 23:01:31.359931       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0912 23:01:31.429556       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0912 23:01:31.429716       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0912 23:01:31.440887       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0912 23:01:31.441061       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0912 23:01:31.441108       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0912 23:01:31.441140       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0912 23:01:31.541805       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 12 23:13:47 embed-certs-378112 kubelet[920]: E0912 23:13:47.122257     920 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726182827122043804,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 23:13:55 embed-certs-378112 kubelet[920]: E0912 23:13:55.965251     920 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-kvpqz" podUID="04e47cfd-bada-4cbd-8792-db4edebfb282"
	Sep 12 23:13:57 embed-certs-378112 kubelet[920]: E0912 23:13:57.123771     920 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726182837123309613,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 23:13:57 embed-certs-378112 kubelet[920]: E0912 23:13:57.123824     920 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726182837123309613,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 23:14:07 embed-certs-378112 kubelet[920]: E0912 23:14:07.127413     920 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726182847125439882,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 23:14:07 embed-certs-378112 kubelet[920]: E0912 23:14:07.127770     920 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726182847125439882,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 23:14:08 embed-certs-378112 kubelet[920]: E0912 23:14:08.965606     920 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-kvpqz" podUID="04e47cfd-bada-4cbd-8792-db4edebfb282"
	Sep 12 23:14:17 embed-certs-378112 kubelet[920]: E0912 23:14:17.131020     920 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726182857128998882,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 23:14:17 embed-certs-378112 kubelet[920]: E0912 23:14:17.131655     920 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726182857128998882,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 23:14:22 embed-certs-378112 kubelet[920]: E0912 23:14:22.965148     920 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-kvpqz" podUID="04e47cfd-bada-4cbd-8792-db4edebfb282"
	Sep 12 23:14:26 embed-certs-378112 kubelet[920]: E0912 23:14:26.981358     920 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 12 23:14:26 embed-certs-378112 kubelet[920]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 12 23:14:26 embed-certs-378112 kubelet[920]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 12 23:14:26 embed-certs-378112 kubelet[920]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 12 23:14:26 embed-certs-378112 kubelet[920]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 12 23:14:27 embed-certs-378112 kubelet[920]: E0912 23:14:27.133541     920 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726182867133098238,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 23:14:27 embed-certs-378112 kubelet[920]: E0912 23:14:27.133564     920 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726182867133098238,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 23:14:33 embed-certs-378112 kubelet[920]: E0912 23:14:33.968568     920 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-kvpqz" podUID="04e47cfd-bada-4cbd-8792-db4edebfb282"
	Sep 12 23:14:37 embed-certs-378112 kubelet[920]: E0912 23:14:37.136508     920 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726182877136256083,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 23:14:37 embed-certs-378112 kubelet[920]: E0912 23:14:37.136538     920 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726182877136256083,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 23:14:46 embed-certs-378112 kubelet[920]: E0912 23:14:46.967220     920 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-kvpqz" podUID="04e47cfd-bada-4cbd-8792-db4edebfb282"
	Sep 12 23:14:47 embed-certs-378112 kubelet[920]: E0912 23:14:47.139072     920 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726182887138559662,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 23:14:47 embed-certs-378112 kubelet[920]: E0912 23:14:47.139115     920 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726182887138559662,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 23:14:57 embed-certs-378112 kubelet[920]: E0912 23:14:57.140954     920 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726182897140066697,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 23:14:57 embed-certs-378112 kubelet[920]: E0912 23:14:57.141018     920 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726182897140066697,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [0e48efc9ba5a488b4984057a9785fbda6fcefcea047948ac3c83f09afd28efdb] <==
	I0912 23:02:03.256714       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0912 23:02:03.267838       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0912 23:02:03.268069       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0912 23:02:20.669516       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0912 23:02:20.669976       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-378112_cbcafbff-e733-4f79-bc74-7b6f663e2c37!
	I0912 23:02:20.670285       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0a2a2cd0-d331-47b6-b689-eee87ed80181", APIVersion:"v1", ResourceVersion:"680", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-378112_cbcafbff-e733-4f79-bc74-7b6f663e2c37 became leader
	I0912 23:02:20.770495       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-378112_cbcafbff-e733-4f79-bc74-7b6f663e2c37!
	
	
	==> storage-provisioner [fdb0e5ac691d286cca5fe46e3435ece952c4b9cdc7df3481fec68adf1403689f] <==
	I0912 23:01:32.503844       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0912 23:02:02.510510       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-378112 -n embed-certs-378112
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-378112 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-kvpqz
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-378112 describe pod metrics-server-6867b74b74-kvpqz
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-378112 describe pod metrics-server-6867b74b74-kvpqz: exit status 1 (63.290731ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-kvpqz" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-378112 describe pod metrics-server-6867b74b74-kvpqz: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.21s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.34s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-380092 -n no-preload-380092
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-09-12 23:16:25.329011629 +0000 UTC m=+6452.177394558
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
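Editor's note: the wait above is essentially a label-selector poll — the harness repeatedly lists pods labelled "k8s-app=kubernetes-dashboard" in the "kubernetes-dashboard" namespace until one is healthy or the 9m0s deadline expires. Below is a minimal client-go sketch of that kind of poll for readers reproducing the check outside the test suite; the kubeconfig path, the 5-second interval, and the bare Running-phase check are illustrative assumptions, not the actual helpers_test.go implementation.

	// Hedged sketch of a label-selector wait similar to the one the test performs.
	// Assumes a reachable cluster via the default kubeconfig; not minikube's own helper code.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumed kubeconfig location; the real test selects its own context (e.g. no-preload-380092).
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// Overall deadline mirrors the 9m0s wait reported above.
		ctx, cancel := context.WithTimeout(context.Background(), 9*time.Minute)
		defer cancel()

		for {
			pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(ctx, metav1.ListOptions{
				LabelSelector: "k8s-app=kubernetes-dashboard",
			})
			if err == nil {
				for _, p := range pods.Items {
					if p.Status.Phase == corev1.PodRunning {
						fmt.Printf("dashboard pod %s is Running\n", p.Name)
						return
					}
				}
			}
			select {
			case <-ctx.Done():
				fmt.Println("timed out waiting for a Running dashboard pod:", ctx.Err())
				return
			case <-time.After(5 * time.Second):
			}
		}
	}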
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-380092 -n no-preload-380092
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-380092 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-380092 logs -n 25: (2.177820899s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p embed-certs-378112            | embed-certs-378112           | jenkins | v1.34.0 | 12 Sep 24 22:54 UTC | 12 Sep 24 22:54 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-378112                                  | embed-certs-378112           | jenkins | v1.34.0 | 12 Sep 24 22:54 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-837491             | newest-cni-837491            | jenkins | v1.34.0 | 12 Sep 24 22:55 UTC | 12 Sep 24 22:55 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-837491                                   | newest-cni-837491            | jenkins | v1.34.0 | 12 Sep 24 22:55 UTC | 12 Sep 24 22:55 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-837491                  | newest-cni-837491            | jenkins | v1.34.0 | 12 Sep 24 22:55 UTC | 12 Sep 24 22:55 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-837491 --memory=2200 --alsologtostderr   | newest-cni-837491            | jenkins | v1.34.0 | 12 Sep 24 22:55 UTC | 12 Sep 24 22:55 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| image   | newest-cni-837491 image list                           | newest-cni-837491            | jenkins | v1.34.0 | 12 Sep 24 22:55 UTC | 12 Sep 24 22:55 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-837491                                   | newest-cni-837491            | jenkins | v1.34.0 | 12 Sep 24 22:55 UTC | 12 Sep 24 22:55 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-837491                                   | newest-cni-837491            | jenkins | v1.34.0 | 12 Sep 24 22:55 UTC | 12 Sep 24 22:55 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-837491                                   | newest-cni-837491            | jenkins | v1.34.0 | 12 Sep 24 22:55 UTC | 12 Sep 24 22:55 UTC |
	| delete  | -p newest-cni-837491                                   | newest-cni-837491            | jenkins | v1.34.0 | 12 Sep 24 22:55 UTC | 12 Sep 24 22:55 UTC |
	| delete  | -p                                                     | disable-driver-mounts-457722 | jenkins | v1.34.0 | 12 Sep 24 22:55 UTC | 12 Sep 24 22:55 UTC |
	|         | disable-driver-mounts-457722                           |                              |         |         |                     |                     |
	| start   | -p no-preload-380092                                   | no-preload-380092            | jenkins | v1.34.0 | 12 Sep 24 22:55 UTC | 12 Sep 24 22:57 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-702201       | default-k8s-diff-port-702201 | jenkins | v1.34.0 | 12 Sep 24 22:56 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-702201 | jenkins | v1.34.0 | 12 Sep 24 22:56 UTC | 12 Sep 24 23:07 UTC |
	|         | default-k8s-diff-port-702201                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-642238        | old-k8s-version-642238       | jenkins | v1.34.0 | 12 Sep 24 22:56 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-378112                 | embed-certs-378112           | jenkins | v1.34.0 | 12 Sep 24 22:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-378112                                  | embed-certs-378112           | jenkins | v1.34.0 | 12 Sep 24 22:57 UTC | 12 Sep 24 23:05 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-380092             | no-preload-380092            | jenkins | v1.34.0 | 12 Sep 24 22:57 UTC | 12 Sep 24 22:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-380092                                   | no-preload-380092            | jenkins | v1.34.0 | 12 Sep 24 22:57 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-642238                              | old-k8s-version-642238       | jenkins | v1.34.0 | 12 Sep 24 22:58 UTC | 12 Sep 24 22:58 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-642238             | old-k8s-version-642238       | jenkins | v1.34.0 | 12 Sep 24 22:58 UTC | 12 Sep 24 22:58 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-642238                              | old-k8s-version-642238       | jenkins | v1.34.0 | 12 Sep 24 22:58 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-380092                  | no-preload-380092            | jenkins | v1.34.0 | 12 Sep 24 23:00 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-380092                                   | no-preload-380092            | jenkins | v1.34.0 | 12 Sep 24 23:00 UTC | 12 Sep 24 23:07 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/12 23:00:21
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0912 23:00:21.889769   62943 out.go:345] Setting OutFile to fd 1 ...
	I0912 23:00:21.889990   62943 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 23:00:21.889999   62943 out.go:358] Setting ErrFile to fd 2...
	I0912 23:00:21.890003   62943 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 23:00:21.890181   62943 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-5891/.minikube/bin
	I0912 23:00:21.890675   62943 out.go:352] Setting JSON to false
	I0912 23:00:21.891538   62943 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6164,"bootTime":1726175858,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0912 23:00:21.891596   62943 start.go:139] virtualization: kvm guest
	I0912 23:00:21.894002   62943 out.go:177] * [no-preload-380092] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0912 23:00:21.895257   62943 notify.go:220] Checking for updates...
	I0912 23:00:21.895266   62943 out.go:177]   - MINIKUBE_LOCATION=19616
	I0912 23:00:21.896598   62943 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 23:00:21.898297   62943 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19616-5891/kubeconfig
	I0912 23:00:21.899605   62943 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19616-5891/.minikube
	I0912 23:00:21.900705   62943 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0912 23:00:21.901754   62943 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 23:00:21.903264   62943 config.go:182] Loaded profile config "no-preload-380092": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 23:00:21.903642   62943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:00:21.903699   62943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:00:21.918497   62943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39967
	I0912 23:00:21.918953   62943 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:00:21.919516   62943 main.go:141] libmachine: Using API Version  1
	I0912 23:00:21.919536   62943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:00:21.919831   62943 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:00:21.920002   62943 main.go:141] libmachine: (no-preload-380092) Calling .DriverName
	I0912 23:00:21.920213   62943 driver.go:394] Setting default libvirt URI to qemu:///system
	I0912 23:00:21.920527   62943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:00:21.920570   62943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:00:21.935755   62943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39641
	I0912 23:00:21.936135   62943 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:00:21.936625   62943 main.go:141] libmachine: Using API Version  1
	I0912 23:00:21.936643   62943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:00:21.936958   62943 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:00:21.937168   62943 main.go:141] libmachine: (no-preload-380092) Calling .DriverName
	I0912 23:00:21.971089   62943 out.go:177] * Using the kvm2 driver based on existing profile
	I0912 23:00:21.972555   62943 start.go:297] selected driver: kvm2
	I0912 23:00:21.972578   62943 start.go:901] validating driver "kvm2" against &{Name:no-preload-380092 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-380092 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.253 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 23:00:21.972702   62943 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 23:00:21.973408   62943 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 23:00:21.973490   62943 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19616-5891/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0912 23:00:21.988802   62943 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0912 23:00:21.989203   62943 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 23:00:21.989290   62943 cni.go:84] Creating CNI manager for ""
	I0912 23:00:21.989305   62943 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0912 23:00:21.989357   62943 start.go:340] cluster config:
	{Name:no-preload-380092 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-380092 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.253 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 23:00:21.989504   62943 iso.go:125] acquiring lock: {Name:mk3ec3c4afd4210b7425f6425f55e7f581d9a5a9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 23:00:21.991829   62943 out.go:177] * Starting "no-preload-380092" primary control-plane node in "no-preload-380092" cluster
	I0912 23:00:20.185851   61354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.214:22: connect: no route to host
	I0912 23:00:21.993075   62943 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0912 23:00:21.993194   62943 profile.go:143] Saving config to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/no-preload-380092/config.json ...
	I0912 23:00:21.993282   62943 cache.go:107] acquiring lock: {Name:mk132f7515993883658c6f8f8c277c05a18c2bcb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 23:00:21.993282   62943 cache.go:107] acquiring lock: {Name:mkbf0dc68d9098b66db2e6425e6a1c64daedf32d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 23:00:21.993308   62943 cache.go:107] acquiring lock: {Name:mkb2372a7853b8fee762991ee2019645e77be1f6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 23:00:21.993360   62943 cache.go:115] /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 exists
	I0912 23:00:21.993376   62943 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.1" -> "/home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1" took 102.242µs
	I0912 23:00:21.993387   62943 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.1 -> /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 succeeded
	I0912 23:00:21.993346   62943 cache.go:107] acquiring lock: {Name:mkd3ef79aab2589c236ea8b2933d7ed6f90a65ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 23:00:21.993393   62943 cache.go:115] /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 exists
	I0912 23:00:21.993376   62943 cache.go:107] acquiring lock: {Name:mk1d88a2deb95bcad015d500fc00ce4b81f27038 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 23:00:21.993405   62943 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.1" -> "/home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1" took 112.903µs
	I0912 23:00:21.993415   62943 cache.go:115] /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 exists
	I0912 23:00:21.993421   62943 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.1 -> /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 succeeded
	I0912 23:00:21.993424   62943 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.1" -> "/home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1" took 90.812µs
	I0912 23:00:21.993432   62943 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.1 -> /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 succeeded
	I0912 23:00:21.993403   62943 cache.go:107] acquiring lock: {Name:mk9c879437d533fd75b73d75524fea14942316d9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 23:00:21.993435   62943 start.go:360] acquireMachinesLock for no-preload-380092: {Name:mkbb0a9e58b1349e86a63b6069c42d4248d92c3b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 23:00:21.993452   62943 cache.go:115] /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 exists
	I0912 23:00:21.993472   62943 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10" took 97.778µs
	I0912 23:00:21.993486   62943 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 succeeded
	I0912 23:00:21.993474   62943 cache.go:107] acquiring lock: {Name:mkd1cb269a32e304848dd20e7b275430f4a6b15a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 23:00:21.993496   62943 cache.go:115] /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0912 23:00:21.993526   62943 cache.go:115] /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 exists
	I0912 23:00:21.993545   62943 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0" took 179.269µs
	I0912 23:00:21.993568   62943 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0912 23:00:21.993520   62943 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 236.598µs
	I0912 23:00:21.993587   62943 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0912 23:00:21.993522   62943 cache.go:107] acquiring lock: {Name:mka5c76f3028cb928e97cce42a012066ced2727d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 23:00:21.993569   62943 cache.go:115] /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 exists
	I0912 23:00:21.993642   62943 cache.go:115] /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0912 23:00:21.993651   62943 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3" took 162.198µs
	I0912 23:00:21.993648   62943 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.1" -> "/home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1" took 220.493µs
	I0912 23:00:21.993662   62943 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0912 23:00:21.993668   62943 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.1 -> /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 succeeded
	I0912 23:00:21.993687   62943 cache.go:87] Successfully saved all images to host disk.
	I0912 23:00:26.265938   61354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.214:22: connect: no route to host
	I0912 23:00:29.337872   61354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.214:22: connect: no route to host
	I0912 23:00:35.417928   61354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.214:22: connect: no route to host
	I0912 23:00:38.489932   61354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.214:22: connect: no route to host
	I0912 23:00:44.569877   61354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.214:22: connect: no route to host
	I0912 23:00:47.641914   61354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.214:22: connect: no route to host
	I0912 23:00:53.721910   61354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.214:22: connect: no route to host
	I0912 23:00:56.793972   61354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.214:22: connect: no route to host
	I0912 23:00:59.798765   61904 start.go:364] duration metric: took 3m43.915954079s to acquireMachinesLock for "embed-certs-378112"
	I0912 23:00:59.798812   61904 start.go:96] Skipping create...Using existing machine configuration
	I0912 23:00:59.798822   61904 fix.go:54] fixHost starting: 
	I0912 23:00:59.799124   61904 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:00:59.799159   61904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:00:59.814494   61904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41585
	I0912 23:00:59.815035   61904 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:00:59.815500   61904 main.go:141] libmachine: Using API Version  1
	I0912 23:00:59.815519   61904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:00:59.815820   61904 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:00:59.815997   61904 main.go:141] libmachine: (embed-certs-378112) Calling .DriverName
	I0912 23:00:59.816114   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetState
	I0912 23:00:59.817884   61904 fix.go:112] recreateIfNeeded on embed-certs-378112: state=Stopped err=<nil>
	I0912 23:00:59.817912   61904 main.go:141] libmachine: (embed-certs-378112) Calling .DriverName
	W0912 23:00:59.818088   61904 fix.go:138] unexpected machine state, will restart: <nil>
	I0912 23:00:59.820071   61904 out.go:177] * Restarting existing kvm2 VM for "embed-certs-378112" ...
	I0912 23:00:59.821271   61904 main.go:141] libmachine: (embed-certs-378112) Calling .Start
	I0912 23:00:59.821455   61904 main.go:141] libmachine: (embed-certs-378112) Ensuring networks are active...
	I0912 23:00:59.822528   61904 main.go:141] libmachine: (embed-certs-378112) Ensuring network default is active
	I0912 23:00:59.822941   61904 main.go:141] libmachine: (embed-certs-378112) Ensuring network mk-embed-certs-378112 is active
	I0912 23:00:59.823348   61904 main.go:141] libmachine: (embed-certs-378112) Getting domain xml...
	I0912 23:00:59.824031   61904 main.go:141] libmachine: (embed-certs-378112) Creating domain...
	I0912 23:00:59.796296   61354 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0912 23:00:59.796341   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetMachineName
	I0912 23:00:59.796635   61354 buildroot.go:166] provisioning hostname "default-k8s-diff-port-702201"
	I0912 23:00:59.796660   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetMachineName
	I0912 23:00:59.796845   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHHostname
	I0912 23:00:59.798593   61354 machine.go:96] duration metric: took 4m34.624878077s to provisionDockerMachine
	I0912 23:00:59.798633   61354 fix.go:56] duration metric: took 4m34.652510972s for fixHost
	I0912 23:00:59.798640   61354 start.go:83] releasing machines lock for "default-k8s-diff-port-702201", held for 4m34.652554084s
	W0912 23:00:59.798663   61354 start.go:714] error starting host: provision: host is not running
	W0912 23:00:59.798748   61354 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0912 23:00:59.798762   61354 start.go:729] Will try again in 5 seconds ...
	I0912 23:01:01.051149   61904 main.go:141] libmachine: (embed-certs-378112) Waiting to get IP...
	I0912 23:01:01.051945   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:01.052463   61904 main.go:141] libmachine: (embed-certs-378112) DBG | unable to find current IP address of domain embed-certs-378112 in network mk-embed-certs-378112
	I0912 23:01:01.052494   61904 main.go:141] libmachine: (embed-certs-378112) DBG | I0912 23:01:01.052421   63128 retry.go:31] will retry after 247.962572ms: waiting for machine to come up
	I0912 23:01:01.302159   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:01.302677   61904 main.go:141] libmachine: (embed-certs-378112) DBG | unable to find current IP address of domain embed-certs-378112 in network mk-embed-certs-378112
	I0912 23:01:01.302706   61904 main.go:141] libmachine: (embed-certs-378112) DBG | I0912 23:01:01.302624   63128 retry.go:31] will retry after 354.212029ms: waiting for machine to come up
	I0912 23:01:01.658402   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:01.658880   61904 main.go:141] libmachine: (embed-certs-378112) DBG | unable to find current IP address of domain embed-certs-378112 in network mk-embed-certs-378112
	I0912 23:01:01.658923   61904 main.go:141] libmachine: (embed-certs-378112) DBG | I0912 23:01:01.658848   63128 retry.go:31] will retry after 461.984481ms: waiting for machine to come up
	I0912 23:01:02.122592   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:02.122981   61904 main.go:141] libmachine: (embed-certs-378112) DBG | unable to find current IP address of domain embed-certs-378112 in network mk-embed-certs-378112
	I0912 23:01:02.123015   61904 main.go:141] libmachine: (embed-certs-378112) DBG | I0912 23:01:02.122930   63128 retry.go:31] will retry after 404.928951ms: waiting for machine to come up
	I0912 23:01:02.529423   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:02.529906   61904 main.go:141] libmachine: (embed-certs-378112) DBG | unable to find current IP address of domain embed-certs-378112 in network mk-embed-certs-378112
	I0912 23:01:02.529932   61904 main.go:141] libmachine: (embed-certs-378112) DBG | I0912 23:01:02.529856   63128 retry.go:31] will retry after 684.912015ms: waiting for machine to come up
	I0912 23:01:03.216924   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:03.217408   61904 main.go:141] libmachine: (embed-certs-378112) DBG | unable to find current IP address of domain embed-certs-378112 in network mk-embed-certs-378112
	I0912 23:01:03.217433   61904 main.go:141] libmachine: (embed-certs-378112) DBG | I0912 23:01:03.217357   63128 retry.go:31] will retry after 765.507778ms: waiting for machine to come up
	I0912 23:01:03.984272   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:03.984787   61904 main.go:141] libmachine: (embed-certs-378112) DBG | unable to find current IP address of domain embed-certs-378112 in network mk-embed-certs-378112
	I0912 23:01:03.984820   61904 main.go:141] libmachine: (embed-certs-378112) DBG | I0912 23:01:03.984726   63128 retry.go:31] will retry after 1.048709598s: waiting for machine to come up
	I0912 23:01:05.035381   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:05.035885   61904 main.go:141] libmachine: (embed-certs-378112) DBG | unable to find current IP address of domain embed-certs-378112 in network mk-embed-certs-378112
	I0912 23:01:05.035925   61904 main.go:141] libmachine: (embed-certs-378112) DBG | I0912 23:01:05.035809   63128 retry.go:31] will retry after 1.488143245s: waiting for machine to come up
	I0912 23:01:04.800694   61354 start.go:360] acquireMachinesLock for default-k8s-diff-port-702201: {Name:mkbb0a9e58b1349e86a63b6069c42d4248d92c3b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 23:01:06.526483   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:06.526858   61904 main.go:141] libmachine: (embed-certs-378112) DBG | unable to find current IP address of domain embed-certs-378112 in network mk-embed-certs-378112
	I0912 23:01:06.526896   61904 main.go:141] libmachine: (embed-certs-378112) DBG | I0912 23:01:06.526800   63128 retry.go:31] will retry after 1.272485972s: waiting for machine to come up
	I0912 23:01:07.801588   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:07.802071   61904 main.go:141] libmachine: (embed-certs-378112) DBG | unable to find current IP address of domain embed-certs-378112 in network mk-embed-certs-378112
	I0912 23:01:07.802103   61904 main.go:141] libmachine: (embed-certs-378112) DBG | I0912 23:01:07.802022   63128 retry.go:31] will retry after 1.559805672s: waiting for machine to come up
	I0912 23:01:09.363156   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:09.363662   61904 main.go:141] libmachine: (embed-certs-378112) DBG | unable to find current IP address of domain embed-certs-378112 in network mk-embed-certs-378112
	I0912 23:01:09.363683   61904 main.go:141] libmachine: (embed-certs-378112) DBG | I0912 23:01:09.363611   63128 retry.go:31] will retry after 1.893092295s: waiting for machine to come up
	I0912 23:01:11.258694   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:11.259346   61904 main.go:141] libmachine: (embed-certs-378112) DBG | unable to find current IP address of domain embed-certs-378112 in network mk-embed-certs-378112
	I0912 23:01:11.259376   61904 main.go:141] libmachine: (embed-certs-378112) DBG | I0912 23:01:11.259304   63128 retry.go:31] will retry after 3.533141843s: waiting for machine to come up
	I0912 23:01:14.796948   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:14.797444   61904 main.go:141] libmachine: (embed-certs-378112) DBG | unable to find current IP address of domain embed-certs-378112 in network mk-embed-certs-378112
	I0912 23:01:14.797468   61904 main.go:141] libmachine: (embed-certs-378112) DBG | I0912 23:01:14.797389   63128 retry.go:31] will retry after 3.889332888s: waiting for machine to come up
	I0912 23:01:19.958932   62386 start.go:364] duration metric: took 3m0.532494588s to acquireMachinesLock for "old-k8s-version-642238"
	I0912 23:01:19.958994   62386 start.go:96] Skipping create...Using existing machine configuration
	I0912 23:01:19.959005   62386 fix.go:54] fixHost starting: 
	I0912 23:01:19.959383   62386 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:01:19.959418   62386 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:01:19.976721   62386 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46263
	I0912 23:01:19.977134   62386 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:01:19.977648   62386 main.go:141] libmachine: Using API Version  1
	I0912 23:01:19.977673   62386 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:01:19.977988   62386 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:01:19.978166   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .DriverName
	I0912 23:01:19.978325   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetState
	I0912 23:01:19.979909   62386 fix.go:112] recreateIfNeeded on old-k8s-version-642238: state=Stopped err=<nil>
	I0912 23:01:19.979934   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .DriverName
	W0912 23:01:19.980079   62386 fix.go:138] unexpected machine state, will restart: <nil>
	I0912 23:01:19.982289   62386 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-642238" ...
	I0912 23:01:18.690761   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:18.691185   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has current primary IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:18.691206   61904 main.go:141] libmachine: (embed-certs-378112) Found IP for machine: 192.168.72.96
	I0912 23:01:18.691218   61904 main.go:141] libmachine: (embed-certs-378112) Reserving static IP address...
	I0912 23:01:18.691614   61904 main.go:141] libmachine: (embed-certs-378112) Reserved static IP address: 192.168.72.96
	I0912 23:01:18.691642   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "embed-certs-378112", mac: "52:54:00:71:b2:49", ip: "192.168.72.96"} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:18.691654   61904 main.go:141] libmachine: (embed-certs-378112) Waiting for SSH to be available...
	I0912 23:01:18.691678   61904 main.go:141] libmachine: (embed-certs-378112) DBG | skip adding static IP to network mk-embed-certs-378112 - found existing host DHCP lease matching {name: "embed-certs-378112", mac: "52:54:00:71:b2:49", ip: "192.168.72.96"}
	I0912 23:01:18.691690   61904 main.go:141] libmachine: (embed-certs-378112) DBG | Getting to WaitForSSH function...
	I0912 23:01:18.693747   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:18.694054   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:18.694077   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:18.694273   61904 main.go:141] libmachine: (embed-certs-378112) DBG | Using SSH client type: external
	I0912 23:01:18.694300   61904 main.go:141] libmachine: (embed-certs-378112) DBG | Using SSH private key: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/embed-certs-378112/id_rsa (-rw-------)
	I0912 23:01:18.694330   61904 main.go:141] libmachine: (embed-certs-378112) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.96 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19616-5891/.minikube/machines/embed-certs-378112/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0912 23:01:18.694345   61904 main.go:141] libmachine: (embed-certs-378112) DBG | About to run SSH command:
	I0912 23:01:18.694358   61904 main.go:141] libmachine: (embed-certs-378112) DBG | exit 0
	I0912 23:01:18.821647   61904 main.go:141] libmachine: (embed-certs-378112) DBG | SSH cmd err, output: <nil>: 
	I0912 23:01:18.822074   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetConfigRaw
	I0912 23:01:18.822765   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetIP
	I0912 23:01:18.825154   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:18.825481   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:18.825510   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:18.825842   61904 profile.go:143] Saving config to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/embed-certs-378112/config.json ...
	I0912 23:01:18.826026   61904 machine.go:93] provisionDockerMachine start ...
	I0912 23:01:18.826043   61904 main.go:141] libmachine: (embed-certs-378112) Calling .DriverName
	I0912 23:01:18.826248   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHHostname
	I0912 23:01:18.828540   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:18.828878   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:18.828906   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:18.829009   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHPort
	I0912 23:01:18.829224   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHKeyPath
	I0912 23:01:18.829429   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHKeyPath
	I0912 23:01:18.829555   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHUsername
	I0912 23:01:18.829750   61904 main.go:141] libmachine: Using SSH client type: native
	I0912 23:01:18.829926   61904 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.72.96 22 <nil> <nil>}
	I0912 23:01:18.829937   61904 main.go:141] libmachine: About to run SSH command:
	hostname
	I0912 23:01:18.941789   61904 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0912 23:01:18.941824   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetMachineName
	I0912 23:01:18.942076   61904 buildroot.go:166] provisioning hostname "embed-certs-378112"
	I0912 23:01:18.942099   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetMachineName
	I0912 23:01:18.942278   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHHostname
	I0912 23:01:18.944880   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:18.945173   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:18.945221   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:18.945347   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHPort
	I0912 23:01:18.945525   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHKeyPath
	I0912 23:01:18.945733   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHKeyPath
	I0912 23:01:18.945913   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHUsername
	I0912 23:01:18.946125   61904 main.go:141] libmachine: Using SSH client type: native
	I0912 23:01:18.946330   61904 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.72.96 22 <nil> <nil>}
	I0912 23:01:18.946350   61904 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-378112 && echo "embed-certs-378112" | sudo tee /etc/hostname
	I0912 23:01:19.071180   61904 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-378112
	
	I0912 23:01:19.071207   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHHostname
	I0912 23:01:19.074121   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.074553   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:19.074583   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.074803   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHPort
	I0912 23:01:19.075004   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHKeyPath
	I0912 23:01:19.075175   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHKeyPath
	I0912 23:01:19.075319   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHUsername
	I0912 23:01:19.075472   61904 main.go:141] libmachine: Using SSH client type: native
	I0912 23:01:19.075691   61904 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.72.96 22 <nil> <nil>}
	I0912 23:01:19.075710   61904 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-378112' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-378112/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-378112' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0912 23:01:19.198049   61904 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0912 23:01:19.198081   61904 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19616-5891/.minikube CaCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19616-5891/.minikube}
	I0912 23:01:19.198131   61904 buildroot.go:174] setting up certificates
	I0912 23:01:19.198140   61904 provision.go:84] configureAuth start
	I0912 23:01:19.198153   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetMachineName
	I0912 23:01:19.198461   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetIP
	I0912 23:01:19.201194   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.201504   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:19.201532   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.201729   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHHostname
	I0912 23:01:19.204100   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.204538   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:19.204562   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.204706   61904 provision.go:143] copyHostCerts
	I0912 23:01:19.204767   61904 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem, removing ...
	I0912 23:01:19.204782   61904 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem
	I0912 23:01:19.204851   61904 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem (1082 bytes)
	I0912 23:01:19.204951   61904 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem, removing ...
	I0912 23:01:19.204960   61904 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem
	I0912 23:01:19.204985   61904 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem (1123 bytes)
	I0912 23:01:19.205045   61904 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem, removing ...
	I0912 23:01:19.205053   61904 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem
	I0912 23:01:19.205076   61904 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem (1679 bytes)
	I0912 23:01:19.205132   61904 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem org=jenkins.embed-certs-378112 san=[127.0.0.1 192.168.72.96 embed-certs-378112 localhost minikube]
	I0912 23:01:19.311879   61904 provision.go:177] copyRemoteCerts
	I0912 23:01:19.311937   61904 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0912 23:01:19.311962   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHHostname
	I0912 23:01:19.314423   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.314821   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:19.314858   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.315029   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHPort
	I0912 23:01:19.315191   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHKeyPath
	I0912 23:01:19.315357   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHUsername
	I0912 23:01:19.315485   61904 sshutil.go:53] new ssh client: &{IP:192.168.72.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/embed-certs-378112/id_rsa Username:docker}
	I0912 23:01:19.399171   61904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0912 23:01:19.423218   61904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0912 23:01:19.446073   61904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0912 23:01:19.468351   61904 provision.go:87] duration metric: took 270.179029ms to configureAuth
	I0912 23:01:19.468380   61904 buildroot.go:189] setting minikube options for container-runtime
	I0912 23:01:19.468543   61904 config.go:182] Loaded profile config "embed-certs-378112": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 23:01:19.468609   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHHostname
	I0912 23:01:19.471457   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.471829   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:19.471857   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.472057   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHPort
	I0912 23:01:19.472257   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHKeyPath
	I0912 23:01:19.472438   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHKeyPath
	I0912 23:01:19.472614   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHUsername
	I0912 23:01:19.472756   61904 main.go:141] libmachine: Using SSH client type: native
	I0912 23:01:19.472915   61904 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.72.96 22 <nil> <nil>}
	I0912 23:01:19.472928   61904 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0912 23:01:19.710250   61904 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0912 23:01:19.710278   61904 machine.go:96] duration metric: took 884.238347ms to provisionDockerMachine
	I0912 23:01:19.710298   61904 start.go:293] postStartSetup for "embed-certs-378112" (driver="kvm2")
	I0912 23:01:19.710310   61904 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0912 23:01:19.710324   61904 main.go:141] libmachine: (embed-certs-378112) Calling .DriverName
	I0912 23:01:19.710640   61904 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0912 23:01:19.710668   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHHostname
	I0912 23:01:19.713442   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.713731   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:19.713759   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.713948   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHPort
	I0912 23:01:19.714180   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHKeyPath
	I0912 23:01:19.714347   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHUsername
	I0912 23:01:19.714491   61904 sshutil.go:53] new ssh client: &{IP:192.168.72.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/embed-certs-378112/id_rsa Username:docker}
	I0912 23:01:19.800949   61904 ssh_runner.go:195] Run: cat /etc/os-release
	I0912 23:01:19.805072   61904 info.go:137] Remote host: Buildroot 2023.02.9
	I0912 23:01:19.805103   61904 filesync.go:126] Scanning /home/jenkins/minikube-integration/19616-5891/.minikube/addons for local assets ...
	I0912 23:01:19.805212   61904 filesync.go:126] Scanning /home/jenkins/minikube-integration/19616-5891/.minikube/files for local assets ...
	I0912 23:01:19.805309   61904 filesync.go:149] local asset: /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem -> 130832.pem in /etc/ssl/certs
	I0912 23:01:19.805449   61904 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0912 23:01:19.815070   61904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem --> /etc/ssl/certs/130832.pem (1708 bytes)
	I0912 23:01:19.839585   61904 start.go:296] duration metric: took 129.271232ms for postStartSetup
	I0912 23:01:19.839634   61904 fix.go:56] duration metric: took 20.040811123s for fixHost
	I0912 23:01:19.839656   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHHostname
	I0912 23:01:19.843048   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.843354   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:19.843385   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.843547   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHPort
	I0912 23:01:19.843755   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHKeyPath
	I0912 23:01:19.843933   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHKeyPath
	I0912 23:01:19.844078   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHUsername
	I0912 23:01:19.844257   61904 main.go:141] libmachine: Using SSH client type: native
	I0912 23:01:19.844432   61904 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.72.96 22 <nil> <nil>}
	I0912 23:01:19.844443   61904 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0912 23:01:19.958747   61904 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726182079.929826480
	
	I0912 23:01:19.958771   61904 fix.go:216] guest clock: 1726182079.929826480
	I0912 23:01:19.958779   61904 fix.go:229] Guest: 2024-09-12 23:01:19.92982648 +0000 UTC Remote: 2024-09-12 23:01:19.839638734 +0000 UTC m=+244.095238395 (delta=90.187746ms)
	I0912 23:01:19.958826   61904 fix.go:200] guest clock delta is within tolerance: 90.187746ms
	I0912 23:01:19.958832   61904 start.go:83] releasing machines lock for "embed-certs-378112", held for 20.160038696s
	I0912 23:01:19.958866   61904 main.go:141] libmachine: (embed-certs-378112) Calling .DriverName
	I0912 23:01:19.959202   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetIP
	I0912 23:01:19.962158   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.962528   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:19.962562   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.962743   61904 main.go:141] libmachine: (embed-certs-378112) Calling .DriverName
	I0912 23:01:19.963246   61904 main.go:141] libmachine: (embed-certs-378112) Calling .DriverName
	I0912 23:01:19.963421   61904 main.go:141] libmachine: (embed-certs-378112) Calling .DriverName
	I0912 23:01:19.963518   61904 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0912 23:01:19.963564   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHHostname
	I0912 23:01:19.963703   61904 ssh_runner.go:195] Run: cat /version.json
	I0912 23:01:19.963766   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHHostname
	I0912 23:01:19.966317   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.966517   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.966692   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:19.966723   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.966921   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHPort
	I0912 23:01:19.966977   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:19.967023   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.967100   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHKeyPath
	I0912 23:01:19.967191   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHPort
	I0912 23:01:19.967268   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHUsername
	I0912 23:01:19.967332   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHKeyPath
	I0912 23:01:19.967395   61904 sshutil.go:53] new ssh client: &{IP:192.168.72.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/embed-certs-378112/id_rsa Username:docker}
	I0912 23:01:19.967439   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHUsername
	I0912 23:01:19.967594   61904 sshutil.go:53] new ssh client: &{IP:192.168.72.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/embed-certs-378112/id_rsa Username:docker}
	I0912 23:01:20.054413   61904 ssh_runner.go:195] Run: systemctl --version
	I0912 23:01:20.087300   61904 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0912 23:01:20.235085   61904 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0912 23:01:20.240843   61904 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0912 23:01:20.240922   61904 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0912 23:01:20.256317   61904 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0912 23:01:20.256341   61904 start.go:495] detecting cgroup driver to use...
	I0912 23:01:20.256411   61904 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0912 23:01:20.271684   61904 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0912 23:01:20.285491   61904 docker.go:217] disabling cri-docker service (if available) ...
	I0912 23:01:20.285562   61904 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0912 23:01:20.298889   61904 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0912 23:01:20.314455   61904 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0912 23:01:20.438483   61904 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0912 23:01:20.594684   61904 docker.go:233] disabling docker service ...
	I0912 23:01:20.594761   61904 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0912 23:01:20.609090   61904 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0912 23:01:20.624440   61904 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0912 23:01:20.747699   61904 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0912 23:01:20.899726   61904 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0912 23:01:20.914107   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0912 23:01:20.933523   61904 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0912 23:01:20.933599   61904 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:20.946067   61904 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0912 23:01:20.946129   61904 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:20.957575   61904 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:20.968759   61904 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:20.980280   61904 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0912 23:01:20.991281   61904 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:21.002926   61904 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:21.021743   61904 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:21.032256   61904 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0912 23:01:21.041783   61904 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0912 23:01:21.041853   61904 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0912 23:01:21.054605   61904 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0912 23:01:21.064411   61904 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 23:01:21.198195   61904 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0912 23:01:21.289923   61904 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0912 23:01:21.290018   61904 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0912 23:01:21.294505   61904 start.go:563] Will wait 60s for crictl version
	I0912 23:01:21.294572   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:01:21.297928   61904 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0912 23:01:21.335650   61904 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0912 23:01:21.335734   61904 ssh_runner.go:195] Run: crio --version
	I0912 23:01:21.364876   61904 ssh_runner.go:195] Run: crio --version
	I0912 23:01:21.395463   61904 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0912 23:01:19.983746   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .Start
	I0912 23:01:19.983971   62386 main.go:141] libmachine: (old-k8s-version-642238) Ensuring networks are active...
	I0912 23:01:19.984890   62386 main.go:141] libmachine: (old-k8s-version-642238) Ensuring network default is active
	I0912 23:01:19.985345   62386 main.go:141] libmachine: (old-k8s-version-642238) Ensuring network mk-old-k8s-version-642238 is active
	I0912 23:01:19.985788   62386 main.go:141] libmachine: (old-k8s-version-642238) Getting domain xml...
	I0912 23:01:19.986827   62386 main.go:141] libmachine: (old-k8s-version-642238) Creating domain...
	I0912 23:01:21.258792   62386 main.go:141] libmachine: (old-k8s-version-642238) Waiting to get IP...
	I0912 23:01:21.259838   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:21.260300   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 23:01:21.260434   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 23:01:21.260300   63267 retry.go:31] will retry after 272.429869ms: waiting for machine to come up
	I0912 23:01:21.534713   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:21.535102   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 23:01:21.535131   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 23:01:21.535060   63267 retry.go:31] will retry after 352.031053ms: waiting for machine to come up
	I0912 23:01:21.888724   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:21.889235   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 23:01:21.889260   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 23:01:21.889212   63267 retry.go:31] will retry after 405.51409ms: waiting for machine to come up
	I0912 23:01:22.296746   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:22.297242   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 23:01:22.297286   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 23:01:22.297190   63267 retry.go:31] will retry after 607.76308ms: waiting for machine to come up
	I0912 23:01:22.907030   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:22.907784   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 23:01:22.907824   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 23:01:22.907659   63267 retry.go:31] will retry after 692.773261ms: waiting for machine to come up
	I0912 23:01:23.602242   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:23.602679   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 23:01:23.602701   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 23:01:23.602642   63267 retry.go:31] will retry after 591.018151ms: waiting for machine to come up
	I0912 23:01:24.195571   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:24.196100   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 23:01:24.196130   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 23:01:24.196046   63267 retry.go:31] will retry after 1.185264475s: waiting for machine to come up
	I0912 23:01:21.396852   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetIP
	I0912 23:01:21.400018   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:21.400456   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:21.400488   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:21.400730   61904 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0912 23:01:21.404606   61904 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0912 23:01:21.416408   61904 kubeadm.go:883] updating cluster {Name:embed-certs-378112 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-378112 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.96 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0912 23:01:21.416529   61904 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0912 23:01:21.416571   61904 ssh_runner.go:195] Run: sudo crictl images --output json
	I0912 23:01:21.449799   61904 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0912 23:01:21.449860   61904 ssh_runner.go:195] Run: which lz4
	I0912 23:01:21.453658   61904 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0912 23:01:21.457641   61904 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0912 23:01:21.457676   61904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0912 23:01:22.735022   61904 crio.go:462] duration metric: took 1.281408113s to copy over tarball
	I0912 23:01:22.735128   61904 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0912 23:01:24.783893   61904 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.048732092s)
	I0912 23:01:24.783935   61904 crio.go:469] duration metric: took 2.048876223s to extract the tarball
	I0912 23:01:24.783945   61904 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0912 23:01:24.820170   61904 ssh_runner.go:195] Run: sudo crictl images --output json
	I0912 23:01:24.866833   61904 crio.go:514] all images are preloaded for cri-o runtime.
	I0912 23:01:24.866861   61904 cache_images.go:84] Images are preloaded, skipping loading
	I0912 23:01:24.866870   61904 kubeadm.go:934] updating node { 192.168.72.96 8443 v1.31.1 crio true true} ...
	I0912 23:01:24.866990   61904 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-378112 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.96
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-378112 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0912 23:01:24.867073   61904 ssh_runner.go:195] Run: crio config
	I0912 23:01:24.912893   61904 cni.go:84] Creating CNI manager for ""
	I0912 23:01:24.912924   61904 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0912 23:01:24.912940   61904 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0912 23:01:24.912967   61904 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.96 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-378112 NodeName:embed-certs-378112 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.96"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.96 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0912 23:01:24.913155   61904 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.96
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-378112"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.96
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.96"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0912 23:01:24.913230   61904 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0912 23:01:24.922946   61904 binaries.go:44] Found k8s binaries, skipping transfer
	I0912 23:01:24.923013   61904 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0912 23:01:24.932931   61904 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0912 23:01:24.949482   61904 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0912 23:01:24.965877   61904 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0912 23:01:24.983125   61904 ssh_runner.go:195] Run: grep 192.168.72.96	control-plane.minikube.internal$ /etc/hosts
	I0912 23:01:24.987056   61904 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.96	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0912 23:01:24.998939   61904 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 23:01:25.113496   61904 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0912 23:01:25.129703   61904 certs.go:68] Setting up /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/embed-certs-378112 for IP: 192.168.72.96
	I0912 23:01:25.129726   61904 certs.go:194] generating shared ca certs ...
	I0912 23:01:25.129741   61904 certs.go:226] acquiring lock for ca certs: {Name:mk5e19f2c2757edc3fcb6b43a39efee0885b349c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 23:01:25.129971   61904 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.key
	I0912 23:01:25.130086   61904 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.key
	I0912 23:01:25.130110   61904 certs.go:256] generating profile certs ...
	I0912 23:01:25.130237   61904 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/embed-certs-378112/client.key
	I0912 23:01:25.130340   61904 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/embed-certs-378112/apiserver.key.dbbe0c1f
	I0912 23:01:25.130407   61904 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/embed-certs-378112/proxy-client.key
	I0912 23:01:25.130579   61904 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083.pem (1338 bytes)
	W0912 23:01:25.130626   61904 certs.go:480] ignoring /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083_empty.pem, impossibly tiny 0 bytes
	I0912 23:01:25.130651   61904 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem (1679 bytes)
	I0912 23:01:25.130703   61904 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem (1082 bytes)
	I0912 23:01:25.130745   61904 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem (1123 bytes)
	I0912 23:01:25.130792   61904 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem (1679 bytes)
	I0912 23:01:25.130860   61904 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem (1708 bytes)
	I0912 23:01:25.131603   61904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0912 23:01:25.176163   61904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0912 23:01:25.220174   61904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0912 23:01:25.265831   61904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0912 23:01:25.296965   61904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/embed-certs-378112/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0912 23:01:25.321038   61904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/embed-certs-378112/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0912 23:01:25.345231   61904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/embed-certs-378112/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0912 23:01:25.369171   61904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/embed-certs-378112/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0912 23:01:25.394204   61904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083.pem --> /usr/share/ca-certificates/13083.pem (1338 bytes)
	I0912 23:01:25.417915   61904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem --> /usr/share/ca-certificates/130832.pem (1708 bytes)
	I0912 23:01:25.442303   61904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0912 23:01:25.465565   61904 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0912 23:01:25.482722   61904 ssh_runner.go:195] Run: openssl version
	I0912 23:01:25.488448   61904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130832.pem && ln -fs /usr/share/ca-certificates/130832.pem /etc/ssl/certs/130832.pem"
	I0912 23:01:25.499394   61904 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130832.pem
	I0912 23:01:25.503818   61904 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 12 21:47 /usr/share/ca-certificates/130832.pem
	I0912 23:01:25.503891   61904 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130832.pem
	I0912 23:01:25.509382   61904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130832.pem /etc/ssl/certs/3ec20f2e.0"
	I0912 23:01:25.519646   61904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0912 23:01:25.530205   61904 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0912 23:01:25.534926   61904 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 12 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0912 23:01:25.534995   61904 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0912 23:01:25.540498   61904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0912 23:01:25.551236   61904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13083.pem && ln -fs /usr/share/ca-certificates/13083.pem /etc/ssl/certs/13083.pem"
	I0912 23:01:25.561851   61904 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13083.pem
	I0912 23:01:25.566492   61904 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 12 21:47 /usr/share/ca-certificates/13083.pem
	I0912 23:01:25.566560   61904 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13083.pem
	I0912 23:01:25.572221   61904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13083.pem /etc/ssl/certs/51391683.0"
	I0912 23:01:25.582775   61904 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0912 23:01:25.587274   61904 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0912 23:01:25.593126   61904 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0912 23:01:25.598929   61904 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0912 23:01:25.604590   61904 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0912 23:01:25.610344   61904 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0912 23:01:25.615931   61904 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0912 23:01:25.621575   61904 kubeadm.go:392] StartCluster: {Name:embed-certs-378112 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-378112 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.96 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 23:01:25.621708   61904 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0912 23:01:25.621771   61904 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0912 23:01:25.659165   61904 cri.go:89] found id: ""
	I0912 23:01:25.659225   61904 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0912 23:01:25.670718   61904 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0912 23:01:25.670740   61904 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0912 23:01:25.670812   61904 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0912 23:01:25.680672   61904 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0912 23:01:25.681705   61904 kubeconfig.go:125] found "embed-certs-378112" server: "https://192.168.72.96:8443"
	I0912 23:01:25.683693   61904 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0912 23:01:25.693765   61904 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.96
	I0912 23:01:25.693795   61904 kubeadm.go:1160] stopping kube-system containers ...
	I0912 23:01:25.693805   61904 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0912 23:01:25.693874   61904 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0912 23:01:25.728800   61904 cri.go:89] found id: ""
	I0912 23:01:25.728879   61904 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0912 23:01:25.744949   61904 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0912 23:01:25.754735   61904 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0912 23:01:25.754756   61904 kubeadm.go:157] found existing configuration files:
	
	I0912 23:01:25.754820   61904 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0912 23:01:25.763678   61904 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0912 23:01:25.763740   61904 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0912 23:01:25.772744   61904 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0912 23:01:25.383446   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:25.383892   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 23:01:25.383912   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 23:01:25.383847   63267 retry.go:31] will retry after 1.399744787s: waiting for machine to come up
	I0912 23:01:26.785939   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:26.786489   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 23:01:26.786520   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 23:01:26.786425   63267 retry.go:31] will retry after 1.336566382s: waiting for machine to come up
	I0912 23:01:28.124647   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:28.125141   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 23:01:28.125172   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 23:01:28.125087   63267 retry.go:31] will retry after 1.527292388s: waiting for machine to come up
	I0912 23:01:25.782080   61904 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0912 23:01:25.782143   61904 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0912 23:01:25.791585   61904 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0912 23:01:25.801238   61904 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0912 23:01:25.801315   61904 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0912 23:01:25.810819   61904 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0912 23:01:25.819786   61904 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0912 23:01:25.819888   61904 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0912 23:01:25.829135   61904 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0912 23:01:25.838572   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:01:25.944339   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:01:26.566348   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:01:26.771125   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:01:26.859227   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:01:26.946762   61904 api_server.go:52] waiting for apiserver process to appear ...
	I0912 23:01:26.946884   61904 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:27.447964   61904 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:27.947775   61904 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:28.447415   61904 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:28.947184   61904 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:28.963513   61904 api_server.go:72] duration metric: took 2.016750981s to wait for apiserver process to appear ...
	I0912 23:01:28.963554   61904 api_server.go:88] waiting for apiserver healthz status ...
	I0912 23:01:28.963577   61904 api_server.go:253] Checking apiserver healthz at https://192.168.72.96:8443/healthz ...
	I0912 23:01:28.964155   61904 api_server.go:269] stopped: https://192.168.72.96:8443/healthz: Get "https://192.168.72.96:8443/healthz": dial tcp 192.168.72.96:8443: connect: connection refused
	I0912 23:01:29.463718   61904 api_server.go:253] Checking apiserver healthz at https://192.168.72.96:8443/healthz ...
	I0912 23:01:31.369513   61904 api_server.go:279] https://192.168.72.96:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0912 23:01:31.369555   61904 api_server.go:103] status: https://192.168.72.96:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0912 23:01:31.369571   61904 api_server.go:253] Checking apiserver healthz at https://192.168.72.96:8443/healthz ...
	I0912 23:01:31.423901   61904 api_server.go:279] https://192.168.72.96:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0912 23:01:31.423936   61904 api_server.go:103] status: https://192.168.72.96:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0912 23:01:31.464148   61904 api_server.go:253] Checking apiserver healthz at https://192.168.72.96:8443/healthz ...
	I0912 23:01:31.469495   61904 api_server.go:279] https://192.168.72.96:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0912 23:01:31.469522   61904 api_server.go:103] status: https://192.168.72.96:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0912 23:01:31.963894   61904 api_server.go:253] Checking apiserver healthz at https://192.168.72.96:8443/healthz ...
	I0912 23:01:31.972640   61904 api_server.go:279] https://192.168.72.96:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0912 23:01:31.972671   61904 api_server.go:103] status: https://192.168.72.96:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0912 23:01:32.463809   61904 api_server.go:253] Checking apiserver healthz at https://192.168.72.96:8443/healthz ...
	I0912 23:01:32.475603   61904 api_server.go:279] https://192.168.72.96:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0912 23:01:32.475640   61904 api_server.go:103] status: https://192.168.72.96:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0912 23:01:32.964250   61904 api_server.go:253] Checking apiserver healthz at https://192.168.72.96:8443/healthz ...
	I0912 23:01:32.968710   61904 api_server.go:279] https://192.168.72.96:8443/healthz returned 200:
	ok
	I0912 23:01:32.975414   61904 api_server.go:141] control plane version: v1.31.1
	I0912 23:01:32.975442   61904 api_server.go:131] duration metric: took 4.011879751s to wait for apiserver health ...
	I0912 23:01:32.975451   61904 cni.go:84] Creating CNI manager for ""
	I0912 23:01:32.975456   61904 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0912 23:01:32.977249   61904 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0912 23:01:29.654841   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:29.655236   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 23:01:29.655264   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 23:01:29.655183   63267 retry.go:31] will retry after 2.34568858s: waiting for machine to come up
	I0912 23:01:32.002617   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:32.003211   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 23:01:32.003242   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 23:01:32.003150   63267 retry.go:31] will retry after 2.273120763s: waiting for machine to come up
	I0912 23:01:34.279665   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:34.280098   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 23:01:34.280122   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 23:01:34.280064   63267 retry.go:31] will retry after 3.937702941s: waiting for machine to come up
	I0912 23:01:32.978610   61904 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0912 23:01:32.994079   61904 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0912 23:01:33.042253   61904 system_pods.go:43] waiting for kube-system pods to appear ...
	I0912 23:01:33.052323   61904 system_pods.go:59] 8 kube-system pods found
	I0912 23:01:33.052361   61904 system_pods.go:61] "coredns-7c65d6cfc9-m8t6h" [93c63198-ebd2-4e88-9be8-912425b1eb84] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0912 23:01:33.052369   61904 system_pods.go:61] "etcd-embed-certs-378112" [cc716756-abda-447a-ad36-bfc89c129bdf] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0912 23:01:33.052376   61904 system_pods.go:61] "kube-apiserver-embed-certs-378112" [039a7348-41bf-481f-9218-3ea0c2ff1373] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0912 23:01:33.052387   61904 system_pods.go:61] "kube-controller-manager-embed-certs-378112" [9bcb8af0-6e4b-405a-94a1-5be70d737cfa] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0912 23:01:33.052396   61904 system_pods.go:61] "kube-proxy-fvbbq" [b172754e-bb5a-40ba-a9be-a7632081defc] Running
	I0912 23:01:33.052406   61904 system_pods.go:61] "kube-scheduler-embed-certs-378112" [f7cb022f-6c15-4c70-916f-39313199effe] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0912 23:01:33.052418   61904 system_pods.go:61] "metrics-server-6867b74b74-kvpqz" [04e47cfd-bada-4cbd-8792-db4edebfb282] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0912 23:01:33.052426   61904 system_pods.go:61] "storage-provisioner" [a1840d2a-8e08-4fa2-9ed5-ac96fb0baf4d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0912 23:01:33.052438   61904 system_pods.go:74] duration metric: took 10.162234ms to wait for pod list to return data ...
	I0912 23:01:33.052448   61904 node_conditions.go:102] verifying NodePressure condition ...
	I0912 23:01:33.060217   61904 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0912 23:01:33.060263   61904 node_conditions.go:123] node cpu capacity is 2
	I0912 23:01:33.060284   61904 node_conditions.go:105] duration metric: took 7.831444ms to run NodePressure ...
	I0912 23:01:33.060338   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:01:33.331554   61904 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0912 23:01:33.337181   61904 kubeadm.go:739] kubelet initialised
	I0912 23:01:33.337202   61904 kubeadm.go:740] duration metric: took 5.622367ms waiting for restarted kubelet to initialise ...
	I0912 23:01:33.337209   61904 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 23:01:33.342427   61904 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-m8t6h" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:33.346602   61904 pod_ready.go:98] node "embed-certs-378112" hosting pod "coredns-7c65d6cfc9-m8t6h" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-378112" has status "Ready":"False"
	I0912 23:01:33.346624   61904 pod_ready.go:82] duration metric: took 4.167981ms for pod "coredns-7c65d6cfc9-m8t6h" in "kube-system" namespace to be "Ready" ...
	E0912 23:01:33.346635   61904 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-378112" hosting pod "coredns-7c65d6cfc9-m8t6h" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-378112" has status "Ready":"False"
	I0912 23:01:33.346643   61904 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-378112" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:33.350240   61904 pod_ready.go:98] node "embed-certs-378112" hosting pod "etcd-embed-certs-378112" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-378112" has status "Ready":"False"
	I0912 23:01:33.350258   61904 pod_ready.go:82] duration metric: took 3.605305ms for pod "etcd-embed-certs-378112" in "kube-system" namespace to be "Ready" ...
	E0912 23:01:33.350267   61904 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-378112" hosting pod "etcd-embed-certs-378112" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-378112" has status "Ready":"False"
	I0912 23:01:33.350274   61904 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-378112" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:33.353756   61904 pod_ready.go:98] node "embed-certs-378112" hosting pod "kube-apiserver-embed-certs-378112" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-378112" has status "Ready":"False"
	I0912 23:01:33.353775   61904 pod_ready.go:82] duration metric: took 3.492388ms for pod "kube-apiserver-embed-certs-378112" in "kube-system" namespace to be "Ready" ...
	E0912 23:01:33.353785   61904 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-378112" hosting pod "kube-apiserver-embed-certs-378112" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-378112" has status "Ready":"False"
	I0912 23:01:33.353792   61904 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-378112" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:33.445529   61904 pod_ready.go:98] node "embed-certs-378112" hosting pod "kube-controller-manager-embed-certs-378112" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-378112" has status "Ready":"False"
	I0912 23:01:33.445574   61904 pod_ready.go:82] duration metric: took 91.770466ms for pod "kube-controller-manager-embed-certs-378112" in "kube-system" namespace to be "Ready" ...
	E0912 23:01:33.445588   61904 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-378112" hosting pod "kube-controller-manager-embed-certs-378112" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-378112" has status "Ready":"False"
	I0912 23:01:33.445597   61904 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-fvbbq" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:33.845443   61904 pod_ready.go:98] node "embed-certs-378112" hosting pod "kube-proxy-fvbbq" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-378112" has status "Ready":"False"
	I0912 23:01:33.845470   61904 pod_ready.go:82] duration metric: took 399.864816ms for pod "kube-proxy-fvbbq" in "kube-system" namespace to be "Ready" ...
	E0912 23:01:33.845479   61904 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-378112" hosting pod "kube-proxy-fvbbq" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-378112" has status "Ready":"False"
	I0912 23:01:33.845484   61904 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-378112" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:34.245943   61904 pod_ready.go:98] node "embed-certs-378112" hosting pod "kube-scheduler-embed-certs-378112" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-378112" has status "Ready":"False"
	I0912 23:01:34.245969   61904 pod_ready.go:82] duration metric: took 400.478543ms for pod "kube-scheduler-embed-certs-378112" in "kube-system" namespace to be "Ready" ...
	E0912 23:01:34.245979   61904 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-378112" hosting pod "kube-scheduler-embed-certs-378112" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-378112" has status "Ready":"False"
	I0912 23:01:34.245985   61904 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:34.651801   61904 pod_ready.go:98] node "embed-certs-378112" hosting pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-378112" has status "Ready":"False"
	I0912 23:01:34.651826   61904 pod_ready.go:82] duration metric: took 405.832705ms for pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace to be "Ready" ...
	E0912 23:01:34.651836   61904 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-378112" hosting pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-378112" has status "Ready":"False"
	I0912 23:01:34.651843   61904 pod_ready.go:39] duration metric: took 1.314625851s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 23:01:34.651859   61904 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0912 23:01:34.665332   61904 ops.go:34] apiserver oom_adj: -16
	I0912 23:01:34.665357   61904 kubeadm.go:597] duration metric: took 8.994610882s to restartPrimaryControlPlane
	I0912 23:01:34.665366   61904 kubeadm.go:394] duration metric: took 9.043796768s to StartCluster
	I0912 23:01:34.665381   61904 settings.go:142] acquiring lock: {Name:mk9c957feafb8d7ccd833ad0c106ef81ecfe5ba8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 23:01:34.665454   61904 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19616-5891/kubeconfig
	I0912 23:01:34.667036   61904 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/kubeconfig: {Name:mkffb46c3e9d2b8baebc7237b48bf41bccf1a52c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 23:01:34.667262   61904 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.96 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0912 23:01:34.667363   61904 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0912 23:01:34.667450   61904 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-378112"
	I0912 23:01:34.667468   61904 config.go:182] Loaded profile config "embed-certs-378112": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 23:01:34.667476   61904 addons.go:69] Setting default-storageclass=true in profile "embed-certs-378112"
	I0912 23:01:34.667543   61904 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-378112"
	I0912 23:01:34.667520   61904 addons.go:69] Setting metrics-server=true in profile "embed-certs-378112"
	I0912 23:01:34.667609   61904 addons.go:234] Setting addon metrics-server=true in "embed-certs-378112"
	W0912 23:01:34.667624   61904 addons.go:243] addon metrics-server should already be in state true
	I0912 23:01:34.667661   61904 host.go:66] Checking if "embed-certs-378112" exists ...
	I0912 23:01:34.667490   61904 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-378112"
	W0912 23:01:34.667710   61904 addons.go:243] addon storage-provisioner should already be in state true
	I0912 23:01:34.667778   61904 host.go:66] Checking if "embed-certs-378112" exists ...
	I0912 23:01:34.667994   61904 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:01:34.668049   61904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:01:34.668138   61904 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:01:34.668155   61904 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:01:34.668171   61904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:01:34.668180   61904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:01:34.670091   61904 out.go:177] * Verifying Kubernetes components...
	I0912 23:01:34.671777   61904 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 23:01:34.683876   61904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37413
	I0912 23:01:34.684025   61904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37371
	I0912 23:01:34.684434   61904 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:01:34.684541   61904 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:01:34.684995   61904 main.go:141] libmachine: Using API Version  1
	I0912 23:01:34.685014   61904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:01:34.685118   61904 main.go:141] libmachine: Using API Version  1
	I0912 23:01:34.685140   61904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:01:34.685468   61904 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:01:34.685468   61904 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:01:34.685668   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetState
	I0912 23:01:34.686104   61904 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:01:34.686156   61904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:01:34.688211   61904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39067
	I0912 23:01:34.688607   61904 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:01:34.689047   61904 addons.go:234] Setting addon default-storageclass=true in "embed-certs-378112"
	W0912 23:01:34.689066   61904 addons.go:243] addon default-storageclass should already be in state true
	I0912 23:01:34.689091   61904 host.go:66] Checking if "embed-certs-378112" exists ...
	I0912 23:01:34.689116   61904 main.go:141] libmachine: Using API Version  1
	I0912 23:01:34.689146   61904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:01:34.689478   61904 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:01:34.689501   61904 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:01:34.689511   61904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:01:34.690057   61904 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:01:34.690083   61904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:01:34.702965   61904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40825
	I0912 23:01:34.703535   61904 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:01:34.704131   61904 main.go:141] libmachine: Using API Version  1
	I0912 23:01:34.704151   61904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:01:34.704178   61904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39229
	I0912 23:01:34.704481   61904 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:01:34.704684   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetState
	I0912 23:01:34.704684   61904 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:01:34.705101   61904 main.go:141] libmachine: Using API Version  1
	I0912 23:01:34.705122   61904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:01:34.705413   61904 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:01:34.705561   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetState
	I0912 23:01:34.706872   61904 main.go:141] libmachine: (embed-certs-378112) Calling .DriverName
	I0912 23:01:34.707279   61904 main.go:141] libmachine: (embed-certs-378112) Calling .DriverName
	I0912 23:01:34.708583   61904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36665
	I0912 23:01:34.708752   61904 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 23:01:34.708828   61904 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0912 23:01:34.708966   61904 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:01:34.709420   61904 main.go:141] libmachine: Using API Version  1
	I0912 23:01:34.709442   61904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:01:34.709901   61904 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:01:34.710348   61904 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:01:34.710352   61904 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0912 23:01:34.710368   61904 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0912 23:01:34.710382   61904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:01:34.710397   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHHostname
	I0912 23:01:34.710705   61904 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0912 23:01:34.713777   61904 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0912 23:01:34.713809   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHHostname
	I0912 23:01:34.717857   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:34.718160   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:34.718335   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:34.718358   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:34.718442   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:34.718473   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:34.718651   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHPort
	I0912 23:01:34.718727   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHPort
	I0912 23:01:34.718812   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHKeyPath
	I0912 23:01:34.718866   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHKeyPath
	I0912 23:01:34.718988   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHUsername
	I0912 23:01:34.719039   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHUsername
	I0912 23:01:34.719144   61904 sshutil.go:53] new ssh client: &{IP:192.168.72.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/embed-certs-378112/id_rsa Username:docker}
	I0912 23:01:34.719169   61904 sshutil.go:53] new ssh client: &{IP:192.168.72.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/embed-certs-378112/id_rsa Username:docker}
	I0912 23:01:34.730675   61904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39163
	I0912 23:01:34.731210   61904 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:01:34.731901   61904 main.go:141] libmachine: Using API Version  1
	I0912 23:01:34.731934   61904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:01:34.732317   61904 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:01:34.732493   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetState
	I0912 23:01:34.734338   61904 main.go:141] libmachine: (embed-certs-378112) Calling .DriverName
	I0912 23:01:34.734601   61904 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0912 23:01:34.734615   61904 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0912 23:01:34.734637   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHHostname
	I0912 23:01:34.737958   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:34.738401   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:34.738429   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:34.738637   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHPort
	I0912 23:01:34.738823   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHKeyPath
	I0912 23:01:34.739015   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHUsername
	I0912 23:01:34.739166   61904 sshutil.go:53] new ssh client: &{IP:192.168.72.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/embed-certs-378112/id_rsa Username:docker}
	I0912 23:01:34.873510   61904 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0912 23:01:34.891329   61904 node_ready.go:35] waiting up to 6m0s for node "embed-certs-378112" to be "Ready" ...
	I0912 23:01:34.991135   61904 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0912 23:01:34.991169   61904 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0912 23:01:35.007241   61904 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0912 23:01:35.018684   61904 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0912 23:01:35.018712   61904 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0912 23:01:35.028842   61904 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0912 23:01:35.047693   61904 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0912 23:01:35.047720   61904 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0912 23:01:35.101399   61904 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0912 23:01:36.046822   61904 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.03953394s)
	I0912 23:01:36.046851   61904 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.017977641s)
	I0912 23:01:36.046882   61904 main.go:141] libmachine: Making call to close driver server
	I0912 23:01:36.046889   61904 main.go:141] libmachine: Making call to close driver server
	I0912 23:01:36.046900   61904 main.go:141] libmachine: (embed-certs-378112) Calling .Close
	I0912 23:01:36.046901   61904 main.go:141] libmachine: (embed-certs-378112) Calling .Close
	I0912 23:01:36.047207   61904 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:01:36.047221   61904 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:01:36.047230   61904 main.go:141] libmachine: Making call to close driver server
	I0912 23:01:36.047237   61904 main.go:141] libmachine: (embed-certs-378112) Calling .Close
	I0912 23:01:36.047269   61904 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:01:36.047280   61904 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:01:36.047312   61904 main.go:141] libmachine: Making call to close driver server
	I0912 23:01:36.047378   61904 main.go:141] libmachine: (embed-certs-378112) Calling .Close
	I0912 23:01:36.047577   61904 main.go:141] libmachine: (embed-certs-378112) DBG | Closing plugin on server side
	I0912 23:01:36.047624   61904 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:01:36.047637   61904 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:01:36.047639   61904 main.go:141] libmachine: (embed-certs-378112) DBG | Closing plugin on server side
	I0912 23:01:36.047691   61904 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:01:36.047705   61904 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:01:36.055732   61904 main.go:141] libmachine: Making call to close driver server
	I0912 23:01:36.055751   61904 main.go:141] libmachine: (embed-certs-378112) Calling .Close
	I0912 23:01:36.056018   61904 main.go:141] libmachine: (embed-certs-378112) DBG | Closing plugin on server side
	I0912 23:01:36.056072   61904 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:01:36.056085   61904 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:01:36.062586   61904 main.go:141] libmachine: Making call to close driver server
	I0912 23:01:36.062612   61904 main.go:141] libmachine: (embed-certs-378112) Calling .Close
	I0912 23:01:36.062906   61904 main.go:141] libmachine: (embed-certs-378112) DBG | Closing plugin on server side
	I0912 23:01:36.062920   61904 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:01:36.062936   61904 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:01:36.062955   61904 main.go:141] libmachine: Making call to close driver server
	I0912 23:01:36.062979   61904 main.go:141] libmachine: (embed-certs-378112) Calling .Close
	I0912 23:01:36.063225   61904 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:01:36.063243   61904 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:01:36.063254   61904 addons.go:475] Verifying addon metrics-server=true in "embed-certs-378112"
	I0912 23:01:36.065321   61904 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0912 23:01:38.221947   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.222408   62386 main.go:141] libmachine: (old-k8s-version-642238) Found IP for machine: 192.168.61.69
	I0912 23:01:38.222437   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has current primary IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.222447   62386 main.go:141] libmachine: (old-k8s-version-642238) Reserving static IP address...
	I0912 23:01:38.222943   62386 main.go:141] libmachine: (old-k8s-version-642238) Reserved static IP address: 192.168.61.69
	I0912 23:01:38.222983   62386 main.go:141] libmachine: (old-k8s-version-642238) Waiting for SSH to be available...
	I0912 23:01:38.223007   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "old-k8s-version-642238", mac: "52:54:00:75:cb:57", ip: "192.168.61.69"} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:38.223057   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | skip adding static IP to network mk-old-k8s-version-642238 - found existing host DHCP lease matching {name: "old-k8s-version-642238", mac: "52:54:00:75:cb:57", ip: "192.168.61.69"}
	I0912 23:01:38.223079   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | Getting to WaitForSSH function...
	I0912 23:01:38.225720   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.226121   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:38.226155   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.226286   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | Using SSH client type: external
	I0912 23:01:38.226308   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | Using SSH private key: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/old-k8s-version-642238/id_rsa (-rw-------)
	I0912 23:01:38.226341   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.69 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19616-5891/.minikube/machines/old-k8s-version-642238/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0912 23:01:38.226357   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | About to run SSH command:
	I0912 23:01:38.226368   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | exit 0
	I0912 23:01:38.357945   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | SSH cmd err, output: <nil>: 
	I0912 23:01:38.358320   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetConfigRaw
	I0912 23:01:38.358887   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetIP
	I0912 23:01:38.361728   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.362098   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:38.362133   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.362372   62386 profile.go:143] Saving config to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238/config.json ...
	I0912 23:01:38.362640   62386 machine.go:93] provisionDockerMachine start ...
	I0912 23:01:38.362663   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .DriverName
	I0912 23:01:38.362897   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHHostname
	I0912 23:01:38.365251   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.365627   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:38.365656   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.365798   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHPort
	I0912 23:01:38.365969   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 23:01:38.366123   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 23:01:38.366251   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHUsername
	I0912 23:01:38.366468   62386 main.go:141] libmachine: Using SSH client type: native
	I0912 23:01:38.366691   62386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.61.69 22 <nil> <nil>}
	I0912 23:01:38.366707   62386 main.go:141] libmachine: About to run SSH command:
	hostname
	I0912 23:01:38.477548   62386 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0912 23:01:38.477575   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetMachineName
	I0912 23:01:38.477818   62386 buildroot.go:166] provisioning hostname "old-k8s-version-642238"
	I0912 23:01:38.477843   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetMachineName
	I0912 23:01:38.478029   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHHostname
	I0912 23:01:38.480368   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.480660   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:38.480683   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.480802   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHPort
	I0912 23:01:38.480981   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 23:01:38.481142   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 23:01:38.481287   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHUsername
	I0912 23:01:38.481630   62386 main.go:141] libmachine: Using SSH client type: native
	I0912 23:01:38.481846   62386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.61.69 22 <nil> <nil>}
	I0912 23:01:38.481864   62386 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-642238 && echo "old-k8s-version-642238" | sudo tee /etc/hostname
	I0912 23:01:38.606686   62386 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-642238
	
	I0912 23:01:38.606721   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHHostname
	I0912 23:01:38.609331   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.609682   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:38.609705   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.609867   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHPort
	I0912 23:01:38.610071   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 23:01:38.610297   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 23:01:38.610463   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHUsername
	I0912 23:01:38.610792   62386 main.go:141] libmachine: Using SSH client type: native
	I0912 23:01:38.610974   62386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.61.69 22 <nil> <nil>}
	I0912 23:01:38.610991   62386 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-642238' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-642238/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-642238' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0912 23:01:38.729561   62386 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0912 23:01:38.729588   62386 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19616-5891/.minikube CaCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19616-5891/.minikube}
	I0912 23:01:38.729664   62386 buildroot.go:174] setting up certificates
	I0912 23:01:38.729674   62386 provision.go:84] configureAuth start
	I0912 23:01:38.729686   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetMachineName
	I0912 23:01:38.729945   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetIP
	I0912 23:01:38.732718   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.733269   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:38.733302   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.733481   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHHostname
	I0912 23:01:38.735610   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.735925   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:38.735950   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.736074   62386 provision.go:143] copyHostCerts
	I0912 23:01:38.736129   62386 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem, removing ...
	I0912 23:01:38.736142   62386 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem
	I0912 23:01:38.736197   62386 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem (1082 bytes)
	I0912 23:01:38.736293   62386 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem, removing ...
	I0912 23:01:38.736306   62386 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem
	I0912 23:01:38.736330   62386 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem (1123 bytes)
	I0912 23:01:38.736390   62386 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem, removing ...
	I0912 23:01:38.736397   62386 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem
	I0912 23:01:38.736413   62386 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem (1679 bytes)
	I0912 23:01:38.736460   62386 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-642238 san=[127.0.0.1 192.168.61.69 localhost minikube old-k8s-version-642238]
	I0912 23:01:38.940760   62386 provision.go:177] copyRemoteCerts
	I0912 23:01:38.940819   62386 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0912 23:01:38.940846   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHHostname
	I0912 23:01:38.943954   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.944274   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:38.944304   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.944479   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHPort
	I0912 23:01:38.944688   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 23:01:38.944884   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHUsername
	I0912 23:01:38.945023   62386 sshutil.go:53] new ssh client: &{IP:192.168.61.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/old-k8s-version-642238/id_rsa Username:docker}
	I0912 23:01:39.032396   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0912 23:01:39.055559   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0912 23:01:39.081979   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0912 23:01:39.108245   62386 provision.go:87] duration metric: took 378.558125ms to configureAuth
	I0912 23:01:39.108276   62386 buildroot.go:189] setting minikube options for container-runtime
	I0912 23:01:39.108456   62386 config.go:182] Loaded profile config "old-k8s-version-642238": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0912 23:01:39.108515   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHHostname
	I0912 23:01:39.111321   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:39.111737   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:39.111759   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:39.111956   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHPort
	I0912 23:01:39.112175   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 23:01:39.112399   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 23:01:39.112552   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHUsername
	I0912 23:01:39.112721   62386 main.go:141] libmachine: Using SSH client type: native
	I0912 23:01:39.112939   62386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.61.69 22 <nil> <nil>}
	I0912 23:01:39.112955   62386 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0912 23:01:39.582214   62943 start.go:364] duration metric: took 1m17.588760987s to acquireMachinesLock for "no-preload-380092"
	I0912 23:01:39.582282   62943 start.go:96] Skipping create...Using existing machine configuration
	I0912 23:01:39.582290   62943 fix.go:54] fixHost starting: 
	I0912 23:01:39.582684   62943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:01:39.582733   62943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:01:39.598752   62943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39263
	I0912 23:01:39.599113   62943 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:01:39.599558   62943 main.go:141] libmachine: Using API Version  1
	I0912 23:01:39.599578   62943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:01:39.599939   62943 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:01:39.600128   62943 main.go:141] libmachine: (no-preload-380092) Calling .DriverName
	I0912 23:01:39.600299   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetState
	I0912 23:01:39.601919   62943 fix.go:112] recreateIfNeeded on no-preload-380092: state=Stopped err=<nil>
	I0912 23:01:39.601948   62943 main.go:141] libmachine: (no-preload-380092) Calling .DriverName
	W0912 23:01:39.602105   62943 fix.go:138] unexpected machine state, will restart: <nil>
	I0912 23:01:39.604113   62943 out.go:177] * Restarting existing kvm2 VM for "no-preload-380092" ...
	I0912 23:01:36.066914   61904 addons.go:510] duration metric: took 1.399549943s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0912 23:01:36.894531   61904 node_ready.go:53] node "embed-certs-378112" has status "Ready":"False"
	I0912 23:01:38.895084   61904 node_ready.go:53] node "embed-certs-378112" has status "Ready":"False"
	I0912 23:01:39.333662   62386 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0912 23:01:39.333695   62386 machine.go:96] duration metric: took 971.039233ms to provisionDockerMachine
	I0912 23:01:39.333712   62386 start.go:293] postStartSetup for "old-k8s-version-642238" (driver="kvm2")
	I0912 23:01:39.333728   62386 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0912 23:01:39.333755   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .DriverName
	I0912 23:01:39.334078   62386 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0912 23:01:39.334110   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHHostname
	I0912 23:01:39.336759   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:39.337144   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:39.337185   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:39.337326   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHPort
	I0912 23:01:39.337492   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 23:01:39.337649   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHUsername
	I0912 23:01:39.337757   62386 sshutil.go:53] new ssh client: &{IP:192.168.61.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/old-k8s-version-642238/id_rsa Username:docker}
	I0912 23:01:39.424344   62386 ssh_runner.go:195] Run: cat /etc/os-release
	I0912 23:01:39.428560   62386 info.go:137] Remote host: Buildroot 2023.02.9
	I0912 23:01:39.428586   62386 filesync.go:126] Scanning /home/jenkins/minikube-integration/19616-5891/.minikube/addons for local assets ...
	I0912 23:01:39.428651   62386 filesync.go:126] Scanning /home/jenkins/minikube-integration/19616-5891/.minikube/files for local assets ...
	I0912 23:01:39.428720   62386 filesync.go:149] local asset: /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem -> 130832.pem in /etc/ssl/certs
	I0912 23:01:39.428822   62386 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0912 23:01:39.438578   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem --> /etc/ssl/certs/130832.pem (1708 bytes)
	I0912 23:01:39.466955   62386 start.go:296] duration metric: took 133.228748ms for postStartSetup
	I0912 23:01:39.466993   62386 fix.go:56] duration metric: took 19.507989112s for fixHost
	I0912 23:01:39.467011   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHHostname
	I0912 23:01:39.469732   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:39.470141   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:39.470177   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:39.470446   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHPort
	I0912 23:01:39.470662   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 23:01:39.470820   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 23:01:39.470952   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHUsername
	I0912 23:01:39.471079   62386 main.go:141] libmachine: Using SSH client type: native
	I0912 23:01:39.471234   62386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.61.69 22 <nil> <nil>}
	I0912 23:01:39.471243   62386 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0912 23:01:39.582078   62386 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726182099.559242358
	
	I0912 23:01:39.582101   62386 fix.go:216] guest clock: 1726182099.559242358
	I0912 23:01:39.582108   62386 fix.go:229] Guest: 2024-09-12 23:01:39.559242358 +0000 UTC Remote: 2024-09-12 23:01:39.466996536 +0000 UTC m=+200.180679357 (delta=92.245822ms)
	I0912 23:01:39.582148   62386 fix.go:200] guest clock delta is within tolerance: 92.245822ms
	I0912 23:01:39.582153   62386 start.go:83] releasing machines lock for "old-k8s-version-642238", held for 19.623187273s
	I0912 23:01:39.582177   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .DriverName
	I0912 23:01:39.582449   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetIP
	I0912 23:01:39.585170   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:39.585556   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:39.585595   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:39.585770   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .DriverName
	I0912 23:01:39.586282   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .DriverName
	I0912 23:01:39.586471   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .DriverName
	I0912 23:01:39.586548   62386 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0912 23:01:39.586590   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHHostname
	I0912 23:01:39.586706   62386 ssh_runner.go:195] Run: cat /version.json
	I0912 23:01:39.586734   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHHostname
	I0912 23:01:39.589355   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:39.589769   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:39.589802   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:39.589824   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:39.589990   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHPort
	I0912 23:01:39.590163   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 23:01:39.590229   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:39.590258   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:39.590331   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHUsername
	I0912 23:01:39.590413   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHPort
	I0912 23:01:39.590491   62386 sshutil.go:53] new ssh client: &{IP:192.168.61.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/old-k8s-version-642238/id_rsa Username:docker}
	I0912 23:01:39.590525   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 23:01:39.590621   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHUsername
	I0912 23:01:39.590717   62386 sshutil.go:53] new ssh client: &{IP:192.168.61.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/old-k8s-version-642238/id_rsa Username:docker}
	I0912 23:01:39.709188   62386 ssh_runner.go:195] Run: systemctl --version
	I0912 23:01:39.714703   62386 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0912 23:01:39.867112   62386 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0912 23:01:39.874818   62386 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0912 23:01:39.874897   62386 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0912 23:01:39.894532   62386 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0912 23:01:39.894558   62386 start.go:495] detecting cgroup driver to use...
	I0912 23:01:39.894611   62386 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0912 23:01:39.911715   62386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0912 23:01:39.927113   62386 docker.go:217] disabling cri-docker service (if available) ...
	I0912 23:01:39.927181   62386 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0912 23:01:39.946720   62386 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0912 23:01:39.966602   62386 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0912 23:01:40.132813   62386 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0912 23:01:40.318613   62386 docker.go:233] disabling docker service ...
	I0912 23:01:40.318764   62386 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0912 23:01:40.337557   62386 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0912 23:01:40.355312   62386 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0912 23:01:40.507081   62386 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0912 23:01:40.623129   62386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0912 23:01:40.637980   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0912 23:01:40.658137   62386 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0912 23:01:40.658197   62386 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:40.672985   62386 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0912 23:01:40.673041   62386 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:40.687684   62386 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:40.699586   62386 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:40.711468   62386 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0912 23:01:40.722380   62386 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0912 23:01:40.733057   62386 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0912 23:01:40.733126   62386 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0912 23:01:40.748577   62386 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0912 23:01:40.758735   62386 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 23:01:40.883686   62386 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0912 23:01:40.977996   62386 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0912 23:01:40.978065   62386 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0912 23:01:40.984192   62386 start.go:563] Will wait 60s for crictl version
	I0912 23:01:40.984257   62386 ssh_runner.go:195] Run: which crictl
	I0912 23:01:40.988379   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0912 23:01:41.027758   62386 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0912 23:01:41.027855   62386 ssh_runner.go:195] Run: crio --version
	I0912 23:01:41.057198   62386 ssh_runner.go:195] Run: crio --version
	I0912 23:01:41.091414   62386 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
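The runtime setup logged above (crictl endpoint, pause image, cgroup driver, then a CRI-O restart) boils down to a handful of shell edits. A minimal sketch of the equivalent commands, with paths and values copied from the log rather than from minikube's source:

    # point crictl at the CRI-O socket
    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml

    # pin the pause image and cgroup driver in the CRI-O drop-in config
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf

    # apply and verify the runtime answers on the socket
    sudo systemctl daemon-reload && sudo systemctl restart crio
    sudo crictl version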
	I0912 23:01:39.605199   62943 main.go:141] libmachine: (no-preload-380092) Calling .Start
	I0912 23:01:39.605356   62943 main.go:141] libmachine: (no-preload-380092) Ensuring networks are active...
	I0912 23:01:39.606295   62943 main.go:141] libmachine: (no-preload-380092) Ensuring network default is active
	I0912 23:01:39.606540   62943 main.go:141] libmachine: (no-preload-380092) Ensuring network mk-no-preload-380092 is active
	I0912 23:01:39.606902   62943 main.go:141] libmachine: (no-preload-380092) Getting domain xml...
	I0912 23:01:39.607582   62943 main.go:141] libmachine: (no-preload-380092) Creating domain...
	I0912 23:01:40.958156   62943 main.go:141] libmachine: (no-preload-380092) Waiting to get IP...
	I0912 23:01:40.959304   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:40.959775   62943 main.go:141] libmachine: (no-preload-380092) DBG | unable to find current IP address of domain no-preload-380092 in network mk-no-preload-380092
	I0912 23:01:40.959848   62943 main.go:141] libmachine: (no-preload-380092) DBG | I0912 23:01:40.959761   63470 retry.go:31] will retry after 260.507819ms: waiting for machine to come up
	I0912 23:01:41.222360   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:41.222860   62943 main.go:141] libmachine: (no-preload-380092) DBG | unable to find current IP address of domain no-preload-380092 in network mk-no-preload-380092
	I0912 23:01:41.222897   62943 main.go:141] libmachine: (no-preload-380092) DBG | I0912 23:01:41.222793   63470 retry.go:31] will retry after 325.875384ms: waiting for machine to come up
	I0912 23:01:41.550174   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:41.550617   62943 main.go:141] libmachine: (no-preload-380092) DBG | unable to find current IP address of domain no-preload-380092 in network mk-no-preload-380092
	I0912 23:01:41.550642   62943 main.go:141] libmachine: (no-preload-380092) DBG | I0912 23:01:41.550563   63470 retry.go:31] will retry after 466.239328ms: waiting for machine to come up
	I0912 23:01:41.092686   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetIP
	I0912 23:01:41.096196   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:41.096806   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:41.096843   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:41.097167   62386 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0912 23:01:41.101509   62386 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0912 23:01:41.115914   62386 kubeadm.go:883] updating cluster {Name:old-k8s-version-642238 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-642238 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.69 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0912 23:01:41.116230   62386 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0912 23:01:41.116327   62386 ssh_runner.go:195] Run: sudo crictl images --output json
	I0912 23:01:41.164309   62386 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0912 23:01:41.164389   62386 ssh_runner.go:195] Run: which lz4
	I0912 23:01:41.168669   62386 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0912 23:01:41.172973   62386 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0912 23:01:41.173008   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0912 23:01:42.662843   62386 crio.go:462] duration metric: took 1.494204864s to copy over tarball
	I0912 23:01:42.662921   62386 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
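For the v1.20.0/cri-o preload handled above, the check-and-extract sequence in the log corresponds roughly to the following shell steps. The tarball path and tar flags are copied from the log; the transfer itself is done over SSH by ssh_runner, so it is only noted in a comment here:

    # does the runtime already have the expected images?
    sudo crictl images --output json
    # is lz4 available, and is the tarball already on the node?
    which lz4
    stat -c "%s %y" /preloaded.tar.lz4
    # after copying preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 to /preloaded.tar.lz4:
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm -f /preloaded.tar.lz4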
	I0912 23:01:40.895957   61904 node_ready.go:53] node "embed-certs-378112" has status "Ready":"False"
	I0912 23:01:41.896265   61904 node_ready.go:49] node "embed-certs-378112" has status "Ready":"True"
	I0912 23:01:41.896293   61904 node_ready.go:38] duration metric: took 7.004932553s for node "embed-certs-378112" to be "Ready" ...
	I0912 23:01:41.896304   61904 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 23:01:41.903665   61904 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-m8t6h" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:41.911837   61904 pod_ready.go:93] pod "coredns-7c65d6cfc9-m8t6h" in "kube-system" namespace has status "Ready":"True"
	I0912 23:01:41.911862   61904 pod_ready.go:82] duration metric: took 8.168974ms for pod "coredns-7c65d6cfc9-m8t6h" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:41.911875   61904 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-378112" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:41.920007   61904 pod_ready.go:93] pod "etcd-embed-certs-378112" in "kube-system" namespace has status "Ready":"True"
	I0912 23:01:41.920032   61904 pod_ready.go:82] duration metric: took 8.150491ms for pod "etcd-embed-certs-378112" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:41.920044   61904 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-378112" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:43.928585   61904 pod_ready.go:103] pod "kube-apiserver-embed-certs-378112" in "kube-system" namespace has status "Ready":"False"
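The pod_ready polling above can be reproduced by hand with kubectl. A minimal equivalent, assuming the kubeconfig context is named after the profile (embed-certs-378112):

    kubectl --context embed-certs-378112 -n kube-system get pods
    kubectl --context embed-certs-378112 -n kube-system wait --for=condition=Ready \
      pod/etcd-embed-certs-378112 pod/kube-apiserver-embed-certs-378112 --timeout=6m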
	I0912 23:01:42.018082   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:42.018505   62943 main.go:141] libmachine: (no-preload-380092) DBG | unable to find current IP address of domain no-preload-380092 in network mk-no-preload-380092
	I0912 23:01:42.018534   62943 main.go:141] libmachine: (no-preload-380092) DBG | I0912 23:01:42.018465   63470 retry.go:31] will retry after 538.2428ms: waiting for machine to come up
	I0912 23:01:42.558175   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:42.558612   62943 main.go:141] libmachine: (no-preload-380092) DBG | unable to find current IP address of domain no-preload-380092 in network mk-no-preload-380092
	I0912 23:01:42.558649   62943 main.go:141] libmachine: (no-preload-380092) DBG | I0912 23:01:42.558579   63470 retry.go:31] will retry after 653.024741ms: waiting for machine to come up
	I0912 23:01:43.213349   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:43.213963   62943 main.go:141] libmachine: (no-preload-380092) DBG | unable to find current IP address of domain no-preload-380092 in network mk-no-preload-380092
	I0912 23:01:43.213991   62943 main.go:141] libmachine: (no-preload-380092) DBG | I0912 23:01:43.213926   63470 retry.go:31] will retry after 936.091256ms: waiting for machine to come up
	I0912 23:01:44.152459   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:44.152892   62943 main.go:141] libmachine: (no-preload-380092) DBG | unable to find current IP address of domain no-preload-380092 in network mk-no-preload-380092
	I0912 23:01:44.152931   62943 main.go:141] libmachine: (no-preload-380092) DBG | I0912 23:01:44.152841   63470 retry.go:31] will retry after 947.677491ms: waiting for machine to come up
	I0912 23:01:45.102330   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:45.102777   62943 main.go:141] libmachine: (no-preload-380092) DBG | unable to find current IP address of domain no-preload-380092 in network mk-no-preload-380092
	I0912 23:01:45.102803   62943 main.go:141] libmachine: (no-preload-380092) DBG | I0912 23:01:45.102730   63470 retry.go:31] will retry after 1.076341568s: waiting for machine to come up
	I0912 23:01:46.181138   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:46.181600   62943 main.go:141] libmachine: (no-preload-380092) DBG | unable to find current IP address of domain no-preload-380092 in network mk-no-preload-380092
	I0912 23:01:46.181659   62943 main.go:141] libmachine: (no-preload-380092) DBG | I0912 23:01:46.181529   63470 retry.go:31] will retry after 1.256599307s: waiting for machine to come up
	I0912 23:01:45.728604   62386 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.065648968s)
	I0912 23:01:45.728636   62386 crio.go:469] duration metric: took 3.065759694s to extract the tarball
	I0912 23:01:45.728646   62386 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0912 23:01:45.770020   62386 ssh_runner.go:195] Run: sudo crictl images --output json
	I0912 23:01:45.803238   62386 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0912 23:01:45.803263   62386 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0912 23:01:45.803356   62386 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0912 23:01:45.803393   62386 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0912 23:01:45.803411   62386 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0912 23:01:45.803433   62386 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0912 23:01:45.803482   62386 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0912 23:01:45.803487   62386 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0912 23:01:45.803358   62386 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 23:01:45.803456   62386 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0912 23:01:45.805495   62386 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0912 23:01:45.805522   62386 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0912 23:01:45.805549   62386 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0912 23:01:45.805538   62386 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0912 23:01:45.805583   62386 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0912 23:01:45.805500   62386 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0912 23:01:45.805498   62386 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 23:01:45.805503   62386 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0912 23:01:46.036001   62386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0912 23:01:46.053248   62386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0912 23:01:46.053339   62386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0912 23:01:46.055973   62386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0912 23:01:46.070206   62386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0912 23:01:46.079999   62386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0912 23:01:46.109937   62386 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0912 23:01:46.109989   62386 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0912 23:01:46.110039   62386 ssh_runner.go:195] Run: which crictl
	I0912 23:01:46.162798   62386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0912 23:01:46.224302   62386 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0912 23:01:46.224345   62386 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0912 23:01:46.224375   62386 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0912 23:01:46.224392   62386 ssh_runner.go:195] Run: which crictl
	I0912 23:01:46.224413   62386 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0912 23:01:46.224418   62386 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0912 23:01:46.224452   62386 ssh_runner.go:195] Run: which crictl
	I0912 23:01:46.224451   62386 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0912 23:01:46.224495   62386 ssh_runner.go:195] Run: which crictl
	I0912 23:01:46.224510   62386 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0912 23:01:46.224529   62386 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0912 23:01:46.224551   62386 ssh_runner.go:195] Run: which crictl
	I0912 23:01:46.243459   62386 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0912 23:01:46.243561   62386 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0912 23:01:46.243584   62386 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0912 23:01:46.243596   62386 ssh_runner.go:195] Run: which crictl
	I0912 23:01:46.243619   62386 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0912 23:01:46.243648   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0912 23:01:46.243658   62386 ssh_runner.go:195] Run: which crictl
	I0912 23:01:46.243619   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0912 23:01:46.243504   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0912 23:01:46.243737   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0912 23:01:46.243786   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0912 23:01:46.347085   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0912 23:01:46.347138   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0912 23:01:46.347184   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0912 23:01:46.354548   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0912 23:01:46.354548   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0912 23:01:46.354623   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0912 23:01:46.354658   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0912 23:01:46.490548   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0912 23:01:46.490655   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0912 23:01:46.490664   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0912 23:01:46.519541   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0912 23:01:46.519572   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0912 23:01:46.519583   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0912 23:01:46.519631   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0912 23:01:46.650941   62386 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0912 23:01:46.651102   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0912 23:01:46.651115   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0912 23:01:46.665864   62386 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0912 23:01:46.669346   62386 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0912 23:01:46.669393   62386 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0912 23:01:46.669433   62386 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0912 23:01:46.713909   62386 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0912 23:01:46.713928   62386 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0912 23:01:46.947952   62386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 23:01:47.093308   62386 cache_images.go:92] duration metric: took 1.29002863s to LoadCachedImages
	W0912 23:01:47.093414   62386 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I0912 23:01:47.093432   62386 kubeadm.go:934] updating node { 192.168.61.69 8443 v1.20.0 crio true true} ...
	I0912 23:01:47.093567   62386 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-642238 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.69
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-642238 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0912 23:01:47.093677   62386 ssh_runner.go:195] Run: crio config
	I0912 23:01:47.140625   62386 cni.go:84] Creating CNI manager for ""
	I0912 23:01:47.140651   62386 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0912 23:01:47.140665   62386 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0912 23:01:47.140683   62386 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.69 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-642238 NodeName:old-k8s-version-642238 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.69"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.69 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0912 23:01:47.140848   62386 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.69
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-642238"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.69
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.69"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0912 23:01:47.140918   62386 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0912 23:01:47.151096   62386 binaries.go:44] Found k8s binaries, skipping transfer
	I0912 23:01:47.151174   62386 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0912 23:01:47.161100   62386 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0912 23:01:47.178267   62386 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0912 23:01:47.196468   62386 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0912 23:01:47.215215   62386 ssh_runner.go:195] Run: grep 192.168.61.69	control-plane.minikube.internal$ /etc/hosts
	I0912 23:01:47.219835   62386 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.69	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0912 23:01:47.234386   62386 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 23:01:47.374152   62386 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0912 23:01:47.394130   62386 certs.go:68] Setting up /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238 for IP: 192.168.61.69
	I0912 23:01:47.394155   62386 certs.go:194] generating shared ca certs ...
	I0912 23:01:47.394174   62386 certs.go:226] acquiring lock for ca certs: {Name:mk5e19f2c2757edc3fcb6b43a39efee0885b349c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 23:01:47.394399   62386 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.key
	I0912 23:01:47.394459   62386 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.key
	I0912 23:01:47.394474   62386 certs.go:256] generating profile certs ...
	I0912 23:01:47.394591   62386 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238/client.key
	I0912 23:01:47.394663   62386 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238/apiserver.key.fcb0a37b
	I0912 23:01:47.394713   62386 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238/proxy-client.key
	I0912 23:01:47.394881   62386 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083.pem (1338 bytes)
	W0912 23:01:47.394922   62386 certs.go:480] ignoring /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083_empty.pem, impossibly tiny 0 bytes
	I0912 23:01:47.394936   62386 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem (1679 bytes)
	I0912 23:01:47.394980   62386 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem (1082 bytes)
	I0912 23:01:47.395016   62386 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem (1123 bytes)
	I0912 23:01:47.395050   62386 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem (1679 bytes)
	I0912 23:01:47.395103   62386 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem (1708 bytes)
	I0912 23:01:47.396058   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0912 23:01:47.436356   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0912 23:01:47.470442   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0912 23:01:47.496440   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0912 23:01:47.522541   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0912 23:01:47.547406   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0912 23:01:47.575687   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0912 23:01:47.602110   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0912 23:01:47.628233   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0912 23:01:47.659161   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083.pem --> /usr/share/ca-certificates/13083.pem (1338 bytes)
	I0912 23:01:47.698813   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem --> /usr/share/ca-certificates/130832.pem (1708 bytes)
	I0912 23:01:47.722494   62386 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0912 23:01:47.739479   62386 ssh_runner.go:195] Run: openssl version
	I0912 23:01:47.745476   62386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130832.pem && ln -fs /usr/share/ca-certificates/130832.pem /etc/ssl/certs/130832.pem"
	I0912 23:01:47.756396   62386 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130832.pem
	I0912 23:01:47.760904   62386 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 12 21:47 /usr/share/ca-certificates/130832.pem
	I0912 23:01:47.760983   62386 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130832.pem
	I0912 23:01:47.767122   62386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130832.pem /etc/ssl/certs/3ec20f2e.0"
	I0912 23:01:47.778372   62386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0912 23:01:47.789359   62386 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0912 23:01:47.794138   62386 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 12 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0912 23:01:47.794205   62386 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0912 23:01:47.799780   62386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0912 23:01:47.810735   62386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13083.pem && ln -fs /usr/share/ca-certificates/13083.pem /etc/ssl/certs/13083.pem"
	I0912 23:01:47.821361   62386 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13083.pem
	I0912 23:01:47.825785   62386 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 12 21:47 /usr/share/ca-certificates/13083.pem
	I0912 23:01:47.825848   62386 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13083.pem
	I0912 23:01:47.832591   62386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13083.pem /etc/ssl/certs/51391683.0"
	I0912 23:01:47.844637   62386 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0912 23:01:47.849313   62386 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0912 23:01:47.855337   62386 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0912 23:01:47.861492   62386 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0912 23:01:47.868028   62386 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0912 23:01:47.874215   62386 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0912 23:01:47.880279   62386 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0912 23:01:47.886478   62386 kubeadm.go:392] StartCluster: {Name:old-k8s-version-642238 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-642238 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.69 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 23:01:47.886579   62386 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0912 23:01:47.886665   62386 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0912 23:01:47.929887   62386 cri.go:89] found id: ""
	I0912 23:01:47.929965   62386 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0912 23:01:47.940988   62386 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0912 23:01:47.941014   62386 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0912 23:01:47.941071   62386 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0912 23:01:47.951357   62386 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0912 23:01:47.952314   62386 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-642238" does not appear in /home/jenkins/minikube-integration/19616-5891/kubeconfig
	I0912 23:01:47.952929   62386 kubeconfig.go:62] /home/jenkins/minikube-integration/19616-5891/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-642238" cluster setting kubeconfig missing "old-k8s-version-642238" context setting]
	I0912 23:01:47.953869   62386 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/kubeconfig: {Name:mkffb46c3e9d2b8baebc7237b48bf41bccf1a52c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 23:01:47.961244   62386 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0912 23:01:47.973427   62386 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.69
	I0912 23:01:47.973462   62386 kubeadm.go:1160] stopping kube-system containers ...
	I0912 23:01:47.973476   62386 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0912 23:01:47.973530   62386 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0912 23:01:48.008401   62386 cri.go:89] found id: ""
	I0912 23:01:48.008479   62386 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0912 23:01:48.024605   62386 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0912 23:01:48.034256   62386 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0912 23:01:48.034282   62386 kubeadm.go:157] found existing configuration files:
	
	I0912 23:01:48.034341   62386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0912 23:01:48.043468   62386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0912 23:01:48.043533   62386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0912 23:01:48.053241   62386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0912 23:01:48.062653   62386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0912 23:01:48.062728   62386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0912 23:01:48.073213   62386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0912 23:01:48.085060   62386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0912 23:01:48.085136   62386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0912 23:01:48.095722   62386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0912 23:01:48.105099   62386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0912 23:01:48.105169   62386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0912 23:01:48.114362   62386 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0912 23:01:48.123856   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:01:48.250258   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:01:48.824441   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:01:49.045340   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:01:49.151009   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:01:49.245161   62386 api_server.go:52] waiting for apiserver process to appear ...
	I0912 23:01:49.245239   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
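The restart path above rebuilds the control plane piecemeal instead of running a full kubeadm init. Stripped of the ssh_runner plumbing, the certificate check and phase sequence from the log are, in plain shell:

    # fail early if an existing cert expires within 24h (86400s)
    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400

    # re-run only the kubeadm phases needed for a restart, using the pinned v1.20.0 binaries
    sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml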
	I0912 23:01:45.927266   61904 pod_ready.go:93] pod "kube-apiserver-embed-certs-378112" in "kube-system" namespace has status "Ready":"True"
	I0912 23:01:45.927293   61904 pod_ready.go:82] duration metric: took 4.007240345s for pod "kube-apiserver-embed-certs-378112" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:45.927307   61904 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-378112" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:46.456083   61904 pod_ready.go:93] pod "kube-controller-manager-embed-certs-378112" in "kube-system" namespace has status "Ready":"True"
	I0912 23:01:46.456111   61904 pod_ready.go:82] duration metric: took 528.7947ms for pod "kube-controller-manager-embed-certs-378112" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:46.456125   61904 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fvbbq" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:46.461632   61904 pod_ready.go:93] pod "kube-proxy-fvbbq" in "kube-system" namespace has status "Ready":"True"
	I0912 23:01:46.461659   61904 pod_ready.go:82] duration metric: took 5.526604ms for pod "kube-proxy-fvbbq" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:46.461673   61904 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-378112" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:46.467128   61904 pod_ready.go:93] pod "kube-scheduler-embed-certs-378112" in "kube-system" namespace has status "Ready":"True"
	I0912 23:01:46.467160   61904 pod_ready.go:82] duration metric: took 5.477201ms for pod "kube-scheduler-embed-certs-378112" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:46.467174   61904 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:48.474736   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:01:50.474846   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:01:47.439687   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:47.440281   62943 main.go:141] libmachine: (no-preload-380092) DBG | unable to find current IP address of domain no-preload-380092 in network mk-no-preload-380092
	I0912 23:01:47.440312   62943 main.go:141] libmachine: (no-preload-380092) DBG | I0912 23:01:47.440140   63470 retry.go:31] will retry after 1.600662248s: waiting for machine to come up
	I0912 23:01:49.042962   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:49.043536   62943 main.go:141] libmachine: (no-preload-380092) DBG | unable to find current IP address of domain no-preload-380092 in network mk-no-preload-380092
	I0912 23:01:49.043569   62943 main.go:141] libmachine: (no-preload-380092) DBG | I0912 23:01:49.043481   63470 retry.go:31] will retry after 2.53148931s: waiting for machine to come up
	I0912 23:01:51.577526   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:51.578022   62943 main.go:141] libmachine: (no-preload-380092) DBG | unable to find current IP address of domain no-preload-380092 in network mk-no-preload-380092
	I0912 23:01:51.578139   62943 main.go:141] libmachine: (no-preload-380092) DBG | I0912 23:01:51.577965   63470 retry.go:31] will retry after 2.603355474s: waiting for machine to come up
	I0912 23:01:49.745632   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:50.245841   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:50.746368   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:51.245741   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:51.745708   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:52.246143   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:52.745402   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:53.245790   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:53.745965   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:54.246368   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:52.973232   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:01:54.974788   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:01:54.183119   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:54.183702   62943 main.go:141] libmachine: (no-preload-380092) DBG | unable to find current IP address of domain no-preload-380092 in network mk-no-preload-380092
	I0912 23:01:54.183745   62943 main.go:141] libmachine: (no-preload-380092) DBG | I0912 23:01:54.183655   63470 retry.go:31] will retry after 2.867321114s: waiting for machine to come up
	I0912 23:01:58.698415   61354 start.go:364] duration metric: took 53.897667909s to acquireMachinesLock for "default-k8s-diff-port-702201"
	I0912 23:01:58.698489   61354 start.go:96] Skipping create...Using existing machine configuration
	I0912 23:01:58.698499   61354 fix.go:54] fixHost starting: 
	I0912 23:01:58.698908   61354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:01:58.698938   61354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:01:58.716203   61354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42739
	I0912 23:01:58.716658   61354 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:01:58.717117   61354 main.go:141] libmachine: Using API Version  1
	I0912 23:01:58.717141   61354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:01:58.717489   61354 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:01:58.717717   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .DriverName
	I0912 23:01:58.717873   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetState
	I0912 23:01:58.719787   61354 fix.go:112] recreateIfNeeded on default-k8s-diff-port-702201: state=Stopped err=<nil>
	I0912 23:01:58.719810   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .DriverName
	W0912 23:01:58.719957   61354 fix.go:138] unexpected machine state, will restart: <nil>
	I0912 23:01:58.723531   61354 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-702201" ...
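fixHost found the default-k8s-diff-port-702201 machine stopped and is about to restart it through the kvm2 driver. Outside of minikube, the same state can be checked, and the domain started, with libvirt tooling directly (assuming virsh on the host; minikube itself goes through the driver plugin, not virsh):

    sudo virsh list --all
    sudo virsh dominfo default-k8s-diff-port-702201
    # roughly what the driver's .Start amounts to at the libvirt level
    sudo virsh start default-k8s-diff-port-702201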
	I0912 23:01:54.745915   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:55.245740   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:55.745435   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:56.245679   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:56.745309   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:57.246032   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:57.745362   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:58.245409   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:58.745470   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:59.245307   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
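The half-second polling above (api_server.go waiting for the apiserver process to appear) is, in shell terms, a bounded pgrep loop. A sketch using the same match pattern as the log, with an assumed two-minute bound:

    # wait up to ~2 minutes for a kube-apiserver started from the minikube binaries dir
    for i in $(seq 1 240); do
      sudo pgrep -xnf 'kube-apiserver.*minikube.*' && break
      sleep 0.5
    done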
	I0912 23:01:57.052229   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:57.052788   62943 main.go:141] libmachine: (no-preload-380092) Found IP for machine: 192.168.50.253
	I0912 23:01:57.052816   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has current primary IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:57.052822   62943 main.go:141] libmachine: (no-preload-380092) Reserving static IP address...
	I0912 23:01:57.053251   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "no-preload-380092", mac: "52:54:00:d6:80:d3", ip: "192.168.50.253"} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:01:57.053275   62943 main.go:141] libmachine: (no-preload-380092) Reserved static IP address: 192.168.50.253
	I0912 23:01:57.053285   62943 main.go:141] libmachine: (no-preload-380092) DBG | skip adding static IP to network mk-no-preload-380092 - found existing host DHCP lease matching {name: "no-preload-380092", mac: "52:54:00:d6:80:d3", ip: "192.168.50.253"}
	I0912 23:01:57.053299   62943 main.go:141] libmachine: (no-preload-380092) DBG | Getting to WaitForSSH function...
	I0912 23:01:57.053330   62943 main.go:141] libmachine: (no-preload-380092) Waiting for SSH to be available...
	I0912 23:01:57.055927   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:57.056326   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:01:57.056407   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:57.056569   62943 main.go:141] libmachine: (no-preload-380092) DBG | Using SSH client type: external
	I0912 23:01:57.056583   62943 main.go:141] libmachine: (no-preload-380092) DBG | Using SSH private key: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/no-preload-380092/id_rsa (-rw-------)
	I0912 23:01:57.056610   62943 main.go:141] libmachine: (no-preload-380092) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.253 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19616-5891/.minikube/machines/no-preload-380092/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0912 23:01:57.056622   62943 main.go:141] libmachine: (no-preload-380092) DBG | About to run SSH command:
	I0912 23:01:57.056631   62943 main.go:141] libmachine: (no-preload-380092) DBG | exit 0
	I0912 23:01:57.181479   62943 main.go:141] libmachine: (no-preload-380092) DBG | SSH cmd err, output: <nil>: 
	I0912 23:01:57.181842   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetConfigRaw
	I0912 23:01:57.182453   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetIP
	I0912 23:01:57.185257   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:57.185670   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:01:57.185709   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:57.185982   62943 profile.go:143] Saving config to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/no-preload-380092/config.json ...
	I0912 23:01:57.186232   62943 machine.go:93] provisionDockerMachine start ...
	I0912 23:01:57.186254   62943 main.go:141] libmachine: (no-preload-380092) Calling .DriverName
	I0912 23:01:57.186468   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHHostname
	I0912 23:01:57.188948   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:57.189336   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:01:57.189385   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:57.189533   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHPort
	I0912 23:01:57.189705   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHKeyPath
	I0912 23:01:57.189834   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHKeyPath
	I0912 23:01:57.189954   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHUsername
	I0912 23:01:57.190111   62943 main.go:141] libmachine: Using SSH client type: native
	I0912 23:01:57.190349   62943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.50.253 22 <nil> <nil>}
	I0912 23:01:57.190367   62943 main.go:141] libmachine: About to run SSH command:
	hostname
	I0912 23:01:57.293765   62943 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0912 23:01:57.293791   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetMachineName
	I0912 23:01:57.294045   62943 buildroot.go:166] provisioning hostname "no-preload-380092"
	I0912 23:01:57.294078   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetMachineName
	I0912 23:01:57.294327   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHHostname
	I0912 23:01:57.297031   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:57.297414   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:01:57.297437   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:57.297661   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHPort
	I0912 23:01:57.297840   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHKeyPath
	I0912 23:01:57.298018   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHKeyPath
	I0912 23:01:57.298210   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHUsername
	I0912 23:01:57.298412   62943 main.go:141] libmachine: Using SSH client type: native
	I0912 23:01:57.298635   62943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.50.253 22 <nil> <nil>}
	I0912 23:01:57.298655   62943 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-380092 && echo "no-preload-380092" | sudo tee /etc/hostname
	I0912 23:01:57.421188   62943 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-380092
	
	I0912 23:01:57.421215   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHHostname
	I0912 23:01:57.424496   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:57.424928   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:01:57.424965   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:57.425156   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHPort
	I0912 23:01:57.425396   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHKeyPath
	I0912 23:01:57.425591   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHKeyPath
	I0912 23:01:57.425761   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHUsername
	I0912 23:01:57.425948   62943 main.go:141] libmachine: Using SSH client type: native
	I0912 23:01:57.426157   62943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.50.253 22 <nil> <nil>}
	I0912 23:01:57.426183   62943 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-380092' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-380092/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-380092' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0912 23:01:57.537580   62943 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0912 23:01:57.537607   62943 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19616-5891/.minikube CaCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19616-5891/.minikube}
	I0912 23:01:57.537674   62943 buildroot.go:174] setting up certificates
	I0912 23:01:57.537683   62943 provision.go:84] configureAuth start
	I0912 23:01:57.537694   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetMachineName
	I0912 23:01:57.537951   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetIP
	I0912 23:01:57.540791   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:57.541288   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:01:57.541315   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:57.541519   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHHostname
	I0912 23:01:57.544027   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:57.544410   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:01:57.544430   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:57.544605   62943 provision.go:143] copyHostCerts
	I0912 23:01:57.544677   62943 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem, removing ...
	I0912 23:01:57.544694   62943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem
	I0912 23:01:57.544757   62943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem (1679 bytes)
	I0912 23:01:57.544880   62943 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem, removing ...
	I0912 23:01:57.544892   62943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem
	I0912 23:01:57.544919   62943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem (1082 bytes)
	I0912 23:01:57.545011   62943 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem, removing ...
	I0912 23:01:57.545020   62943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem
	I0912 23:01:57.545048   62943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem (1123 bytes)
	I0912 23:01:57.545127   62943 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem org=jenkins.no-preload-380092 san=[127.0.0.1 192.168.50.253 localhost minikube no-preload-380092]
	I0912 23:01:58.077226   62943 provision.go:177] copyRemoteCerts
	I0912 23:01:58.077299   62943 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0912 23:01:58.077350   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHHostname
	I0912 23:01:58.080045   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:58.080404   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:01:58.080433   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:58.080691   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHPort
	I0912 23:01:58.080930   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHKeyPath
	I0912 23:01:58.081101   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHUsername
	I0912 23:01:58.081281   62943 sshutil.go:53] new ssh client: &{IP:192.168.50.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/no-preload-380092/id_rsa Username:docker}
	I0912 23:01:58.164075   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0912 23:01:58.188273   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0912 23:01:58.211076   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0912 23:01:58.233745   62943 provision.go:87] duration metric: took 695.915392ms to configureAuth
	I0912 23:01:58.233788   62943 buildroot.go:189] setting minikube options for container-runtime
	I0912 23:01:58.233964   62943 config.go:182] Loaded profile config "no-preload-380092": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 23:01:58.234061   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHHostname
	I0912 23:01:58.236576   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:58.236915   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:01:58.236948   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:58.237165   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHPort
	I0912 23:01:58.237453   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHKeyPath
	I0912 23:01:58.237666   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHKeyPath
	I0912 23:01:58.237848   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHUsername
	I0912 23:01:58.238014   62943 main.go:141] libmachine: Using SSH client type: native
	I0912 23:01:58.238172   62943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.50.253 22 <nil> <nil>}
	I0912 23:01:58.238187   62943 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0912 23:01:58.461160   62943 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0912 23:01:58.461185   62943 machine.go:96] duration metric: took 1.274940476s to provisionDockerMachine
	I0912 23:01:58.461196   62943 start.go:293] postStartSetup for "no-preload-380092" (driver="kvm2")
	I0912 23:01:58.461206   62943 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0912 23:01:58.461220   62943 main.go:141] libmachine: (no-preload-380092) Calling .DriverName
	I0912 23:01:58.461531   62943 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0912 23:01:58.461560   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHHostname
	I0912 23:01:58.464374   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:58.464862   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:01:58.464892   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:58.465044   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHPort
	I0912 23:01:58.465280   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHKeyPath
	I0912 23:01:58.465462   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHUsername
	I0912 23:01:58.465639   62943 sshutil.go:53] new ssh client: &{IP:192.168.50.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/no-preload-380092/id_rsa Username:docker}
	I0912 23:01:58.553080   62943 ssh_runner.go:195] Run: cat /etc/os-release
	I0912 23:01:58.557294   62943 info.go:137] Remote host: Buildroot 2023.02.9
	I0912 23:01:58.557319   62943 filesync.go:126] Scanning /home/jenkins/minikube-integration/19616-5891/.minikube/addons for local assets ...
	I0912 23:01:58.557395   62943 filesync.go:126] Scanning /home/jenkins/minikube-integration/19616-5891/.minikube/files for local assets ...
	I0912 23:01:58.557494   62943 filesync.go:149] local asset: /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem -> 130832.pem in /etc/ssl/certs
	I0912 23:01:58.557647   62943 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0912 23:01:58.566823   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem --> /etc/ssl/certs/130832.pem (1708 bytes)
	I0912 23:01:58.590357   62943 start.go:296] duration metric: took 129.147272ms for postStartSetup
	I0912 23:01:58.590401   62943 fix.go:56] duration metric: took 19.008109979s for fixHost
	I0912 23:01:58.590425   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHHostname
	I0912 23:01:58.593131   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:58.593490   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:01:58.593519   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:58.593693   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHPort
	I0912 23:01:58.593894   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHKeyPath
	I0912 23:01:58.594075   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHKeyPath
	I0912 23:01:58.594242   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHUsername
	I0912 23:01:58.594415   62943 main.go:141] libmachine: Using SSH client type: native
	I0912 23:01:58.594612   62943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.50.253 22 <nil> <nil>}
	I0912 23:01:58.594625   62943 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0912 23:01:58.698233   62943 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726182118.655051061
	
	I0912 23:01:58.698261   62943 fix.go:216] guest clock: 1726182118.655051061
	I0912 23:01:58.698271   62943 fix.go:229] Guest: 2024-09-12 23:01:58.655051061 +0000 UTC Remote: 2024-09-12 23:01:58.590406505 +0000 UTC m=+96.733899188 (delta=64.644556ms)
	I0912 23:01:58.698327   62943 fix.go:200] guest clock delta is within tolerance: 64.644556ms
	I0912 23:01:58.698333   62943 start.go:83] releasing machines lock for "no-preload-380092", held for 19.116080043s
	I0912 23:01:58.698358   62943 main.go:141] libmachine: (no-preload-380092) Calling .DriverName
	I0912 23:01:58.698635   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetIP
	I0912 23:01:58.701676   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:58.702052   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:01:58.702088   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:58.702329   62943 main.go:141] libmachine: (no-preload-380092) Calling .DriverName
	I0912 23:01:58.702865   62943 main.go:141] libmachine: (no-preload-380092) Calling .DriverName
	I0912 23:01:58.703120   62943 main.go:141] libmachine: (no-preload-380092) Calling .DriverName
	I0912 23:01:58.703279   62943 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0912 23:01:58.703337   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHHostname
	I0912 23:01:58.703392   62943 ssh_runner.go:195] Run: cat /version.json
	I0912 23:01:58.703419   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHHostname
	I0912 23:01:58.706149   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:58.706381   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:58.706704   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:01:58.706773   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:58.706785   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHPort
	I0912 23:01:58.706804   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:01:58.706831   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:58.706976   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHKeyPath
	I0912 23:01:58.707009   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHPort
	I0912 23:01:58.707142   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHUsername
	I0912 23:01:58.707308   62943 sshutil.go:53] new ssh client: &{IP:192.168.50.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/no-preload-380092/id_rsa Username:docker}
	I0912 23:01:58.707323   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHKeyPath
	I0912 23:01:58.707505   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHUsername
	I0912 23:01:58.707644   62943 sshutil.go:53] new ssh client: &{IP:192.168.50.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/no-preload-380092/id_rsa Username:docker}
	I0912 23:01:58.822704   62943 ssh_runner.go:195] Run: systemctl --version
	I0912 23:01:58.828592   62943 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0912 23:01:58.970413   62943 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0912 23:01:58.976303   62943 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0912 23:01:58.976384   62943 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0912 23:01:58.991593   62943 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0912 23:01:58.991628   62943 start.go:495] detecting cgroup driver to use...
	I0912 23:01:58.991695   62943 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0912 23:01:59.007839   62943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0912 23:01:59.021107   62943 docker.go:217] disabling cri-docker service (if available) ...
	I0912 23:01:59.021176   62943 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0912 23:01:59.038570   62943 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0912 23:01:59.055392   62943 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0912 23:01:59.183649   62943 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0912 23:01:59.364825   62943 docker.go:233] disabling docker service ...
	I0912 23:01:59.364889   62943 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0912 23:01:59.382320   62943 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0912 23:01:59.397405   62943 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0912 23:01:59.528989   62943 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0912 23:01:59.653994   62943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0912 23:01:59.671437   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0912 23:01:59.693024   62943 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0912 23:01:59.693088   62943 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:59.704385   62943 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0912 23:01:59.704451   62943 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:59.715304   62943 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:59.726058   62943 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:59.736746   62943 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0912 23:01:59.749178   62943 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:59.761776   62943 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:59.779863   62943 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:59.790713   62943 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0912 23:01:59.801023   62943 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0912 23:01:59.801093   62943 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0912 23:01:59.815237   62943 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0912 23:01:59.825967   62943 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 23:01:59.952175   62943 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0912 23:02:00.050201   62943 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0912 23:02:00.050334   62943 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0912 23:02:00.055275   62943 start.go:563] Will wait 60s for crictl version
	I0912 23:02:00.055338   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:02:00.060075   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0912 23:02:00.100842   62943 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0912 23:02:00.100932   62943 ssh_runner.go:195] Run: crio --version
	I0912 23:02:00.127399   62943 ssh_runner.go:195] Run: crio --version
	I0912 23:02:00.161143   62943 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
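
The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf to pin the pause image and switch the cgroup driver before crio is restarted. Below is a minimal Go sketch of the same two edits, applied to a local copy of the drop-in file with regexps instead of remote sed; the file path and the standalone program are assumptions for illustration, not minikube's own code.

// Illustrative only: applies the two edits the log shows being made to the
// CRI-O drop-in config (pause image and cgroup driver), using Go regexps on a
// local file instead of `sed` run over SSH.
package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	path := "02-crio.conf" // assumed local copy of /etc/crio/crio.conf.d/02-crio.conf
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Replace any existing pause_image / cgroup_manager lines, mirroring the sed -i edits above.
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
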
	I0912 23:01:57.474156   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:01:59.474331   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:00.162519   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetIP
	I0912 23:02:00.165323   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:02:00.165776   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:02:00.165806   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:02:00.166046   62943 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0912 23:02:00.170494   62943 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0912 23:02:00.186142   62943 kubeadm.go:883] updating cluster {Name:no-preload-380092 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-380092 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.253 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0912 23:02:00.186296   62943 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0912 23:02:00.186348   62943 ssh_runner.go:195] Run: sudo crictl images --output json
	I0912 23:02:00.221527   62943 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0912 23:02:00.221550   62943 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0912 23:02:00.221607   62943 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 23:02:00.221619   62943 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0912 23:02:00.221679   62943 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0912 23:02:00.221679   62943 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0912 23:02:00.221699   62943 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0912 23:02:00.221661   62943 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0912 23:02:00.221763   62943 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0912 23:02:00.221763   62943 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0912 23:02:00.223203   62943 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0912 23:02:00.223215   62943 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 23:02:00.223269   62943 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0912 23:02:00.223278   62943 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0912 23:02:00.223286   62943 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0912 23:02:00.223208   62943 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0912 23:02:00.223363   62943 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0912 23:02:00.223381   62943 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0912 23:02:00.451698   62943 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I0912 23:02:00.459278   62943 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I0912 23:02:00.459739   62943 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0912 23:02:00.463935   62943 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I0912 23:02:00.464136   62943 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I0912 23:02:00.468507   62943 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I0912 23:02:00.503388   62943 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0912 23:02:00.536792   62943 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I0912 23:02:00.536840   62943 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I0912 23:02:00.536897   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:02:00.599938   62943 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I0912 23:02:00.599985   62943 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I0912 23:02:00.600030   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:02:00.683783   62943 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0912 23:02:00.683826   62943 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0912 23:02:00.683852   62943 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I0912 23:02:00.683872   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:02:00.683883   62943 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I0912 23:02:00.683908   62943 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I0912 23:02:00.683939   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:02:00.683950   62943 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I0912 23:02:00.683886   62943 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I0912 23:02:00.683984   62943 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0912 23:02:00.684075   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:02:00.684008   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:02:00.736368   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0912 23:02:00.736438   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0912 23:02:00.736522   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0912 23:02:00.736549   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0912 23:02:00.736597   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0912 23:02:00.736620   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0912 23:02:00.864642   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0912 23:02:00.864677   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0912 23:02:00.864802   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0912 23:02:00.864856   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0912 23:02:00.869964   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0912 23:02:00.869998   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0912 23:02:00.996762   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0912 23:02:00.999239   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0912 23:02:00.999239   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0912 23:02:01.000760   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0912 23:02:01.000846   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0912 23:02:01.000895   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0912 23:02:01.101860   62943 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I0912 23:02:01.102057   62943 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0912 23:02:01.132743   62943 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0912 23:02:01.132926   62943 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0912 23:02:01.134809   62943 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I0912 23:02:01.134911   62943 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0912 23:02:01.135089   62943 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I0912 23:02:01.135167   62943 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I0912 23:02:01.143459   62943 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0912 23:02:01.143487   62943 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I0912 23:02:01.143503   62943 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0912 23:02:01.143510   62943 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0912 23:02:01.143549   62943 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0912 23:02:01.143584   62943 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0912 23:02:01.143584   62943 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I0912 23:02:01.147907   62943 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I0912 23:02:01.147935   62943 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I0912 23:02:01.148079   62943 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I0912 23:02:01.312549   62943 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
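
The cache-loading lines above follow a simple pattern: stat the image tarball under /var/lib/minikube/images, skip the copy when it already exists, then hand it to podman load. A rough Go sketch of that final step follows, assuming a local tarball path taken from the log and podman on PATH; illustrative only, not minikube's cache_images implementation.

// Sketch of the load-from-cache step: if the image tarball is already present,
// skip the copy and pass it straight to `podman load`.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	tar := "/var/lib/minikube/images/kube-scheduler_v1.31.1" // assumed path taken from the log
	if _, err := os.Stat(tar); err != nil {
		fmt.Fprintln(os.Stderr, "tarball missing, would copy it from the host cache first:", err)
		os.Exit(1)
	}
	// Load the tarball into the container runtime's image store.
	cmd := exec.Command("sudo", "podman", "load", "-i", tar)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "podman load failed:", err)
		os.Exit(1)
	}
}
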
	I0912 23:01:58.724795   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .Start
	I0912 23:01:58.724966   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Ensuring networks are active...
	I0912 23:01:58.725864   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Ensuring network default is active
	I0912 23:01:58.726231   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Ensuring network mk-default-k8s-diff-port-702201 is active
	I0912 23:01:58.726766   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Getting domain xml...
	I0912 23:01:58.727695   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Creating domain...
	I0912 23:02:00.060410   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting to get IP...
	I0912 23:02:00.061559   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:00.062006   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | unable to find current IP address of domain default-k8s-diff-port-702201 in network mk-default-k8s-diff-port-702201
	I0912 23:02:00.062101   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | I0912 23:02:00.061997   63646 retry.go:31] will retry after 232.302394ms: waiting for machine to come up
	I0912 23:02:00.295568   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:00.296234   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | unable to find current IP address of domain default-k8s-diff-port-702201 in network mk-default-k8s-diff-port-702201
	I0912 23:02:00.296288   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | I0912 23:02:00.296094   63646 retry.go:31] will retry after 304.721087ms: waiting for machine to come up
	I0912 23:02:00.602956   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:00.603436   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | unable to find current IP address of domain default-k8s-diff-port-702201 in network mk-default-k8s-diff-port-702201
	I0912 23:02:00.603464   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | I0912 23:02:00.603396   63646 retry.go:31] will retry after 370.621505ms: waiting for machine to come up
	I0912 23:02:00.975924   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:00.976418   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | unable to find current IP address of domain default-k8s-diff-port-702201 in network mk-default-k8s-diff-port-702201
	I0912 23:02:00.976452   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | I0912 23:02:00.976376   63646 retry.go:31] will retry after 454.623859ms: waiting for machine to come up
	I0912 23:02:01.433257   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:01.434024   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | unable to find current IP address of domain default-k8s-diff-port-702201 in network mk-default-k8s-diff-port-702201
	I0912 23:02:01.434056   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | I0912 23:02:01.433971   63646 retry.go:31] will retry after 726.658127ms: waiting for machine to come up
	I0912 23:02:02.162016   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:02.162562   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | unable to find current IP address of domain default-k8s-diff-port-702201 in network mk-default-k8s-diff-port-702201
	I0912 23:02:02.162592   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | I0912 23:02:02.162501   63646 retry.go:31] will retry after 756.903624ms: waiting for machine to come up
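
The "waiting for machine to come up" lines show a retry loop whose delay grows between attempts while the driver waits for the restarted VM to obtain a DHCP lease. A small Go sketch of that pattern is below; lookupIP is a hypothetical stand-in for the libvirt lease query, and the delays only roughly match the log.

// Minimal sketch of the retry-with-growing-delay pattern seen in the log.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func lookupIP() (string, error) {
	// Placeholder: a real implementation would read the domain's DHCP lease.
	return "", errors.New("no lease yet")
}

func main() {
	delay := 200 * time.Millisecond
	for attempt := 1; attempt <= 10; attempt++ {
		ip, err := lookupIP()
		if err == nil {
			fmt.Println("machine is up at", ip)
			return
		}
		// Add jitter and grow the base delay, similar to the increasing waits in the log.
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("attempt %d: %v, retrying after %v\n", attempt, err, wait)
		time.Sleep(wait)
		delay += delay / 2
	}
	fmt.Println("gave up waiting for an IP")
}
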
	I0912 23:01:59.746112   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:00.246227   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:00.745742   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:01.245741   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:01.746355   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:02.245345   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:02.745752   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:03.246089   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:03.745811   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:04.245382   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
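
The repeated pgrep runs above are a fixed-interval poll, roughly every 500ms, for a kube-apiserver process on the node. A compact Go sketch of the same poll-until-found-or-timeout shape follows; checkAPIServer is a hypothetical placeholder for the pgrep command run over SSH.

// Sketch of a fixed-interval poll with an overall timeout.
package main

import (
	"fmt"
	"time"
)

func checkAPIServer() bool {
	// Placeholder for `sudo pgrep -xnf kube-apiserver.*minikube.*` run over SSH.
	return false
}

func main() {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	timeout := time.After(10 * time.Second)
	for {
		select {
		case <-ticker.C:
			if checkAPIServer() {
				fmt.Println("kube-apiserver process found")
				return
			}
		case <-timeout:
			fmt.Println("timed out waiting for kube-apiserver")
			return
		}
	}
}
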
	I0912 23:02:01.474545   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:03.975249   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:03.307790   62943 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (2.164213632s)
	I0912 23:02:03.307822   62943 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I0912 23:02:03.307845   62943 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I0912 23:02:03.307869   62943 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3: (2.164220532s)
	I0912 23:02:03.307903   62943 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I0912 23:02:03.307906   62943 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I0912 23:02:03.307944   62943 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0: (2.164339277s)
	I0912 23:02:03.307963   62943 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0912 23:02:03.307999   62943 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.995423487s)
	I0912 23:02:03.308043   62943 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0912 23:02:03.308076   62943 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 23:02:03.308128   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:02:03.312883   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 23:02:05.481118   62943 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (2.173175236s)
	I0912 23:02:05.481159   62943 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I0912 23:02:05.481192   62943 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0912 23:02:05.481239   62943 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.168321222s)
	I0912 23:02:05.481245   62943 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0912 23:02:05.481303   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 23:02:05.516667   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 23:02:02.921557   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:02.922010   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | unable to find current IP address of domain default-k8s-diff-port-702201 in network mk-default-k8s-diff-port-702201
	I0912 23:02:02.922036   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | I0912 23:02:02.921968   63646 retry.go:31] will retry after 850.274218ms: waiting for machine to come up
	I0912 23:02:03.774125   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:03.774603   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | unable to find current IP address of domain default-k8s-diff-port-702201 in network mk-default-k8s-diff-port-702201
	I0912 23:02:03.774637   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | I0912 23:02:03.774549   63646 retry.go:31] will retry after 1.117484339s: waiting for machine to come up
	I0912 23:02:04.893960   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:04.894645   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | unable to find current IP address of domain default-k8s-diff-port-702201 in network mk-default-k8s-diff-port-702201
	I0912 23:02:04.894671   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | I0912 23:02:04.894572   63646 retry.go:31] will retry after 1.705444912s: waiting for machine to come up
	I0912 23:02:06.602765   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:06.603347   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | unable to find current IP address of domain default-k8s-diff-port-702201 in network mk-default-k8s-diff-port-702201
	I0912 23:02:06.603371   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | I0912 23:02:06.603270   63646 retry.go:31] will retry after 2.06008552s: waiting for machine to come up
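The retry.go lines above wait for the VM to obtain a DHCP lease, sleeping for an increasing, slightly randomized interval between lookups. The following is a hedged sketch of such a retry-with-backoff loop; the function name, growth factor, and jitter are assumptions for illustration, not minikube's retry package.

// retryWithBackoff retries fn with a growing, jittered delay, roughly
// matching the increasing waits ("will retry after ...") in the log above.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	delay := base
	for i := 0; i < attempts; i++ {
		if err := fn(); err == nil {
			return nil
		}
		// Add up to 50% jitter, then grow the delay for the next attempt.
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		time.Sleep(delay + jitter)
		delay = delay * 3 / 2
	}
	return errors.New("machine did not come up in time")
}

func main() {
	attempt := 0
	err := retryWithBackoff(4, 200*time.Millisecond, func() error {
		attempt++
		fmt.Printf("attempt %d: looking up current IP address...\n", attempt)
		return errors.New("unable to find current IP address") // placeholder check
	})
	fmt.Println(err)
}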
	I0912 23:02:04.745649   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:05.245909   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:05.745777   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:06.245432   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:06.745472   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:07.245763   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:07.745416   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:08.245886   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:08.745493   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:09.246056   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:06.474009   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:08.474804   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:07.476441   62943 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (1.995147485s)
	I0912 23:02:07.476474   62943 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I0912 23:02:07.476497   62943 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0912 23:02:07.476545   62943 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0912 23:02:07.476556   62943 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.959857575s)
	I0912 23:02:07.476602   62943 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0912 23:02:07.476685   62943 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0912 23:02:09.332759   62943 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.856180957s)
	I0912 23:02:09.332804   62943 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I0912 23:02:09.332853   62943 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I0912 23:02:09.332762   62943 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.856053866s)
	I0912 23:02:09.332909   62943 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I0912 23:02:09.332947   62943 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0912 23:02:11.397888   62943 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.064939833s)
	I0912 23:02:11.397926   62943 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I0912 23:02:11.397954   62943 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0912 23:02:11.397992   62943 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0912 23:02:08.665520   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:08.666071   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | unable to find current IP address of domain default-k8s-diff-port-702201 in network mk-default-k8s-diff-port-702201
	I0912 23:02:08.666102   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | I0912 23:02:08.666014   63646 retry.go:31] will retry after 2.158544571s: waiting for machine to come up
	I0912 23:02:10.826850   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:10.827354   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | unable to find current IP address of domain default-k8s-diff-port-702201 in network mk-default-k8s-diff-port-702201
	I0912 23:02:10.827382   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | I0912 23:02:10.827290   63646 retry.go:31] will retry after 3.518596305s: waiting for machine to come up
	I0912 23:02:09.746171   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:10.246283   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:10.745675   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:11.245560   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:11.745384   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:12.245631   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:12.745749   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:13.245487   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:13.745849   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:14.245391   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:10.975044   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:13.473831   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:15.474321   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:14.664970   62943 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.266950326s)
	I0912 23:02:14.665018   62943 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0912 23:02:14.665063   62943 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0912 23:02:14.665138   62943 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0912 23:02:15.516503   62943 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0912 23:02:15.516549   62943 cache_images.go:123] Successfully loaded all cached images
	I0912 23:02:15.516556   62943 cache_images.go:92] duration metric: took 15.294994067s to LoadCachedImages
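The sequence above transfers cached image tarballs to the node (skipping ones that already exist) and loads each into CRI-O's storage with "sudo podman load -i". A minimal sketch of one such load step is below; the helper name is hypothetical and the tarball path is taken from the log.

// loadCachedImage loads a pre-transferred image archive into the container
// runtime's storage, mirroring the "podman load -i" runs above. Sketch only.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func loadCachedImage(tarball string) error {
	if _, err := os.Stat(tarball); err != nil {
		return fmt.Errorf("image archive not present: %w", err)
	}
	out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
	if err != nil {
		return fmt.Errorf("podman load failed: %v\n%s", err, out)
	}
	return nil
}

func main() {
	fmt.Println(loadCachedImage("/var/lib/minikube/images/kube-proxy_v1.31.1"))
}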
	I0912 23:02:15.516574   62943 kubeadm.go:934] updating node { 192.168.50.253 8443 v1.31.1 crio true true} ...
	I0912 23:02:15.516716   62943 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-380092 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.253
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-380092 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0912 23:02:15.516811   62943 ssh_runner.go:195] Run: crio config
	I0912 23:02:15.570588   62943 cni.go:84] Creating CNI manager for ""
	I0912 23:02:15.570610   62943 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0912 23:02:15.570621   62943 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0912 23:02:15.570649   62943 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.253 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-380092 NodeName:no-preload-380092 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.253"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.253 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0912 23:02:15.570809   62943 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.253
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-380092"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.253
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.253"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0912 23:02:15.570887   62943 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0912 23:02:15.581208   62943 binaries.go:44] Found k8s binaries, skipping transfer
	I0912 23:02:15.581272   62943 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0912 23:02:15.590463   62943 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0912 23:02:15.606240   62943 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0912 23:02:15.621579   62943 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0912 23:02:15.639566   62943 ssh_runner.go:195] Run: grep 192.168.50.253	control-plane.minikube.internal$ /etc/hosts
	I0912 23:02:15.643207   62943 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.253	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
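The bash one-liner above makes the /etc/hosts update idempotent: it drops any existing line for control-plane.minikube.internal and appends a fresh "IP<TAB>name" entry. A hedged Go sketch of the same rewrite is below; the helper name and file handling are illustrative.

// ensureHostsEntry removes any stale "<ip>\t<name>" line and appends a fresh
// one, mirroring the grep -v / echo / cp pipeline in the log above.
package main

import (
	"fmt"
	"os"
	"strings"
)

func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop the stale entry for this host name
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	fmt.Println(ensureHostsEntry("/etc/hosts", "192.168.50.253", "control-plane.minikube.internal"))
}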
	I0912 23:02:15.654813   62943 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 23:02:15.767367   62943 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0912 23:02:15.784468   62943 certs.go:68] Setting up /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/no-preload-380092 for IP: 192.168.50.253
	I0912 23:02:15.784500   62943 certs.go:194] generating shared ca certs ...
	I0912 23:02:15.784523   62943 certs.go:226] acquiring lock for ca certs: {Name:mk5e19f2c2757edc3fcb6b43a39efee0885b349c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 23:02:15.784717   62943 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.key
	I0912 23:02:15.784811   62943 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.key
	I0912 23:02:15.784828   62943 certs.go:256] generating profile certs ...
	I0912 23:02:15.784946   62943 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/no-preload-380092/client.key
	I0912 23:02:15.785034   62943 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/no-preload-380092/apiserver.key.718f72e7
	I0912 23:02:15.785092   62943 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/no-preload-380092/proxy-client.key
	I0912 23:02:15.785295   62943 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083.pem (1338 bytes)
	W0912 23:02:15.785345   62943 certs.go:480] ignoring /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083_empty.pem, impossibly tiny 0 bytes
	I0912 23:02:15.785362   62943 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem (1679 bytes)
	I0912 23:02:15.785407   62943 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem (1082 bytes)
	I0912 23:02:15.785446   62943 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem (1123 bytes)
	I0912 23:02:15.785485   62943 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem (1679 bytes)
	I0912 23:02:15.785553   62943 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem (1708 bytes)
	I0912 23:02:15.786473   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0912 23:02:15.832614   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0912 23:02:15.867891   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0912 23:02:15.899262   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0912 23:02:15.930427   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/no-preload-380092/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0912 23:02:15.970193   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/no-preload-380092/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0912 23:02:15.995317   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/no-preload-380092/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0912 23:02:16.019282   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/no-preload-380092/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0912 23:02:16.042121   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem --> /usr/share/ca-certificates/130832.pem (1708 bytes)
	I0912 23:02:16.065744   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0912 23:02:16.088894   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083.pem --> /usr/share/ca-certificates/13083.pem (1338 bytes)
	I0912 23:02:16.111041   62943 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0912 23:02:16.127119   62943 ssh_runner.go:195] Run: openssl version
	I0912 23:02:16.132754   62943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130832.pem && ln -fs /usr/share/ca-certificates/130832.pem /etc/ssl/certs/130832.pem"
	I0912 23:02:16.142933   62943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130832.pem
	I0912 23:02:16.147311   62943 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 12 21:47 /usr/share/ca-certificates/130832.pem
	I0912 23:02:16.147367   62943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130832.pem
	I0912 23:02:16.152734   62943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130832.pem /etc/ssl/certs/3ec20f2e.0"
	I0912 23:02:16.163131   62943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0912 23:02:16.173390   62943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0912 23:02:16.177785   62943 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 12 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0912 23:02:16.177842   62943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0912 23:02:16.183047   62943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0912 23:02:16.192890   62943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13083.pem && ln -fs /usr/share/ca-certificates/13083.pem /etc/ssl/certs/13083.pem"
	I0912 23:02:16.202818   62943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13083.pem
	I0912 23:02:16.206815   62943 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 12 21:47 /usr/share/ca-certificates/13083.pem
	I0912 23:02:16.206871   62943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13083.pem
	I0912 23:02:16.212049   62943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13083.pem /etc/ssl/certs/51391683.0"
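The "openssl x509 -hash -noout" and "ln -fs" pairs above install each CA certificate into the system trust store by its OpenSSL subject-hash symlink (e.g. b5213941.0 for minikubeCA.pem). A minimal Go sketch of that step, with an illustrative helper name:

// installCertSymlink computes the subject hash of a CA certificate and links
// it into the certs directory as "<hash>.0", like the commands in the log.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func installCertSymlink(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // replace any stale link, as "ln -fs" would
	return os.Symlink(certPath, link)
}

func main() {
	fmt.Println(installCertSymlink("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"))
}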
	I0912 23:02:16.222224   62943 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0912 23:02:16.226504   62943 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0912 23:02:16.232090   62943 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0912 23:02:16.237380   62943 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0912 23:02:16.243024   62943 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0912 23:02:16.248333   62943 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0912 23:02:16.258745   62943 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
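The "-checkend 86400" probes above ask openssl whether each control-plane certificate will still be valid one day from now (exit status 0 means it will not expire within the window). A small sketch of that check in Go, with an assumed helper name:

// certExpiresWithin reports whether the certificate at path expires within
// the given number of seconds, using "openssl x509 -checkend" as in the log.
package main

import (
	"fmt"
	"os/exec"
	"strconv"
)

func certExpiresWithin(path string, seconds int) (bool, error) {
	cmd := exec.Command("openssl", "x509", "-noout", "-in", path, "-checkend", strconv.Itoa(seconds))
	if err := cmd.Run(); err != nil {
		if _, ok := err.(*exec.ExitError); ok {
			return true, nil // non-zero exit: cert expires within the window
		}
		return false, err // openssl itself could not be run
	}
	return false, nil // exit 0: still valid for the whole window
}

func main() {
	expiring, err := certExpiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 86400)
	fmt.Println(expiring, err)
}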
	I0912 23:02:16.274068   62943 kubeadm.go:392] StartCluster: {Name:no-preload-380092 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-380092 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.253 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 23:02:16.274168   62943 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0912 23:02:16.274216   62943 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0912 23:02:16.323688   62943 cri.go:89] found id: ""
	I0912 23:02:16.323751   62943 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0912 23:02:16.335130   62943 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0912 23:02:16.335152   62943 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0912 23:02:16.335192   62943 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0912 23:02:16.346285   62943 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0912 23:02:16.347271   62943 kubeconfig.go:125] found "no-preload-380092" server: "https://192.168.50.253:8443"
	I0912 23:02:16.349217   62943 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0912 23:02:16.360266   62943 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.253
	I0912 23:02:16.360308   62943 kubeadm.go:1160] stopping kube-system containers ...
	I0912 23:02:16.360319   62943 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0912 23:02:16.360361   62943 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0912 23:02:16.398876   62943 cri.go:89] found id: ""
	I0912 23:02:16.398942   62943 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0912 23:02:16.418893   62943 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0912 23:02:16.430531   62943 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0912 23:02:16.430558   62943 kubeadm.go:157] found existing configuration files:
	
	I0912 23:02:16.430602   62943 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0912 23:02:16.441036   62943 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0912 23:02:16.441093   62943 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0912 23:02:16.452768   62943 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0912 23:02:16.463317   62943 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0912 23:02:16.463394   62943 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0912 23:02:16.473412   62943 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0912 23:02:16.482470   62943 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0912 23:02:16.482530   62943 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0912 23:02:16.494488   62943 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0912 23:02:16.503873   62943 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0912 23:02:16.503955   62943 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
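The grep/rm pairs above remove any kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint, so the subsequent "kubeadm init phase kubeconfig" run regenerates them. A hedged Go sketch of that cleanup pattern, with an illustrative helper name:

// pruneStaleKubeconfigs deletes kubeconfig files that do not mention the
// expected control-plane endpoint, mirroring the grep/rm cycle in the log.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func pruneStaleKubeconfigs(dir, endpoint string) error {
	for _, name := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
		path := filepath.Join(dir, name)
		data, err := os.ReadFile(path)
		if err != nil {
			if os.IsNotExist(err) {
				continue // nothing to clean up for this file
			}
			return err
		}
		if !strings.Contains(string(data), endpoint) {
			if err := os.Remove(path); err != nil {
				return err
			}
			fmt.Printf("removed stale %s\n", path)
		}
	}
	return nil
}

func main() {
	_ = pruneStaleKubeconfigs("/etc/kubernetes", "https://control-plane.minikube.internal:8443")
}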
	I0912 23:02:16.513052   62943 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0912 23:02:16.522738   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:02:16.630286   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:02:14.347758   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:14.348342   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | unable to find current IP address of domain default-k8s-diff-port-702201 in network mk-default-k8s-diff-port-702201
	I0912 23:02:14.348365   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | I0912 23:02:14.348276   63646 retry.go:31] will retry after 2.993143621s: waiting for machine to come up
	I0912 23:02:14.745599   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:15.245719   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:15.745787   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:16.245959   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:16.746271   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:17.245414   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:17.745343   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:18.246080   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:18.746025   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:19.245751   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:17.343758   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.344408   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Found IP for machine: 192.168.39.214
	I0912 23:02:17.344443   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has current primary IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.344453   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Reserving static IP address...
	I0912 23:02:17.344817   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Reserved static IP address: 192.168.39.214
	I0912 23:02:17.344848   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-702201", mac: "52:54:00:b4:fd:fb", ip: "192.168.39.214"} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:02:17.344857   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for SSH to be available...
	I0912 23:02:17.344886   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | skip adding static IP to network mk-default-k8s-diff-port-702201 - found existing host DHCP lease matching {name: "default-k8s-diff-port-702201", mac: "52:54:00:b4:fd:fb", ip: "192.168.39.214"}
	I0912 23:02:17.344903   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | Getting to WaitForSSH function...
	I0912 23:02:17.347627   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.348094   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:02:17.348128   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.348236   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | Using SSH client type: external
	I0912 23:02:17.348296   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | Using SSH private key: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/default-k8s-diff-port-702201/id_rsa (-rw-------)
	I0912 23:02:17.348330   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.214 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19616-5891/.minikube/machines/default-k8s-diff-port-702201/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0912 23:02:17.348353   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | About to run SSH command:
	I0912 23:02:17.348363   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | exit 0
	I0912 23:02:17.474375   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | SSH cmd err, output: <nil>: 
	I0912 23:02:17.474757   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetConfigRaw
	I0912 23:02:17.475391   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetIP
	I0912 23:02:17.478041   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.478557   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:02:17.478590   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.478791   61354 profile.go:143] Saving config to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/default-k8s-diff-port-702201/config.json ...
	I0912 23:02:17.479064   61354 machine.go:93] provisionDockerMachine start ...
	I0912 23:02:17.479087   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .DriverName
	I0912 23:02:17.479317   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHHostname
	I0912 23:02:17.482167   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.482584   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:02:17.482616   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.482805   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHPort
	I0912 23:02:17.482996   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHKeyPath
	I0912 23:02:17.483163   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHKeyPath
	I0912 23:02:17.483287   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHUsername
	I0912 23:02:17.483443   61354 main.go:141] libmachine: Using SSH client type: native
	I0912 23:02:17.483653   61354 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0912 23:02:17.483669   61354 main.go:141] libmachine: About to run SSH command:
	hostname
	I0912 23:02:17.590238   61354 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0912 23:02:17.590267   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetMachineName
	I0912 23:02:17.590549   61354 buildroot.go:166] provisioning hostname "default-k8s-diff-port-702201"
	I0912 23:02:17.590588   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetMachineName
	I0912 23:02:17.590766   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHHostname
	I0912 23:02:17.593804   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.594267   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:02:17.594320   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.594542   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHPort
	I0912 23:02:17.594761   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHKeyPath
	I0912 23:02:17.594956   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHKeyPath
	I0912 23:02:17.595111   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHUsername
	I0912 23:02:17.595333   61354 main.go:141] libmachine: Using SSH client type: native
	I0912 23:02:17.595575   61354 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0912 23:02:17.595591   61354 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-702201 && echo "default-k8s-diff-port-702201" | sudo tee /etc/hostname
	I0912 23:02:17.720928   61354 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-702201
	
	I0912 23:02:17.720961   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHHostname
	I0912 23:02:17.724174   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.724499   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:02:17.724522   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.724682   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHPort
	I0912 23:02:17.724847   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHKeyPath
	I0912 23:02:17.725026   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHKeyPath
	I0912 23:02:17.725199   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHUsername
	I0912 23:02:17.725350   61354 main.go:141] libmachine: Using SSH client type: native
	I0912 23:02:17.725528   61354 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0912 23:02:17.725550   61354 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-702201' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-702201/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-702201' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0912 23:02:17.842216   61354 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0912 23:02:17.842250   61354 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19616-5891/.minikube CaCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19616-5891/.minikube}
	I0912 23:02:17.842274   61354 buildroot.go:174] setting up certificates
	I0912 23:02:17.842289   61354 provision.go:84] configureAuth start
	I0912 23:02:17.842306   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetMachineName
	I0912 23:02:17.842597   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetIP
	I0912 23:02:17.845935   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.846372   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:02:17.846401   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.846546   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHHostname
	I0912 23:02:17.849376   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.849937   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:02:17.849971   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.850152   61354 provision.go:143] copyHostCerts
	I0912 23:02:17.850232   61354 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem, removing ...
	I0912 23:02:17.850253   61354 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem
	I0912 23:02:17.850356   61354 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem (1679 bytes)
	I0912 23:02:17.850448   61354 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem, removing ...
	I0912 23:02:17.850457   61354 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem
	I0912 23:02:17.850477   61354 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem (1082 bytes)
	I0912 23:02:17.850529   61354 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem, removing ...
	I0912 23:02:17.850537   61354 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem
	I0912 23:02:17.850555   61354 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem (1123 bytes)
	I0912 23:02:17.850601   61354 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-702201 san=[127.0.0.1 192.168.39.214 default-k8s-diff-port-702201 localhost minikube]
	I0912 23:02:17.911340   61354 provision.go:177] copyRemoteCerts
	I0912 23:02:17.911392   61354 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0912 23:02:17.911413   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHHostname
	I0912 23:02:17.914514   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.914937   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:02:17.914969   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.915250   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHPort
	I0912 23:02:17.915449   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHKeyPath
	I0912 23:02:17.915648   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHUsername
	I0912 23:02:17.915800   61354 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/default-k8s-diff-port-702201/id_rsa Username:docker}
	I0912 23:02:18.003351   61354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0912 23:02:18.032117   61354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0912 23:02:18.057665   61354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0912 23:02:18.084003   61354 provision.go:87] duration metric: took 241.697336ms to configureAuth
	I0912 23:02:18.084043   61354 buildroot.go:189] setting minikube options for container-runtime
	I0912 23:02:18.084256   61354 config.go:182] Loaded profile config "default-k8s-diff-port-702201": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 23:02:18.084379   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHHostname
	I0912 23:02:18.087408   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:18.087786   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:02:18.087813   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:18.088070   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHPort
	I0912 23:02:18.088263   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHKeyPath
	I0912 23:02:18.088441   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHKeyPath
	I0912 23:02:18.088576   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHUsername
	I0912 23:02:18.088706   61354 main.go:141] libmachine: Using SSH client type: native
	I0912 23:02:18.088874   61354 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0912 23:02:18.088893   61354 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0912 23:02:18.308716   61354 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0912 23:02:18.308743   61354 machine.go:96] duration metric: took 829.664034ms to provisionDockerMachine
	I0912 23:02:18.308753   61354 start.go:293] postStartSetup for "default-k8s-diff-port-702201" (driver="kvm2")
	I0912 23:02:18.308765   61354 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0912 23:02:18.308780   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .DriverName
	I0912 23:02:18.309119   61354 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0912 23:02:18.309156   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHHostname
	I0912 23:02:18.311782   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:18.312112   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:02:18.312138   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:18.312258   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHPort
	I0912 23:02:18.312429   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHKeyPath
	I0912 23:02:18.312562   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHUsername
	I0912 23:02:18.312686   61354 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/default-k8s-diff-port-702201/id_rsa Username:docker}
	I0912 23:02:18.400164   61354 ssh_runner.go:195] Run: cat /etc/os-release
	I0912 23:02:18.404437   61354 info.go:137] Remote host: Buildroot 2023.02.9
	I0912 23:02:18.404465   61354 filesync.go:126] Scanning /home/jenkins/minikube-integration/19616-5891/.minikube/addons for local assets ...
	I0912 23:02:18.404539   61354 filesync.go:126] Scanning /home/jenkins/minikube-integration/19616-5891/.minikube/files for local assets ...
	I0912 23:02:18.404634   61354 filesync.go:149] local asset: /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem -> 130832.pem in /etc/ssl/certs
	I0912 23:02:18.404748   61354 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0912 23:02:18.414148   61354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem --> /etc/ssl/certs/130832.pem (1708 bytes)
	I0912 23:02:18.438745   61354 start.go:296] duration metric: took 129.977307ms for postStartSetup
	I0912 23:02:18.438815   61354 fix.go:56] duration metric: took 19.740295621s for fixHost
	I0912 23:02:18.438839   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHHostname
	I0912 23:02:18.441655   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:18.442034   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:02:18.442063   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:18.442229   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHPort
	I0912 23:02:18.442424   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHKeyPath
	I0912 23:02:18.442637   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHKeyPath
	I0912 23:02:18.442782   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHUsername
	I0912 23:02:18.442983   61354 main.go:141] libmachine: Using SSH client type: native
	I0912 23:02:18.443140   61354 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0912 23:02:18.443150   61354 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0912 23:02:18.550399   61354 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726182138.510495585
	
	I0912 23:02:18.550429   61354 fix.go:216] guest clock: 1726182138.510495585
	I0912 23:02:18.550460   61354 fix.go:229] Guest: 2024-09-12 23:02:18.510495585 +0000 UTC Remote: 2024-09-12 23:02:18.438824041 +0000 UTC m=+356.198385709 (delta=71.671544ms)
	I0912 23:02:18.550493   61354 fix.go:200] guest clock delta is within tolerance: 71.671544ms
	I0912 23:02:18.550501   61354 start.go:83] releasing machines lock for "default-k8s-diff-port-702201", held for 19.852037366s
	I0912 23:02:18.550549   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .DriverName
	I0912 23:02:18.550842   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetIP
	I0912 23:02:18.553957   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:18.554416   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:02:18.554450   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:18.554624   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .DriverName
	I0912 23:02:18.555224   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .DriverName
	I0912 23:02:18.555446   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .DriverName
	I0912 23:02:18.555554   61354 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0912 23:02:18.555597   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHHostname
	I0912 23:02:18.555718   61354 ssh_runner.go:195] Run: cat /version.json
	I0912 23:02:18.555753   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHHostname
	I0912 23:02:18.558797   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:18.558822   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:18.559205   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:02:18.559236   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:18.559283   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:02:18.559300   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:18.559532   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHPort
	I0912 23:02:18.559538   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHPort
	I0912 23:02:18.559735   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHKeyPath
	I0912 23:02:18.559736   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHKeyPath
	I0912 23:02:18.559921   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHUsername
	I0912 23:02:18.560042   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHUsername
	I0912 23:02:18.560109   61354 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/default-k8s-diff-port-702201/id_rsa Username:docker}
	I0912 23:02:18.560199   61354 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/default-k8s-diff-port-702201/id_rsa Username:docker}
	I0912 23:02:18.672716   61354 ssh_runner.go:195] Run: systemctl --version
	I0912 23:02:18.681305   61354 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0912 23:02:18.833032   61354 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0912 23:02:18.838723   61354 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0912 23:02:18.838800   61354 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0912 23:02:18.854769   61354 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0912 23:02:18.854796   61354 start.go:495] detecting cgroup driver to use...
	I0912 23:02:18.854867   61354 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0912 23:02:18.872157   61354 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0912 23:02:18.887144   61354 docker.go:217] disabling cri-docker service (if available) ...
	I0912 23:02:18.887199   61354 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0912 23:02:18.901811   61354 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0912 23:02:18.920495   61354 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0912 23:02:19.060252   61354 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0912 23:02:19.211418   61354 docker.go:233] disabling docker service ...
	I0912 23:02:19.211492   61354 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0912 23:02:19.226829   61354 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0912 23:02:19.240390   61354 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0912 23:02:19.398676   61354 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0912 23:02:19.539078   61354 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0912 23:02:19.552847   61354 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0912 23:02:19.574121   61354 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0912 23:02:19.574198   61354 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:02:19.585231   61354 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0912 23:02:19.585298   61354 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:02:19.596560   61354 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:02:19.606732   61354 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:02:19.620125   61354 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0912 23:02:19.635153   61354 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:02:19.648779   61354 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:02:19.666387   61354 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:02:19.680339   61354 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0912 23:02:19.693115   61354 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0912 23:02:19.693193   61354 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0912 23:02:19.710075   61354 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0912 23:02:19.722305   61354 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 23:02:19.855658   61354 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0912 23:02:19.958871   61354 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0912 23:02:19.958934   61354 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0912 23:02:19.964103   61354 start.go:563] Will wait 60s for crictl version
	I0912 23:02:19.964174   61354 ssh_runner.go:195] Run: which crictl
	I0912 23:02:19.968265   61354 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0912 23:02:20.006530   61354 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0912 23:02:20.006608   61354 ssh_runner.go:195] Run: crio --version
	I0912 23:02:20.034570   61354 ssh_runner.go:195] Run: crio --version
	I0912 23:02:20.065312   61354 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0912 23:02:17.474542   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:19.975107   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:17.616860   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:02:17.845456   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:02:17.916359   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:02:18.000828   62943 api_server.go:52] waiting for apiserver process to appear ...
	I0912 23:02:18.000924   62943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:18.501381   62943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:19.001136   62943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:19.017346   62943 api_server.go:72] duration metric: took 1.016512434s to wait for apiserver process to appear ...
	I0912 23:02:19.017382   62943 api_server.go:88] waiting for apiserver healthz status ...
	I0912 23:02:19.017453   62943 api_server.go:253] Checking apiserver healthz at https://192.168.50.253:8443/healthz ...
	I0912 23:02:20.066529   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetIP
	I0912 23:02:20.069310   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:20.069719   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:02:20.069748   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:20.070001   61354 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0912 23:02:20.074059   61354 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0912 23:02:20.085892   61354 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-702201 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.1 ClusterName:default-k8s-diff-port-702201 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.214 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0912 23:02:20.086016   61354 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0912 23:02:20.086054   61354 ssh_runner.go:195] Run: sudo crictl images --output json
	I0912 23:02:20.130495   61354 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0912 23:02:20.130570   61354 ssh_runner.go:195] Run: which lz4
	I0912 23:02:20.134677   61354 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0912 23:02:20.138918   61354 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0912 23:02:20.138956   61354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0912 23:02:21.380259   61354 crio.go:462] duration metric: took 1.245620408s to copy over tarball
	I0912 23:02:21.380357   61354 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0912 23:02:19.745707   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:20.246273   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:20.746109   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:21.246160   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:21.745863   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:22.245390   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:22.745716   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:23.245475   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:23.746069   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:24.245487   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:22.474250   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:24.974136   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:24.018305   62943 api_server.go:269] stopped: https://192.168.50.253:8443/healthz: Get "https://192.168.50.253:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 23:02:24.018354   62943 api_server.go:253] Checking apiserver healthz at https://192.168.50.253:8443/healthz ...
	I0912 23:02:23.453059   61354 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.072658804s)
	I0912 23:02:23.453094   61354 crio.go:469] duration metric: took 2.072807363s to extract the tarball
	I0912 23:02:23.453102   61354 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0912 23:02:23.492566   61354 ssh_runner.go:195] Run: sudo crictl images --output json
	I0912 23:02:23.535129   61354 crio.go:514] all images are preloaded for cri-o runtime.
	I0912 23:02:23.535152   61354 cache_images.go:84] Images are preloaded, skipping loading
	I0912 23:02:23.535160   61354 kubeadm.go:934] updating node { 192.168.39.214 8444 v1.31.1 crio true true} ...
	I0912 23:02:23.535251   61354 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-702201 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.214
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-702201 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0912 23:02:23.535311   61354 ssh_runner.go:195] Run: crio config
	I0912 23:02:23.586110   61354 cni.go:84] Creating CNI manager for ""
	I0912 23:02:23.586128   61354 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0912 23:02:23.586137   61354 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0912 23:02:23.586156   61354 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.214 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-702201 NodeName:default-k8s-diff-port-702201 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.214"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.214 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0912 23:02:23.586280   61354 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.214
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-702201"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.214
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.214"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0912 23:02:23.586337   61354 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0912 23:02:23.595675   61354 binaries.go:44] Found k8s binaries, skipping transfer
	I0912 23:02:23.595744   61354 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0912 23:02:23.605126   61354 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0912 23:02:23.621542   61354 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0912 23:02:23.637919   61354 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0912 23:02:23.654869   61354 ssh_runner.go:195] Run: grep 192.168.39.214	control-plane.minikube.internal$ /etc/hosts
	I0912 23:02:23.658860   61354 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.214	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0912 23:02:23.670648   61354 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 23:02:23.787949   61354 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0912 23:02:23.804668   61354 certs.go:68] Setting up /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/default-k8s-diff-port-702201 for IP: 192.168.39.214
	I0912 23:02:23.804697   61354 certs.go:194] generating shared ca certs ...
	I0912 23:02:23.804718   61354 certs.go:226] acquiring lock for ca certs: {Name:mk5e19f2c2757edc3fcb6b43a39efee0885b349c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 23:02:23.804937   61354 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.key
	I0912 23:02:23.804998   61354 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.key
	I0912 23:02:23.805012   61354 certs.go:256] generating profile certs ...
	I0912 23:02:23.805110   61354 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/default-k8s-diff-port-702201/client.key
	I0912 23:02:23.805184   61354 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/default-k8s-diff-port-702201/apiserver.key.9ca3177b
	I0912 23:02:23.805231   61354 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/default-k8s-diff-port-702201/proxy-client.key
	I0912 23:02:23.805379   61354 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083.pem (1338 bytes)
	W0912 23:02:23.805411   61354 certs.go:480] ignoring /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083_empty.pem, impossibly tiny 0 bytes
	I0912 23:02:23.805420   61354 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem (1679 bytes)
	I0912 23:02:23.805449   61354 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem (1082 bytes)
	I0912 23:02:23.805480   61354 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem (1123 bytes)
	I0912 23:02:23.805519   61354 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem (1679 bytes)
	I0912 23:02:23.805574   61354 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem (1708 bytes)
	I0912 23:02:23.806196   61354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0912 23:02:23.834789   61354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0912 23:02:23.863030   61354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0912 23:02:23.890538   61354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0912 23:02:23.923946   61354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/default-k8s-diff-port-702201/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0912 23:02:23.952990   61354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/default-k8s-diff-port-702201/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0912 23:02:23.984025   61354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/default-k8s-diff-port-702201/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0912 23:02:24.013727   61354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/default-k8s-diff-port-702201/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0912 23:02:24.038060   61354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem --> /usr/share/ca-certificates/130832.pem (1708 bytes)
	I0912 23:02:24.061285   61354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0912 23:02:24.085128   61354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083.pem --> /usr/share/ca-certificates/13083.pem (1338 bytes)
	I0912 23:02:24.110174   61354 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0912 23:02:24.127185   61354 ssh_runner.go:195] Run: openssl version
	I0912 23:02:24.133215   61354 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0912 23:02:24.144390   61354 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0912 23:02:24.149357   61354 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 12 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0912 23:02:24.149432   61354 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0912 23:02:24.155228   61354 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0912 23:02:24.167254   61354 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13083.pem && ln -fs /usr/share/ca-certificates/13083.pem /etc/ssl/certs/13083.pem"
	I0912 23:02:24.178264   61354 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13083.pem
	I0912 23:02:24.183163   61354 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 12 21:47 /usr/share/ca-certificates/13083.pem
	I0912 23:02:24.183216   61354 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13083.pem
	I0912 23:02:24.188891   61354 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13083.pem /etc/ssl/certs/51391683.0"
	I0912 23:02:24.199682   61354 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130832.pem && ln -fs /usr/share/ca-certificates/130832.pem /etc/ssl/certs/130832.pem"
	I0912 23:02:24.210810   61354 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130832.pem
	I0912 23:02:24.215244   61354 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 12 21:47 /usr/share/ca-certificates/130832.pem
	I0912 23:02:24.215321   61354 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130832.pem
	I0912 23:02:24.221160   61354 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130832.pem /etc/ssl/certs/3ec20f2e.0"
	I0912 23:02:24.232246   61354 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0912 23:02:24.236796   61354 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0912 23:02:24.243930   61354 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0912 23:02:24.250402   61354 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0912 23:02:24.256470   61354 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0912 23:02:24.262495   61354 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0912 23:02:24.268433   61354 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0912 23:02:24.274410   61354 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-702201 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.1 ClusterName:default-k8s-diff-port-702201 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.214 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 23:02:24.274499   61354 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0912 23:02:24.274574   61354 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0912 23:02:24.315011   61354 cri.go:89] found id: ""
	I0912 23:02:24.315073   61354 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0912 23:02:24.325319   61354 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0912 23:02:24.325341   61354 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0912 23:02:24.325384   61354 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0912 23:02:24.335529   61354 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0912 23:02:24.336936   61354 kubeconfig.go:125] found "default-k8s-diff-port-702201" server: "https://192.168.39.214:8444"
	I0912 23:02:24.340116   61354 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0912 23:02:24.350831   61354 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.214
	I0912 23:02:24.350869   61354 kubeadm.go:1160] stopping kube-system containers ...
	I0912 23:02:24.350883   61354 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0912 23:02:24.350974   61354 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0912 23:02:24.393329   61354 cri.go:89] found id: ""
	I0912 23:02:24.393405   61354 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0912 23:02:24.410979   61354 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0912 23:02:24.423185   61354 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0912 23:02:24.423201   61354 kubeadm.go:157] found existing configuration files:
	
	I0912 23:02:24.423243   61354 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0912 23:02:24.434365   61354 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0912 23:02:24.434424   61354 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0912 23:02:24.444193   61354 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0912 23:02:24.453990   61354 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0912 23:02:24.454047   61354 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0912 23:02:24.464493   61354 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0912 23:02:24.475213   61354 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0912 23:02:24.475290   61354 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0912 23:02:24.484665   61354 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0912 23:02:24.493882   61354 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0912 23:02:24.493943   61354 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0912 23:02:24.503337   61354 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0912 23:02:24.513303   61354 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:02:24.620334   61354 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:02:25.379199   61354 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:02:25.605374   61354 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:02:25.689838   61354 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:02:25.787873   61354 api_server.go:52] waiting for apiserver process to appear ...
	I0912 23:02:25.787952   61354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:26.288869   61354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:26.788863   61354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:24.746085   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:25.245836   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:25.745805   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:26.246312   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:26.745772   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:27.245309   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:27.745530   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:28.245792   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:28.745917   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:29.245542   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:27.474741   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:29.974093   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:29.019453   62943 api_server.go:269] stopped: https://192.168.50.253:8443/healthz: Get "https://192.168.50.253:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 23:02:29.019501   62943 api_server.go:253] Checking apiserver healthz at https://192.168.50.253:8443/healthz ...
	I0912 23:02:27.288650   61354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:27.788577   61354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:27.803146   61354 api_server.go:72] duration metric: took 2.015269708s to wait for apiserver process to appear ...
	I0912 23:02:27.803175   61354 api_server.go:88] waiting for apiserver healthz status ...
	I0912 23:02:27.803196   61354 api_server.go:253] Checking apiserver healthz at https://192.168.39.214:8444/healthz ...
	I0912 23:02:27.803838   61354 api_server.go:269] stopped: https://192.168.39.214:8444/healthz: Get "https://192.168.39.214:8444/healthz": dial tcp 192.168.39.214:8444: connect: connection refused
	I0912 23:02:28.304001   61354 api_server.go:253] Checking apiserver healthz at https://192.168.39.214:8444/healthz ...
	I0912 23:02:30.918251   61354 api_server.go:279] https://192.168.39.214:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0912 23:02:30.918285   61354 api_server.go:103] status: https://192.168.39.214:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0912 23:02:30.918300   61354 api_server.go:253] Checking apiserver healthz at https://192.168.39.214:8444/healthz ...
	I0912 23:02:30.985245   61354 api_server.go:279] https://192.168.39.214:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0912 23:02:30.985276   61354 api_server.go:103] status: https://192.168.39.214:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0912 23:02:31.303790   61354 api_server.go:253] Checking apiserver healthz at https://192.168.39.214:8444/healthz ...
	I0912 23:02:31.309221   61354 api_server.go:279] https://192.168.39.214:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0912 23:02:31.309255   61354 api_server.go:103] status: https://192.168.39.214:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0912 23:02:31.803907   61354 api_server.go:253] Checking apiserver healthz at https://192.168.39.214:8444/healthz ...
	I0912 23:02:31.808683   61354 api_server.go:279] https://192.168.39.214:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0912 23:02:31.808708   61354 api_server.go:103] status: https://192.168.39.214:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0912 23:02:32.303720   61354 api_server.go:253] Checking apiserver healthz at https://192.168.39.214:8444/healthz ...
	I0912 23:02:32.309378   61354 api_server.go:279] https://192.168.39.214:8444/healthz returned 200:
	ok
	I0912 23:02:32.318177   61354 api_server.go:141] control plane version: v1.31.1
	I0912 23:02:32.318207   61354 api_server.go:131] duration metric: took 4.515025163s to wait for apiserver health ...
	I0912 23:02:32.318217   61354 cni.go:84] Creating CNI manager for ""
	I0912 23:02:32.318225   61354 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0912 23:02:32.319660   61354 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0912 23:02:29.746186   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:30.245501   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:30.745636   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:31.245440   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:31.745457   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:32.246318   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:32.745369   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:33.246152   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:33.746183   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:34.245452   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:31.974622   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:34.473549   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:34.019784   62943 api_server.go:269] stopped: https://192.168.50.253:8443/healthz: Get "https://192.168.50.253:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 23:02:34.019838   62943 api_server.go:253] Checking apiserver healthz at https://192.168.50.253:8443/healthz ...
	I0912 23:02:32.320695   61354 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0912 23:02:32.338749   61354 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0912 23:02:32.369921   61354 system_pods.go:43] waiting for kube-system pods to appear ...
	I0912 23:02:32.385934   61354 system_pods.go:59] 8 kube-system pods found
	I0912 23:02:32.385966   61354 system_pods.go:61] "coredns-7c65d6cfc9-ffms7" [d341bfb6-115b-4a9b-8ee5-ac0f6e0cf97a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0912 23:02:32.385986   61354 system_pods.go:61] "etcd-default-k8s-diff-port-702201" [c0c55fa9-3c65-4299-a1bb-59a55585a525] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0912 23:02:32.385996   61354 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-702201" [bf79734c-4cbc-4924-9358-f0196b357303] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0912 23:02:32.386007   61354 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-702201" [92a6ae59-ae75-4c08-a7dc-a77841be564b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0912 23:02:32.386019   61354 system_pods.go:61] "kube-proxy-x8hg2" [ef603b08-213d-4edb-85e6-e8b91f8fbbba] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0912 23:02:32.386027   61354 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-702201" [10021400-9446-46f6-aff0-e3eb3c0be96a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0912 23:02:32.386041   61354 system_pods.go:61] "metrics-server-6867b74b74-q5vlk" [d6719976-8c0c-444f-a1ea-dd3bdb0d5707] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0912 23:02:32.386051   61354 system_pods.go:61] "storage-provisioner" [6fdb298d-7e96-4cbb-b755-d866514e44b9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0912 23:02:32.386063   61354 system_pods.go:74] duration metric: took 16.120876ms to wait for pod list to return data ...
	I0912 23:02:32.386074   61354 node_conditions.go:102] verifying NodePressure condition ...
	I0912 23:02:32.391917   61354 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0912 23:02:32.391949   61354 node_conditions.go:123] node cpu capacity is 2
	I0912 23:02:32.391961   61354 node_conditions.go:105] duration metric: took 5.88075ms to run NodePressure ...
	I0912 23:02:32.391981   61354 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:02:32.671906   61354 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0912 23:02:32.677468   61354 kubeadm.go:739] kubelet initialised
	I0912 23:02:32.677494   61354 kubeadm.go:740] duration metric: took 5.561384ms waiting for restarted kubelet to initialise ...
	I0912 23:02:32.677503   61354 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 23:02:32.682823   61354 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-ffms7" in "kube-system" namespace to be "Ready" ...
	I0912 23:02:34.689536   61354 pod_ready.go:103] pod "coredns-7c65d6cfc9-ffms7" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:36.689748   61354 pod_ready.go:103] pod "coredns-7c65d6cfc9-ffms7" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:34.746241   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:35.246108   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:35.746087   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:36.245732   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:36.745659   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:37.245760   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:37.746137   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:38.245355   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:38.745905   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:39.246196   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:36.976523   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:39.473513   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:39.020907   62943 api_server.go:269] stopped: https://192.168.50.253:8443/healthz: Get "https://192.168.50.253:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 23:02:39.020949   62943 api_server.go:253] Checking apiserver healthz at https://192.168.50.253:8443/healthz ...
	I0912 23:02:39.398775   62943 api_server.go:269] stopped: https://192.168.50.253:8443/healthz: Get "https://192.168.50.253:8443/healthz": read tcp 192.168.50.1:34338->192.168.50.253:8443: read: connection reset by peer
	I0912 23:02:39.518000   62943 api_server.go:253] Checking apiserver healthz at https://192.168.50.253:8443/healthz ...
	I0912 23:02:39.518572   62943 api_server.go:269] stopped: https://192.168.50.253:8443/healthz: Get "https://192.168.50.253:8443/healthz": dial tcp 192.168.50.253:8443: connect: connection refused
	I0912 23:02:40.018526   62943 api_server.go:253] Checking apiserver healthz at https://192.168.50.253:8443/healthz ...
	I0912 23:02:40.019085   62943 api_server.go:269] stopped: https://192.168.50.253:8443/healthz: Get "https://192.168.50.253:8443/healthz": dial tcp 192.168.50.253:8443: connect: connection refused
	I0912 23:02:40.518456   62943 api_server.go:253] Checking apiserver healthz at https://192.168.50.253:8443/healthz ...
	I0912 23:02:37.692070   61354 pod_ready.go:93] pod "coredns-7c65d6cfc9-ffms7" in "kube-system" namespace has status "Ready":"True"
	I0912 23:02:37.692105   61354 pod_ready.go:82] duration metric: took 5.009256797s for pod "coredns-7c65d6cfc9-ffms7" in "kube-system" namespace to be "Ready" ...
	I0912 23:02:37.692119   61354 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-702201" in "kube-system" namespace to be "Ready" ...
	I0912 23:02:39.703004   61354 pod_ready.go:93] pod "etcd-default-k8s-diff-port-702201" in "kube-system" namespace has status "Ready":"True"
	I0912 23:02:39.703029   61354 pod_ready.go:82] duration metric: took 2.010902876s for pod "etcd-default-k8s-diff-port-702201" in "kube-system" namespace to be "Ready" ...
	I0912 23:02:39.703038   61354 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-702201" in "kube-system" namespace to be "Ready" ...
	I0912 23:02:41.709956   61354 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-702201" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:39.745643   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:40.245485   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:40.745582   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:41.245599   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:41.746339   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:42.246155   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:42.746334   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:43.245368   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:43.745371   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:44.246050   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:41.473779   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:43.475011   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:45.519472   62943 api_server.go:269] stopped: https://192.168.50.253:8443/healthz: Get "https://192.168.50.253:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 23:02:45.519513   62943 api_server.go:253] Checking apiserver healthz at https://192.168.50.253:8443/healthz ...
	I0912 23:02:44.210871   61354 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-702201" in "kube-system" namespace has status "Ready":"True"
	I0912 23:02:44.210896   61354 pod_ready.go:82] duration metric: took 4.507851295s for pod "kube-apiserver-default-k8s-diff-port-702201" in "kube-system" namespace to be "Ready" ...
	I0912 23:02:44.210905   61354 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-702201" in "kube-system" namespace to be "Ready" ...
	I0912 23:02:44.216677   61354 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-702201" in "kube-system" namespace has status "Ready":"True"
	I0912 23:02:44.216698   61354 pod_ready.go:82] duration metric: took 5.785493ms for pod "kube-controller-manager-default-k8s-diff-port-702201" in "kube-system" namespace to be "Ready" ...
	I0912 23:02:44.216708   61354 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-x8hg2" in "kube-system" namespace to be "Ready" ...
	I0912 23:02:44.220720   61354 pod_ready.go:93] pod "kube-proxy-x8hg2" in "kube-system" namespace has status "Ready":"True"
	I0912 23:02:44.220744   61354 pod_ready.go:82] duration metric: took 4.031371ms for pod "kube-proxy-x8hg2" in "kube-system" namespace to be "Ready" ...
	I0912 23:02:44.220753   61354 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-702201" in "kube-system" namespace to be "Ready" ...
	I0912 23:02:45.727199   61354 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-702201" in "kube-system" namespace has status "Ready":"True"
	I0912 23:02:45.727226   61354 pod_ready.go:82] duration metric: took 1.506465715s for pod "kube-scheduler-default-k8s-diff-port-702201" in "kube-system" namespace to be "Ready" ...
	I0912 23:02:45.727238   61354 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace to be "Ready" ...
	I0912 23:02:44.746354   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:45.245964   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:45.745631   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:46.246314   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:46.745483   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:47.245554   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:47.746311   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:48.246160   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:48.745999   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:49.246000   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:02:49.246093   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:02:49.286022   62386 cri.go:89] found id: ""
	I0912 23:02:49.286052   62386 logs.go:276] 0 containers: []
	W0912 23:02:49.286063   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:02:49.286070   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:02:49.286121   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:02:49.320469   62386 cri.go:89] found id: ""
	I0912 23:02:49.320508   62386 logs.go:276] 0 containers: []
	W0912 23:02:49.320527   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:02:49.320535   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:02:49.320635   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:02:45.973431   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:47.973882   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:49.974075   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:50.520522   62943 api_server.go:269] stopped: https://192.168.50.253:8443/healthz: Get "https://192.168.50.253:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 23:02:50.520570   62943 api_server.go:253] Checking apiserver healthz at https://192.168.50.253:8443/healthz ...
	I0912 23:02:47.732861   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:49.735642   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:52.232946   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:49.355651   62386 cri.go:89] found id: ""
	I0912 23:02:49.355682   62386 logs.go:276] 0 containers: []
	W0912 23:02:49.355694   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:02:49.355702   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:02:49.355757   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:02:49.387928   62386 cri.go:89] found id: ""
	I0912 23:02:49.387956   62386 logs.go:276] 0 containers: []
	W0912 23:02:49.387966   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:02:49.387980   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:02:49.388042   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:02:49.421154   62386 cri.go:89] found id: ""
	I0912 23:02:49.421184   62386 logs.go:276] 0 containers: []
	W0912 23:02:49.421192   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:02:49.421198   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:02:49.421258   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:02:49.460122   62386 cri.go:89] found id: ""
	I0912 23:02:49.460147   62386 logs.go:276] 0 containers: []
	W0912 23:02:49.460154   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:02:49.460159   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:02:49.460204   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:02:49.493113   62386 cri.go:89] found id: ""
	I0912 23:02:49.493136   62386 logs.go:276] 0 containers: []
	W0912 23:02:49.493144   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:02:49.493150   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:02:49.493196   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:02:49.525750   62386 cri.go:89] found id: ""
	I0912 23:02:49.525773   62386 logs.go:276] 0 containers: []
	W0912 23:02:49.525780   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:02:49.525790   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:02:49.525800   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:02:49.578720   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:02:49.578757   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:02:49.591483   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:02:49.591510   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:02:49.711769   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:02:49.711836   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:02:49.711854   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:02:49.792569   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:02:49.792620   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:02:52.333723   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:52.346359   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:02:52.346428   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:02:52.379990   62386 cri.go:89] found id: ""
	I0912 23:02:52.380017   62386 logs.go:276] 0 containers: []
	W0912 23:02:52.380025   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:02:52.380032   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:02:52.380089   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:02:52.413963   62386 cri.go:89] found id: ""
	I0912 23:02:52.413994   62386 logs.go:276] 0 containers: []
	W0912 23:02:52.414002   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:02:52.414007   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:02:52.414064   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:02:52.463982   62386 cri.go:89] found id: ""
	I0912 23:02:52.464012   62386 logs.go:276] 0 containers: []
	W0912 23:02:52.464024   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:02:52.464031   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:02:52.464119   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:02:52.497797   62386 cri.go:89] found id: ""
	I0912 23:02:52.497830   62386 logs.go:276] 0 containers: []
	W0912 23:02:52.497840   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:02:52.497848   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:02:52.497914   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:02:52.531946   62386 cri.go:89] found id: ""
	I0912 23:02:52.531974   62386 logs.go:276] 0 containers: []
	W0912 23:02:52.531982   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:02:52.531987   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:02:52.532036   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:02:52.563802   62386 cri.go:89] found id: ""
	I0912 23:02:52.563837   62386 logs.go:276] 0 containers: []
	W0912 23:02:52.563846   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:02:52.563859   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:02:52.563914   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:02:52.597408   62386 cri.go:89] found id: ""
	I0912 23:02:52.597437   62386 logs.go:276] 0 containers: []
	W0912 23:02:52.597447   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:02:52.597457   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:02:52.597529   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:02:52.634991   62386 cri.go:89] found id: ""
	I0912 23:02:52.635026   62386 logs.go:276] 0 containers: []
	W0912 23:02:52.635037   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:02:52.635049   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:02:52.635061   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:02:52.711072   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:02:52.711112   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:02:52.755335   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:02:52.755359   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:02:52.806660   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:02:52.806694   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:02:52.819718   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:02:52.819751   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:02:52.897247   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:02:52.474466   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:54.974351   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:55.520831   62943 api_server.go:269] stopped: https://192.168.50.253:8443/healthz: Get "https://192.168.50.253:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 23:02:55.520879   62943 api_server.go:253] Checking apiserver healthz at https://192.168.50.253:8443/healthz ...
	I0912 23:02:54.233244   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:56.234057   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:55.398028   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:55.411839   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:02:55.411920   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:02:55.446367   62386 cri.go:89] found id: ""
	I0912 23:02:55.446402   62386 logs.go:276] 0 containers: []
	W0912 23:02:55.446414   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:02:55.446421   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:02:55.446489   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:02:55.481672   62386 cri.go:89] found id: ""
	I0912 23:02:55.481696   62386 logs.go:276] 0 containers: []
	W0912 23:02:55.481704   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:02:55.481709   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:02:55.481766   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:02:55.517577   62386 cri.go:89] found id: ""
	I0912 23:02:55.517628   62386 logs.go:276] 0 containers: []
	W0912 23:02:55.517640   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:02:55.517651   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:02:55.517724   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:02:55.553526   62386 cri.go:89] found id: ""
	I0912 23:02:55.553554   62386 logs.go:276] 0 containers: []
	W0912 23:02:55.553565   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:02:55.553572   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:02:55.553659   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:02:55.585628   62386 cri.go:89] found id: ""
	I0912 23:02:55.585658   62386 logs.go:276] 0 containers: []
	W0912 23:02:55.585666   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:02:55.585673   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:02:55.585729   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:02:55.619504   62386 cri.go:89] found id: ""
	I0912 23:02:55.619529   62386 logs.go:276] 0 containers: []
	W0912 23:02:55.619537   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:02:55.619543   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:02:55.619612   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:02:55.652478   62386 cri.go:89] found id: ""
	I0912 23:02:55.652505   62386 logs.go:276] 0 containers: []
	W0912 23:02:55.652513   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:02:55.652519   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:02:55.652571   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:02:55.685336   62386 cri.go:89] found id: ""
	I0912 23:02:55.685367   62386 logs.go:276] 0 containers: []
	W0912 23:02:55.685378   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:02:55.685389   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:02:55.685405   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:02:55.766786   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:02:55.766820   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:02:55.805897   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:02:55.805921   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:02:55.858536   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:02:55.858578   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:02:55.872300   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:02:55.872330   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:02:55.940023   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:02:58.440335   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:58.454063   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:02:58.454146   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:02:58.495390   62386 cri.go:89] found id: ""
	I0912 23:02:58.495418   62386 logs.go:276] 0 containers: []
	W0912 23:02:58.495429   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:02:58.495436   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:02:58.495491   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:02:58.533323   62386 cri.go:89] found id: ""
	I0912 23:02:58.533361   62386 logs.go:276] 0 containers: []
	W0912 23:02:58.533369   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:02:58.533374   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:02:58.533426   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:02:58.570749   62386 cri.go:89] found id: ""
	I0912 23:02:58.570772   62386 logs.go:276] 0 containers: []
	W0912 23:02:58.570779   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:02:58.570785   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:02:58.570838   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:02:58.602812   62386 cri.go:89] found id: ""
	I0912 23:02:58.602841   62386 logs.go:276] 0 containers: []
	W0912 23:02:58.602852   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:02:58.602861   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:02:58.602920   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:02:58.641837   62386 cri.go:89] found id: ""
	I0912 23:02:58.641868   62386 logs.go:276] 0 containers: []
	W0912 23:02:58.641875   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:02:58.641881   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:02:58.641951   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:02:58.679411   62386 cri.go:89] found id: ""
	I0912 23:02:58.679437   62386 logs.go:276] 0 containers: []
	W0912 23:02:58.679444   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:02:58.679449   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:02:58.679495   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:02:58.715666   62386 cri.go:89] found id: ""
	I0912 23:02:58.715693   62386 logs.go:276] 0 containers: []
	W0912 23:02:58.715701   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:02:58.715707   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:02:58.715765   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:02:58.750345   62386 cri.go:89] found id: ""
	I0912 23:02:58.750367   62386 logs.go:276] 0 containers: []
	W0912 23:02:58.750375   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:02:58.750383   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:02:58.750395   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:02:58.803683   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:02:58.803722   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:02:58.819479   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:02:58.819512   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:02:58.939708   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:02:58.939733   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:02:58.939752   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:02:59.031209   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:02:59.031241   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:02:58.535050   62943 api_server.go:279] https://192.168.50.253:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0912 23:02:58.535080   62943 api_server.go:103] status: https://192.168.50.253:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0912 23:02:58.535094   62943 api_server.go:253] Checking apiserver healthz at https://192.168.50.253:8443/healthz ...
	I0912 23:02:58.552759   62943 api_server.go:279] https://192.168.50.253:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0912 23:02:58.552792   62943 api_server.go:103] status: https://192.168.50.253:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0912 23:02:59.018401   62943 api_server.go:253] Checking apiserver healthz at https://192.168.50.253:8443/healthz ...
	I0912 23:02:59.026830   62943 api_server.go:279] https://192.168.50.253:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0912 23:02:59.026861   62943 api_server.go:103] status: https://192.168.50.253:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0912 23:02:59.518413   62943 api_server.go:253] Checking apiserver healthz at https://192.168.50.253:8443/healthz ...
	I0912 23:02:59.523435   62943 api_server.go:279] https://192.168.50.253:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0912 23:02:59.523469   62943 api_server.go:103] status: https://192.168.50.253:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0912 23:03:00.018452   62943 api_server.go:253] Checking apiserver healthz at https://192.168.50.253:8443/healthz ...
	I0912 23:03:00.023786   62943 api_server.go:279] https://192.168.50.253:8443/healthz returned 200:
	ok
	I0912 23:03:00.033543   62943 api_server.go:141] control plane version: v1.31.1
	I0912 23:03:00.033575   62943 api_server.go:131] duration metric: took 41.016185943s to wait for apiserver health ...
	I0912 23:03:00.033585   62943 cni.go:84] Creating CNI manager for ""
	I0912 23:03:00.033595   62943 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0912 23:03:00.035383   62943 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0912 23:02:56.975435   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:59.473968   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:00.036655   62943 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0912 23:03:00.051876   62943 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0912 23:03:00.082432   62943 system_pods.go:43] waiting for kube-system pods to appear ...
	I0912 23:03:00.101427   62943 system_pods.go:59] 8 kube-system pods found
	I0912 23:03:00.101465   62943 system_pods.go:61] "coredns-7c65d6cfc9-twck7" [2fb00aff-8a30-4634-a804-1419eabfe727] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0912 23:03:00.101477   62943 system_pods.go:61] "etcd-no-preload-380092" [69b6be54-dd29-47c7-b990-a64335dd6d7b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0912 23:03:00.101488   62943 system_pods.go:61] "kube-apiserver-no-preload-380092" [10ff70db-3c74-42ad-841d-d2241de4b98e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0912 23:03:00.101498   62943 system_pods.go:61] "kube-controller-manager-no-preload-380092" [6e91c5b2-36fc-404e-9f09-c1bc9da46774] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0912 23:03:00.101512   62943 system_pods.go:61] "kube-proxy-z4rcx" [d17caa2e-d0fe-45e8-a96c-d1cc1b55e665] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0912 23:03:00.101518   62943 system_pods.go:61] "kube-scheduler-no-preload-380092" [5c634cac-6b28-4757-ba85-891c4c2fa34e] Running
	I0912 23:03:00.101526   62943 system_pods.go:61] "metrics-server-6867b74b74-4v7f5" [10c8c536-9ca6-4e75-96f2-7324f3d3d379] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0912 23:03:00.101537   62943 system_pods.go:61] "storage-provisioner" [f173a1f6-3772-4f08-8e40-2215cc9d2878] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0912 23:03:00.101554   62943 system_pods.go:74] duration metric: took 19.092541ms to wait for pod list to return data ...
	I0912 23:03:00.101566   62943 node_conditions.go:102] verifying NodePressure condition ...
	I0912 23:03:00.105149   62943 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0912 23:03:00.105183   62943 node_conditions.go:123] node cpu capacity is 2
	I0912 23:03:00.105197   62943 node_conditions.go:105] duration metric: took 3.62458ms to run NodePressure ...
	I0912 23:03:00.105218   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:03:00.583613   62943 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0912 23:03:00.588976   62943 kubeadm.go:739] kubelet initialised
	I0912 23:03:00.589000   62943 kubeadm.go:740] duration metric: took 5.359605ms waiting for restarted kubelet to initialise ...
	I0912 23:03:00.589010   62943 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 23:03:00.598717   62943 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-twck7" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:00.619126   62943 pod_ready.go:98] node "no-preload-380092" hosting pod "coredns-7c65d6cfc9-twck7" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-380092" has status "Ready":"False"
	I0912 23:03:00.619153   62943 pod_ready.go:82] duration metric: took 20.405609ms for pod "coredns-7c65d6cfc9-twck7" in "kube-system" namespace to be "Ready" ...
	E0912 23:03:00.619162   62943 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-380092" hosting pod "coredns-7c65d6cfc9-twck7" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-380092" has status "Ready":"False"
	I0912 23:03:00.619169   62943 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-380092" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:00.628727   62943 pod_ready.go:98] node "no-preload-380092" hosting pod "etcd-no-preload-380092" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-380092" has status "Ready":"False"
	I0912 23:03:00.628766   62943 pod_ready.go:82] duration metric: took 9.588722ms for pod "etcd-no-preload-380092" in "kube-system" namespace to be "Ready" ...
	E0912 23:03:00.628778   62943 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-380092" hosting pod "etcd-no-preload-380092" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-380092" has status "Ready":"False"
	I0912 23:03:00.628786   62943 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-380092" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:00.638502   62943 pod_ready.go:98] node "no-preload-380092" hosting pod "kube-apiserver-no-preload-380092" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-380092" has status "Ready":"False"
	I0912 23:03:00.638531   62943 pod_ready.go:82] duration metric: took 9.737333ms for pod "kube-apiserver-no-preload-380092" in "kube-system" namespace to be "Ready" ...
	E0912 23:03:00.638545   62943 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-380092" hosting pod "kube-apiserver-no-preload-380092" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-380092" has status "Ready":"False"
	I0912 23:03:00.638554   62943 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-380092" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:00.644886   62943 pod_ready.go:98] node "no-preload-380092" hosting pod "kube-controller-manager-no-preload-380092" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-380092" has status "Ready":"False"
	I0912 23:03:00.644917   62943 pod_ready.go:82] duration metric: took 6.353295ms for pod "kube-controller-manager-no-preload-380092" in "kube-system" namespace to be "Ready" ...
	E0912 23:03:00.644928   62943 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-380092" hosting pod "kube-controller-manager-no-preload-380092" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-380092" has status "Ready":"False"
	I0912 23:03:00.644936   62943 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-z4rcx" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:00.987565   62943 pod_ready.go:98] node "no-preload-380092" hosting pod "kube-proxy-z4rcx" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-380092" has status "Ready":"False"
	I0912 23:03:00.987592   62943 pod_ready.go:82] duration metric: took 342.646574ms for pod "kube-proxy-z4rcx" in "kube-system" namespace to be "Ready" ...
	E0912 23:03:00.987605   62943 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-380092" hosting pod "kube-proxy-z4rcx" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-380092" has status "Ready":"False"
	I0912 23:03:00.987614   62943 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-380092" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:01.386942   62943 pod_ready.go:98] node "no-preload-380092" hosting pod "kube-scheduler-no-preload-380092" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-380092" has status "Ready":"False"
	I0912 23:03:01.386970   62943 pod_ready.go:82] duration metric: took 399.349066ms for pod "kube-scheduler-no-preload-380092" in "kube-system" namespace to be "Ready" ...
	E0912 23:03:01.386983   62943 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-380092" hosting pod "kube-scheduler-no-preload-380092" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-380092" has status "Ready":"False"
	I0912 23:03:01.386991   62943 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:01.787866   62943 pod_ready.go:98] node "no-preload-380092" hosting pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-380092" has status "Ready":"False"
	I0912 23:03:01.787897   62943 pod_ready.go:82] duration metric: took 400.896489ms for pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace to be "Ready" ...
	E0912 23:03:01.787906   62943 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-380092" hosting pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-380092" has status "Ready":"False"
	I0912 23:03:01.787913   62943 pod_ready.go:39] duration metric: took 1.198893167s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
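
The pod_ready wait above skips every system pod for the same reason: the node no-preload-380092 itself still reports Ready=False, so per-pod readiness cannot be satisfied yet and the loop falls through after roughly 1.2s instead of blocking for the full 4m0s per pod. A rough manual equivalent of that check, assuming the no-preload-380092 context exists in the local kubeconfig (an assumption, not taken from this run), would be:

	kubectl --context no-preload-380092 get node no-preload-380092 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	kubectl --context no-preload-380092 -n kube-system get pods -o wide
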
	I0912 23:03:01.787929   62943 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0912 23:03:01.803486   62943 ops.go:34] apiserver oom_adj: -16
	I0912 23:03:01.803507   62943 kubeadm.go:597] duration metric: took 45.468348317s to restartPrimaryControlPlane
	I0912 23:03:01.803518   62943 kubeadm.go:394] duration metric: took 45.529458545s to StartCluster
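
Before restartPrimaryControlPlane is declared finished (about 45.5s in total here), minikube reads /proc/<apiserver pid>/oom_adj; a negative value such as the -16 seen above confirms both that the kube-apiserver process is running again and that it is deprioritized for the kernel OOM killer. A hypothetical manual spot-check along the same lines:

	sudo cat /proc/$(pgrep -nf kube-apiserver)/oom_adj   # expect a negative value, e.g. -16
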
	I0912 23:03:01.803533   62943 settings.go:142] acquiring lock: {Name:mk9c957feafb8d7ccd833ad0c106ef81ecfe5ba8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 23:03:01.803615   62943 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19616-5891/kubeconfig
	I0912 23:03:01.806430   62943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/kubeconfig: {Name:mkffb46c3e9d2b8baebc7237b48bf41bccf1a52c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 23:03:01.806730   62943 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.253 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0912 23:03:01.806804   62943 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0912 23:03:01.806874   62943 addons.go:69] Setting storage-provisioner=true in profile "no-preload-380092"
	I0912 23:03:01.806898   62943 addons.go:69] Setting default-storageclass=true in profile "no-preload-380092"
	I0912 23:03:01.806914   62943 addons.go:69] Setting metrics-server=true in profile "no-preload-380092"
	I0912 23:03:01.806932   62943 addons.go:234] Setting addon metrics-server=true in "no-preload-380092"
	I0912 23:03:01.806937   62943 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-380092"
	W0912 23:03:01.806944   62943 addons.go:243] addon metrics-server should already be in state true
	I0912 23:03:01.806948   62943 config.go:182] Loaded profile config "no-preload-380092": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 23:03:01.806978   62943 host.go:66] Checking if "no-preload-380092" exists ...
	I0912 23:03:01.806909   62943 addons.go:234] Setting addon storage-provisioner=true in "no-preload-380092"
	W0912 23:03:01.806995   62943 addons.go:243] addon storage-provisioner should already be in state true
	I0912 23:03:01.807018   62943 host.go:66] Checking if "no-preload-380092" exists ...
	I0912 23:03:01.807284   62943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:03:01.807301   62943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:03:01.807309   62943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:03:01.807349   62943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:03:01.807363   62943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:03:01.807373   62943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:03:01.809540   62943 out.go:177] * Verifying Kubernetes components...
	I0912 23:03:01.810843   62943 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 23:03:01.824985   62943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32987
	I0912 23:03:01.825219   62943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45739
	I0912 23:03:01.825700   62943 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:03:01.826207   62943 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:03:01.826562   62943 main.go:141] libmachine: Using API Version  1
	I0912 23:03:01.826586   62943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:03:01.826737   62943 main.go:141] libmachine: Using API Version  1
	I0912 23:03:01.826759   62943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:03:01.826970   62943 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:03:01.827047   62943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35143
	I0912 23:03:01.827219   62943 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:03:01.827623   62943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:03:01.827668   62943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:03:01.827724   62943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:03:01.827752   62943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:03:01.827946   62943 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:03:01.828629   62943 main.go:141] libmachine: Using API Version  1
	I0912 23:03:01.828652   62943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:03:01.829143   62943 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:03:01.829336   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetState
	I0912 23:03:01.833298   62943 addons.go:234] Setting addon default-storageclass=true in "no-preload-380092"
	W0912 23:03:01.833320   62943 addons.go:243] addon default-storageclass should already be in state true
	I0912 23:03:01.833348   62943 host.go:66] Checking if "no-preload-380092" exists ...
	I0912 23:03:01.833737   62943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:03:01.833768   62943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:03:01.847465   62943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40485
	I0912 23:03:01.848132   62943 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:03:01.848218   62943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46487
	I0912 23:03:01.848635   62943 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:03:01.849006   62943 main.go:141] libmachine: Using API Version  1
	I0912 23:03:01.849024   62943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:03:01.849185   62943 main.go:141] libmachine: Using API Version  1
	I0912 23:03:01.849197   62943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:03:01.849589   62943 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:03:01.849756   62943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41723
	I0912 23:03:01.849909   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetState
	I0912 23:03:01.850287   62943 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:03:01.850375   62943 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:03:01.850446   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetState
	I0912 23:03:01.851043   62943 main.go:141] libmachine: Using API Version  1
	I0912 23:03:01.851061   62943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:03:01.851397   62943 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:03:01.851935   62943 main.go:141] libmachine: (no-preload-380092) Calling .DriverName
	I0912 23:03:01.852036   62943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:03:01.852082   62943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:03:01.852907   62943 main.go:141] libmachine: (no-preload-380092) Calling .DriverName
	I0912 23:03:01.854324   62943 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0912 23:03:01.855272   62943 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 23:03:01.856071   62943 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0912 23:03:01.856092   62943 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0912 23:03:01.856115   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHHostname
	I0912 23:03:01.857163   62943 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0912 23:03:01.857184   62943 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0912 23:03:01.857206   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHHostname
	I0912 23:03:01.861326   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:03:01.861344   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:03:01.861874   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:03:01.861894   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:03:01.862197   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHPort
	I0912 23:03:01.862292   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:03:01.862588   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:03:01.862627   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHKeyPath
	I0912 23:03:01.862668   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHPort
	I0912 23:03:01.862751   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHUsername
	I0912 23:03:01.862900   62943 sshutil.go:53] new ssh client: &{IP:192.168.50.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/no-preload-380092/id_rsa Username:docker}
	I0912 23:03:01.862917   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHKeyPath
	I0912 23:03:01.863057   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHUsername
	I0912 23:03:01.863161   62943 sshutil.go:53] new ssh client: &{IP:192.168.50.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/no-preload-380092/id_rsa Username:docker}
	I0912 23:03:01.872673   62943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42483
	I0912 23:03:01.873156   62943 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:03:01.873848   62943 main.go:141] libmachine: Using API Version  1
	I0912 23:03:01.873924   62943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:03:01.874438   62943 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:03:01.874719   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetState
	I0912 23:03:01.876928   62943 main.go:141] libmachine: (no-preload-380092) Calling .DriverName
	I0912 23:03:01.877226   62943 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0912 23:03:01.877252   62943 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0912 23:03:01.877268   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHHostname
	I0912 23:03:01.880966   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:03:01.881372   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:03:01.881399   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:03:01.881915   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHPort
	I0912 23:03:01.885353   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHKeyPath
	I0912 23:03:01.885585   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHUsername
	I0912 23:03:01.885765   62943 sshutil.go:53] new ssh client: &{IP:192.168.50.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/no-preload-380092/id_rsa Username:docker}
	I0912 23:02:58.234446   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:00.235816   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:02.035632   62943 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0912 23:03:02.065690   62943 node_ready.go:35] waiting up to 6m0s for node "no-preload-380092" to be "Ready" ...
	I0912 23:03:02.132250   62943 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0912 23:03:02.148150   62943 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0912 23:03:02.270629   62943 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0912 23:03:02.270652   62943 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0912 23:03:02.346093   62943 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0912 23:03:02.346119   62943 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0912 23:03:02.371110   62943 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0912 23:03:02.371133   62943 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0912 23:03:02.415856   62943 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0912 23:03:03.287692   62943 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.13950787s)
	I0912 23:03:03.287695   62943 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.155412179s)
	I0912 23:03:03.287752   62943 main.go:141] libmachine: Making call to close driver server
	I0912 23:03:03.287756   62943 main.go:141] libmachine: Making call to close driver server
	I0912 23:03:03.287764   62943 main.go:141] libmachine: (no-preload-380092) Calling .Close
	I0912 23:03:03.287769   62943 main.go:141] libmachine: (no-preload-380092) Calling .Close
	I0912 23:03:03.288100   62943 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:03:03.288115   62943 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:03:03.288124   62943 main.go:141] libmachine: Making call to close driver server
	I0912 23:03:03.288130   62943 main.go:141] libmachine: (no-preload-380092) Calling .Close
	I0912 23:03:03.288252   62943 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:03:03.288270   62943 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:03:03.288293   62943 main.go:141] libmachine: (no-preload-380092) DBG | Closing plugin on server side
	I0912 23:03:03.288297   62943 main.go:141] libmachine: Making call to close driver server
	I0912 23:03:03.288454   62943 main.go:141] libmachine: (no-preload-380092) Calling .Close
	I0912 23:03:03.288321   62943 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:03:03.288507   62943 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:03:03.288346   62943 main.go:141] libmachine: (no-preload-380092) DBG | Closing plugin on server side
	I0912 23:03:03.288671   62943 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:03:03.288682   62943 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:03:03.294958   62943 main.go:141] libmachine: Making call to close driver server
	I0912 23:03:03.294982   62943 main.go:141] libmachine: (no-preload-380092) Calling .Close
	I0912 23:03:03.295233   62943 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:03:03.295252   62943 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:03:03.295254   62943 main.go:141] libmachine: (no-preload-380092) DBG | Closing plugin on server side
	I0912 23:03:03.492450   62943 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.076542284s)
	I0912 23:03:03.492503   62943 main.go:141] libmachine: Making call to close driver server
	I0912 23:03:03.492516   62943 main.go:141] libmachine: (no-preload-380092) Calling .Close
	I0912 23:03:03.492830   62943 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:03:03.492855   62943 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:03:03.492866   62943 main.go:141] libmachine: Making call to close driver server
	I0912 23:03:03.492885   62943 main.go:141] libmachine: (no-preload-380092) Calling .Close
	I0912 23:03:03.493108   62943 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:03:03.493121   62943 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:03:03.493132   62943 addons.go:475] Verifying addon metrics-server=true in "no-preload-380092"
	I0912 23:03:03.495865   62943 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
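
The three addons come up in under two seconds of kubectl applies over SSH. Note that metrics-server in this profile is pointed at fake.domain/registry.k8s.io/echoserver:1.4 (see the "Using image" line above), so its pod is not expected to pull an image or become Ready, which is presumably intentional in these tests; the interleaved pod_ready.go:103 lines for metrics-server-6867b74b74-* pods from the other profiles in this run are polling the same kind of never-ready deployment. Outside of the integration harness, the manual equivalent of what this log does would be roughly the following sketch (profile name reused from this run):

	minikube -p no-preload-380092 addons enable metrics-server
	kubectl --context no-preload-380092 -n kube-system rollout status deploy/metrics-server
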
	I0912 23:03:01.578409   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:01.591929   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:01.592004   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:01.626295   62386 cri.go:89] found id: ""
	I0912 23:03:01.626327   62386 logs.go:276] 0 containers: []
	W0912 23:03:01.626339   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:01.626346   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:01.626406   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:01.660489   62386 cri.go:89] found id: ""
	I0912 23:03:01.660520   62386 logs.go:276] 0 containers: []
	W0912 23:03:01.660543   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:01.660563   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:01.660618   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:01.694378   62386 cri.go:89] found id: ""
	I0912 23:03:01.694401   62386 logs.go:276] 0 containers: []
	W0912 23:03:01.694408   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:01.694414   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:01.694467   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:01.733170   62386 cri.go:89] found id: ""
	I0912 23:03:01.733202   62386 logs.go:276] 0 containers: []
	W0912 23:03:01.733211   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:01.733237   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:01.733307   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:01.766419   62386 cri.go:89] found id: ""
	I0912 23:03:01.766449   62386 logs.go:276] 0 containers: []
	W0912 23:03:01.766457   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:01.766467   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:01.766530   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:01.802964   62386 cri.go:89] found id: ""
	I0912 23:03:01.802988   62386 logs.go:276] 0 containers: []
	W0912 23:03:01.802995   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:01.803001   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:01.803047   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:01.846231   62386 cri.go:89] found id: ""
	I0912 23:03:01.846257   62386 logs.go:276] 0 containers: []
	W0912 23:03:01.846268   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:01.846276   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:01.846340   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:01.889353   62386 cri.go:89] found id: ""
	I0912 23:03:01.889379   62386 logs.go:276] 0 containers: []
	W0912 23:03:01.889387   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:01.889396   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:01.889407   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:01.904850   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:01.904876   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:01.986288   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:01.986311   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:01.986328   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:02.070616   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:02.070646   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:02.111931   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:02.111959   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
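
This log-gathering cycle (PID 62386, a v1.20.0 cluster judging by the /var/lib/minikube/binaries/v1.20.0 path, presumably the old-k8s-version profile) finds no CRI containers at all: every crictl query returns an empty id list, and the kubectl describe nodes fallback is refused on localhost:8443 because no apiserver is listening. The describe-nodes failure is therefore a symptom of the control-plane containers not having been created yet, not a separate error, and the same cycle simply repeats below until that changes. A manual check on the node would look roughly like this (illustrative only):

	sudo crictl ps -a --name kube-apiserver
	sudo journalctl -u kubelet -n 50 --no-pager
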
	I0912 23:03:01.474395   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:03.974266   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:03.497285   62943 addons.go:510] duration metric: took 1.690482366s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0912 23:03:04.069715   62943 node_ready.go:53] node "no-preload-380092" has status "Ready":"False"
	I0912 23:03:06.070086   62943 node_ready.go:53] node "no-preload-380092" has status "Ready":"False"
	I0912 23:03:02.734363   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:04.735355   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:07.235634   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:04.676429   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:04.689177   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:04.689240   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:04.721393   62386 cri.go:89] found id: ""
	I0912 23:03:04.721420   62386 logs.go:276] 0 containers: []
	W0912 23:03:04.721431   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:04.721437   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:04.721494   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:04.754239   62386 cri.go:89] found id: ""
	I0912 23:03:04.754270   62386 logs.go:276] 0 containers: []
	W0912 23:03:04.754281   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:04.754288   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:04.754340   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:04.787546   62386 cri.go:89] found id: ""
	I0912 23:03:04.787576   62386 logs.go:276] 0 containers: []
	W0912 23:03:04.787590   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:04.787597   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:04.787657   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:04.821051   62386 cri.go:89] found id: ""
	I0912 23:03:04.821141   62386 logs.go:276] 0 containers: []
	W0912 23:03:04.821151   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:04.821157   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:04.821210   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:04.853893   62386 cri.go:89] found id: ""
	I0912 23:03:04.853918   62386 logs.go:276] 0 containers: []
	W0912 23:03:04.853928   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:04.853935   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:04.854013   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:04.887798   62386 cri.go:89] found id: ""
	I0912 23:03:04.887832   62386 logs.go:276] 0 containers: []
	W0912 23:03:04.887843   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:04.887850   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:04.887911   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:04.921562   62386 cri.go:89] found id: ""
	I0912 23:03:04.921587   62386 logs.go:276] 0 containers: []
	W0912 23:03:04.921595   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:04.921600   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:04.921667   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:04.956794   62386 cri.go:89] found id: ""
	I0912 23:03:04.956828   62386 logs.go:276] 0 containers: []
	W0912 23:03:04.956836   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:04.956845   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:04.956856   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:04.993926   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:04.993956   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:05.045381   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:05.045425   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:05.058626   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:05.058665   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:05.128158   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:05.128187   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:05.128205   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:07.707336   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:07.720573   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:07.720646   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:07.756694   62386 cri.go:89] found id: ""
	I0912 23:03:07.756716   62386 logs.go:276] 0 containers: []
	W0912 23:03:07.756724   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:07.756730   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:07.756777   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:07.789255   62386 cri.go:89] found id: ""
	I0912 23:03:07.789286   62386 logs.go:276] 0 containers: []
	W0912 23:03:07.789295   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:07.789318   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:07.789405   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:07.822472   62386 cri.go:89] found id: ""
	I0912 23:03:07.822510   62386 logs.go:276] 0 containers: []
	W0912 23:03:07.822525   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:07.822534   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:07.822594   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:07.859070   62386 cri.go:89] found id: ""
	I0912 23:03:07.859102   62386 logs.go:276] 0 containers: []
	W0912 23:03:07.859114   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:07.859122   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:07.859190   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:07.895128   62386 cri.go:89] found id: ""
	I0912 23:03:07.895155   62386 logs.go:276] 0 containers: []
	W0912 23:03:07.895163   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:07.895169   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:07.895225   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:07.927397   62386 cri.go:89] found id: ""
	I0912 23:03:07.927425   62386 logs.go:276] 0 containers: []
	W0912 23:03:07.927435   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:07.927442   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:07.927506   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:07.965500   62386 cri.go:89] found id: ""
	I0912 23:03:07.965534   62386 logs.go:276] 0 containers: []
	W0912 23:03:07.965546   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:07.965555   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:07.965635   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:08.002921   62386 cri.go:89] found id: ""
	I0912 23:03:08.002952   62386 logs.go:276] 0 containers: []
	W0912 23:03:08.002964   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:08.002974   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:08.002989   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:08.054610   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:08.054646   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:08.071096   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:08.071127   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:08.145573   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:08.145603   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:08.145641   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:08.232606   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:08.232639   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:05.974395   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:08.473180   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:10.474725   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:08.076176   62943 node_ready.go:53] node "no-preload-380092" has status "Ready":"False"
	I0912 23:03:09.570274   62943 node_ready.go:49] node "no-preload-380092" has status "Ready":"True"
	I0912 23:03:09.570298   62943 node_ready.go:38] duration metric: took 7.504574956s for node "no-preload-380092" to be "Ready" ...
	I0912 23:03:09.570308   62943 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 23:03:09.576111   62943 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-twck7" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:09.581239   62943 pod_ready.go:93] pod "coredns-7c65d6cfc9-twck7" in "kube-system" namespace has status "Ready":"True"
	I0912 23:03:09.581261   62943 pod_ready.go:82] duration metric: took 5.122813ms for pod "coredns-7c65d6cfc9-twck7" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:09.581277   62943 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-380092" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:09.585918   62943 pod_ready.go:93] pod "etcd-no-preload-380092" in "kube-system" namespace has status "Ready":"True"
	I0912 23:03:09.585942   62943 pod_ready.go:82] duration metric: took 4.657444ms for pod "etcd-no-preload-380092" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:09.585951   62943 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-380092" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:09.591114   62943 pod_ready.go:93] pod "kube-apiserver-no-preload-380092" in "kube-system" namespace has status "Ready":"True"
	I0912 23:03:09.591136   62943 pod_ready.go:82] duration metric: took 5.179585ms for pod "kube-apiserver-no-preload-380092" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:09.591145   62943 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-380092" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:11.598000   62943 pod_ready.go:103] pod "kube-controller-manager-no-preload-380092" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:09.734628   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:12.233572   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:10.770737   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:10.783728   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:10.783803   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:10.818792   62386 cri.go:89] found id: ""
	I0912 23:03:10.818827   62386 logs.go:276] 0 containers: []
	W0912 23:03:10.818839   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:10.818847   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:10.818913   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:10.851711   62386 cri.go:89] found id: ""
	I0912 23:03:10.851738   62386 logs.go:276] 0 containers: []
	W0912 23:03:10.851750   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:10.851757   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:10.851817   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:10.886935   62386 cri.go:89] found id: ""
	I0912 23:03:10.886963   62386 logs.go:276] 0 containers: []
	W0912 23:03:10.886973   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:10.886979   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:10.887033   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:10.923175   62386 cri.go:89] found id: ""
	I0912 23:03:10.923201   62386 logs.go:276] 0 containers: []
	W0912 23:03:10.923208   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:10.923214   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:10.923261   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:10.959865   62386 cri.go:89] found id: ""
	I0912 23:03:10.959890   62386 logs.go:276] 0 containers: []
	W0912 23:03:10.959897   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:10.959902   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:10.959952   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:10.995049   62386 cri.go:89] found id: ""
	I0912 23:03:10.995079   62386 logs.go:276] 0 containers: []
	W0912 23:03:10.995090   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:10.995097   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:10.995156   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:11.030132   62386 cri.go:89] found id: ""
	I0912 23:03:11.030157   62386 logs.go:276] 0 containers: []
	W0912 23:03:11.030166   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:11.030173   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:11.030242   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:11.062899   62386 cri.go:89] found id: ""
	I0912 23:03:11.062928   62386 logs.go:276] 0 containers: []
	W0912 23:03:11.062936   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:11.062945   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:11.062956   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:11.116511   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:11.116546   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:11.131472   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:11.131504   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:11.202744   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:11.202765   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:11.202781   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:11.293973   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:11.294011   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:13.833125   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:13.846624   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:13.846737   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:13.881744   62386 cri.go:89] found id: ""
	I0912 23:03:13.881784   62386 logs.go:276] 0 containers: []
	W0912 23:03:13.881794   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:13.881802   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:13.881861   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:13.921678   62386 cri.go:89] found id: ""
	I0912 23:03:13.921703   62386 logs.go:276] 0 containers: []
	W0912 23:03:13.921713   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:13.921719   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:13.921778   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:13.960039   62386 cri.go:89] found id: ""
	I0912 23:03:13.960067   62386 logs.go:276] 0 containers: []
	W0912 23:03:13.960077   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:13.960084   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:13.960150   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:14.001255   62386 cri.go:89] found id: ""
	I0912 23:03:14.001281   62386 logs.go:276] 0 containers: []
	W0912 23:03:14.001293   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:14.001318   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:14.001374   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:14.037212   62386 cri.go:89] found id: ""
	I0912 23:03:14.037241   62386 logs.go:276] 0 containers: []
	W0912 23:03:14.037252   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:14.037259   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:14.037319   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:14.071538   62386 cri.go:89] found id: ""
	I0912 23:03:14.071574   62386 logs.go:276] 0 containers: []
	W0912 23:03:14.071582   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:14.071588   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:14.071639   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:14.105561   62386 cri.go:89] found id: ""
	I0912 23:03:14.105590   62386 logs.go:276] 0 containers: []
	W0912 23:03:14.105598   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:14.105604   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:14.105682   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:14.139407   62386 cri.go:89] found id: ""
	I0912 23:03:14.139432   62386 logs.go:276] 0 containers: []
	W0912 23:03:14.139440   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:14.139449   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:14.139463   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:14.195367   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:14.195402   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:14.208632   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:14.208656   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:14.283274   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:14.283292   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:14.283306   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:12.973716   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:15.473265   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:12.097813   62943 pod_ready.go:93] pod "kube-controller-manager-no-preload-380092" in "kube-system" namespace has status "Ready":"True"
	I0912 23:03:12.097844   62943 pod_ready.go:82] duration metric: took 2.506691651s for pod "kube-controller-manager-no-preload-380092" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:12.097858   62943 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-z4rcx" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:12.102303   62943 pod_ready.go:93] pod "kube-proxy-z4rcx" in "kube-system" namespace has status "Ready":"True"
	I0912 23:03:12.102332   62943 pod_ready.go:82] duration metric: took 4.465993ms for pod "kube-proxy-z4rcx" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:12.102344   62943 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-380092" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:12.370318   62943 pod_ready.go:93] pod "kube-scheduler-no-preload-380092" in "kube-system" namespace has status "Ready":"True"
	I0912 23:03:12.370342   62943 pod_ready.go:82] duration metric: took 267.990034ms for pod "kube-scheduler-no-preload-380092" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:12.370351   62943 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:14.377234   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:16.378403   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
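
Once the node goes Ready at 23:03:09 (about 7.5s after the wait began), the static control-plane pods and kube-proxy all report Ready within a few seconds, leaving only metrics-server-6867b74b74-4v7f5 pending for the reason noted earlier. Reproducing this wait by hand rather than through pod_ready.go would look roughly like the sketch below (kubeadm's tier=control-plane pod label is assumed here, not taken from this log):

	kubectl --context no-preload-380092 wait --for=condition=Ready node/no-preload-380092 --timeout=6m
	kubectl --context no-preload-380092 -n kube-system wait --for=condition=Ready pod -l tier=control-plane --timeout=4m
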
	I0912 23:03:14.234341   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:16.733799   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:14.361800   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:14.361839   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:16.900725   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:16.913987   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:16.914047   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:16.950481   62386 cri.go:89] found id: ""
	I0912 23:03:16.950505   62386 logs.go:276] 0 containers: []
	W0912 23:03:16.950513   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:16.950518   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:16.950574   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:16.985928   62386 cri.go:89] found id: ""
	I0912 23:03:16.985955   62386 logs.go:276] 0 containers: []
	W0912 23:03:16.985964   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:16.985969   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:16.986019   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:17.022383   62386 cri.go:89] found id: ""
	I0912 23:03:17.022408   62386 logs.go:276] 0 containers: []
	W0912 23:03:17.022419   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:17.022425   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:17.022483   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:17.060621   62386 cri.go:89] found id: ""
	I0912 23:03:17.060646   62386 logs.go:276] 0 containers: []
	W0912 23:03:17.060655   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:17.060661   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:17.060714   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:17.093465   62386 cri.go:89] found id: ""
	I0912 23:03:17.093496   62386 logs.go:276] 0 containers: []
	W0912 23:03:17.093507   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:17.093513   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:17.093562   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:17.127750   62386 cri.go:89] found id: ""
	I0912 23:03:17.127780   62386 logs.go:276] 0 containers: []
	W0912 23:03:17.127790   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:17.127796   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:17.127850   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:17.167000   62386 cri.go:89] found id: ""
	I0912 23:03:17.167033   62386 logs.go:276] 0 containers: []
	W0912 23:03:17.167042   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:17.167051   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:17.167114   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:17.201116   62386 cri.go:89] found id: ""
	I0912 23:03:17.201140   62386 logs.go:276] 0 containers: []
	W0912 23:03:17.201149   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:17.201160   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:17.201175   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:17.279890   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:17.279917   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:17.279930   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:17.362638   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:17.362682   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:17.402507   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:17.402538   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:17.456039   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:17.456072   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:17.473792   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:19.973369   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:18.877668   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:20.879319   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:19.233574   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:21.233847   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:19.970539   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:19.984338   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:19.984442   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:20.019006   62386 cri.go:89] found id: ""
	I0912 23:03:20.019036   62386 logs.go:276] 0 containers: []
	W0912 23:03:20.019047   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:20.019055   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:20.019115   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:20.051600   62386 cri.go:89] found id: ""
	I0912 23:03:20.051626   62386 logs.go:276] 0 containers: []
	W0912 23:03:20.051634   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:20.051640   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:20.051691   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:20.085770   62386 cri.go:89] found id: ""
	I0912 23:03:20.085792   62386 logs.go:276] 0 containers: []
	W0912 23:03:20.085799   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:20.085804   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:20.085852   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:20.118453   62386 cri.go:89] found id: ""
	I0912 23:03:20.118482   62386 logs.go:276] 0 containers: []
	W0912 23:03:20.118493   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:20.118501   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:20.118570   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:20.149794   62386 cri.go:89] found id: ""
	I0912 23:03:20.149824   62386 logs.go:276] 0 containers: []
	W0912 23:03:20.149835   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:20.149842   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:20.149889   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:20.187189   62386 cri.go:89] found id: ""
	I0912 23:03:20.187222   62386 logs.go:276] 0 containers: []
	W0912 23:03:20.187233   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:20.187239   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:20.187308   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:20.225488   62386 cri.go:89] found id: ""
	I0912 23:03:20.225517   62386 logs.go:276] 0 containers: []
	W0912 23:03:20.225525   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:20.225531   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:20.225593   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:20.263430   62386 cri.go:89] found id: ""
	I0912 23:03:20.263599   62386 logs.go:276] 0 containers: []
	W0912 23:03:20.263618   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:20.263633   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:20.263651   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:20.317633   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:20.317669   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:20.331121   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:20.331146   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:20.409078   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:20.409102   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:20.409114   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:20.485192   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:20.485226   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:23.024366   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:23.036837   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:23.036919   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:23.072034   62386 cri.go:89] found id: ""
	I0912 23:03:23.072068   62386 logs.go:276] 0 containers: []
	W0912 23:03:23.072080   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:23.072087   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:23.072151   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:23.105917   62386 cri.go:89] found id: ""
	I0912 23:03:23.105942   62386 logs.go:276] 0 containers: []
	W0912 23:03:23.105950   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:23.105956   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:23.106001   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:23.138601   62386 cri.go:89] found id: ""
	I0912 23:03:23.138631   62386 logs.go:276] 0 containers: []
	W0912 23:03:23.138643   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:23.138650   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:23.138700   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:23.173543   62386 cri.go:89] found id: ""
	I0912 23:03:23.173584   62386 logs.go:276] 0 containers: []
	W0912 23:03:23.173596   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:23.173606   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:23.173686   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:23.206143   62386 cri.go:89] found id: ""
	I0912 23:03:23.206171   62386 logs.go:276] 0 containers: []
	W0912 23:03:23.206182   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:23.206189   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:23.206258   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:23.241893   62386 cri.go:89] found id: ""
	I0912 23:03:23.241914   62386 logs.go:276] 0 containers: []
	W0912 23:03:23.241921   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:23.241927   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:23.241985   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:23.276885   62386 cri.go:89] found id: ""
	I0912 23:03:23.276937   62386 logs.go:276] 0 containers: []
	W0912 23:03:23.276946   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:23.276953   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:23.277004   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:23.311719   62386 cri.go:89] found id: ""
	I0912 23:03:23.311744   62386 logs.go:276] 0 containers: []
	W0912 23:03:23.311752   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:23.311759   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:23.311772   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:23.351581   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:23.351614   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:23.406831   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:23.406868   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:23.420716   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:23.420748   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:23.491298   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:23.491332   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:23.491347   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:22.474320   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:24.974016   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:23.377977   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:25.876937   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:23.235471   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:25.733684   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:26.075754   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:26.088671   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:26.088746   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:26.123263   62386 cri.go:89] found id: ""
	I0912 23:03:26.123289   62386 logs.go:276] 0 containers: []
	W0912 23:03:26.123298   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:26.123320   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:26.123380   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:26.156957   62386 cri.go:89] found id: ""
	I0912 23:03:26.156986   62386 logs.go:276] 0 containers: []
	W0912 23:03:26.156997   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:26.157004   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:26.157063   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:26.191697   62386 cri.go:89] found id: ""
	I0912 23:03:26.191749   62386 logs.go:276] 0 containers: []
	W0912 23:03:26.191774   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:26.191782   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:26.191841   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:26.223915   62386 cri.go:89] found id: ""
	I0912 23:03:26.223938   62386 logs.go:276] 0 containers: []
	W0912 23:03:26.223945   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:26.223951   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:26.224011   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:26.256467   62386 cri.go:89] found id: ""
	I0912 23:03:26.256494   62386 logs.go:276] 0 containers: []
	W0912 23:03:26.256505   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:26.256511   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:26.256587   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:26.288778   62386 cri.go:89] found id: ""
	I0912 23:03:26.288803   62386 logs.go:276] 0 containers: []
	W0912 23:03:26.288811   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:26.288816   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:26.288889   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:26.325717   62386 cri.go:89] found id: ""
	I0912 23:03:26.325745   62386 logs.go:276] 0 containers: []
	W0912 23:03:26.325755   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:26.325762   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:26.325829   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:26.359729   62386 cri.go:89] found id: ""
	I0912 23:03:26.359758   62386 logs.go:276] 0 containers: []
	W0912 23:03:26.359767   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:26.359780   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:26.359799   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:26.416414   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:26.416455   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:26.430440   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:26.430478   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:26.506980   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:26.507012   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:26.507043   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:26.583797   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:26.583846   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:29.122222   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:29.135287   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:29.135367   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:29.169020   62386 cri.go:89] found id: ""
	I0912 23:03:29.169043   62386 logs.go:276] 0 containers: []
	W0912 23:03:29.169051   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:29.169061   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:29.169114   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:29.201789   62386 cri.go:89] found id: ""
	I0912 23:03:29.201816   62386 logs.go:276] 0 containers: []
	W0912 23:03:29.201825   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:29.201831   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:29.201886   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:29.237011   62386 cri.go:89] found id: ""
	I0912 23:03:29.237031   62386 logs.go:276] 0 containers: []
	W0912 23:03:29.237038   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:29.237044   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:29.237100   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:29.275292   62386 cri.go:89] found id: ""
	I0912 23:03:29.275315   62386 logs.go:276] 0 containers: []
	W0912 23:03:29.275322   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:29.275328   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:29.275391   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:29.311927   62386 cri.go:89] found id: ""
	I0912 23:03:29.311954   62386 logs.go:276] 0 containers: []
	W0912 23:03:29.311961   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:29.311967   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:29.312020   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:26.974332   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:29.473816   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:27.877800   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:30.378675   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:27.735811   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:30.233647   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:32.233706   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:29.351411   62386 cri.go:89] found id: ""
	I0912 23:03:29.351441   62386 logs.go:276] 0 containers: []
	W0912 23:03:29.351452   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:29.351460   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:29.351520   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:29.386655   62386 cri.go:89] found id: ""
	I0912 23:03:29.386683   62386 logs.go:276] 0 containers: []
	W0912 23:03:29.386693   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:29.386700   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:29.386753   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:29.419722   62386 cri.go:89] found id: ""
	I0912 23:03:29.419752   62386 logs.go:276] 0 containers: []
	W0912 23:03:29.419762   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:29.419775   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:29.419789   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:29.474358   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:29.474396   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:29.488410   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:29.488437   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:29.554675   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:29.554701   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:29.554715   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:29.630647   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:29.630681   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:32.167614   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:32.180592   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:32.180669   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:32.213596   62386 cri.go:89] found id: ""
	I0912 23:03:32.213643   62386 logs.go:276] 0 containers: []
	W0912 23:03:32.213655   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:32.213663   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:32.213723   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:32.246790   62386 cri.go:89] found id: ""
	I0912 23:03:32.246824   62386 logs.go:276] 0 containers: []
	W0912 23:03:32.246836   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:32.246846   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:32.246910   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:32.289423   62386 cri.go:89] found id: ""
	I0912 23:03:32.289446   62386 logs.go:276] 0 containers: []
	W0912 23:03:32.289454   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:32.289459   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:32.289515   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:32.321515   62386 cri.go:89] found id: ""
	I0912 23:03:32.321542   62386 logs.go:276] 0 containers: []
	W0912 23:03:32.321555   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:32.321561   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:32.321637   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:32.354633   62386 cri.go:89] found id: ""
	I0912 23:03:32.354660   62386 logs.go:276] 0 containers: []
	W0912 23:03:32.354670   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:32.354675   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:32.354734   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:32.389692   62386 cri.go:89] found id: ""
	I0912 23:03:32.389717   62386 logs.go:276] 0 containers: []
	W0912 23:03:32.389725   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:32.389730   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:32.389782   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:32.423086   62386 cri.go:89] found id: ""
	I0912 23:03:32.423109   62386 logs.go:276] 0 containers: []
	W0912 23:03:32.423115   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:32.423121   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:32.423167   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:32.456145   62386 cri.go:89] found id: ""
	I0912 23:03:32.456173   62386 logs.go:276] 0 containers: []
	W0912 23:03:32.456184   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:32.456194   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:32.456213   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:32.468329   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:32.468354   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:32.535454   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:32.535480   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:32.535495   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:32.615219   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:32.615256   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:32.655380   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:32.655407   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:31.473904   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:33.474104   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:32.876734   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:34.876831   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:36.877698   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:34.732792   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:36.733997   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:35.209155   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:35.223993   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:35.224074   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:35.260226   62386 cri.go:89] found id: ""
	I0912 23:03:35.260257   62386 logs.go:276] 0 containers: []
	W0912 23:03:35.260268   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:35.260275   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:35.260346   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:35.295762   62386 cri.go:89] found id: ""
	I0912 23:03:35.295790   62386 logs.go:276] 0 containers: []
	W0912 23:03:35.295801   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:35.295808   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:35.295873   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:35.329749   62386 cri.go:89] found id: ""
	I0912 23:03:35.329778   62386 logs.go:276] 0 containers: []
	W0912 23:03:35.329789   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:35.329796   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:35.329855   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:35.363051   62386 cri.go:89] found id: ""
	I0912 23:03:35.363082   62386 logs.go:276] 0 containers: []
	W0912 23:03:35.363091   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:35.363098   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:35.363156   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:35.399777   62386 cri.go:89] found id: ""
	I0912 23:03:35.399805   62386 logs.go:276] 0 containers: []
	W0912 23:03:35.399816   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:35.399823   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:35.399882   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:35.436380   62386 cri.go:89] found id: ""
	I0912 23:03:35.436409   62386 logs.go:276] 0 containers: []
	W0912 23:03:35.436419   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:35.436427   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:35.436489   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:35.474014   62386 cri.go:89] found id: ""
	I0912 23:03:35.474040   62386 logs.go:276] 0 containers: []
	W0912 23:03:35.474050   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:35.474057   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:35.474115   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:35.514579   62386 cri.go:89] found id: ""
	I0912 23:03:35.514606   62386 logs.go:276] 0 containers: []
	W0912 23:03:35.514615   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:35.514625   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:35.514636   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:35.566626   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:35.566665   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:35.581394   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:35.581421   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:35.653434   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:35.653465   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:35.653477   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:35.732486   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:35.732525   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:38.268409   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:38.281766   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:38.281833   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:38.315951   62386 cri.go:89] found id: ""
	I0912 23:03:38.315977   62386 logs.go:276] 0 containers: []
	W0912 23:03:38.315987   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:38.315994   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:38.316053   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:38.355249   62386 cri.go:89] found id: ""
	I0912 23:03:38.355279   62386 logs.go:276] 0 containers: []
	W0912 23:03:38.355289   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:38.355296   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:38.355365   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:38.392754   62386 cri.go:89] found id: ""
	I0912 23:03:38.392777   62386 logs.go:276] 0 containers: []
	W0912 23:03:38.392784   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:38.392790   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:38.392836   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:38.427406   62386 cri.go:89] found id: ""
	I0912 23:03:38.427434   62386 logs.go:276] 0 containers: []
	W0912 23:03:38.427442   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:38.427447   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:38.427497   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:38.473523   62386 cri.go:89] found id: ""
	I0912 23:03:38.473551   62386 logs.go:276] 0 containers: []
	W0912 23:03:38.473567   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:38.473575   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:38.473660   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:38.507184   62386 cri.go:89] found id: ""
	I0912 23:03:38.507217   62386 logs.go:276] 0 containers: []
	W0912 23:03:38.507228   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:38.507235   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:38.507297   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:38.541325   62386 cri.go:89] found id: ""
	I0912 23:03:38.541357   62386 logs.go:276] 0 containers: []
	W0912 23:03:38.541367   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:38.541374   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:38.541435   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:38.576839   62386 cri.go:89] found id: ""
	I0912 23:03:38.576866   62386 logs.go:276] 0 containers: []
	W0912 23:03:38.576877   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:38.576889   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:38.576906   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:38.613107   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:38.613138   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:38.667256   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:38.667300   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:38.681179   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:38.681210   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:38.750560   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:38.750584   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:38.750600   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:35.974072   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:37.974920   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:40.473150   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:39.376361   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:41.378062   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:38.734402   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:41.233881   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:41.327862   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:41.340904   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:41.340967   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:41.379282   62386 cri.go:89] found id: ""
	I0912 23:03:41.379301   62386 logs.go:276] 0 containers: []
	W0912 23:03:41.379309   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:41.379316   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:41.379366   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:41.412915   62386 cri.go:89] found id: ""
	I0912 23:03:41.412940   62386 logs.go:276] 0 containers: []
	W0912 23:03:41.412947   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:41.412954   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:41.413003   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:41.446824   62386 cri.go:89] found id: ""
	I0912 23:03:41.446851   62386 logs.go:276] 0 containers: []
	W0912 23:03:41.446861   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:41.446868   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:41.446929   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:41.483157   62386 cri.go:89] found id: ""
	I0912 23:03:41.483186   62386 logs.go:276] 0 containers: []
	W0912 23:03:41.483194   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:41.483200   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:41.483258   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:41.517751   62386 cri.go:89] found id: ""
	I0912 23:03:41.517783   62386 logs.go:276] 0 containers: []
	W0912 23:03:41.517794   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:41.517801   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:41.517865   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:41.551665   62386 cri.go:89] found id: ""
	I0912 23:03:41.551692   62386 logs.go:276] 0 containers: []
	W0912 23:03:41.551700   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:41.551706   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:41.551756   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:41.586401   62386 cri.go:89] found id: ""
	I0912 23:03:41.586437   62386 logs.go:276] 0 containers: []
	W0912 23:03:41.586447   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:41.586455   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:41.586518   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:41.621764   62386 cri.go:89] found id: ""
	I0912 23:03:41.621788   62386 logs.go:276] 0 containers: []
	W0912 23:03:41.621796   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:41.621806   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:41.621821   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:41.703663   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:41.703708   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:41.741813   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:41.741838   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:41.794237   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:41.794276   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:41.807194   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:41.807219   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:41.874328   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:42.973710   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:44.973792   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:43.877009   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:46.376468   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:43.234202   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:45.733192   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:44.374745   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:44.389334   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:44.389414   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:44.427163   62386 cri.go:89] found id: ""
	I0912 23:03:44.427193   62386 logs.go:276] 0 containers: []
	W0912 23:03:44.427204   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:44.427214   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:44.427261   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:44.461483   62386 cri.go:89] found id: ""
	I0912 23:03:44.461516   62386 logs.go:276] 0 containers: []
	W0912 23:03:44.461526   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:44.461539   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:44.461603   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:44.499529   62386 cri.go:89] found id: ""
	I0912 23:03:44.499557   62386 logs.go:276] 0 containers: []
	W0912 23:03:44.499569   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:44.499576   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:44.499640   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:44.536827   62386 cri.go:89] found id: ""
	I0912 23:03:44.536859   62386 logs.go:276] 0 containers: []
	W0912 23:03:44.536871   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:44.536878   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:44.536927   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:44.574764   62386 cri.go:89] found id: ""
	I0912 23:03:44.574794   62386 logs.go:276] 0 containers: []
	W0912 23:03:44.574802   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:44.574808   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:44.574866   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:44.612491   62386 cri.go:89] found id: ""
	I0912 23:03:44.612524   62386 logs.go:276] 0 containers: []
	W0912 23:03:44.612537   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:44.612545   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:44.612618   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:44.651419   62386 cri.go:89] found id: ""
	I0912 23:03:44.651449   62386 logs.go:276] 0 containers: []
	W0912 23:03:44.651459   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:44.651466   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:44.651516   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:44.686635   62386 cri.go:89] found id: ""
	I0912 23:03:44.686665   62386 logs.go:276] 0 containers: []
	W0912 23:03:44.686674   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:44.686681   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:44.686693   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:44.738906   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:44.738938   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:44.752485   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:44.752512   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:44.831175   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:44.831205   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:44.831222   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:44.917405   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:44.917442   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:47.466262   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:47.479701   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:47.479758   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:47.514737   62386 cri.go:89] found id: ""
	I0912 23:03:47.514763   62386 logs.go:276] 0 containers: []
	W0912 23:03:47.514770   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:47.514776   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:47.514828   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:47.551163   62386 cri.go:89] found id: ""
	I0912 23:03:47.551195   62386 logs.go:276] 0 containers: []
	W0912 23:03:47.551207   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:47.551215   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:47.551276   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:47.585189   62386 cri.go:89] found id: ""
	I0912 23:03:47.585213   62386 logs.go:276] 0 containers: []
	W0912 23:03:47.585221   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:47.585226   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:47.585284   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:47.619831   62386 cri.go:89] found id: ""
	I0912 23:03:47.619855   62386 logs.go:276] 0 containers: []
	W0912 23:03:47.619863   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:47.619869   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:47.619914   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:47.652364   62386 cri.go:89] found id: ""
	I0912 23:03:47.652398   62386 logs.go:276] 0 containers: []
	W0912 23:03:47.652409   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:47.652417   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:47.652478   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:47.686796   62386 cri.go:89] found id: ""
	I0912 23:03:47.686828   62386 logs.go:276] 0 containers: []
	W0912 23:03:47.686837   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:47.686844   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:47.686902   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:47.718735   62386 cri.go:89] found id: ""
	I0912 23:03:47.718758   62386 logs.go:276] 0 containers: []
	W0912 23:03:47.718768   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:47.718776   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:47.718838   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:47.751880   62386 cri.go:89] found id: ""
	I0912 23:03:47.751917   62386 logs.go:276] 0 containers: []
	W0912 23:03:47.751929   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:47.751940   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:47.751972   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:47.821972   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:47.821995   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:47.822011   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:47.914569   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:47.914606   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:47.952931   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:47.952959   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:48.006294   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:48.006336   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:47.472805   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:49.474941   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:48.377557   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:50.877244   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:47.734734   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:50.233681   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:50.521664   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:50.535244   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:50.535319   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:50.572459   62386 cri.go:89] found id: ""
	I0912 23:03:50.572489   62386 logs.go:276] 0 containers: []
	W0912 23:03:50.572497   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:50.572504   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:50.572560   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:50.613752   62386 cri.go:89] found id: ""
	I0912 23:03:50.613784   62386 logs.go:276] 0 containers: []
	W0912 23:03:50.613793   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:50.613800   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:50.613859   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:50.669798   62386 cri.go:89] found id: ""
	I0912 23:03:50.669829   62386 logs.go:276] 0 containers: []
	W0912 23:03:50.669840   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:50.669845   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:50.669970   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:50.703629   62386 cri.go:89] found id: ""
	I0912 23:03:50.703669   62386 logs.go:276] 0 containers: []
	W0912 23:03:50.703682   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:50.703691   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:50.703752   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:50.743683   62386 cri.go:89] found id: ""
	I0912 23:03:50.743710   62386 logs.go:276] 0 containers: []
	W0912 23:03:50.743720   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:50.743728   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:50.743784   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:50.776387   62386 cri.go:89] found id: ""
	I0912 23:03:50.776416   62386 logs.go:276] 0 containers: []
	W0912 23:03:50.776428   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:50.776437   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:50.776494   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:50.810778   62386 cri.go:89] found id: ""
	I0912 23:03:50.810805   62386 logs.go:276] 0 containers: []
	W0912 23:03:50.810817   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:50.810825   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:50.810892   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:50.842488   62386 cri.go:89] found id: ""
	I0912 23:03:50.842510   62386 logs.go:276] 0 containers: []
	W0912 23:03:50.842518   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:50.842526   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:50.842542   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:50.895086   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:50.895124   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:50.908540   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:50.908586   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:50.976108   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:50.976138   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:50.976153   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:51.052291   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:51.052327   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:53.594005   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:53.606622   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:53.606706   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:53.641109   62386 cri.go:89] found id: ""
	I0912 23:03:53.641140   62386 logs.go:276] 0 containers: []
	W0912 23:03:53.641151   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:53.641159   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:53.641214   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:53.673336   62386 cri.go:89] found id: ""
	I0912 23:03:53.673358   62386 logs.go:276] 0 containers: []
	W0912 23:03:53.673366   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:53.673371   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:53.673417   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:53.707931   62386 cri.go:89] found id: ""
	I0912 23:03:53.707965   62386 logs.go:276] 0 containers: []
	W0912 23:03:53.707975   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:53.707982   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:53.708032   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:53.741801   62386 cri.go:89] found id: ""
	I0912 23:03:53.741832   62386 logs.go:276] 0 containers: []
	W0912 23:03:53.741840   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:53.741847   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:53.741898   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:53.775491   62386 cri.go:89] found id: ""
	I0912 23:03:53.775517   62386 logs.go:276] 0 containers: []
	W0912 23:03:53.775526   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:53.775533   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:53.775596   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:53.811802   62386 cri.go:89] found id: ""
	I0912 23:03:53.811832   62386 logs.go:276] 0 containers: []
	W0912 23:03:53.811843   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:53.811851   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:53.811916   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:53.844901   62386 cri.go:89] found id: ""
	I0912 23:03:53.844926   62386 logs.go:276] 0 containers: []
	W0912 23:03:53.844934   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:53.844939   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:53.844989   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:53.878342   62386 cri.go:89] found id: ""
	I0912 23:03:53.878363   62386 logs.go:276] 0 containers: []
	W0912 23:03:53.878370   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:53.878377   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:53.878387   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:53.935010   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:53.935053   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:53.948443   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:53.948474   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:54.020155   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:54.020178   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:54.020192   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:54.097113   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:54.097154   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:51.974178   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:54.473802   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:53.376802   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:55.377267   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:52.733232   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:54.734448   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:56.734623   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:56.633694   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:56.651731   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:56.651791   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:56.698155   62386 cri.go:89] found id: ""
	I0912 23:03:56.698184   62386 logs.go:276] 0 containers: []
	W0912 23:03:56.698194   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:56.698202   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:56.698263   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:56.730291   62386 cri.go:89] found id: ""
	I0912 23:03:56.730322   62386 logs.go:276] 0 containers: []
	W0912 23:03:56.730332   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:56.730340   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:56.730434   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:56.763099   62386 cri.go:89] found id: ""
	I0912 23:03:56.763123   62386 logs.go:276] 0 containers: []
	W0912 23:03:56.763133   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:56.763140   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:56.763201   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:56.796744   62386 cri.go:89] found id: ""
	I0912 23:03:56.796770   62386 logs.go:276] 0 containers: []
	W0912 23:03:56.796780   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:56.796787   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:56.796846   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:56.831809   62386 cri.go:89] found id: ""
	I0912 23:03:56.831839   62386 logs.go:276] 0 containers: []
	W0912 23:03:56.831851   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:56.831858   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:56.831927   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:56.867213   62386 cri.go:89] found id: ""
	I0912 23:03:56.867239   62386 logs.go:276] 0 containers: []
	W0912 23:03:56.867246   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:56.867252   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:56.867332   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:56.907242   62386 cri.go:89] found id: ""
	I0912 23:03:56.907270   62386 logs.go:276] 0 containers: []
	W0912 23:03:56.907279   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:56.907286   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:56.907399   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:56.941841   62386 cri.go:89] found id: ""
	I0912 23:03:56.941871   62386 logs.go:276] 0 containers: []
	W0912 23:03:56.941879   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:56.941888   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:56.941899   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:56.955468   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:56.955498   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:57.025069   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:57.025089   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:57.025101   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:57.109543   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:57.109579   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:57.150908   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:57.150932   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:56.473964   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:58.974245   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:57.377540   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:59.878300   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:59.233419   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:01.733916   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:59.700564   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:59.713097   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:59.713175   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:59.746662   62386 cri.go:89] found id: ""
	I0912 23:03:59.746684   62386 logs.go:276] 0 containers: []
	W0912 23:03:59.746694   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:59.746702   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:59.746760   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:59.780100   62386 cri.go:89] found id: ""
	I0912 23:03:59.780127   62386 logs.go:276] 0 containers: []
	W0912 23:03:59.780137   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:59.780144   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:59.780205   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:59.814073   62386 cri.go:89] found id: ""
	I0912 23:03:59.814103   62386 logs.go:276] 0 containers: []
	W0912 23:03:59.814115   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:59.814122   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:59.814170   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:59.849832   62386 cri.go:89] found id: ""
	I0912 23:03:59.849860   62386 logs.go:276] 0 containers: []
	W0912 23:03:59.849873   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:59.849881   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:59.849937   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:59.884644   62386 cri.go:89] found id: ""
	I0912 23:03:59.884674   62386 logs.go:276] 0 containers: []
	W0912 23:03:59.884685   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:59.884692   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:59.884757   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:59.922575   62386 cri.go:89] found id: ""
	I0912 23:03:59.922601   62386 logs.go:276] 0 containers: []
	W0912 23:03:59.922609   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:59.922615   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:59.922671   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:59.959405   62386 cri.go:89] found id: ""
	I0912 23:03:59.959454   62386 logs.go:276] 0 containers: []
	W0912 23:03:59.959467   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:59.959503   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:59.959572   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:59.992850   62386 cri.go:89] found id: ""
	I0912 23:03:59.992882   62386 logs.go:276] 0 containers: []
	W0912 23:03:59.992891   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:59.992898   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:59.992910   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:00.007112   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:00.007147   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:00.077737   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:00.077762   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:00.077777   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:00.156823   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:00.156860   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:00.194294   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:00.194388   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:02.746340   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:02.759723   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:02.759780   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:02.795753   62386 cri.go:89] found id: ""
	I0912 23:04:02.795778   62386 logs.go:276] 0 containers: []
	W0912 23:04:02.795787   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:02.795794   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:02.795849   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:02.830757   62386 cri.go:89] found id: ""
	I0912 23:04:02.830781   62386 logs.go:276] 0 containers: []
	W0912 23:04:02.830790   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:02.830797   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:02.830859   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:02.866266   62386 cri.go:89] found id: ""
	I0912 23:04:02.866301   62386 logs.go:276] 0 containers: []
	W0912 23:04:02.866312   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:02.866319   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:02.866373   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:02.900332   62386 cri.go:89] found id: ""
	I0912 23:04:02.900359   62386 logs.go:276] 0 containers: []
	W0912 23:04:02.900370   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:02.900377   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:02.900436   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:02.937687   62386 cri.go:89] found id: ""
	I0912 23:04:02.937718   62386 logs.go:276] 0 containers: []
	W0912 23:04:02.937729   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:02.937736   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:02.937806   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:02.972960   62386 cri.go:89] found id: ""
	I0912 23:04:02.972988   62386 logs.go:276] 0 containers: []
	W0912 23:04:02.972998   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:02.973006   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:02.973067   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:03.006621   62386 cri.go:89] found id: ""
	I0912 23:04:03.006649   62386 logs.go:276] 0 containers: []
	W0912 23:04:03.006658   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:03.006663   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:03.006711   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:03.042450   62386 cri.go:89] found id: ""
	I0912 23:04:03.042475   62386 logs.go:276] 0 containers: []
	W0912 23:04:03.042484   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:03.042501   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:03.042514   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:03.082657   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:03.082688   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:03.136570   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:03.136605   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:03.150359   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:03.150388   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:03.217419   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:03.217440   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:03.217452   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:01.473231   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:03.474382   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:05.475943   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:02.376721   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:04.376797   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:06.377573   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:03.734198   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:06.234489   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:05.795553   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:05.808126   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:05.808197   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:05.841031   62386 cri.go:89] found id: ""
	I0912 23:04:05.841059   62386 logs.go:276] 0 containers: []
	W0912 23:04:05.841071   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:05.841078   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:05.841137   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:05.875865   62386 cri.go:89] found id: ""
	I0912 23:04:05.875891   62386 logs.go:276] 0 containers: []
	W0912 23:04:05.875903   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:05.875910   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:05.875971   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:05.911317   62386 cri.go:89] found id: ""
	I0912 23:04:05.911340   62386 logs.go:276] 0 containers: []
	W0912 23:04:05.911361   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:05.911372   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:05.911433   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:05.946603   62386 cri.go:89] found id: ""
	I0912 23:04:05.946634   62386 logs.go:276] 0 containers: []
	W0912 23:04:05.946645   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:05.946652   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:05.946707   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:05.982041   62386 cri.go:89] found id: ""
	I0912 23:04:05.982077   62386 logs.go:276] 0 containers: []
	W0912 23:04:05.982089   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:05.982099   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:05.982196   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:06.015777   62386 cri.go:89] found id: ""
	I0912 23:04:06.015808   62386 logs.go:276] 0 containers: []
	W0912 23:04:06.015816   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:06.015822   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:06.015870   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:06.047613   62386 cri.go:89] found id: ""
	I0912 23:04:06.047642   62386 logs.go:276] 0 containers: []
	W0912 23:04:06.047650   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:06.047656   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:06.047711   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:06.082817   62386 cri.go:89] found id: ""
	I0912 23:04:06.082855   62386 logs.go:276] 0 containers: []
	W0912 23:04:06.082863   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:06.082874   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:06.082889   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:06.148350   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:06.148370   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:06.148382   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:06.227819   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:06.227861   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:06.267783   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:06.267811   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:06.319531   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:06.319567   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:08.833715   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:08.846391   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:08.846457   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:08.882798   62386 cri.go:89] found id: ""
	I0912 23:04:08.882827   62386 logs.go:276] 0 containers: []
	W0912 23:04:08.882834   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:08.882839   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:08.882885   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:08.919637   62386 cri.go:89] found id: ""
	I0912 23:04:08.919660   62386 logs.go:276] 0 containers: []
	W0912 23:04:08.919669   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:08.919677   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:08.919737   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:08.957181   62386 cri.go:89] found id: ""
	I0912 23:04:08.957226   62386 logs.go:276] 0 containers: []
	W0912 23:04:08.957235   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:08.957241   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:08.957300   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:08.994391   62386 cri.go:89] found id: ""
	I0912 23:04:08.994425   62386 logs.go:276] 0 containers: []
	W0912 23:04:08.994435   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:08.994450   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:08.994517   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:09.026229   62386 cri.go:89] found id: ""
	I0912 23:04:09.026253   62386 logs.go:276] 0 containers: []
	W0912 23:04:09.026261   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:09.026270   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:09.026331   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:09.063522   62386 cri.go:89] found id: ""
	I0912 23:04:09.063552   62386 logs.go:276] 0 containers: []
	W0912 23:04:09.063562   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:09.063570   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:09.063633   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:09.095532   62386 cri.go:89] found id: ""
	I0912 23:04:09.095561   62386 logs.go:276] 0 containers: []
	W0912 23:04:09.095571   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:09.095578   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:09.095638   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:09.129364   62386 cri.go:89] found id: ""
	I0912 23:04:09.129396   62386 logs.go:276] 0 containers: []
	W0912 23:04:09.129405   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:09.129416   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:09.129430   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:09.210628   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:09.210663   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:09.249058   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:09.249086   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:09.301317   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:09.301346   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:09.314691   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:09.314720   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 23:04:07.974160   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:10.473970   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:08.877389   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:11.376421   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:08.733271   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:10.737700   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	W0912 23:04:09.379506   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:11.879682   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:11.892758   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:11.892816   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:11.929514   62386 cri.go:89] found id: ""
	I0912 23:04:11.929560   62386 logs.go:276] 0 containers: []
	W0912 23:04:11.929572   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:11.929580   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:11.929663   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:11.972066   62386 cri.go:89] found id: ""
	I0912 23:04:11.972091   62386 logs.go:276] 0 containers: []
	W0912 23:04:11.972099   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:11.972104   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:11.972153   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:12.005454   62386 cri.go:89] found id: ""
	I0912 23:04:12.005483   62386 logs.go:276] 0 containers: []
	W0912 23:04:12.005493   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:12.005500   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:12.005573   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:12.042189   62386 cri.go:89] found id: ""
	I0912 23:04:12.042221   62386 logs.go:276] 0 containers: []
	W0912 23:04:12.042232   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:12.042239   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:12.042292   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:12.077239   62386 cri.go:89] found id: ""
	I0912 23:04:12.077268   62386 logs.go:276] 0 containers: []
	W0912 23:04:12.077276   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:12.077282   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:12.077340   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:12.112573   62386 cri.go:89] found id: ""
	I0912 23:04:12.112602   62386 logs.go:276] 0 containers: []
	W0912 23:04:12.112610   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:12.112616   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:12.112661   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:12.147124   62386 cri.go:89] found id: ""
	I0912 23:04:12.147149   62386 logs.go:276] 0 containers: []
	W0912 23:04:12.147157   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:12.147163   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:12.147224   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:12.182051   62386 cri.go:89] found id: ""
	I0912 23:04:12.182074   62386 logs.go:276] 0 containers: []
	W0912 23:04:12.182082   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:12.182090   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:12.182103   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:12.238070   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:12.238103   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:12.250913   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:12.250937   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:12.315420   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:12.315448   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:12.315465   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:12.397338   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:12.397379   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:12.974531   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:15.479539   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:13.377855   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:15.379901   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:13.233099   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:15.234506   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:14.936982   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:14.949955   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:14.950019   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:14.993284   62386 cri.go:89] found id: ""
	I0912 23:04:14.993317   62386 logs.go:276] 0 containers: []
	W0912 23:04:14.993327   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:14.993356   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:14.993421   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:15.028310   62386 cri.go:89] found id: ""
	I0912 23:04:15.028338   62386 logs.go:276] 0 containers: []
	W0912 23:04:15.028347   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:15.028352   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:15.028424   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:15.064436   62386 cri.go:89] found id: ""
	I0912 23:04:15.064472   62386 logs.go:276] 0 containers: []
	W0912 23:04:15.064482   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:15.064490   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:15.064552   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:15.101547   62386 cri.go:89] found id: ""
	I0912 23:04:15.101578   62386 logs.go:276] 0 containers: []
	W0912 23:04:15.101587   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:15.101595   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:15.101672   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:15.137534   62386 cri.go:89] found id: ""
	I0912 23:04:15.137559   62386 logs.go:276] 0 containers: []
	W0912 23:04:15.137567   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:15.137575   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:15.137670   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:15.172549   62386 cri.go:89] found id: ""
	I0912 23:04:15.172581   62386 logs.go:276] 0 containers: []
	W0912 23:04:15.172593   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:15.172601   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:15.172661   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:15.207894   62386 cri.go:89] found id: ""
	I0912 23:04:15.207921   62386 logs.go:276] 0 containers: []
	W0912 23:04:15.207931   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:15.207939   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:15.207998   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:15.243684   62386 cri.go:89] found id: ""
	I0912 23:04:15.243713   62386 logs.go:276] 0 containers: []
	W0912 23:04:15.243724   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:15.243733   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:15.243744   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:15.297907   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:15.297948   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:15.312119   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:15.312151   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:15.375781   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:15.375815   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:15.375830   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:15.455792   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:15.455853   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:17.996749   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:18.009868   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:18.009927   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:18.048233   62386 cri.go:89] found id: ""
	I0912 23:04:18.048262   62386 logs.go:276] 0 containers: []
	W0912 23:04:18.048273   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:18.048280   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:18.048340   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:18.082525   62386 cri.go:89] found id: ""
	I0912 23:04:18.082554   62386 logs.go:276] 0 containers: []
	W0912 23:04:18.082565   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:18.082572   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:18.082634   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:18.117691   62386 cri.go:89] found id: ""
	I0912 23:04:18.117721   62386 logs.go:276] 0 containers: []
	W0912 23:04:18.117731   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:18.117738   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:18.117799   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:18.151975   62386 cri.go:89] found id: ""
	I0912 23:04:18.152004   62386 logs.go:276] 0 containers: []
	W0912 23:04:18.152013   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:18.152019   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:18.152073   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:18.187028   62386 cri.go:89] found id: ""
	I0912 23:04:18.187058   62386 logs.go:276] 0 containers: []
	W0912 23:04:18.187069   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:18.187075   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:18.187127   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:18.221292   62386 cri.go:89] found id: ""
	I0912 23:04:18.221324   62386 logs.go:276] 0 containers: []
	W0912 23:04:18.221331   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:18.221337   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:18.221383   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:18.255445   62386 cri.go:89] found id: ""
	I0912 23:04:18.255471   62386 logs.go:276] 0 containers: []
	W0912 23:04:18.255479   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:18.255484   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:18.255533   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:18.289977   62386 cri.go:89] found id: ""
	I0912 23:04:18.290008   62386 logs.go:276] 0 containers: []
	W0912 23:04:18.290019   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:18.290030   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:18.290045   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:18.303351   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:18.303380   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:18.371085   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:18.371114   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:18.371128   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:18.448748   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:18.448791   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:18.490580   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:18.490605   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:17.973604   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:20.473541   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:17.878221   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:20.377651   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:17.733784   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:19.734292   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:22.232832   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:21.043479   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:21.056774   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:21.056834   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:21.089410   62386 cri.go:89] found id: ""
	I0912 23:04:21.089435   62386 logs.go:276] 0 containers: []
	W0912 23:04:21.089449   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:21.089460   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:21.089534   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:21.122922   62386 cri.go:89] found id: ""
	I0912 23:04:21.122954   62386 logs.go:276] 0 containers: []
	W0912 23:04:21.122964   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:21.122971   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:21.123025   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:21.157877   62386 cri.go:89] found id: ""
	I0912 23:04:21.157900   62386 logs.go:276] 0 containers: []
	W0912 23:04:21.157908   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:21.157914   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:21.157959   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:21.190953   62386 cri.go:89] found id: ""
	I0912 23:04:21.190983   62386 logs.go:276] 0 containers: []
	W0912 23:04:21.190994   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:21.191001   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:21.191050   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:21.225211   62386 cri.go:89] found id: ""
	I0912 23:04:21.225241   62386 logs.go:276] 0 containers: []
	W0912 23:04:21.225253   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:21.225260   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:21.225325   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:21.262459   62386 cri.go:89] found id: ""
	I0912 23:04:21.262486   62386 logs.go:276] 0 containers: []
	W0912 23:04:21.262497   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:21.262504   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:21.262578   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:21.296646   62386 cri.go:89] found id: ""
	I0912 23:04:21.296672   62386 logs.go:276] 0 containers: []
	W0912 23:04:21.296682   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:21.296687   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:21.296734   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:21.329911   62386 cri.go:89] found id: ""
	I0912 23:04:21.329933   62386 logs.go:276] 0 containers: []
	W0912 23:04:21.329939   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:21.329947   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:21.329958   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:21.371014   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:21.371043   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:21.419638   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:21.419671   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:21.433502   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:21.433533   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:21.502764   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:21.502787   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:21.502800   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:24.079800   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:24.094021   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:24.094099   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:24.128807   62386 cri.go:89] found id: ""
	I0912 23:04:24.128832   62386 logs.go:276] 0 containers: []
	W0912 23:04:24.128844   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:24.128851   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:24.128915   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:24.166381   62386 cri.go:89] found id: ""
	I0912 23:04:24.166409   62386 logs.go:276] 0 containers: []
	W0912 23:04:24.166416   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:24.166425   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:24.166481   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:24.202656   62386 cri.go:89] found id: ""
	I0912 23:04:24.202684   62386 logs.go:276] 0 containers: []
	W0912 23:04:24.202692   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:24.202699   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:24.202755   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:24.241177   62386 cri.go:89] found id: ""
	I0912 23:04:24.241204   62386 logs.go:276] 0 containers: []
	W0912 23:04:24.241212   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:24.241218   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:24.241274   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:24.278768   62386 cri.go:89] found id: ""
	I0912 23:04:24.278796   62386 logs.go:276] 0 containers: []
	W0912 23:04:24.278806   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:24.278813   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:24.278881   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:24.314429   62386 cri.go:89] found id: ""
	I0912 23:04:24.314456   62386 logs.go:276] 0 containers: []
	W0912 23:04:24.314466   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:24.314474   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:24.314540   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:22.972334   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:24.974435   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:22.877248   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:25.376758   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:24.233814   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:26.733537   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:24.352300   62386 cri.go:89] found id: ""
	I0912 23:04:24.352344   62386 logs.go:276] 0 containers: []
	W0912 23:04:24.352352   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:24.352357   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:24.352415   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:24.387465   62386 cri.go:89] found id: ""
	I0912 23:04:24.387496   62386 logs.go:276] 0 containers: []
	W0912 23:04:24.387503   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:24.387513   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:24.387526   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:24.437029   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:24.437061   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:24.450519   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:24.450555   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:24.516538   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:24.516566   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:24.516583   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:24.594321   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:24.594358   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:27.129976   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:27.142237   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:27.142293   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:27.173687   62386 cri.go:89] found id: ""
	I0912 23:04:27.173709   62386 logs.go:276] 0 containers: []
	W0912 23:04:27.173716   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:27.173721   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:27.173778   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:27.206078   62386 cri.go:89] found id: ""
	I0912 23:04:27.206099   62386 logs.go:276] 0 containers: []
	W0912 23:04:27.206107   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:27.206112   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:27.206156   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:27.238770   62386 cri.go:89] found id: ""
	I0912 23:04:27.238795   62386 logs.go:276] 0 containers: []
	W0912 23:04:27.238803   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:27.238808   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:27.238855   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:27.271230   62386 cri.go:89] found id: ""
	I0912 23:04:27.271262   62386 logs.go:276] 0 containers: []
	W0912 23:04:27.271273   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:27.271281   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:27.271351   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:27.304232   62386 cri.go:89] found id: ""
	I0912 23:04:27.304261   62386 logs.go:276] 0 containers: []
	W0912 23:04:27.304271   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:27.304278   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:27.304345   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:27.337542   62386 cri.go:89] found id: ""
	I0912 23:04:27.337571   62386 logs.go:276] 0 containers: []
	W0912 23:04:27.337586   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:27.337595   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:27.337668   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:27.369971   62386 cri.go:89] found id: ""
	I0912 23:04:27.369997   62386 logs.go:276] 0 containers: []
	W0912 23:04:27.370005   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:27.370012   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:27.370072   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:27.406844   62386 cri.go:89] found id: ""
	I0912 23:04:27.406868   62386 logs.go:276] 0 containers: []
	W0912 23:04:27.406875   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:27.406883   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:27.406894   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:27.493489   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:27.493524   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:27.530448   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:27.530481   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:27.585706   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:27.585744   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:27.599144   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:27.599177   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:27.672585   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:27.473942   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:29.474058   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:27.376867   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:29.377474   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:31.877233   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:29.234068   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:31.733528   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:30.173309   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:30.187957   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:30.188037   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:30.226373   62386 cri.go:89] found id: ""
	I0912 23:04:30.226400   62386 logs.go:276] 0 containers: []
	W0912 23:04:30.226407   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:30.226412   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:30.226469   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:30.257956   62386 cri.go:89] found id: ""
	I0912 23:04:30.257988   62386 logs.go:276] 0 containers: []
	W0912 23:04:30.257997   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:30.258002   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:30.258053   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:30.291091   62386 cri.go:89] found id: ""
	I0912 23:04:30.291119   62386 logs.go:276] 0 containers: []
	W0912 23:04:30.291127   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:30.291132   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:30.291181   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:30.323564   62386 cri.go:89] found id: ""
	I0912 23:04:30.323589   62386 logs.go:276] 0 containers: []
	W0912 23:04:30.323597   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:30.323603   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:30.323652   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:30.361971   62386 cri.go:89] found id: ""
	I0912 23:04:30.361996   62386 logs.go:276] 0 containers: []
	W0912 23:04:30.362005   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:30.362014   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:30.362081   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:30.396952   62386 cri.go:89] found id: ""
	I0912 23:04:30.396986   62386 logs.go:276] 0 containers: []
	W0912 23:04:30.396996   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:30.397001   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:30.397052   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:30.453785   62386 cri.go:89] found id: ""
	I0912 23:04:30.453812   62386 logs.go:276] 0 containers: []
	W0912 23:04:30.453820   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:30.453825   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:30.453870   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:30.494072   62386 cri.go:89] found id: ""
	I0912 23:04:30.494099   62386 logs.go:276] 0 containers: []
	W0912 23:04:30.494108   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:30.494115   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:30.494133   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:30.543153   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:30.543187   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:30.556204   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:30.556242   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:30.630856   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:30.630885   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:30.630902   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:30.710205   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:30.710239   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:33.248218   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:33.261421   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:33.261504   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:33.295691   62386 cri.go:89] found id: ""
	I0912 23:04:33.295718   62386 logs.go:276] 0 containers: []
	W0912 23:04:33.295729   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:33.295736   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:33.295796   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:33.328578   62386 cri.go:89] found id: ""
	I0912 23:04:33.328607   62386 logs.go:276] 0 containers: []
	W0912 23:04:33.328618   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:33.328626   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:33.328743   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:33.367991   62386 cri.go:89] found id: ""
	I0912 23:04:33.368018   62386 logs.go:276] 0 containers: []
	W0912 23:04:33.368034   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:33.368041   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:33.368101   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:33.402537   62386 cri.go:89] found id: ""
	I0912 23:04:33.402566   62386 logs.go:276] 0 containers: []
	W0912 23:04:33.402578   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:33.402588   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:33.402649   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:33.437175   62386 cri.go:89] found id: ""
	I0912 23:04:33.437199   62386 logs.go:276] 0 containers: []
	W0912 23:04:33.437206   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:33.437216   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:33.437275   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:33.475108   62386 cri.go:89] found id: ""
	I0912 23:04:33.475134   62386 logs.go:276] 0 containers: []
	W0912 23:04:33.475144   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:33.475151   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:33.475202   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:33.508612   62386 cri.go:89] found id: ""
	I0912 23:04:33.508649   62386 logs.go:276] 0 containers: []
	W0912 23:04:33.508659   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:33.508664   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:33.508713   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:33.543351   62386 cri.go:89] found id: ""
	I0912 23:04:33.543380   62386 logs.go:276] 0 containers: []
	W0912 23:04:33.543387   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:33.543395   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:33.543406   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:33.595649   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:33.595688   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:33.609181   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:33.609210   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:33.686761   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:33.686782   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:33.686796   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:33.767443   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:33.767478   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:31.474444   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:33.474510   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:34.376900   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:36.377015   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:33.734282   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:36.233730   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:36.310374   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:36.324182   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:36.324260   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:36.359642   62386 cri.go:89] found id: ""
	I0912 23:04:36.359670   62386 logs.go:276] 0 containers: []
	W0912 23:04:36.359677   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:36.359684   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:36.359744   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:36.392841   62386 cri.go:89] found id: ""
	I0912 23:04:36.392865   62386 logs.go:276] 0 containers: []
	W0912 23:04:36.392874   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:36.392887   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:36.392951   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:36.430323   62386 cri.go:89] found id: ""
	I0912 23:04:36.430354   62386 logs.go:276] 0 containers: []
	W0912 23:04:36.430365   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:36.430373   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:36.430436   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:36.466712   62386 cri.go:89] found id: ""
	I0912 23:04:36.466737   62386 logs.go:276] 0 containers: []
	W0912 23:04:36.466745   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:36.466750   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:36.466808   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:36.502506   62386 cri.go:89] found id: ""
	I0912 23:04:36.502537   62386 logs.go:276] 0 containers: []
	W0912 23:04:36.502548   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:36.502555   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:36.502624   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:36.536530   62386 cri.go:89] found id: ""
	I0912 23:04:36.536559   62386 logs.go:276] 0 containers: []
	W0912 23:04:36.536569   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:36.536577   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:36.536648   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:36.570519   62386 cri.go:89] found id: ""
	I0912 23:04:36.570555   62386 logs.go:276] 0 containers: []
	W0912 23:04:36.570565   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:36.570573   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:36.570631   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:36.606107   62386 cri.go:89] found id: ""
	I0912 23:04:36.606136   62386 logs.go:276] 0 containers: []
	W0912 23:04:36.606146   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:36.606157   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:36.606171   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:36.643105   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:36.643138   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:36.690911   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:36.690944   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:36.703970   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:36.703998   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:36.776158   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:36.776183   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:36.776199   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:35.973095   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:37.974153   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:40.473010   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:38.377221   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:40.877439   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:38.732826   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:40.734523   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:39.362032   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:39.375991   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:39.376090   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:39.412497   62386 cri.go:89] found id: ""
	I0912 23:04:39.412521   62386 logs.go:276] 0 containers: []
	W0912 23:04:39.412528   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:39.412534   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:39.412595   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:39.447783   62386 cri.go:89] found id: ""
	I0912 23:04:39.447807   62386 logs.go:276] 0 containers: []
	W0912 23:04:39.447815   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:39.447820   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:39.447886   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:39.483099   62386 cri.go:89] found id: ""
	I0912 23:04:39.483128   62386 logs.go:276] 0 containers: []
	W0912 23:04:39.483135   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:39.483143   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:39.483193   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:39.514898   62386 cri.go:89] found id: ""
	I0912 23:04:39.514932   62386 logs.go:276] 0 containers: []
	W0912 23:04:39.514941   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:39.514952   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:39.515033   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:39.546882   62386 cri.go:89] found id: ""
	I0912 23:04:39.546910   62386 logs.go:276] 0 containers: []
	W0912 23:04:39.546920   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:39.546927   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:39.546990   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:39.577899   62386 cri.go:89] found id: ""
	I0912 23:04:39.577929   62386 logs.go:276] 0 containers: []
	W0912 23:04:39.577939   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:39.577947   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:39.578006   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:39.613419   62386 cri.go:89] found id: ""
	I0912 23:04:39.613446   62386 logs.go:276] 0 containers: []
	W0912 23:04:39.613455   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:39.613461   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:39.613510   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:39.647661   62386 cri.go:89] found id: ""
	I0912 23:04:39.647694   62386 logs.go:276] 0 containers: []
	W0912 23:04:39.647708   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:39.647719   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:39.647733   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:39.696155   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:39.696190   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:39.709312   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:39.709342   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:39.778941   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:39.778968   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:39.778985   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:39.855991   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:39.856028   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:42.395179   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:42.408317   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:42.408449   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:42.441443   62386 cri.go:89] found id: ""
	I0912 23:04:42.441472   62386 logs.go:276] 0 containers: []
	W0912 23:04:42.441482   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:42.441489   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:42.441550   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:42.480655   62386 cri.go:89] found id: ""
	I0912 23:04:42.480678   62386 logs.go:276] 0 containers: []
	W0912 23:04:42.480685   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:42.480690   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:42.480734   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:42.513323   62386 cri.go:89] found id: ""
	I0912 23:04:42.513346   62386 logs.go:276] 0 containers: []
	W0912 23:04:42.513353   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:42.513359   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:42.513405   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:42.545696   62386 cri.go:89] found id: ""
	I0912 23:04:42.545715   62386 logs.go:276] 0 containers: []
	W0912 23:04:42.545723   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:42.545728   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:42.545775   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:42.584950   62386 cri.go:89] found id: ""
	I0912 23:04:42.584981   62386 logs.go:276] 0 containers: []
	W0912 23:04:42.584992   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:42.584999   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:42.585057   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:42.618434   62386 cri.go:89] found id: ""
	I0912 23:04:42.618468   62386 logs.go:276] 0 containers: []
	W0912 23:04:42.618481   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:42.618489   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:42.618557   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:42.665017   62386 cri.go:89] found id: ""
	I0912 23:04:42.665045   62386 logs.go:276] 0 containers: []
	W0912 23:04:42.665056   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:42.665064   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:42.665125   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:42.724365   62386 cri.go:89] found id: ""
	I0912 23:04:42.724389   62386 logs.go:276] 0 containers: []
	W0912 23:04:42.724399   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:42.724409   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:42.724422   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:42.762643   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:42.762671   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:42.815374   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:42.815417   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:42.829340   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:42.829376   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:42.901659   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:42.901690   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:42.901706   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:42.475194   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:44.973902   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:43.376849   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:45.378144   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:42.734908   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:45.234296   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:45.490536   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:45.504127   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:45.504191   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:45.537415   62386 cri.go:89] found id: ""
	I0912 23:04:45.537447   62386 logs.go:276] 0 containers: []
	W0912 23:04:45.537457   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:45.537464   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:45.537527   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:45.571342   62386 cri.go:89] found id: ""
	I0912 23:04:45.571384   62386 logs.go:276] 0 containers: []
	W0912 23:04:45.571404   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:45.571412   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:45.571471   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:45.608965   62386 cri.go:89] found id: ""
	I0912 23:04:45.608989   62386 logs.go:276] 0 containers: []
	W0912 23:04:45.608997   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:45.609002   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:45.609052   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:45.644770   62386 cri.go:89] found id: ""
	I0912 23:04:45.644798   62386 logs.go:276] 0 containers: []
	W0912 23:04:45.644806   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:45.644812   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:45.644859   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:45.678422   62386 cri.go:89] found id: ""
	I0912 23:04:45.678448   62386 logs.go:276] 0 containers: []
	W0912 23:04:45.678456   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:45.678462   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:45.678508   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:45.713808   62386 cri.go:89] found id: ""
	I0912 23:04:45.713831   62386 logs.go:276] 0 containers: []
	W0912 23:04:45.713838   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:45.713844   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:45.713891   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:45.747056   62386 cri.go:89] found id: ""
	I0912 23:04:45.747084   62386 logs.go:276] 0 containers: []
	W0912 23:04:45.747092   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:45.747097   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:45.747149   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:45.779787   62386 cri.go:89] found id: ""
	I0912 23:04:45.779809   62386 logs.go:276] 0 containers: []
	W0912 23:04:45.779817   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:45.779824   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:45.779835   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:45.833204   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:45.833239   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:45.846131   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:45.846159   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:45.923415   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:45.923435   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:45.923446   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:46.003597   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:46.003637   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:48.545043   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:48.560025   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:48.560085   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:48.599916   62386 cri.go:89] found id: ""
	I0912 23:04:48.599950   62386 logs.go:276] 0 containers: []
	W0912 23:04:48.599961   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:48.599969   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:48.600027   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:48.648909   62386 cri.go:89] found id: ""
	I0912 23:04:48.648938   62386 logs.go:276] 0 containers: []
	W0912 23:04:48.648946   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:48.648952   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:48.649010   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:48.693019   62386 cri.go:89] found id: ""
	I0912 23:04:48.693046   62386 logs.go:276] 0 containers: []
	W0912 23:04:48.693062   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:48.693081   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:48.693141   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:48.725778   62386 cri.go:89] found id: ""
	I0912 23:04:48.725811   62386 logs.go:276] 0 containers: []
	W0912 23:04:48.725822   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:48.725830   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:48.725891   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:48.760270   62386 cri.go:89] found id: ""
	I0912 23:04:48.760299   62386 logs.go:276] 0 containers: []
	W0912 23:04:48.760311   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:48.760318   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:48.760379   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:48.797235   62386 cri.go:89] found id: ""
	I0912 23:04:48.797264   62386 logs.go:276] 0 containers: []
	W0912 23:04:48.797275   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:48.797282   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:48.797348   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:48.834039   62386 cri.go:89] found id: ""
	I0912 23:04:48.834081   62386 logs.go:276] 0 containers: []
	W0912 23:04:48.834093   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:48.834100   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:48.834162   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:48.866681   62386 cri.go:89] found id: ""
	I0912 23:04:48.866704   62386 logs.go:276] 0 containers: []
	W0912 23:04:48.866712   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:48.866720   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:48.866731   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:48.917954   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:48.917999   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:48.931554   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:48.931582   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:49.008086   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:49.008115   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:49.008132   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:49.088699   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:49.088736   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:46.974115   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:49.475562   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:47.876644   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:49.877976   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:47.733587   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:50.232852   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:51.628564   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:51.643343   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:51.643445   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:51.680788   62386 cri.go:89] found id: ""
	I0912 23:04:51.680811   62386 logs.go:276] 0 containers: []
	W0912 23:04:51.680818   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:51.680824   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:51.680873   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:51.719793   62386 cri.go:89] found id: ""
	I0912 23:04:51.719822   62386 logs.go:276] 0 containers: []
	W0912 23:04:51.719835   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:51.719843   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:51.719909   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:51.756766   62386 cri.go:89] found id: ""
	I0912 23:04:51.756795   62386 logs.go:276] 0 containers: []
	W0912 23:04:51.756802   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:51.756808   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:51.756857   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:51.797758   62386 cri.go:89] found id: ""
	I0912 23:04:51.797781   62386 logs.go:276] 0 containers: []
	W0912 23:04:51.797789   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:51.797794   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:51.797844   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:51.830790   62386 cri.go:89] found id: ""
	I0912 23:04:51.830820   62386 logs.go:276] 0 containers: []
	W0912 23:04:51.830830   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:51.830837   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:51.830899   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:51.866782   62386 cri.go:89] found id: ""
	I0912 23:04:51.866806   62386 logs.go:276] 0 containers: []
	W0912 23:04:51.866813   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:51.866819   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:51.866874   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:51.902223   62386 cri.go:89] found id: ""
	I0912 23:04:51.902248   62386 logs.go:276] 0 containers: []
	W0912 23:04:51.902276   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:51.902284   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:51.902345   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:51.937029   62386 cri.go:89] found id: ""
	I0912 23:04:51.937057   62386 logs.go:276] 0 containers: []
	W0912 23:04:51.937064   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:51.937073   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:51.937084   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:51.987691   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:51.987727   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:52.001042   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:52.001067   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:52.076285   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:52.076305   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:52.076316   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:52.156087   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:52.156127   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:51.973991   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:53.974657   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:52.377379   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:54.877566   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:56.878413   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:52.734348   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:55.233890   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:54.692355   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:54.705180   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:54.705258   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:54.736125   62386 cri.go:89] found id: ""
	I0912 23:04:54.736150   62386 logs.go:276] 0 containers: []
	W0912 23:04:54.736158   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:54.736164   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:54.736216   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:54.768743   62386 cri.go:89] found id: ""
	I0912 23:04:54.768769   62386 logs.go:276] 0 containers: []
	W0912 23:04:54.768776   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:54.768781   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:54.768827   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:54.802867   62386 cri.go:89] found id: ""
	I0912 23:04:54.802894   62386 logs.go:276] 0 containers: []
	W0912 23:04:54.802902   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:54.802908   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:54.802959   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:54.836774   62386 cri.go:89] found id: ""
	I0912 23:04:54.836800   62386 logs.go:276] 0 containers: []
	W0912 23:04:54.836808   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:54.836813   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:54.836870   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:54.870694   62386 cri.go:89] found id: ""
	I0912 23:04:54.870716   62386 logs.go:276] 0 containers: []
	W0912 23:04:54.870724   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:54.870730   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:54.870785   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:54.903969   62386 cri.go:89] found id: ""
	I0912 23:04:54.904002   62386 logs.go:276] 0 containers: []
	W0912 23:04:54.904012   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:54.904020   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:54.904070   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:54.937720   62386 cri.go:89] found id: ""
	I0912 23:04:54.937744   62386 logs.go:276] 0 containers: []
	W0912 23:04:54.937751   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:54.937756   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:54.937802   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:54.971370   62386 cri.go:89] found id: ""
	I0912 23:04:54.971397   62386 logs.go:276] 0 containers: []
	W0912 23:04:54.971413   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:54.971427   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:54.971441   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:55.021066   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:55.021101   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:55.034026   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:55.034056   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:55.116939   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:55.116966   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:55.116983   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:55.196410   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:55.196445   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:57.733985   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:57.747006   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:57.747068   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:57.784442   62386 cri.go:89] found id: ""
	I0912 23:04:57.784473   62386 logs.go:276] 0 containers: []
	W0912 23:04:57.784486   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:57.784500   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:57.784571   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:57.818314   62386 cri.go:89] found id: ""
	I0912 23:04:57.818341   62386 logs.go:276] 0 containers: []
	W0912 23:04:57.818352   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:57.818359   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:57.818420   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:57.852881   62386 cri.go:89] found id: ""
	I0912 23:04:57.852914   62386 logs.go:276] 0 containers: []
	W0912 23:04:57.852925   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:57.852932   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:57.852993   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:57.894454   62386 cri.go:89] found id: ""
	I0912 23:04:57.894479   62386 logs.go:276] 0 containers: []
	W0912 23:04:57.894487   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:57.894493   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:57.894540   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:57.930013   62386 cri.go:89] found id: ""
	I0912 23:04:57.930041   62386 logs.go:276] 0 containers: []
	W0912 23:04:57.930051   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:57.930059   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:57.930120   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:57.970535   62386 cri.go:89] found id: ""
	I0912 23:04:57.970697   62386 logs.go:276] 0 containers: []
	W0912 23:04:57.970751   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:57.970763   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:57.970829   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:58.008102   62386 cri.go:89] found id: ""
	I0912 23:04:58.008132   62386 logs.go:276] 0 containers: []
	W0912 23:04:58.008145   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:58.008151   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:58.008232   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:58.043507   62386 cri.go:89] found id: ""
	I0912 23:04:58.043541   62386 logs.go:276] 0 containers: []
	W0912 23:04:58.043552   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:58.043563   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:58.043577   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:58.127231   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:58.127291   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:58.164444   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:58.164476   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:58.212622   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:58.212658   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:58.227517   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:58.227546   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:58.291876   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:56.474801   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:58.973083   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:59.378702   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:01.876871   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:57.735810   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:00.234854   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:00.792084   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:00.804976   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:00.805046   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:00.837560   62386 cri.go:89] found id: ""
	I0912 23:05:00.837596   62386 logs.go:276] 0 containers: []
	W0912 23:05:00.837606   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:00.837629   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:00.837692   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:00.871503   62386 cri.go:89] found id: ""
	I0912 23:05:00.871526   62386 logs.go:276] 0 containers: []
	W0912 23:05:00.871534   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:00.871539   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:00.871594   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:00.909215   62386 cri.go:89] found id: ""
	I0912 23:05:00.909245   62386 logs.go:276] 0 containers: []
	W0912 23:05:00.909256   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:00.909263   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:00.909337   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:00.947935   62386 cri.go:89] found id: ""
	I0912 23:05:00.947961   62386 logs.go:276] 0 containers: []
	W0912 23:05:00.947972   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:00.947979   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:00.948043   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:00.989659   62386 cri.go:89] found id: ""
	I0912 23:05:00.989694   62386 logs.go:276] 0 containers: []
	W0912 23:05:00.989707   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:00.989717   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:00.989780   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:01.027073   62386 cri.go:89] found id: ""
	I0912 23:05:01.027103   62386 logs.go:276] 0 containers: []
	W0912 23:05:01.027114   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:01.027129   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:01.027187   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:01.063620   62386 cri.go:89] found id: ""
	I0912 23:05:01.063649   62386 logs.go:276] 0 containers: []
	W0912 23:05:01.063672   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:01.063681   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:01.063751   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:01.102398   62386 cri.go:89] found id: ""
	I0912 23:05:01.102428   62386 logs.go:276] 0 containers: []
	W0912 23:05:01.102438   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:01.102449   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:01.102463   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:01.115558   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:01.115585   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:01.190303   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:01.190324   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:01.190337   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:01.272564   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:01.272611   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:01.311954   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:01.311981   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:03.864507   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:03.878613   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:03.878713   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:03.911466   62386 cri.go:89] found id: ""
	I0912 23:05:03.911495   62386 logs.go:276] 0 containers: []
	W0912 23:05:03.911504   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:03.911513   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:03.911592   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:03.945150   62386 cri.go:89] found id: ""
	I0912 23:05:03.945175   62386 logs.go:276] 0 containers: []
	W0912 23:05:03.945188   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:03.945196   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:03.945256   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:03.984952   62386 cri.go:89] found id: ""
	I0912 23:05:03.984984   62386 logs.go:276] 0 containers: []
	W0912 23:05:03.984994   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:03.985001   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:03.985067   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:04.030708   62386 cri.go:89] found id: ""
	I0912 23:05:04.030732   62386 logs.go:276] 0 containers: []
	W0912 23:05:04.030740   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:04.030746   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:04.030798   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:04.072189   62386 cri.go:89] found id: ""
	I0912 23:05:04.072213   62386 logs.go:276] 0 containers: []
	W0912 23:05:04.072221   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:04.072227   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:04.072273   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:04.105068   62386 cri.go:89] found id: ""
	I0912 23:05:04.105100   62386 logs.go:276] 0 containers: []
	W0912 23:05:04.105108   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:04.105114   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:04.105175   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:04.139063   62386 cri.go:89] found id: ""
	I0912 23:05:04.139094   62386 logs.go:276] 0 containers: []
	W0912 23:05:04.139102   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:04.139109   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:04.139172   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:04.175559   62386 cri.go:89] found id: ""
	I0912 23:05:04.175589   62386 logs.go:276] 0 containers: []
	W0912 23:05:04.175599   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:04.175610   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:04.175626   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:04.252495   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:04.252541   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:04.292236   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:04.292263   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:00.974816   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:03.473566   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:05.474006   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:04.377506   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:06.378058   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:02.733379   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:04.734050   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:07.234892   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:04.347335   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:04.347377   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:04.360641   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:04.360678   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:04.431032   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:06.931904   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:06.946367   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:06.946445   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:06.985760   62386 cri.go:89] found id: ""
	I0912 23:05:06.985788   62386 logs.go:276] 0 containers: []
	W0912 23:05:06.985796   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:06.985802   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:06.985852   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:07.020076   62386 cri.go:89] found id: ""
	I0912 23:05:07.020106   62386 logs.go:276] 0 containers: []
	W0912 23:05:07.020115   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:07.020120   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:07.020165   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:07.056374   62386 cri.go:89] found id: ""
	I0912 23:05:07.056408   62386 logs.go:276] 0 containers: []
	W0912 23:05:07.056417   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:07.056423   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:07.056479   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:07.091022   62386 cri.go:89] found id: ""
	I0912 23:05:07.091049   62386 logs.go:276] 0 containers: []
	W0912 23:05:07.091059   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:07.091067   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:07.091133   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:07.131604   62386 cri.go:89] found id: ""
	I0912 23:05:07.131631   62386 logs.go:276] 0 containers: []
	W0912 23:05:07.131641   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:07.131648   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:07.131708   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:07.164548   62386 cri.go:89] found id: ""
	I0912 23:05:07.164575   62386 logs.go:276] 0 containers: []
	W0912 23:05:07.164586   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:07.164593   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:07.164655   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:07.199147   62386 cri.go:89] found id: ""
	I0912 23:05:07.199169   62386 logs.go:276] 0 containers: []
	W0912 23:05:07.199176   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:07.199182   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:07.199245   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:07.231727   62386 cri.go:89] found id: ""
	I0912 23:05:07.231762   62386 logs.go:276] 0 containers: []
	W0912 23:05:07.231773   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:07.231788   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:07.231802   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:07.285773   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:07.285809   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:07.299926   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:07.299958   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:07.378838   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:07.378862   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:07.378876   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:07.459903   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:07.459939   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:07.475025   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:09.973692   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:08.877117   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:11.377274   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:09.732632   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:11.734119   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:09.999598   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:10.012258   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:10.012328   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:10.047975   62386 cri.go:89] found id: ""
	I0912 23:05:10.048002   62386 logs.go:276] 0 containers: []
	W0912 23:05:10.048011   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:10.048018   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:10.048074   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:10.081827   62386 cri.go:89] found id: ""
	I0912 23:05:10.081856   62386 logs.go:276] 0 containers: []
	W0912 23:05:10.081866   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:10.081872   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:10.081942   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:10.115594   62386 cri.go:89] found id: ""
	I0912 23:05:10.115625   62386 logs.go:276] 0 containers: []
	W0912 23:05:10.115635   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:10.115642   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:10.115692   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:10.147412   62386 cri.go:89] found id: ""
	I0912 23:05:10.147442   62386 logs.go:276] 0 containers: []
	W0912 23:05:10.147452   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:10.147460   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:10.147516   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:10.181118   62386 cri.go:89] found id: ""
	I0912 23:05:10.181147   62386 logs.go:276] 0 containers: []
	W0912 23:05:10.181157   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:10.181164   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:10.181228   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:10.214240   62386 cri.go:89] found id: ""
	I0912 23:05:10.214267   62386 logs.go:276] 0 containers: []
	W0912 23:05:10.214277   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:10.214284   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:10.214352   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:10.248497   62386 cri.go:89] found id: ""
	I0912 23:05:10.248522   62386 logs.go:276] 0 containers: []
	W0912 23:05:10.248530   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:10.248543   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:10.248610   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:10.280864   62386 cri.go:89] found id: ""
	I0912 23:05:10.280892   62386 logs.go:276] 0 containers: []
	W0912 23:05:10.280902   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:10.280913   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:10.280927   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:10.318517   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:10.318542   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:10.370087   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:10.370123   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:10.385213   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:10.385247   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:10.448226   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:10.448246   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:10.448257   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:13.027828   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:13.040546   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:13.040620   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:13.073501   62386 cri.go:89] found id: ""
	I0912 23:05:13.073525   62386 logs.go:276] 0 containers: []
	W0912 23:05:13.073533   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:13.073538   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:13.073584   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:13.105790   62386 cri.go:89] found id: ""
	I0912 23:05:13.105819   62386 logs.go:276] 0 containers: []
	W0912 23:05:13.105830   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:13.105836   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:13.105898   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:13.139307   62386 cri.go:89] found id: ""
	I0912 23:05:13.139331   62386 logs.go:276] 0 containers: []
	W0912 23:05:13.139338   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:13.139344   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:13.139403   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:13.171019   62386 cri.go:89] found id: ""
	I0912 23:05:13.171044   62386 logs.go:276] 0 containers: []
	W0912 23:05:13.171053   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:13.171060   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:13.171119   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:13.202372   62386 cri.go:89] found id: ""
	I0912 23:05:13.202412   62386 logs.go:276] 0 containers: []
	W0912 23:05:13.202423   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:13.202431   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:13.202481   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:13.234046   62386 cri.go:89] found id: ""
	I0912 23:05:13.234069   62386 logs.go:276] 0 containers: []
	W0912 23:05:13.234076   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:13.234083   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:13.234138   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:13.265577   62386 cri.go:89] found id: ""
	I0912 23:05:13.265604   62386 logs.go:276] 0 containers: []
	W0912 23:05:13.265632   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:13.265641   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:13.265696   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:13.303462   62386 cri.go:89] found id: ""
	I0912 23:05:13.303489   62386 logs.go:276] 0 containers: []
	W0912 23:05:13.303499   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:13.303521   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:13.303536   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:13.378844   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:13.378867   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:13.378883   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:13.464768   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:13.464806   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:13.502736   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:13.502764   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:13.553473   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:13.553503   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:12.473027   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:14.973842   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:13.876334   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:15.877134   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:14.234722   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:16.734222   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:16.067463   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:16.081169   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:16.081269   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:16.115663   62386 cri.go:89] found id: ""
	I0912 23:05:16.115688   62386 logs.go:276] 0 containers: []
	W0912 23:05:16.115696   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:16.115705   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:16.115761   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:16.153429   62386 cri.go:89] found id: ""
	I0912 23:05:16.153460   62386 logs.go:276] 0 containers: []
	W0912 23:05:16.153469   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:16.153476   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:16.153535   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:16.187935   62386 cri.go:89] found id: ""
	I0912 23:05:16.187957   62386 logs.go:276] 0 containers: []
	W0912 23:05:16.187965   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:16.187971   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:16.188029   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:16.221249   62386 cri.go:89] found id: ""
	I0912 23:05:16.221273   62386 logs.go:276] 0 containers: []
	W0912 23:05:16.221281   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:16.221287   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:16.221336   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:16.256441   62386 cri.go:89] found id: ""
	I0912 23:05:16.256466   62386 logs.go:276] 0 containers: []
	W0912 23:05:16.256474   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:16.256479   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:16.256546   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:16.290930   62386 cri.go:89] found id: ""
	I0912 23:05:16.290963   62386 logs.go:276] 0 containers: []
	W0912 23:05:16.290976   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:16.290985   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:16.291039   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:16.326665   62386 cri.go:89] found id: ""
	I0912 23:05:16.326689   62386 logs.go:276] 0 containers: []
	W0912 23:05:16.326697   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:16.326702   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:16.326749   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:16.365418   62386 cri.go:89] found id: ""
	I0912 23:05:16.365441   62386 logs.go:276] 0 containers: []
	W0912 23:05:16.365448   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:16.365458   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:16.365469   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:16.420003   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:16.420039   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:16.434561   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:16.434595   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:16.505201   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:16.505224   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:16.505295   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:16.584877   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:16.584914   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:19.121479   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:19.134519   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:19.134586   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:19.170401   62386 cri.go:89] found id: ""
	I0912 23:05:19.170433   62386 logs.go:276] 0 containers: []
	W0912 23:05:19.170444   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:19.170455   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:19.170530   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:19.204750   62386 cri.go:89] found id: ""
	I0912 23:05:19.204779   62386 logs.go:276] 0 containers: []
	W0912 23:05:19.204790   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:19.204797   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:19.204862   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:19.243938   62386 cri.go:89] found id: ""
	I0912 23:05:19.243966   62386 logs.go:276] 0 containers: []
	W0912 23:05:19.243975   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:19.243983   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:19.244041   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:19.284424   62386 cri.go:89] found id: ""
	I0912 23:05:19.284453   62386 logs.go:276] 0 containers: []
	W0912 23:05:19.284463   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:19.284469   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:19.284535   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:19.318962   62386 cri.go:89] found id: ""
	I0912 23:05:19.318990   62386 logs.go:276] 0 containers: []
	W0912 23:05:19.319000   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:19.319011   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:19.319068   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:17.474175   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:19.474829   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:18.376670   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:20.876863   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:19.234144   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:21.734549   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:19.356456   62386 cri.go:89] found id: ""
	I0912 23:05:19.356487   62386 logs.go:276] 0 containers: []
	W0912 23:05:19.356498   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:19.356505   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:19.356587   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:19.390344   62386 cri.go:89] found id: ""
	I0912 23:05:19.390369   62386 logs.go:276] 0 containers: []
	W0912 23:05:19.390377   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:19.390382   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:19.390429   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:19.425481   62386 cri.go:89] found id: ""
	I0912 23:05:19.425507   62386 logs.go:276] 0 containers: []
	W0912 23:05:19.425528   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:19.425536   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:19.425553   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:19.482051   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:19.482081   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:19.495732   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:19.495758   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:19.565385   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:19.565411   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:19.565428   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:19.640053   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:19.640084   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:22.179292   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:22.191905   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:22.191979   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:22.231402   62386 cri.go:89] found id: ""
	I0912 23:05:22.231429   62386 logs.go:276] 0 containers: []
	W0912 23:05:22.231439   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:22.231446   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:22.231501   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:22.265310   62386 cri.go:89] found id: ""
	I0912 23:05:22.265343   62386 logs.go:276] 0 containers: []
	W0912 23:05:22.265351   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:22.265356   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:22.265425   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:22.297487   62386 cri.go:89] found id: ""
	I0912 23:05:22.297516   62386 logs.go:276] 0 containers: []
	W0912 23:05:22.297532   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:22.297540   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:22.297598   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:22.335344   62386 cri.go:89] found id: ""
	I0912 23:05:22.335374   62386 logs.go:276] 0 containers: []
	W0912 23:05:22.335384   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:22.335391   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:22.335449   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:22.376379   62386 cri.go:89] found id: ""
	I0912 23:05:22.376404   62386 logs.go:276] 0 containers: []
	W0912 23:05:22.376413   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:22.376421   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:22.376484   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:22.416121   62386 cri.go:89] found id: ""
	I0912 23:05:22.416147   62386 logs.go:276] 0 containers: []
	W0912 23:05:22.416154   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:22.416160   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:22.416217   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:22.475037   62386 cri.go:89] found id: ""
	I0912 23:05:22.475114   62386 logs.go:276] 0 containers: []
	W0912 23:05:22.475127   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:22.475143   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:22.475207   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:22.509756   62386 cri.go:89] found id: ""
	I0912 23:05:22.509784   62386 logs.go:276] 0 containers: []
	W0912 23:05:22.509794   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:22.509804   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:22.509823   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:22.559071   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:22.559112   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:22.571951   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:22.571980   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:22.643017   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:22.643034   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:22.643045   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:22.728074   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:22.728113   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:21.475126   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:23.975217   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:22.876979   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:24.877525   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:26.879248   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:24.235855   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:26.734384   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:25.268293   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:25.281825   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:25.281906   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:25.315282   62386 cri.go:89] found id: ""
	I0912 23:05:25.315318   62386 logs.go:276] 0 containers: []
	W0912 23:05:25.315328   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:25.315336   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:25.315385   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:25.348647   62386 cri.go:89] found id: ""
	I0912 23:05:25.348679   62386 logs.go:276] 0 containers: []
	W0912 23:05:25.348690   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:25.348697   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:25.348758   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:25.382266   62386 cri.go:89] found id: ""
	I0912 23:05:25.382294   62386 logs.go:276] 0 containers: []
	W0912 23:05:25.382304   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:25.382311   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:25.382378   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:25.420016   62386 cri.go:89] found id: ""
	I0912 23:05:25.420044   62386 logs.go:276] 0 containers: []
	W0912 23:05:25.420056   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:25.420063   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:25.420126   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:25.456435   62386 cri.go:89] found id: ""
	I0912 23:05:25.456457   62386 logs.go:276] 0 containers: []
	W0912 23:05:25.456465   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:25.456470   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:25.456539   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:25.491658   62386 cri.go:89] found id: ""
	I0912 23:05:25.491715   62386 logs.go:276] 0 containers: []
	W0912 23:05:25.491729   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:25.491737   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:25.491790   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:25.526948   62386 cri.go:89] found id: ""
	I0912 23:05:25.526980   62386 logs.go:276] 0 containers: []
	W0912 23:05:25.526991   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:25.526998   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:25.527064   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:25.560291   62386 cri.go:89] found id: ""
	I0912 23:05:25.560323   62386 logs.go:276] 0 containers: []
	W0912 23:05:25.560345   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:25.560357   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:25.560372   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:25.612232   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:25.612276   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:25.626991   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:25.627028   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:25.695005   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:25.695038   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:25.695055   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:25.784310   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:25.784345   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:28.331410   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:28.343903   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:28.343967   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:28.380946   62386 cri.go:89] found id: ""
	I0912 23:05:28.380973   62386 logs.go:276] 0 containers: []
	W0912 23:05:28.380979   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:28.380985   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:28.381039   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:28.415013   62386 cri.go:89] found id: ""
	I0912 23:05:28.415042   62386 logs.go:276] 0 containers: []
	W0912 23:05:28.415052   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:28.415059   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:28.415120   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:28.451060   62386 cri.go:89] found id: ""
	I0912 23:05:28.451093   62386 logs.go:276] 0 containers: []
	W0912 23:05:28.451105   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:28.451113   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:28.451171   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:28.485664   62386 cri.go:89] found id: ""
	I0912 23:05:28.485693   62386 logs.go:276] 0 containers: []
	W0912 23:05:28.485704   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:28.485712   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:28.485774   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:28.520307   62386 cri.go:89] found id: ""
	I0912 23:05:28.520338   62386 logs.go:276] 0 containers: []
	W0912 23:05:28.520349   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:28.520359   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:28.520417   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:28.553111   62386 cri.go:89] found id: ""
	I0912 23:05:28.553139   62386 logs.go:276] 0 containers: []
	W0912 23:05:28.553147   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:28.553152   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:28.553208   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:28.586778   62386 cri.go:89] found id: ""
	I0912 23:05:28.586808   62386 logs.go:276] 0 containers: []
	W0912 23:05:28.586816   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:28.586822   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:28.586874   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:28.620760   62386 cri.go:89] found id: ""
	I0912 23:05:28.620784   62386 logs.go:276] 0 containers: []
	W0912 23:05:28.620791   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:28.620799   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:28.620811   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:28.701431   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:28.701481   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:28.741398   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:28.741431   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:28.793431   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:28.793469   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:28.809572   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:28.809600   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:28.894914   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:26.473222   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:28.474342   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:29.377090   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:31.378238   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:29.234479   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:31.734265   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:31.395663   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:31.408079   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:31.408160   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:31.445176   62386 cri.go:89] found id: ""
	I0912 23:05:31.445207   62386 logs.go:276] 0 containers: []
	W0912 23:05:31.445215   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:31.445221   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:31.445280   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:31.483446   62386 cri.go:89] found id: ""
	I0912 23:05:31.483472   62386 logs.go:276] 0 containers: []
	W0912 23:05:31.483480   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:31.483486   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:31.483544   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:31.519958   62386 cri.go:89] found id: ""
	I0912 23:05:31.519989   62386 logs.go:276] 0 containers: []
	W0912 23:05:31.519997   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:31.520003   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:31.520057   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:31.556719   62386 cri.go:89] found id: ""
	I0912 23:05:31.556748   62386 logs.go:276] 0 containers: []
	W0912 23:05:31.556759   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:31.556771   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:31.556832   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:31.596465   62386 cri.go:89] found id: ""
	I0912 23:05:31.596491   62386 logs.go:276] 0 containers: []
	W0912 23:05:31.596502   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:31.596508   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:31.596572   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:31.634562   62386 cri.go:89] found id: ""
	I0912 23:05:31.634592   62386 logs.go:276] 0 containers: []
	W0912 23:05:31.634601   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:31.634607   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:31.634665   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:31.669305   62386 cri.go:89] found id: ""
	I0912 23:05:31.669337   62386 logs.go:276] 0 containers: []
	W0912 23:05:31.669348   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:31.669356   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:31.669422   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:31.703081   62386 cri.go:89] found id: ""
	I0912 23:05:31.703111   62386 logs.go:276] 0 containers: []
	W0912 23:05:31.703121   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:31.703133   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:31.703148   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:31.742613   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:31.742635   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:31.797827   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:31.797872   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:31.811970   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:31.811999   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:31.888872   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:31.888896   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:31.888910   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:30.974024   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:32.974606   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:35.473280   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:33.876698   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:35.877749   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:33.734760   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:36.233363   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:34.469724   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:34.483511   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:34.483579   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:34.516198   62386 cri.go:89] found id: ""
	I0912 23:05:34.516222   62386 logs.go:276] 0 containers: []
	W0912 23:05:34.516229   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:34.516235   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:34.516301   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:34.550166   62386 cri.go:89] found id: ""
	I0912 23:05:34.550199   62386 logs.go:276] 0 containers: []
	W0912 23:05:34.550210   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:34.550218   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:34.550274   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:34.593361   62386 cri.go:89] found id: ""
	I0912 23:05:34.593401   62386 logs.go:276] 0 containers: []
	W0912 23:05:34.593412   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:34.593420   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:34.593483   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:34.639593   62386 cri.go:89] found id: ""
	I0912 23:05:34.639633   62386 logs.go:276] 0 containers: []
	W0912 23:05:34.639653   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:34.639661   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:34.639729   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:34.690382   62386 cri.go:89] found id: ""
	I0912 23:05:34.690410   62386 logs.go:276] 0 containers: []
	W0912 23:05:34.690417   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:34.690423   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:34.690483   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:34.727943   62386 cri.go:89] found id: ""
	I0912 23:05:34.727970   62386 logs.go:276] 0 containers: []
	W0912 23:05:34.727978   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:34.727983   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:34.728051   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:34.765558   62386 cri.go:89] found id: ""
	I0912 23:05:34.765586   62386 logs.go:276] 0 containers: []
	W0912 23:05:34.765593   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:34.765598   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:34.765663   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:34.801455   62386 cri.go:89] found id: ""
	I0912 23:05:34.801484   62386 logs.go:276] 0 containers: []
	W0912 23:05:34.801492   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:34.801500   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:34.801511   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:34.880260   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:34.880295   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:34.922827   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:34.922855   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:34.974609   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:34.974639   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:34.987945   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:34.987972   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:35.062008   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:37.562965   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:37.575149   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:37.575226   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:37.611980   62386 cri.go:89] found id: ""
	I0912 23:05:37.612014   62386 logs.go:276] 0 containers: []
	W0912 23:05:37.612026   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:37.612035   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:37.612102   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:37.645664   62386 cri.go:89] found id: ""
	I0912 23:05:37.645693   62386 logs.go:276] 0 containers: []
	W0912 23:05:37.645703   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:37.645711   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:37.645771   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:37.685333   62386 cri.go:89] found id: ""
	I0912 23:05:37.685356   62386 logs.go:276] 0 containers: []
	W0912 23:05:37.685364   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:37.685369   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:37.685428   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:37.719017   62386 cri.go:89] found id: ""
	I0912 23:05:37.719052   62386 logs.go:276] 0 containers: []
	W0912 23:05:37.719063   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:37.719071   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:37.719133   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:37.751534   62386 cri.go:89] found id: ""
	I0912 23:05:37.751569   62386 logs.go:276] 0 containers: []
	W0912 23:05:37.751579   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:37.751588   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:37.751647   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:37.785583   62386 cri.go:89] found id: ""
	I0912 23:05:37.785608   62386 logs.go:276] 0 containers: []
	W0912 23:05:37.785635   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:37.785642   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:37.785702   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:37.818396   62386 cri.go:89] found id: ""
	I0912 23:05:37.818428   62386 logs.go:276] 0 containers: []
	W0912 23:05:37.818438   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:37.818445   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:37.818504   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:37.853767   62386 cri.go:89] found id: ""
	I0912 23:05:37.853798   62386 logs.go:276] 0 containers: []
	W0912 23:05:37.853806   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:37.853814   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:37.853830   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:37.926273   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:37.926300   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:37.926315   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:38.014243   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:38.014279   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:38.052431   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:38.052455   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:38.103154   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:38.103188   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:37.972774   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:39.973976   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:37.878631   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:40.378366   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:38.234131   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:40.733727   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:40.617399   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:40.629412   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:40.629483   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:40.666668   62386 cri.go:89] found id: ""
	I0912 23:05:40.666693   62386 logs.go:276] 0 containers: []
	W0912 23:05:40.666700   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:40.666706   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:40.666751   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:40.697548   62386 cri.go:89] found id: ""
	I0912 23:05:40.697573   62386 logs.go:276] 0 containers: []
	W0912 23:05:40.697580   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:40.697585   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:40.697659   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:40.729426   62386 cri.go:89] found id: ""
	I0912 23:05:40.729450   62386 logs.go:276] 0 containers: []
	W0912 23:05:40.729458   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:40.729468   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:40.729517   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:40.766769   62386 cri.go:89] found id: ""
	I0912 23:05:40.766793   62386 logs.go:276] 0 containers: []
	W0912 23:05:40.766800   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:40.766804   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:40.766860   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:40.801523   62386 cri.go:89] found id: ""
	I0912 23:05:40.801550   62386 logs.go:276] 0 containers: []
	W0912 23:05:40.801557   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:40.801563   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:40.801641   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:40.839943   62386 cri.go:89] found id: ""
	I0912 23:05:40.839975   62386 logs.go:276] 0 containers: []
	W0912 23:05:40.839987   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:40.839993   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:40.840055   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:40.873231   62386 cri.go:89] found id: ""
	I0912 23:05:40.873260   62386 logs.go:276] 0 containers: []
	W0912 23:05:40.873268   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:40.873276   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:40.873325   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:40.920007   62386 cri.go:89] found id: ""
	I0912 23:05:40.920040   62386 logs.go:276] 0 containers: []
	W0912 23:05:40.920049   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:40.920057   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:40.920069   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:40.972684   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:40.972716   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:40.986768   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:40.986802   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:41.052454   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:41.052479   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:41.052494   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:41.133810   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:41.133850   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:43.672432   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:43.684493   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:43.684552   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:43.718130   62386 cri.go:89] found id: ""
	I0912 23:05:43.718155   62386 logs.go:276] 0 containers: []
	W0912 23:05:43.718163   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:43.718169   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:43.718228   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:43.751866   62386 cri.go:89] found id: ""
	I0912 23:05:43.751895   62386 logs.go:276] 0 containers: []
	W0912 23:05:43.751905   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:43.751912   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:43.751974   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:43.785544   62386 cri.go:89] found id: ""
	I0912 23:05:43.785571   62386 logs.go:276] 0 containers: []
	W0912 23:05:43.785583   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:43.785589   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:43.785664   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:43.820588   62386 cri.go:89] found id: ""
	I0912 23:05:43.820616   62386 logs.go:276] 0 containers: []
	W0912 23:05:43.820624   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:43.820630   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:43.820677   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:43.853567   62386 cri.go:89] found id: ""
	I0912 23:05:43.853600   62386 logs.go:276] 0 containers: []
	W0912 23:05:43.853631   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:43.853640   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:43.853696   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:43.888646   62386 cri.go:89] found id: ""
	I0912 23:05:43.888671   62386 logs.go:276] 0 containers: []
	W0912 23:05:43.888679   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:43.888684   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:43.888731   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:43.922563   62386 cri.go:89] found id: ""
	I0912 23:05:43.922596   62386 logs.go:276] 0 containers: []
	W0912 23:05:43.922607   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:43.922614   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:43.922667   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:43.956786   62386 cri.go:89] found id: ""
	I0912 23:05:43.956817   62386 logs.go:276] 0 containers: []
	W0912 23:05:43.956825   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:43.956834   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:43.956845   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:44.035351   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:44.035388   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:44.073301   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:44.073338   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:44.124754   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:44.124788   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:44.138899   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:44.138924   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:44.208682   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:42.474139   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:44.974214   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:42.876306   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:44.877310   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:46.878568   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:43.233358   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:45.233823   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:47.234529   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:46.709822   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:46.722782   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:46.722905   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:46.767512   62386 cri.go:89] found id: ""
	I0912 23:05:46.767537   62386 logs.go:276] 0 containers: []
	W0912 23:05:46.767545   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:46.767551   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:46.767603   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:46.812486   62386 cri.go:89] found id: ""
	I0912 23:05:46.812523   62386 logs.go:276] 0 containers: []
	W0912 23:05:46.812533   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:46.812541   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:46.812602   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:46.855093   62386 cri.go:89] found id: ""
	I0912 23:05:46.855125   62386 logs.go:276] 0 containers: []
	W0912 23:05:46.855134   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:46.855141   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:46.855214   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:46.899067   62386 cri.go:89] found id: ""
	I0912 23:05:46.899101   62386 logs.go:276] 0 containers: []
	W0912 23:05:46.899113   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:46.899121   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:46.899184   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:46.939775   62386 cri.go:89] found id: ""
	I0912 23:05:46.939802   62386 logs.go:276] 0 containers: []
	W0912 23:05:46.939810   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:46.939816   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:46.939863   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:46.975288   62386 cri.go:89] found id: ""
	I0912 23:05:46.975319   62386 logs.go:276] 0 containers: []
	W0912 23:05:46.975329   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:46.975343   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:46.975426   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:47.012985   62386 cri.go:89] found id: ""
	I0912 23:05:47.013018   62386 logs.go:276] 0 containers: []
	W0912 23:05:47.013030   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:47.013038   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:47.013104   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:47.052124   62386 cri.go:89] found id: ""
	I0912 23:05:47.052154   62386 logs.go:276] 0 containers: []
	W0912 23:05:47.052164   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:47.052175   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:47.052189   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:47.108769   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:47.108811   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:47.124503   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:47.124530   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:47.195340   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:47.195362   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:47.195380   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:47.297155   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:47.297204   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:46.473252   61904 pod_ready.go:82] duration metric: took 4m0.006064954s for pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace to be "Ready" ...
	E0912 23:05:46.473275   61904 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0912 23:05:46.473282   61904 pod_ready.go:39] duration metric: took 4m4.576962836s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 23:05:46.473309   61904 api_server.go:52] waiting for apiserver process to appear ...
	I0912 23:05:46.473336   61904 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:46.473378   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:46.513731   61904 cri.go:89] found id: "115e1e7911747aa5930ece5298af89f0209bf07e0336cd598b8901e2a2af2e09"
	I0912 23:05:46.513759   61904 cri.go:89] found id: ""
	I0912 23:05:46.513768   61904 logs.go:276] 1 containers: [115e1e7911747aa5930ece5298af89f0209bf07e0336cd598b8901e2a2af2e09]
	I0912 23:05:46.513827   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:46.519031   61904 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:46.519099   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:46.560521   61904 cri.go:89] found id: "e099ac110cb9ec18f36a0fbbdb769c4bb0c2642bf081442d3054c280c663730f"
	I0912 23:05:46.560548   61904 cri.go:89] found id: ""
	I0912 23:05:46.560560   61904 logs.go:276] 1 containers: [e099ac110cb9ec18f36a0fbbdb769c4bb0c2642bf081442d3054c280c663730f]
	I0912 23:05:46.560623   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:46.564340   61904 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:46.564399   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:46.598825   61904 cri.go:89] found id: "7841230606dafeab68b501c35a871c84f440d0d9f4cde95a302770671152f168"
	I0912 23:05:46.598848   61904 cri.go:89] found id: ""
	I0912 23:05:46.598857   61904 logs.go:276] 1 containers: [7841230606dafeab68b501c35a871c84f440d0d9f4cde95a302770671152f168]
	I0912 23:05:46.598909   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:46.602944   61904 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:46.603005   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:46.640315   61904 cri.go:89] found id: "dc8c605cca940c298fda4fb0d3a0e55234ab799e5f4d684817c1c33aa6752880"
	I0912 23:05:46.640335   61904 cri.go:89] found id: ""
	I0912 23:05:46.640343   61904 logs.go:276] 1 containers: [dc8c605cca940c298fda4fb0d3a0e55234ab799e5f4d684817c1c33aa6752880]
	I0912 23:05:46.640395   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:46.644061   61904 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:46.644119   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:46.681114   61904 cri.go:89] found id: "0b058233860f229334437280ee5ccf5d203eaf352bae41a6af7d4602b55a3d64"
	I0912 23:05:46.681143   61904 cri.go:89] found id: ""
	I0912 23:05:46.681153   61904 logs.go:276] 1 containers: [0b058233860f229334437280ee5ccf5d203eaf352bae41a6af7d4602b55a3d64]
	I0912 23:05:46.681214   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:46.685151   61904 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:46.685223   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:46.723129   61904 cri.go:89] found id: "54dd60703518d12cf3cb62102bf8bec31d1e6b0f218f4c26391b234d58111e31"
	I0912 23:05:46.723150   61904 cri.go:89] found id: ""
	I0912 23:05:46.723160   61904 logs.go:276] 1 containers: [54dd60703518d12cf3cb62102bf8bec31d1e6b0f218f4c26391b234d58111e31]
	I0912 23:05:46.723208   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:46.727959   61904 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:46.728021   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:46.770194   61904 cri.go:89] found id: ""
	I0912 23:05:46.770219   61904 logs.go:276] 0 containers: []
	W0912 23:05:46.770229   61904 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:46.770236   61904 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0912 23:05:46.770296   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0912 23:05:46.819004   61904 cri.go:89] found id: "0e48efc9ba5a488b4984057a9785fbda6fcefcea047948ac3c83f09afd28efdb"
	I0912 23:05:46.819031   61904 cri.go:89] found id: "fdb0e5ac691d286cca5fe46e3435ece952c4b9cdc7df3481fec68adf1403689f"
	I0912 23:05:46.819037   61904 cri.go:89] found id: ""
	I0912 23:05:46.819045   61904 logs.go:276] 2 containers: [0e48efc9ba5a488b4984057a9785fbda6fcefcea047948ac3c83f09afd28efdb fdb0e5ac691d286cca5fe46e3435ece952c4b9cdc7df3481fec68adf1403689f]
	I0912 23:05:46.819105   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:46.824442   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:46.829336   61904 logs.go:123] Gathering logs for coredns [7841230606dafeab68b501c35a871c84f440d0d9f4cde95a302770671152f168] ...
	I0912 23:05:46.829367   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7841230606dafeab68b501c35a871c84f440d0d9f4cde95a302770671152f168"
	I0912 23:05:46.876170   61904 logs.go:123] Gathering logs for kube-controller-manager [54dd60703518d12cf3cb62102bf8bec31d1e6b0f218f4c26391b234d58111e31] ...
	I0912 23:05:46.876205   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54dd60703518d12cf3cb62102bf8bec31d1e6b0f218f4c26391b234d58111e31"
	I0912 23:05:46.944290   61904 logs.go:123] Gathering logs for storage-provisioner [0e48efc9ba5a488b4984057a9785fbda6fcefcea047948ac3c83f09afd28efdb] ...
	I0912 23:05:46.944336   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e48efc9ba5a488b4984057a9785fbda6fcefcea047948ac3c83f09afd28efdb"
	I0912 23:05:46.991117   61904 logs.go:123] Gathering logs for container status ...
	I0912 23:05:46.991154   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:47.041776   61904 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:47.041805   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:47.125682   61904 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:47.125720   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:47.141463   61904 logs.go:123] Gathering logs for etcd [e099ac110cb9ec18f36a0fbbdb769c4bb0c2642bf081442d3054c280c663730f] ...
	I0912 23:05:47.141505   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e099ac110cb9ec18f36a0fbbdb769c4bb0c2642bf081442d3054c280c663730f"
	I0912 23:05:47.193432   61904 logs.go:123] Gathering logs for kube-scheduler [dc8c605cca940c298fda4fb0d3a0e55234ab799e5f4d684817c1c33aa6752880] ...
	I0912 23:05:47.193477   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dc8c605cca940c298fda4fb0d3a0e55234ab799e5f4d684817c1c33aa6752880"
	I0912 23:05:47.238975   61904 logs.go:123] Gathering logs for kube-proxy [0b058233860f229334437280ee5ccf5d203eaf352bae41a6af7d4602b55a3d64] ...
	I0912 23:05:47.239000   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b058233860f229334437280ee5ccf5d203eaf352bae41a6af7d4602b55a3d64"
	I0912 23:05:47.282299   61904 logs.go:123] Gathering logs for storage-provisioner [fdb0e5ac691d286cca5fe46e3435ece952c4b9cdc7df3481fec68adf1403689f] ...
	I0912 23:05:47.282340   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fdb0e5ac691d286cca5fe46e3435ece952c4b9cdc7df3481fec68adf1403689f"
	I0912 23:05:47.322575   61904 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:47.322605   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:47.783079   61904 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:47.783116   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 23:05:47.909961   61904 logs.go:123] Gathering logs for kube-apiserver [115e1e7911747aa5930ece5298af89f0209bf07e0336cd598b8901e2a2af2e09] ...
	I0912 23:05:47.909994   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 115e1e7911747aa5930ece5298af89f0209bf07e0336cd598b8901e2a2af2e09"
	I0912 23:05:50.466816   61904 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:50.483164   61904 api_server.go:72] duration metric: took 4m15.815867821s to wait for apiserver process to appear ...
	I0912 23:05:50.483189   61904 api_server.go:88] waiting for apiserver healthz status ...
	I0912 23:05:50.483219   61904 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:50.483265   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:50.521905   61904 cri.go:89] found id: "115e1e7911747aa5930ece5298af89f0209bf07e0336cd598b8901e2a2af2e09"
	I0912 23:05:50.521932   61904 cri.go:89] found id: ""
	I0912 23:05:50.521942   61904 logs.go:276] 1 containers: [115e1e7911747aa5930ece5298af89f0209bf07e0336cd598b8901e2a2af2e09]
	I0912 23:05:50.522001   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:50.526289   61904 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:50.526355   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:50.565340   61904 cri.go:89] found id: "e099ac110cb9ec18f36a0fbbdb769c4bb0c2642bf081442d3054c280c663730f"
	I0912 23:05:50.565367   61904 cri.go:89] found id: ""
	I0912 23:05:50.565376   61904 logs.go:276] 1 containers: [e099ac110cb9ec18f36a0fbbdb769c4bb0c2642bf081442d3054c280c663730f]
	I0912 23:05:50.565434   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:50.569231   61904 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:50.569310   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:50.607696   61904 cri.go:89] found id: "7841230606dafeab68b501c35a871c84f440d0d9f4cde95a302770671152f168"
	I0912 23:05:50.607721   61904 cri.go:89] found id: ""
	I0912 23:05:50.607729   61904 logs.go:276] 1 containers: [7841230606dafeab68b501c35a871c84f440d0d9f4cde95a302770671152f168]
	I0912 23:05:50.607771   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:50.611696   61904 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:50.611753   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:50.647554   61904 cri.go:89] found id: "dc8c605cca940c298fda4fb0d3a0e55234ab799e5f4d684817c1c33aa6752880"
	I0912 23:05:50.647580   61904 cri.go:89] found id: ""
	I0912 23:05:50.647590   61904 logs.go:276] 1 containers: [dc8c605cca940c298fda4fb0d3a0e55234ab799e5f4d684817c1c33aa6752880]
	I0912 23:05:50.647649   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:50.652065   61904 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:50.652128   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:50.691276   61904 cri.go:89] found id: "0b058233860f229334437280ee5ccf5d203eaf352bae41a6af7d4602b55a3d64"
	I0912 23:05:50.691300   61904 cri.go:89] found id: ""
	I0912 23:05:50.691307   61904 logs.go:276] 1 containers: [0b058233860f229334437280ee5ccf5d203eaf352bae41a6af7d4602b55a3d64]
	I0912 23:05:50.691348   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:50.696475   61904 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:50.696537   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:50.732677   61904 cri.go:89] found id: "54dd60703518d12cf3cb62102bf8bec31d1e6b0f218f4c26391b234d58111e31"
	I0912 23:05:50.732704   61904 cri.go:89] found id: ""
	I0912 23:05:50.732714   61904 logs.go:276] 1 containers: [54dd60703518d12cf3cb62102bf8bec31d1e6b0f218f4c26391b234d58111e31]
	I0912 23:05:50.732771   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:50.737450   61904 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:50.737503   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:50.770732   61904 cri.go:89] found id: ""
	I0912 23:05:50.770762   61904 logs.go:276] 0 containers: []
	W0912 23:05:50.770773   61904 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:50.770781   61904 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0912 23:05:50.770830   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0912 23:05:49.376457   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:51.378141   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:49.732832   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:51.734674   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:49.841253   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:49.854221   62386 kubeadm.go:597] duration metric: took 4m1.913192999s to restartPrimaryControlPlane
	W0912 23:05:49.854297   62386 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0912 23:05:49.854335   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0912 23:05:51.221029   62386 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.366663525s)
	I0912 23:05:51.221129   62386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 23:05:51.238493   62386 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0912 23:05:51.250943   62386 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0912 23:05:51.264325   62386 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0912 23:05:51.264348   62386 kubeadm.go:157] found existing configuration files:
	
	I0912 23:05:51.264393   62386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0912 23:05:51.273514   62386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0912 23:05:51.273570   62386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0912 23:05:51.282967   62386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0912 23:05:51.291978   62386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0912 23:05:51.292037   62386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0912 23:05:51.301520   62386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0912 23:05:51.310439   62386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0912 23:05:51.310530   62386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0912 23:05:51.319803   62386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0912 23:05:51.328881   62386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0912 23:05:51.328956   62386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0912 23:05:51.337946   62386 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0912 23:05:51.565945   62386 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
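
The sequence above is the stale-config cleanup that precedes `kubeadm init`: after `kubeadm reset`, each file under /etc/kubernetes is grepped for the expected control-plane endpoint and removed when the endpoint is absent (here every grep exits with status 2 because the reset already deleted the files). A simplified sketch of that check-then-remove loop follows; it assumes direct file access rather than minikube's SSH runner, and is illustrative only, not minikube's actual implementation.

// cleanup_stale_configs.go: keep /etc/kubernetes/*.conf only if it references
// the expected control-plane endpoint, otherwise remove it before re-running
// `kubeadm init` (the endpoint for this cluster is 8443; the 61354 run below uses 8444).
package main

import (
	"bytes"
	"fmt"
	"os"
)

func cleanupStaleConfigs(endpoint string, files []string) {
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !bytes.Contains(data, []byte(endpoint)) {
			// Missing file or wrong endpoint: treat the config as stale.
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
			os.Remove(f) // a missing file is already the desired state, so the error is ignored
		}
	}
}

func main() {
	cleanupStaleConfigs("https://control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}
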
	I0912 23:05:50.804311   61904 cri.go:89] found id: "0e48efc9ba5a488b4984057a9785fbda6fcefcea047948ac3c83f09afd28efdb"
	I0912 23:05:50.804337   61904 cri.go:89] found id: "fdb0e5ac691d286cca5fe46e3435ece952c4b9cdc7df3481fec68adf1403689f"
	I0912 23:05:50.804342   61904 cri.go:89] found id: ""
	I0912 23:05:50.804349   61904 logs.go:276] 2 containers: [0e48efc9ba5a488b4984057a9785fbda6fcefcea047948ac3c83f09afd28efdb fdb0e5ac691d286cca5fe46e3435ece952c4b9cdc7df3481fec68adf1403689f]
	I0912 23:05:50.804396   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:50.808236   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:50.812298   61904 logs.go:123] Gathering logs for storage-provisioner [fdb0e5ac691d286cca5fe46e3435ece952c4b9cdc7df3481fec68adf1403689f] ...
	I0912 23:05:50.812316   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fdb0e5ac691d286cca5fe46e3435ece952c4b9cdc7df3481fec68adf1403689f"
	I0912 23:05:50.846429   61904 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:50.846457   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:50.917118   61904 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:50.917152   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:50.931954   61904 logs.go:123] Gathering logs for kube-apiserver [115e1e7911747aa5930ece5298af89f0209bf07e0336cd598b8901e2a2af2e09] ...
	I0912 23:05:50.931992   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 115e1e7911747aa5930ece5298af89f0209bf07e0336cd598b8901e2a2af2e09"
	I0912 23:05:50.979688   61904 logs.go:123] Gathering logs for etcd [e099ac110cb9ec18f36a0fbbdb769c4bb0c2642bf081442d3054c280c663730f] ...
	I0912 23:05:50.979727   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e099ac110cb9ec18f36a0fbbdb769c4bb0c2642bf081442d3054c280c663730f"
	I0912 23:05:51.026392   61904 logs.go:123] Gathering logs for coredns [7841230606dafeab68b501c35a871c84f440d0d9f4cde95a302770671152f168] ...
	I0912 23:05:51.026421   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7841230606dafeab68b501c35a871c84f440d0d9f4cde95a302770671152f168"
	I0912 23:05:51.063302   61904 logs.go:123] Gathering logs for storage-provisioner [0e48efc9ba5a488b4984057a9785fbda6fcefcea047948ac3c83f09afd28efdb] ...
	I0912 23:05:51.063330   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e48efc9ba5a488b4984057a9785fbda6fcefcea047948ac3c83f09afd28efdb"
	I0912 23:05:51.096593   61904 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:51.096626   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 23:05:51.198824   61904 logs.go:123] Gathering logs for kube-scheduler [dc8c605cca940c298fda4fb0d3a0e55234ab799e5f4d684817c1c33aa6752880] ...
	I0912 23:05:51.198856   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dc8c605cca940c298fda4fb0d3a0e55234ab799e5f4d684817c1c33aa6752880"
	I0912 23:05:51.244247   61904 logs.go:123] Gathering logs for kube-proxy [0b058233860f229334437280ee5ccf5d203eaf352bae41a6af7d4602b55a3d64] ...
	I0912 23:05:51.244271   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b058233860f229334437280ee5ccf5d203eaf352bae41a6af7d4602b55a3d64"
	I0912 23:05:51.284694   61904 logs.go:123] Gathering logs for kube-controller-manager [54dd60703518d12cf3cb62102bf8bec31d1e6b0f218f4c26391b234d58111e31] ...
	I0912 23:05:51.284717   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54dd60703518d12cf3cb62102bf8bec31d1e6b0f218f4c26391b234d58111e31"
	I0912 23:05:51.340541   61904 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:51.340569   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:51.754823   61904 logs.go:123] Gathering logs for container status ...
	I0912 23:05:51.754864   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:54.294987   61904 api_server.go:253] Checking apiserver healthz at https://192.168.72.96:8443/healthz ...
	I0912 23:05:54.300314   61904 api_server.go:279] https://192.168.72.96:8443/healthz returned 200:
	ok
	I0912 23:05:54.301385   61904 api_server.go:141] control plane version: v1.31.1
	I0912 23:05:54.301413   61904 api_server.go:131] duration metric: took 3.818216539s to wait for apiserver health ...
	I0912 23:05:54.301421   61904 system_pods.go:43] waiting for kube-system pods to appear ...
	I0912 23:05:54.301441   61904 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:54.301491   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:54.342980   61904 cri.go:89] found id: "115e1e7911747aa5930ece5298af89f0209bf07e0336cd598b8901e2a2af2e09"
	I0912 23:05:54.343001   61904 cri.go:89] found id: ""
	I0912 23:05:54.343010   61904 logs.go:276] 1 containers: [115e1e7911747aa5930ece5298af89f0209bf07e0336cd598b8901e2a2af2e09]
	I0912 23:05:54.343063   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:54.347269   61904 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:54.347352   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:54.386656   61904 cri.go:89] found id: "e099ac110cb9ec18f36a0fbbdb769c4bb0c2642bf081442d3054c280c663730f"
	I0912 23:05:54.386674   61904 cri.go:89] found id: ""
	I0912 23:05:54.386681   61904 logs.go:276] 1 containers: [e099ac110cb9ec18f36a0fbbdb769c4bb0c2642bf081442d3054c280c663730f]
	I0912 23:05:54.386755   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:54.390707   61904 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:54.390769   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:54.433746   61904 cri.go:89] found id: "7841230606dafeab68b501c35a871c84f440d0d9f4cde95a302770671152f168"
	I0912 23:05:54.433773   61904 cri.go:89] found id: ""
	I0912 23:05:54.433782   61904 logs.go:276] 1 containers: [7841230606dafeab68b501c35a871c84f440d0d9f4cde95a302770671152f168]
	I0912 23:05:54.433844   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:54.438175   61904 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:54.438231   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:54.475067   61904 cri.go:89] found id: "dc8c605cca940c298fda4fb0d3a0e55234ab799e5f4d684817c1c33aa6752880"
	I0912 23:05:54.475095   61904 cri.go:89] found id: ""
	I0912 23:05:54.475105   61904 logs.go:276] 1 containers: [dc8c605cca940c298fda4fb0d3a0e55234ab799e5f4d684817c1c33aa6752880]
	I0912 23:05:54.475178   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:54.479308   61904 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:54.479367   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:54.524489   61904 cri.go:89] found id: "0b058233860f229334437280ee5ccf5d203eaf352bae41a6af7d4602b55a3d64"
	I0912 23:05:54.524513   61904 cri.go:89] found id: ""
	I0912 23:05:54.524521   61904 logs.go:276] 1 containers: [0b058233860f229334437280ee5ccf5d203eaf352bae41a6af7d4602b55a3d64]
	I0912 23:05:54.524583   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:54.528854   61904 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:54.528925   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:54.569776   61904 cri.go:89] found id: "54dd60703518d12cf3cb62102bf8bec31d1e6b0f218f4c26391b234d58111e31"
	I0912 23:05:54.569801   61904 cri.go:89] found id: ""
	I0912 23:05:54.569811   61904 logs.go:276] 1 containers: [54dd60703518d12cf3cb62102bf8bec31d1e6b0f218f4c26391b234d58111e31]
	I0912 23:05:54.569865   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:54.574000   61904 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:54.574070   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:54.613184   61904 cri.go:89] found id: ""
	I0912 23:05:54.613212   61904 logs.go:276] 0 containers: []
	W0912 23:05:54.613222   61904 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:54.613229   61904 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0912 23:05:54.613292   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0912 23:05:54.648971   61904 cri.go:89] found id: "0e48efc9ba5a488b4984057a9785fbda6fcefcea047948ac3c83f09afd28efdb"
	I0912 23:05:54.648992   61904 cri.go:89] found id: "fdb0e5ac691d286cca5fe46e3435ece952c4b9cdc7df3481fec68adf1403689f"
	I0912 23:05:54.648997   61904 cri.go:89] found id: ""
	I0912 23:05:54.649006   61904 logs.go:276] 2 containers: [0e48efc9ba5a488b4984057a9785fbda6fcefcea047948ac3c83f09afd28efdb fdb0e5ac691d286cca5fe46e3435ece952c4b9cdc7df3481fec68adf1403689f]
	I0912 23:05:54.649062   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:54.653671   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:54.657535   61904 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:54.657557   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 23:05:54.781055   61904 logs.go:123] Gathering logs for kube-controller-manager [54dd60703518d12cf3cb62102bf8bec31d1e6b0f218f4c26391b234d58111e31] ...
	I0912 23:05:54.781094   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54dd60703518d12cf3cb62102bf8bec31d1e6b0f218f4c26391b234d58111e31"
	I0912 23:05:54.832441   61904 logs.go:123] Gathering logs for container status ...
	I0912 23:05:54.832477   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:54.887662   61904 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:54.887695   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:54.958381   61904 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:54.958417   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:54.973583   61904 logs.go:123] Gathering logs for coredns [7841230606dafeab68b501c35a871c84f440d0d9f4cde95a302770671152f168] ...
	I0912 23:05:54.973609   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7841230606dafeab68b501c35a871c84f440d0d9f4cde95a302770671152f168"
	I0912 23:05:55.022192   61904 logs.go:123] Gathering logs for kube-scheduler [dc8c605cca940c298fda4fb0d3a0e55234ab799e5f4d684817c1c33aa6752880] ...
	I0912 23:05:55.022217   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dc8c605cca940c298fda4fb0d3a0e55234ab799e5f4d684817c1c33aa6752880"
	I0912 23:05:55.059878   61904 logs.go:123] Gathering logs for kube-proxy [0b058233860f229334437280ee5ccf5d203eaf352bae41a6af7d4602b55a3d64] ...
	I0912 23:05:55.059910   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b058233860f229334437280ee5ccf5d203eaf352bae41a6af7d4602b55a3d64"
	I0912 23:05:55.104371   61904 logs.go:123] Gathering logs for storage-provisioner [0e48efc9ba5a488b4984057a9785fbda6fcefcea047948ac3c83f09afd28efdb] ...
	I0912 23:05:55.104399   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e48efc9ba5a488b4984057a9785fbda6fcefcea047948ac3c83f09afd28efdb"
	I0912 23:05:55.139625   61904 logs.go:123] Gathering logs for storage-provisioner [fdb0e5ac691d286cca5fe46e3435ece952c4b9cdc7df3481fec68adf1403689f] ...
	I0912 23:05:55.139656   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fdb0e5ac691d286cca5fe46e3435ece952c4b9cdc7df3481fec68adf1403689f"
	I0912 23:05:55.172414   61904 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:55.172442   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:55.528482   61904 logs.go:123] Gathering logs for kube-apiserver [115e1e7911747aa5930ece5298af89f0209bf07e0336cd598b8901e2a2af2e09] ...
	I0912 23:05:55.528522   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 115e1e7911747aa5930ece5298af89f0209bf07e0336cd598b8901e2a2af2e09"
	I0912 23:05:55.572399   61904 logs.go:123] Gathering logs for etcd [e099ac110cb9ec18f36a0fbbdb769c4bb0c2642bf081442d3054c280c663730f] ...
	I0912 23:05:55.572433   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e099ac110cb9ec18f36a0fbbdb769c4bb0c2642bf081442d3054c280c663730f"
	I0912 23:05:53.876844   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:55.878108   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:54.235375   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:56.733525   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:58.125405   61904 system_pods.go:59] 8 kube-system pods found
	I0912 23:05:58.125436   61904 system_pods.go:61] "coredns-7c65d6cfc9-m8t6h" [93c63198-ebd2-4e88-9be8-912425b1eb84] Running
	I0912 23:05:58.125441   61904 system_pods.go:61] "etcd-embed-certs-378112" [cc716756-abda-447a-ad36-bfc89c129bdf] Running
	I0912 23:05:58.125445   61904 system_pods.go:61] "kube-apiserver-embed-certs-378112" [039a7348-41bf-481f-9218-3ea0c2ff1373] Running
	I0912 23:05:58.125449   61904 system_pods.go:61] "kube-controller-manager-embed-certs-378112" [9bcb8af0-6e4b-405a-94a1-5be70d737cfa] Running
	I0912 23:05:58.125452   61904 system_pods.go:61] "kube-proxy-fvbbq" [b172754e-bb5a-40ba-a9be-a7632081defc] Running
	I0912 23:05:58.125455   61904 system_pods.go:61] "kube-scheduler-embed-certs-378112" [f7cb022f-6c15-4c70-916f-39313199effe] Running
	I0912 23:05:58.125461   61904 system_pods.go:61] "metrics-server-6867b74b74-kvpqz" [04e47cfd-bada-4cbd-8792-db4edebfb282] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0912 23:05:58.125465   61904 system_pods.go:61] "storage-provisioner" [a1840d2a-8e08-4fa2-9ed5-ac96fb0baf4d] Running
	I0912 23:05:58.125472   61904 system_pods.go:74] duration metric: took 3.824046737s to wait for pod list to return data ...
	I0912 23:05:58.125478   61904 default_sa.go:34] waiting for default service account to be created ...
	I0912 23:05:58.128039   61904 default_sa.go:45] found service account: "default"
	I0912 23:05:58.128060   61904 default_sa.go:55] duration metric: took 2.576708ms for default service account to be created ...
	I0912 23:05:58.128067   61904 system_pods.go:116] waiting for k8s-apps to be running ...
	I0912 23:05:58.132607   61904 system_pods.go:86] 8 kube-system pods found
	I0912 23:05:58.132629   61904 system_pods.go:89] "coredns-7c65d6cfc9-m8t6h" [93c63198-ebd2-4e88-9be8-912425b1eb84] Running
	I0912 23:05:58.132634   61904 system_pods.go:89] "etcd-embed-certs-378112" [cc716756-abda-447a-ad36-bfc89c129bdf] Running
	I0912 23:05:58.132638   61904 system_pods.go:89] "kube-apiserver-embed-certs-378112" [039a7348-41bf-481f-9218-3ea0c2ff1373] Running
	I0912 23:05:58.132642   61904 system_pods.go:89] "kube-controller-manager-embed-certs-378112" [9bcb8af0-6e4b-405a-94a1-5be70d737cfa] Running
	I0912 23:05:58.132647   61904 system_pods.go:89] "kube-proxy-fvbbq" [b172754e-bb5a-40ba-a9be-a7632081defc] Running
	I0912 23:05:58.132652   61904 system_pods.go:89] "kube-scheduler-embed-certs-378112" [f7cb022f-6c15-4c70-916f-39313199effe] Running
	I0912 23:05:58.132661   61904 system_pods.go:89] "metrics-server-6867b74b74-kvpqz" [04e47cfd-bada-4cbd-8792-db4edebfb282] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0912 23:05:58.132671   61904 system_pods.go:89] "storage-provisioner" [a1840d2a-8e08-4fa2-9ed5-ac96fb0baf4d] Running
	I0912 23:05:58.132682   61904 system_pods.go:126] duration metric: took 4.609196ms to wait for k8s-apps to be running ...
	I0912 23:05:58.132694   61904 system_svc.go:44] waiting for kubelet service to be running ....
	I0912 23:05:58.132739   61904 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 23:05:58.149020   61904 system_svc.go:56] duration metric: took 16.317773ms WaitForService to wait for kubelet
	I0912 23:05:58.149048   61904 kubeadm.go:582] duration metric: took 4m23.481755577s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 23:05:58.149073   61904 node_conditions.go:102] verifying NodePressure condition ...
	I0912 23:05:58.152519   61904 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0912 23:05:58.152547   61904 node_conditions.go:123] node cpu capacity is 2
	I0912 23:05:58.152559   61904 node_conditions.go:105] duration metric: took 3.480407ms to run NodePressure ...
	I0912 23:05:58.152570   61904 start.go:241] waiting for startup goroutines ...
	I0912 23:05:58.152576   61904 start.go:246] waiting for cluster config update ...
	I0912 23:05:58.152587   61904 start.go:255] writing updated cluster config ...
	I0912 23:05:58.152833   61904 ssh_runner.go:195] Run: rm -f paused
	I0912 23:05:58.203069   61904 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0912 23:05:58.204904   61904 out.go:177] * Done! kubectl is now configured to use "embed-certs-378112" cluster and "default" namespace by default
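
Immediately before the "Done!" line, this run waits for the apiserver process, polls https://192.168.72.96:8443/healthz until it answers 200 with "ok", and then verifies the kube-system pods, default service account, and node conditions. Below is a minimal client-go sketch of that healthz wait; the kubeconfig path (inside the guest the harness uses /var/lib/minikube/kubeconfig) and the 2-second poll interval are assumptions for illustration.

// healthz_wait.go: poll the apiserver's /healthz endpoint until it responds,
// using the credentials from a kubeconfig, as the run above does.
package main

import (
	"context"
	"fmt"
	"time"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: the default host kubeconfig points at the cluster under test.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	for {
		// GET /healthz returns the literal body "ok" when the apiserver is healthy.
		body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
		if err == nil {
			fmt.Printf("apiserver healthz: %s\n", body)
			return
		}
		select {
		case <-ctx.Done():
			panic(fmt.Sprintf("apiserver never became healthy: %v", err))
		case <-time.After(2 * time.Second):
		}
	}
}
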
	I0912 23:05:58.376646   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:00.377105   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:58.733992   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:01.233920   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:02.877229   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:04.877926   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:03.733400   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:05.733949   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:07.377308   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:09.877459   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:08.234361   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:10.732480   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:12.376661   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:14.877753   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:16.877980   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:12.733231   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:14.734774   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:17.233456   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:19.376959   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:21.878279   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:19.234570   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:21.733406   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:24.376731   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:26.377122   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:23.733543   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:25.734296   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:28.877696   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:31.376778   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:28.232623   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:30.233670   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:32.234123   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:33.377208   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:35.877039   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:34.234158   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:36.234309   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:37.877566   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:40.376636   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:38.733567   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:40.734256   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:42.377148   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:44.377925   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:46.877563   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:42.734926   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:45.233731   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:45.727482   61354 pod_ready.go:82] duration metric: took 4m0.000232225s for pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace to be "Ready" ...
	E0912 23:06:45.727510   61354 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace to be "Ready" (will not retry!)
	I0912 23:06:45.727526   61354 pod_ready.go:39] duration metric: took 4m13.050011701s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 23:06:45.727553   61354 kubeadm.go:597] duration metric: took 4m21.402206535s to restartPrimaryControlPlane
	W0912 23:06:45.727638   61354 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0912 23:06:45.727686   61354 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
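
The 61354 run above spends its entire 4m0s pod_ready budget polling metrics-server-6867b74b74-q5vlk for the Ready condition, gives up ("will not retry"), and falls back to resetting the control plane. A hedged client-go sketch of that readiness wait is below; the pod name and namespace are taken from the log, while the default host kubeconfig and the 2-second poll interval are assumptions.

// pod_ready_wait.go: poll a pod until its Ready condition is True or a
// four-minute deadline expires, mirroring the pod_ready loop in this log.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func isReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile) // assumption: ~/.kube/config targets this cluster
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute) // same 4m budget as the log
	defer cancel()
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "metrics-server-6867b74b74-q5vlk", metav1.GetOptions{})
		if err == nil && isReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for pod to be Ready")
			return
		case <-time.After(2 * time.Second):
		}
	}
}
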
	I0912 23:06:49.376346   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:51.376720   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:53.877426   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:56.377076   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:58.876146   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:07:00.876887   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:07:02.877032   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:07:04.877344   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:07:07.376495   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:07:09.377212   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:07:11.878788   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:07:11.920816   61354 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.193093675s)
	I0912 23:07:11.920900   61354 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 23:07:11.939101   61354 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0912 23:07:11.950330   61354 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0912 23:07:11.960727   61354 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0912 23:07:11.960753   61354 kubeadm.go:157] found existing configuration files:
	
	I0912 23:07:11.960802   61354 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0912 23:07:11.970932   61354 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0912 23:07:11.970988   61354 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0912 23:07:11.981111   61354 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0912 23:07:11.990384   61354 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0912 23:07:11.990455   61354 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0912 23:07:12.000218   61354 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0912 23:07:12.009191   61354 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0912 23:07:12.009266   61354 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0912 23:07:12.019270   61354 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0912 23:07:12.028102   61354 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0912 23:07:12.028165   61354 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0912 23:07:12.037512   61354 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0912 23:07:12.083528   61354 kubeadm.go:310] W0912 23:07:12.055244    2491 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0912 23:07:12.084358   61354 kubeadm.go:310] W0912 23:07:12.056267    2491 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0912 23:07:12.190683   61354 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0912 23:07:12.377757   62943 pod_ready.go:82] duration metric: took 4m0.007392806s for pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace to be "Ready" ...
	E0912 23:07:12.377785   62943 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0912 23:07:12.377794   62943 pod_ready.go:39] duration metric: took 4m2.807476708s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 23:07:12.377812   62943 api_server.go:52] waiting for apiserver process to appear ...
	I0912 23:07:12.377843   62943 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:07:12.377898   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:07:12.431934   62943 cri.go:89] found id: "3c73944a510413add414948e1c8fd8d71ef732d91b7ddc96646b7ba376f81416"
	I0912 23:07:12.431964   62943 cri.go:89] found id: "00f124dff0f77cf23377830c9dc5e2a8676d1a40ebf70730705d537ba7a4b8d3"
	I0912 23:07:12.431969   62943 cri.go:89] found id: ""
	I0912 23:07:12.431977   62943 logs.go:276] 2 containers: [3c73944a510413add414948e1c8fd8d71ef732d91b7ddc96646b7ba376f81416 00f124dff0f77cf23377830c9dc5e2a8676d1a40ebf70730705d537ba7a4b8d3]
	I0912 23:07:12.432043   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:12.436742   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:12.440569   62943 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:07:12.440626   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:07:12.476994   62943 cri.go:89] found id: "35282e97473f2f870f5366d221b9a40e0a456b6fd44fffe321c9fe160448cc29"
	I0912 23:07:12.477016   62943 cri.go:89] found id: ""
	I0912 23:07:12.477024   62943 logs.go:276] 1 containers: [35282e97473f2f870f5366d221b9a40e0a456b6fd44fffe321c9fe160448cc29]
	I0912 23:07:12.477076   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:12.481585   62943 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:07:12.481661   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:07:12.524772   62943 cri.go:89] found id: "e59d289c9afefef6dd746277d09550e5c7d708d7c1ba9f74b2dd91300f807189"
	I0912 23:07:12.524797   62943 cri.go:89] found id: ""
	I0912 23:07:12.524808   62943 logs.go:276] 1 containers: [e59d289c9afefef6dd746277d09550e5c7d708d7c1ba9f74b2dd91300f807189]
	I0912 23:07:12.524860   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:12.529988   62943 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:07:12.530052   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:07:12.573298   62943 cri.go:89] found id: "3187fdef2bd318bb335be822a4e75cfd0ac02a49607d8e398ab18102a83437ec"
	I0912 23:07:12.573329   62943 cri.go:89] found id: ""
	I0912 23:07:12.573340   62943 logs.go:276] 1 containers: [3187fdef2bd318bb335be822a4e75cfd0ac02a49607d8e398ab18102a83437ec]
	I0912 23:07:12.573400   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:12.579767   62943 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:07:12.579844   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:07:12.624696   62943 cri.go:89] found id: "4c4807559910184befc0b6a9ccb40b1cb9d4997f6b1405b76ba67049ce730f37"
	I0912 23:07:12.624723   62943 cri.go:89] found id: ""
	I0912 23:07:12.624733   62943 logs.go:276] 1 containers: [4c4807559910184befc0b6a9ccb40b1cb9d4997f6b1405b76ba67049ce730f37]
	I0912 23:07:12.624790   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:12.632367   62943 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:07:12.632430   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:07:12.667385   62943 cri.go:89] found id: "eb473fa0b2d91e95a8716ca3d1bad44b431a3dfbfae11984657c16c7392b58f0"
	I0912 23:07:12.667411   62943 cri.go:89] found id: "635fd2c2a6dd2558d88447c857c1c59b9cf6883a458ab9f62fbcd48fc94048f7"
	I0912 23:07:12.667415   62943 cri.go:89] found id: ""
	I0912 23:07:12.667422   62943 logs.go:276] 2 containers: [eb473fa0b2d91e95a8716ca3d1bad44b431a3dfbfae11984657c16c7392b58f0 635fd2c2a6dd2558d88447c857c1c59b9cf6883a458ab9f62fbcd48fc94048f7]
	I0912 23:07:12.667474   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:12.671688   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:12.675901   62943 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:07:12.675964   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:07:12.712909   62943 cri.go:89] found id: ""
	I0912 23:07:12.712944   62943 logs.go:276] 0 containers: []
	W0912 23:07:12.712955   62943 logs.go:278] No container was found matching "kindnet"
	I0912 23:07:12.712962   62943 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0912 23:07:12.713023   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0912 23:07:12.755865   62943 cri.go:89] found id: "3d117ed77ba5f8ccc22fe496a5f719999fd9e893623ba84686634f7b92c09713"
	I0912 23:07:12.755888   62943 cri.go:89] found id: "d40483dfc659456939d7965c1ecf1113c1c38e26fe985bf19f26a0123206ea3a"
	I0912 23:07:12.755894   62943 cri.go:89] found id: ""
	I0912 23:07:12.755903   62943 logs.go:276] 2 containers: [3d117ed77ba5f8ccc22fe496a5f719999fd9e893623ba84686634f7b92c09713 d40483dfc659456939d7965c1ecf1113c1c38e26fe985bf19f26a0123206ea3a]
	I0912 23:07:12.755958   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:12.760095   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:12.763682   62943 logs.go:123] Gathering logs for kube-apiserver [00f124dff0f77cf23377830c9dc5e2a8676d1a40ebf70730705d537ba7a4b8d3] ...
	I0912 23:07:12.763706   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 00f124dff0f77cf23377830c9dc5e2a8676d1a40ebf70730705d537ba7a4b8d3"
	I0912 23:07:12.811915   62943 logs.go:123] Gathering logs for kube-proxy [4c4807559910184befc0b6a9ccb40b1cb9d4997f6b1405b76ba67049ce730f37] ...
	I0912 23:07:12.811949   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c4807559910184befc0b6a9ccb40b1cb9d4997f6b1405b76ba67049ce730f37"
	I0912 23:07:12.846546   62943 logs.go:123] Gathering logs for kube-controller-manager [eb473fa0b2d91e95a8716ca3d1bad44b431a3dfbfae11984657c16c7392b58f0] ...
	I0912 23:07:12.846582   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb473fa0b2d91e95a8716ca3d1bad44b431a3dfbfae11984657c16c7392b58f0"
	I0912 23:07:12.904475   62943 logs.go:123] Gathering logs for kubelet ...
	I0912 23:07:12.904518   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:07:12.984863   62943 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:07:12.984898   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 23:07:13.116848   62943 logs.go:123] Gathering logs for etcd [35282e97473f2f870f5366d221b9a40e0a456b6fd44fffe321c9fe160448cc29] ...
	I0912 23:07:13.116879   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35282e97473f2f870f5366d221b9a40e0a456b6fd44fffe321c9fe160448cc29"
	I0912 23:07:13.165949   62943 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:07:13.165978   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:07:13.704372   62943 logs.go:123] Gathering logs for container status ...
	I0912 23:07:13.704424   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:07:13.757082   62943 logs.go:123] Gathering logs for kube-apiserver [3c73944a510413add414948e1c8fd8d71ef732d91b7ddc96646b7ba376f81416] ...
	I0912 23:07:13.757123   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c73944a510413add414948e1c8fd8d71ef732d91b7ddc96646b7ba376f81416"
	I0912 23:07:13.802951   62943 logs.go:123] Gathering logs for storage-provisioner [3d117ed77ba5f8ccc22fe496a5f719999fd9e893623ba84686634f7b92c09713] ...
	I0912 23:07:13.802988   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d117ed77ba5f8ccc22fe496a5f719999fd9e893623ba84686634f7b92c09713"
	I0912 23:07:13.838952   62943 logs.go:123] Gathering logs for dmesg ...
	I0912 23:07:13.838989   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:07:13.852983   62943 logs.go:123] Gathering logs for coredns [e59d289c9afefef6dd746277d09550e5c7d708d7c1ba9f74b2dd91300f807189] ...
	I0912 23:07:13.853015   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e59d289c9afefef6dd746277d09550e5c7d708d7c1ba9f74b2dd91300f807189"
	I0912 23:07:13.898651   62943 logs.go:123] Gathering logs for kube-scheduler [3187fdef2bd318bb335be822a4e75cfd0ac02a49607d8e398ab18102a83437ec] ...
	I0912 23:07:13.898679   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3187fdef2bd318bb335be822a4e75cfd0ac02a49607d8e398ab18102a83437ec"
	I0912 23:07:13.943800   62943 logs.go:123] Gathering logs for kube-controller-manager [635fd2c2a6dd2558d88447c857c1c59b9cf6883a458ab9f62fbcd48fc94048f7] ...
	I0912 23:07:13.943838   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 635fd2c2a6dd2558d88447c857c1c59b9cf6883a458ab9f62fbcd48fc94048f7"
	I0912 23:07:13.984960   62943 logs.go:123] Gathering logs for storage-provisioner [d40483dfc659456939d7965c1ecf1113c1c38e26fe985bf19f26a0123206ea3a] ...
	I0912 23:07:13.984996   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d40483dfc659456939d7965c1ecf1113c1c38e26fe985bf19f26a0123206ea3a"
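
	The log-gathering pass above repeatedly shells out to crictl: first listing containers by name with `crictl ps -a --quiet --name=...`, then tailing each container's log. A minimal sketch of reproducing the same collection by hand inside the profile's VM (assuming the no-preload-380092 profile from this run; <container-id> is a placeholder for an ID returned by the first command):

	    # Open a shell in the profile's VM.
	    minikube ssh -p no-preload-380092
	    # List all kube-apiserver containers (running or exited), IDs only,
	    # mirroring the "crictl ps -a --quiet --name=..." calls in the log.
	    sudo crictl ps -a --quiet --name=kube-apiserver
	    # Tail the last 400 lines of one container's log.
	    sudo crictl logs --tail 400 <container-id>
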
	I0912 23:07:16.526061   62943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:07:16.547018   62943 api_server.go:72] duration metric: took 4m14.74025779s to wait for apiserver process to appear ...
	I0912 23:07:16.547046   62943 api_server.go:88] waiting for apiserver healthz status ...
	I0912 23:07:16.547085   62943 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:07:16.547134   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:07:16.589088   62943 cri.go:89] found id: "3c73944a510413add414948e1c8fd8d71ef732d91b7ddc96646b7ba376f81416"
	I0912 23:07:16.589124   62943 cri.go:89] found id: "00f124dff0f77cf23377830c9dc5e2a8676d1a40ebf70730705d537ba7a4b8d3"
	I0912 23:07:16.589130   62943 cri.go:89] found id: ""
	I0912 23:07:16.589138   62943 logs.go:276] 2 containers: [3c73944a510413add414948e1c8fd8d71ef732d91b7ddc96646b7ba376f81416 00f124dff0f77cf23377830c9dc5e2a8676d1a40ebf70730705d537ba7a4b8d3]
	I0912 23:07:16.589199   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:16.593386   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:16.597107   62943 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:07:16.597166   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:07:16.644456   62943 cri.go:89] found id: "35282e97473f2f870f5366d221b9a40e0a456b6fd44fffe321c9fe160448cc29"
	I0912 23:07:16.644482   62943 cri.go:89] found id: ""
	I0912 23:07:16.644491   62943 logs.go:276] 1 containers: [35282e97473f2f870f5366d221b9a40e0a456b6fd44fffe321c9fe160448cc29]
	I0912 23:07:16.644544   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:16.648617   62943 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:07:16.648693   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:07:16.688003   62943 cri.go:89] found id: "e59d289c9afefef6dd746277d09550e5c7d708d7c1ba9f74b2dd91300f807189"
	I0912 23:07:16.688027   62943 cri.go:89] found id: ""
	I0912 23:07:16.688037   62943 logs.go:276] 1 containers: [e59d289c9afefef6dd746277d09550e5c7d708d7c1ba9f74b2dd91300f807189]
	I0912 23:07:16.688093   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:16.692761   62943 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:07:16.692832   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:07:16.733490   62943 cri.go:89] found id: "3187fdef2bd318bb335be822a4e75cfd0ac02a49607d8e398ab18102a83437ec"
	I0912 23:07:16.733522   62943 cri.go:89] found id: ""
	I0912 23:07:16.733533   62943 logs.go:276] 1 containers: [3187fdef2bd318bb335be822a4e75cfd0ac02a49607d8e398ab18102a83437ec]
	I0912 23:07:16.733596   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:16.738566   62943 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:07:16.738641   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:07:16.785654   62943 cri.go:89] found id: "4c4807559910184befc0b6a9ccb40b1cb9d4997f6b1405b76ba67049ce730f37"
	I0912 23:07:16.785683   62943 cri.go:89] found id: ""
	I0912 23:07:16.785693   62943 logs.go:276] 1 containers: [4c4807559910184befc0b6a9ccb40b1cb9d4997f6b1405b76ba67049ce730f37]
	I0912 23:07:16.785753   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:16.791205   62943 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:07:16.791290   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:07:16.830707   62943 cri.go:89] found id: "eb473fa0b2d91e95a8716ca3d1bad44b431a3dfbfae11984657c16c7392b58f0"
	I0912 23:07:16.830739   62943 cri.go:89] found id: "635fd2c2a6dd2558d88447c857c1c59b9cf6883a458ab9f62fbcd48fc94048f7"
	I0912 23:07:16.830746   62943 cri.go:89] found id: ""
	I0912 23:07:16.830756   62943 logs.go:276] 2 containers: [eb473fa0b2d91e95a8716ca3d1bad44b431a3dfbfae11984657c16c7392b58f0 635fd2c2a6dd2558d88447c857c1c59b9cf6883a458ab9f62fbcd48fc94048f7]
	I0912 23:07:16.830819   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:16.835378   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:16.840600   62943 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:07:16.840670   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:07:20.225940   61354 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0912 23:07:20.226007   61354 kubeadm.go:310] [preflight] Running pre-flight checks
	I0912 23:07:20.226107   61354 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0912 23:07:20.226261   61354 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0912 23:07:20.226412   61354 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0912 23:07:20.226506   61354 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0912 23:07:20.228109   61354 out.go:235]   - Generating certificates and keys ...
	I0912 23:07:20.228211   61354 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0912 23:07:20.228297   61354 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0912 23:07:20.228412   61354 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0912 23:07:20.228493   61354 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0912 23:07:20.228621   61354 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0912 23:07:20.228699   61354 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0912 23:07:20.228788   61354 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0912 23:07:20.228875   61354 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0912 23:07:20.228987   61354 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0912 23:07:20.229123   61354 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0912 23:07:20.229177   61354 kubeadm.go:310] [certs] Using the existing "sa" key
	I0912 23:07:20.229273   61354 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0912 23:07:20.229365   61354 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0912 23:07:20.229454   61354 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0912 23:07:20.229533   61354 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0912 23:07:20.229644   61354 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0912 23:07:20.229723   61354 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0912 23:07:20.229833   61354 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0912 23:07:20.229922   61354 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0912 23:07:20.231172   61354 out.go:235]   - Booting up control plane ...
	I0912 23:07:20.231276   61354 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0912 23:07:20.231371   61354 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0912 23:07:20.231457   61354 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0912 23:07:20.231596   61354 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0912 23:07:20.231706   61354 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0912 23:07:20.231772   61354 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0912 23:07:20.231943   61354 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0912 23:07:20.232041   61354 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0912 23:07:20.232091   61354 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.452461ms
	I0912 23:07:20.232151   61354 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0912 23:07:20.232202   61354 kubeadm.go:310] [api-check] The API server is healthy after 5.00140085s
	I0912 23:07:20.232302   61354 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0912 23:07:20.232437   61354 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0912 23:07:20.232508   61354 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0912 23:07:20.232685   61354 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-702201 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0912 23:07:20.232764   61354 kubeadm.go:310] [bootstrap-token] Using token: uufjzd.0ysmpgh1j6e2l8hs
	I0912 23:07:20.234000   61354 out.go:235]   - Configuring RBAC rules ...
	I0912 23:07:20.234123   61354 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0912 23:07:20.234230   61354 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0912 23:07:20.234438   61354 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0912 23:07:20.234584   61354 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0912 23:07:20.234714   61354 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0912 23:07:20.234818   61354 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0912 23:07:20.234946   61354 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0912 23:07:20.235008   61354 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0912 23:07:20.235081   61354 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0912 23:07:20.235089   61354 kubeadm.go:310] 
	I0912 23:07:20.235152   61354 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0912 23:07:20.235163   61354 kubeadm.go:310] 
	I0912 23:07:20.235231   61354 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0912 23:07:20.235237   61354 kubeadm.go:310] 
	I0912 23:07:20.235258   61354 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0912 23:07:20.235346   61354 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0912 23:07:20.235424   61354 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0912 23:07:20.235433   61354 kubeadm.go:310] 
	I0912 23:07:20.235512   61354 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0912 23:07:20.235523   61354 kubeadm.go:310] 
	I0912 23:07:20.235587   61354 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0912 23:07:20.235596   61354 kubeadm.go:310] 
	I0912 23:07:20.235683   61354 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0912 23:07:20.235781   61354 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0912 23:07:20.235848   61354 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0912 23:07:20.235855   61354 kubeadm.go:310] 
	I0912 23:07:20.235924   61354 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0912 23:07:20.235988   61354 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0912 23:07:20.235994   61354 kubeadm.go:310] 
	I0912 23:07:20.236075   61354 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token uufjzd.0ysmpgh1j6e2l8hs \
	I0912 23:07:20.236168   61354 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e9285e6e7599a58febe9d174fa57ffa69a9b4bf818d01b703e61fc8c784ff29f \
	I0912 23:07:20.236188   61354 kubeadm.go:310] 	--control-plane 
	I0912 23:07:20.236195   61354 kubeadm.go:310] 
	I0912 23:07:20.236267   61354 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0912 23:07:20.236274   61354 kubeadm.go:310] 
	I0912 23:07:20.236345   61354 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token uufjzd.0ysmpgh1j6e2l8hs \
	I0912 23:07:20.236447   61354 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e9285e6e7599a58febe9d174fa57ffa69a9b4bf818d01b703e61fc8c784ff29f 
	I0912 23:07:20.236458   61354 cni.go:84] Creating CNI manager for ""
	I0912 23:07:20.236465   61354 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0912 23:07:20.237667   61354 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
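
	The kubeadm output above ends with ready-made join commands for default-k8s-diff-port-702201. Bootstrap tokens such as uufjzd.0ysmpgh1j6e2l8hs are short-lived; as a side note (a sketch, not taken from this run), a fresh join command can be printed on the control-plane node if the token has expired:

	    # Prints a "kubeadm join ..." line with a new token and the current CA cert hash.
	    sudo kubeadm token create --print-join-command
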
	I0912 23:07:16.892881   62943 cri.go:89] found id: ""
	I0912 23:07:16.892908   62943 logs.go:276] 0 containers: []
	W0912 23:07:16.892918   62943 logs.go:278] No container was found matching "kindnet"
	I0912 23:07:16.892926   62943 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0912 23:07:16.892986   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0912 23:07:16.938816   62943 cri.go:89] found id: "3d117ed77ba5f8ccc22fe496a5f719999fd9e893623ba84686634f7b92c09713"
	I0912 23:07:16.938856   62943 cri.go:89] found id: "d40483dfc659456939d7965c1ecf1113c1c38e26fe985bf19f26a0123206ea3a"
	I0912 23:07:16.938861   62943 cri.go:89] found id: ""
	I0912 23:07:16.938868   62943 logs.go:276] 2 containers: [3d117ed77ba5f8ccc22fe496a5f719999fd9e893623ba84686634f7b92c09713 d40483dfc659456939d7965c1ecf1113c1c38e26fe985bf19f26a0123206ea3a]
	I0912 23:07:16.938924   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:16.944985   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:16.950257   62943 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:07:16.950290   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 23:07:17.071942   62943 logs.go:123] Gathering logs for kube-apiserver [00f124dff0f77cf23377830c9dc5e2a8676d1a40ebf70730705d537ba7a4b8d3] ...
	I0912 23:07:17.071999   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 00f124dff0f77cf23377830c9dc5e2a8676d1a40ebf70730705d537ba7a4b8d3"
	I0912 23:07:17.120765   62943 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:07:17.120797   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:07:17.636341   62943 logs.go:123] Gathering logs for kubelet ...
	I0912 23:07:17.636387   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:07:17.714095   62943 logs.go:123] Gathering logs for kube-apiserver [3c73944a510413add414948e1c8fd8d71ef732d91b7ddc96646b7ba376f81416] ...
	I0912 23:07:17.714133   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c73944a510413add414948e1c8fd8d71ef732d91b7ddc96646b7ba376f81416"
	I0912 23:07:17.765583   62943 logs.go:123] Gathering logs for etcd [35282e97473f2f870f5366d221b9a40e0a456b6fd44fffe321c9fe160448cc29] ...
	I0912 23:07:17.765637   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35282e97473f2f870f5366d221b9a40e0a456b6fd44fffe321c9fe160448cc29"
	I0912 23:07:17.809278   62943 logs.go:123] Gathering logs for kube-proxy [4c4807559910184befc0b6a9ccb40b1cb9d4997f6b1405b76ba67049ce730f37] ...
	I0912 23:07:17.809309   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c4807559910184befc0b6a9ccb40b1cb9d4997f6b1405b76ba67049ce730f37"
	I0912 23:07:17.845960   62943 logs.go:123] Gathering logs for dmesg ...
	I0912 23:07:17.845984   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:07:17.860171   62943 logs.go:123] Gathering logs for kube-controller-manager [eb473fa0b2d91e95a8716ca3d1bad44b431a3dfbfae11984657c16c7392b58f0] ...
	I0912 23:07:17.860201   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb473fa0b2d91e95a8716ca3d1bad44b431a3dfbfae11984657c16c7392b58f0"
	I0912 23:07:17.926666   62943 logs.go:123] Gathering logs for kube-controller-manager [635fd2c2a6dd2558d88447c857c1c59b9cf6883a458ab9f62fbcd48fc94048f7] ...
	I0912 23:07:17.926711   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 635fd2c2a6dd2558d88447c857c1c59b9cf6883a458ab9f62fbcd48fc94048f7"
	I0912 23:07:17.976830   62943 logs.go:123] Gathering logs for storage-provisioner [3d117ed77ba5f8ccc22fe496a5f719999fd9e893623ba84686634f7b92c09713] ...
	I0912 23:07:17.976862   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d117ed77ba5f8ccc22fe496a5f719999fd9e893623ba84686634f7b92c09713"
	I0912 23:07:18.029551   62943 logs.go:123] Gathering logs for storage-provisioner [d40483dfc659456939d7965c1ecf1113c1c38e26fe985bf19f26a0123206ea3a] ...
	I0912 23:07:18.029590   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d40483dfc659456939d7965c1ecf1113c1c38e26fe985bf19f26a0123206ea3a"
	I0912 23:07:18.089974   62943 logs.go:123] Gathering logs for container status ...
	I0912 23:07:18.090007   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:07:18.151149   62943 logs.go:123] Gathering logs for coredns [e59d289c9afefef6dd746277d09550e5c7d708d7c1ba9f74b2dd91300f807189] ...
	I0912 23:07:18.151175   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e59d289c9afefef6dd746277d09550e5c7d708d7c1ba9f74b2dd91300f807189"
	I0912 23:07:18.191616   62943 logs.go:123] Gathering logs for kube-scheduler [3187fdef2bd318bb335be822a4e75cfd0ac02a49607d8e398ab18102a83437ec] ...
	I0912 23:07:18.191645   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3187fdef2bd318bb335be822a4e75cfd0ac02a49607d8e398ab18102a83437ec"
	I0912 23:07:20.735505   62943 api_server.go:253] Checking apiserver healthz at https://192.168.50.253:8443/healthz ...
	I0912 23:07:20.740261   62943 api_server.go:279] https://192.168.50.253:8443/healthz returned 200:
	ok
	I0912 23:07:20.741163   62943 api_server.go:141] control plane version: v1.31.1
	I0912 23:07:20.741184   62943 api_server.go:131] duration metric: took 4.194131154s to wait for apiserver health ...
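
	The health wait above simply polls the apiserver's /healthz endpoint until it returns 200 with the body "ok". The same check can be made by hand; a minimal sketch using the endpoint address from the log (on recent Kubernetes, /healthz, /livez and /readyz are served to anonymous clients by default):

	    # -k skips TLS verification, since the certificate is signed by the cluster's own CA.
	    curl -k https://192.168.50.253:8443/healthz
	    # /readyz?verbose breaks the readiness check down into its individual checks.
	    curl -k "https://192.168.50.253:8443/readyz?verbose"
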
	I0912 23:07:20.741193   62943 system_pods.go:43] waiting for kube-system pods to appear ...
	I0912 23:07:20.741219   62943 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:07:20.741275   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:07:20.778572   62943 cri.go:89] found id: "3c73944a510413add414948e1c8fd8d71ef732d91b7ddc96646b7ba376f81416"
	I0912 23:07:20.778596   62943 cri.go:89] found id: "00f124dff0f77cf23377830c9dc5e2a8676d1a40ebf70730705d537ba7a4b8d3"
	I0912 23:07:20.778600   62943 cri.go:89] found id: ""
	I0912 23:07:20.778613   62943 logs.go:276] 2 containers: [3c73944a510413add414948e1c8fd8d71ef732d91b7ddc96646b7ba376f81416 00f124dff0f77cf23377830c9dc5e2a8676d1a40ebf70730705d537ba7a4b8d3]
	I0912 23:07:20.778656   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:20.782575   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:20.786177   62943 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:07:20.786235   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:07:20.822848   62943 cri.go:89] found id: "35282e97473f2f870f5366d221b9a40e0a456b6fd44fffe321c9fe160448cc29"
	I0912 23:07:20.822869   62943 cri.go:89] found id: ""
	I0912 23:07:20.822877   62943 logs.go:276] 1 containers: [35282e97473f2f870f5366d221b9a40e0a456b6fd44fffe321c9fe160448cc29]
	I0912 23:07:20.822930   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:20.827081   62943 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:07:20.827150   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:07:20.862327   62943 cri.go:89] found id: "e59d289c9afefef6dd746277d09550e5c7d708d7c1ba9f74b2dd91300f807189"
	I0912 23:07:20.862358   62943 cri.go:89] found id: ""
	I0912 23:07:20.862369   62943 logs.go:276] 1 containers: [e59d289c9afefef6dd746277d09550e5c7d708d7c1ba9f74b2dd91300f807189]
	I0912 23:07:20.862437   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:20.866899   62943 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:07:20.866974   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:07:20.903397   62943 cri.go:89] found id: "3187fdef2bd318bb335be822a4e75cfd0ac02a49607d8e398ab18102a83437ec"
	I0912 23:07:20.903423   62943 cri.go:89] found id: ""
	I0912 23:07:20.903433   62943 logs.go:276] 1 containers: [3187fdef2bd318bb335be822a4e75cfd0ac02a49607d8e398ab18102a83437ec]
	I0912 23:07:20.903497   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:20.908223   62943 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:07:20.908322   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:07:20.961886   62943 cri.go:89] found id: "4c4807559910184befc0b6a9ccb40b1cb9d4997f6b1405b76ba67049ce730f37"
	I0912 23:07:20.961912   62943 cri.go:89] found id: ""
	I0912 23:07:20.961923   62943 logs.go:276] 1 containers: [4c4807559910184befc0b6a9ccb40b1cb9d4997f6b1405b76ba67049ce730f37]
	I0912 23:07:20.961983   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:20.965943   62943 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:07:20.966005   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:07:21.003792   62943 cri.go:89] found id: "eb473fa0b2d91e95a8716ca3d1bad44b431a3dfbfae11984657c16c7392b58f0"
	I0912 23:07:21.003818   62943 cri.go:89] found id: "635fd2c2a6dd2558d88447c857c1c59b9cf6883a458ab9f62fbcd48fc94048f7"
	I0912 23:07:21.003825   62943 cri.go:89] found id: ""
	I0912 23:07:21.003835   62943 logs.go:276] 2 containers: [eb473fa0b2d91e95a8716ca3d1bad44b431a3dfbfae11984657c16c7392b58f0 635fd2c2a6dd2558d88447c857c1c59b9cf6883a458ab9f62fbcd48fc94048f7]
	I0912 23:07:21.003892   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:21.008651   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:21.012614   62943 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:07:21.012675   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:07:21.051013   62943 cri.go:89] found id: ""
	I0912 23:07:21.051044   62943 logs.go:276] 0 containers: []
	W0912 23:07:21.051055   62943 logs.go:278] No container was found matching "kindnet"
	I0912 23:07:21.051063   62943 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0912 23:07:21.051121   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0912 23:07:21.091038   62943 cri.go:89] found id: "3d117ed77ba5f8ccc22fe496a5f719999fd9e893623ba84686634f7b92c09713"
	I0912 23:07:21.091060   62943 cri.go:89] found id: "d40483dfc659456939d7965c1ecf1113c1c38e26fe985bf19f26a0123206ea3a"
	I0912 23:07:21.091065   62943 cri.go:89] found id: ""
	I0912 23:07:21.091072   62943 logs.go:276] 2 containers: [3d117ed77ba5f8ccc22fe496a5f719999fd9e893623ba84686634f7b92c09713 d40483dfc659456939d7965c1ecf1113c1c38e26fe985bf19f26a0123206ea3a]
	I0912 23:07:21.091126   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:21.095923   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:21.100100   62943 logs.go:123] Gathering logs for dmesg ...
	I0912 23:07:21.100125   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:07:21.113873   62943 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:07:21.113906   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 23:07:21.215199   62943 logs.go:123] Gathering logs for kube-apiserver [3c73944a510413add414948e1c8fd8d71ef732d91b7ddc96646b7ba376f81416] ...
	I0912 23:07:21.215228   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c73944a510413add414948e1c8fd8d71ef732d91b7ddc96646b7ba376f81416"
	I0912 23:07:21.266873   62943 logs.go:123] Gathering logs for kube-apiserver [00f124dff0f77cf23377830c9dc5e2a8676d1a40ebf70730705d537ba7a4b8d3] ...
	I0912 23:07:21.266903   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 00f124dff0f77cf23377830c9dc5e2a8676d1a40ebf70730705d537ba7a4b8d3"
	I0912 23:07:21.307509   62943 logs.go:123] Gathering logs for storage-provisioner [3d117ed77ba5f8ccc22fe496a5f719999fd9e893623ba84686634f7b92c09713] ...
	I0912 23:07:21.307537   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d117ed77ba5f8ccc22fe496a5f719999fd9e893623ba84686634f7b92c09713"
	I0912 23:07:21.349480   62943 logs.go:123] Gathering logs for kubelet ...
	I0912 23:07:21.349505   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:07:21.428721   62943 logs.go:123] Gathering logs for kube-scheduler [3187fdef2bd318bb335be822a4e75cfd0ac02a49607d8e398ab18102a83437ec] ...
	I0912 23:07:21.428754   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3187fdef2bd318bb335be822a4e75cfd0ac02a49607d8e398ab18102a83437ec"
	I0912 23:07:21.469645   62943 logs.go:123] Gathering logs for kube-proxy [4c4807559910184befc0b6a9ccb40b1cb9d4997f6b1405b76ba67049ce730f37] ...
	I0912 23:07:21.469677   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c4807559910184befc0b6a9ccb40b1cb9d4997f6b1405b76ba67049ce730f37"
	I0912 23:07:21.517502   62943 logs.go:123] Gathering logs for kube-controller-manager [eb473fa0b2d91e95a8716ca3d1bad44b431a3dfbfae11984657c16c7392b58f0] ...
	I0912 23:07:21.517529   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb473fa0b2d91e95a8716ca3d1bad44b431a3dfbfae11984657c16c7392b58f0"
	I0912 23:07:21.582523   62943 logs.go:123] Gathering logs for coredns [e59d289c9afefef6dd746277d09550e5c7d708d7c1ba9f74b2dd91300f807189] ...
	I0912 23:07:21.582556   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e59d289c9afefef6dd746277d09550e5c7d708d7c1ba9f74b2dd91300f807189"
	I0912 23:07:21.623846   62943 logs.go:123] Gathering logs for storage-provisioner [d40483dfc659456939d7965c1ecf1113c1c38e26fe985bf19f26a0123206ea3a] ...
	I0912 23:07:21.623885   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d40483dfc659456939d7965c1ecf1113c1c38e26fe985bf19f26a0123206ea3a"
	I0912 23:07:21.670643   62943 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:07:21.670675   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:07:20.238639   61354 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0912 23:07:20.248752   61354 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0912 23:07:20.269785   61354 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0912 23:07:20.269853   61354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 23:07:20.269874   61354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-702201 minikube.k8s.io/updated_at=2024_09_12T23_07_20_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f6bc674a17941874d4e5b792b09c1791d30622b8 minikube.k8s.io/name=default-k8s-diff-port-702201 minikube.k8s.io/primary=true
	I0912 23:07:20.296361   61354 ops.go:34] apiserver oom_adj: -16
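
	The bridge CNI step above copies a 496-byte conflist to /etc/cni/net.d/1-k8s.conflist, and the minikube-rbac step binds cluster-admin to the kube-system:default service account. Both can be inspected after the fact; a sketch assuming the default-k8s-diff-port-702201 profile:

	    # Show the CNI config that minikube wrote for the bridge plugin.
	    minikube ssh -p default-k8s-diff-port-702201 -- sudo cat /etc/cni/net.d/1-k8s.conflist
	    # Confirm the cluster-admin binding created above.
	    kubectl --context default-k8s-diff-port-702201 get clusterrolebinding minikube-rbac -o wide
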
	I0912 23:07:20.492168   61354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 23:07:20.992549   61354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 23:07:21.492765   61354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 23:07:21.992850   61354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 23:07:22.492720   61354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 23:07:22.993154   61354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 23:07:23.493116   61354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 23:07:23.992629   61354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 23:07:24.077486   61354 kubeadm.go:1113] duration metric: took 3.807690368s to wait for elevateKubeSystemPrivileges
	I0912 23:07:24.077525   61354 kubeadm.go:394] duration metric: took 4m59.803121736s to StartCluster
	I0912 23:07:24.077547   61354 settings.go:142] acquiring lock: {Name:mk9c957feafb8d7ccd833ad0c106ef81ecfe5ba8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 23:07:24.077652   61354 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19616-5891/kubeconfig
	I0912 23:07:24.080127   61354 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/kubeconfig: {Name:mkffb46c3e9d2b8baebc7237b48bf41bccf1a52c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 23:07:24.080453   61354 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.214 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0912 23:07:24.080486   61354 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0912 23:07:24.080582   61354 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-702201"
	I0912 23:07:24.080556   61354 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-702201"
	I0912 23:07:24.080594   61354 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-702201"
	I0912 23:07:24.080627   61354 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-702201"
	I0912 23:07:24.080650   61354 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-702201"
	W0912 23:07:24.080659   61354 addons.go:243] addon metrics-server should already be in state true
	I0912 23:07:24.080664   61354 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-702201"
	I0912 23:07:24.080691   61354 host.go:66] Checking if "default-k8s-diff-port-702201" exists ...
	I0912 23:07:24.080668   61354 config.go:182] Loaded profile config "default-k8s-diff-port-702201": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	W0912 23:07:24.080691   61354 addons.go:243] addon storage-provisioner should already be in state true
	I0912 23:07:24.080830   61354 host.go:66] Checking if "default-k8s-diff-port-702201" exists ...
	I0912 23:07:24.081061   61354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:07:24.081060   61354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:07:24.081101   61354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:07:24.081144   61354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:07:24.081188   61354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:07:24.081214   61354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:07:24.081973   61354 out.go:177] * Verifying Kubernetes components...
	I0912 23:07:24.083133   61354 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 23:07:24.097005   61354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46703
	I0912 23:07:24.097025   61354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36033
	I0912 23:07:24.097096   61354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41949
	I0912 23:07:24.097438   61354 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:07:24.097464   61354 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:07:24.097525   61354 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:07:24.097994   61354 main.go:141] libmachine: Using API Version  1
	I0912 23:07:24.098015   61354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:07:24.098141   61354 main.go:141] libmachine: Using API Version  1
	I0912 23:07:24.098165   61354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:07:24.098290   61354 main.go:141] libmachine: Using API Version  1
	I0912 23:07:24.098309   61354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:07:24.098399   61354 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:07:24.098545   61354 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:07:24.098726   61354 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:07:24.098731   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetState
	I0912 23:07:24.098994   61354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:07:24.099040   61354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:07:24.099251   61354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:07:24.099283   61354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:07:24.102412   61354 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-702201"
	W0912 23:07:24.102432   61354 addons.go:243] addon default-storageclass should already be in state true
	I0912 23:07:24.102459   61354 host.go:66] Checking if "default-k8s-diff-port-702201" exists ...
	I0912 23:07:24.102797   61354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:07:24.102835   61354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:07:24.117429   61354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46351
	I0912 23:07:24.117980   61354 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:07:24.118513   61354 main.go:141] libmachine: Using API Version  1
	I0912 23:07:24.118533   61354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:07:24.119059   61354 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:07:24.119577   61354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35337
	I0912 23:07:24.119621   61354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:07:24.119656   61354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:07:24.119717   61354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41229
	I0912 23:07:24.120047   61354 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:07:24.120129   61354 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:07:24.120532   61354 main.go:141] libmachine: Using API Version  1
	I0912 23:07:24.120553   61354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:07:24.120810   61354 main.go:141] libmachine: Using API Version  1
	I0912 23:07:24.120834   61354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:07:24.121017   61354 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:07:24.121201   61354 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:07:24.121216   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetState
	I0912 23:07:24.121347   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetState
	I0912 23:07:24.123069   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .DriverName
	I0912 23:07:24.123254   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .DriverName
	I0912 23:07:24.125055   61354 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 23:07:24.125065   61354 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0912 23:07:22.059555   62943 logs.go:123] Gathering logs for container status ...
	I0912 23:07:22.059602   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:07:22.104001   62943 logs.go:123] Gathering logs for etcd [35282e97473f2f870f5366d221b9a40e0a456b6fd44fffe321c9fe160448cc29] ...
	I0912 23:07:22.104039   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35282e97473f2f870f5366d221b9a40e0a456b6fd44fffe321c9fe160448cc29"
	I0912 23:07:22.146304   62943 logs.go:123] Gathering logs for kube-controller-manager [635fd2c2a6dd2558d88447c857c1c59b9cf6883a458ab9f62fbcd48fc94048f7] ...
	I0912 23:07:22.146342   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 635fd2c2a6dd2558d88447c857c1c59b9cf6883a458ab9f62fbcd48fc94048f7"
	I0912 23:07:24.689925   62943 system_pods.go:59] 8 kube-system pods found
	I0912 23:07:24.689959   62943 system_pods.go:61] "coredns-7c65d6cfc9-twck7" [2fb00aff-8a30-4634-a804-1419eabfe727] Running
	I0912 23:07:24.689967   62943 system_pods.go:61] "etcd-no-preload-380092" [69b6be54-dd29-47c7-b990-a64335dd6d7b] Running
	I0912 23:07:24.689974   62943 system_pods.go:61] "kube-apiserver-no-preload-380092" [10ff70db-3c74-42ad-841d-d2241de4b98e] Running
	I0912 23:07:24.689980   62943 system_pods.go:61] "kube-controller-manager-no-preload-380092" [6e91c5b2-36fc-404e-9f09-c1bc9da46774] Running
	I0912 23:07:24.689987   62943 system_pods.go:61] "kube-proxy-z4rcx" [d17caa2e-d0fe-45e8-a96c-d1cc1b55e665] Running
	I0912 23:07:24.689992   62943 system_pods.go:61] "kube-scheduler-no-preload-380092" [5c634cac-6b28-4757-ba85-891c4c2fa34e] Running
	I0912 23:07:24.690002   62943 system_pods.go:61] "metrics-server-6867b74b74-4v7f5" [10c8c536-9ca6-4e75-96f2-7324f3d3d379] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0912 23:07:24.690009   62943 system_pods.go:61] "storage-provisioner" [f173a1f6-3772-4f08-8e40-2215cc9d2878] Running
	I0912 23:07:24.690020   62943 system_pods.go:74] duration metric: took 3.948819191s to wait for pod list to return data ...
	I0912 23:07:24.690031   62943 default_sa.go:34] waiting for default service account to be created ...
	I0912 23:07:24.692936   62943 default_sa.go:45] found service account: "default"
	I0912 23:07:24.692964   62943 default_sa.go:55] duration metric: took 2.925808ms for default service account to be created ...
	I0912 23:07:24.692975   62943 system_pods.go:116] waiting for k8s-apps to be running ...
	I0912 23:07:24.699123   62943 system_pods.go:86] 8 kube-system pods found
	I0912 23:07:24.699155   62943 system_pods.go:89] "coredns-7c65d6cfc9-twck7" [2fb00aff-8a30-4634-a804-1419eabfe727] Running
	I0912 23:07:24.699164   62943 system_pods.go:89] "etcd-no-preload-380092" [69b6be54-dd29-47c7-b990-a64335dd6d7b] Running
	I0912 23:07:24.699170   62943 system_pods.go:89] "kube-apiserver-no-preload-380092" [10ff70db-3c74-42ad-841d-d2241de4b98e] Running
	I0912 23:07:24.699176   62943 system_pods.go:89] "kube-controller-manager-no-preload-380092" [6e91c5b2-36fc-404e-9f09-c1bc9da46774] Running
	I0912 23:07:24.699182   62943 system_pods.go:89] "kube-proxy-z4rcx" [d17caa2e-d0fe-45e8-a96c-d1cc1b55e665] Running
	I0912 23:07:24.699187   62943 system_pods.go:89] "kube-scheduler-no-preload-380092" [5c634cac-6b28-4757-ba85-891c4c2fa34e] Running
	I0912 23:07:24.699197   62943 system_pods.go:89] "metrics-server-6867b74b74-4v7f5" [10c8c536-9ca6-4e75-96f2-7324f3d3d379] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0912 23:07:24.699206   62943 system_pods.go:89] "storage-provisioner" [f173a1f6-3772-4f08-8e40-2215cc9d2878] Running
	I0912 23:07:24.699220   62943 system_pods.go:126] duration metric: took 6.23727ms to wait for k8s-apps to be running ...
	I0912 23:07:24.699232   62943 system_svc.go:44] waiting for kubelet service to be running ....
	I0912 23:07:24.699281   62943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 23:07:24.716425   62943 system_svc.go:56] duration metric: took 17.184595ms WaitForService to wait for kubelet
	I0912 23:07:24.716456   62943 kubeadm.go:582] duration metric: took 4m22.909700986s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 23:07:24.716480   62943 node_conditions.go:102] verifying NodePressure condition ...
	I0912 23:07:24.719606   62943 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0912 23:07:24.719632   62943 node_conditions.go:123] node cpu capacity is 2
	I0912 23:07:24.719645   62943 node_conditions.go:105] duration metric: took 3.158655ms to run NodePressure ...
	I0912 23:07:24.719660   62943 start.go:241] waiting for startup goroutines ...
	I0912 23:07:24.719669   62943 start.go:246] waiting for cluster config update ...
	I0912 23:07:24.719683   62943 start.go:255] writing updated cluster config ...
	I0912 23:07:24.719959   62943 ssh_runner.go:195] Run: rm -f paused
	I0912 23:07:24.782144   62943 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0912 23:07:24.783614   62943 out.go:177] * Done! kubectl is now configured to use "no-preload-380092" cluster and "default" namespace by default
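
	With the run above complete, minikube has written a kubeconfig context named after the profile and made it current; a brief usage sketch (the context name matching the profile is standard minikube behaviour):

	    # The node and the kube-system pods listed earlier in the log should show up here.
	    kubectl --context no-preload-380092 get nodes
	    kubectl --context no-preload-380092 -n kube-system get pods
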
	I0912 23:07:24.126360   61354 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0912 23:07:24.126378   61354 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0912 23:07:24.126401   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHHostname
	I0912 23:07:24.126445   61354 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0912 23:07:24.126458   61354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0912 23:07:24.126472   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHHostname
	I0912 23:07:24.130177   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:07:24.130678   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:07:24.130719   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:07:24.130730   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:07:24.130919   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:07:24.130949   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:07:24.131134   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHPort
	I0912 23:07:24.131203   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHPort
	I0912 23:07:24.131447   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHKeyPath
	I0912 23:07:24.131494   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHKeyPath
	I0912 23:07:24.131659   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHUsername
	I0912 23:07:24.131677   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHUsername
	I0912 23:07:24.131817   61354 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/default-k8s-diff-port-702201/id_rsa Username:docker}
	I0912 23:07:24.131857   61354 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/default-k8s-diff-port-702201/id_rsa Username:docker}
	I0912 23:07:24.139030   61354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35007
	I0912 23:07:24.139501   61354 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:07:24.139949   61354 main.go:141] libmachine: Using API Version  1
	I0912 23:07:24.139973   61354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:07:24.140287   61354 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:07:24.140441   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetState
	I0912 23:07:24.141751   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .DriverName
	I0912 23:07:24.141942   61354 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0912 23:07:24.141957   61354 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0912 23:07:24.141977   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHHostname
	I0912 23:07:24.144033   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:07:24.144415   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:07:24.144563   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHPort
	I0912 23:07:24.144623   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:07:24.144723   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHKeyPath
	I0912 23:07:24.145002   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHUsername
	I0912 23:07:24.145132   61354 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/default-k8s-diff-port-702201/id_rsa Username:docker}
	I0912 23:07:24.279582   61354 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0912 23:07:24.294072   61354 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-702201" to be "Ready" ...
	I0912 23:07:24.304565   61354 node_ready.go:49] node "default-k8s-diff-port-702201" has status "Ready":"True"
	I0912 23:07:24.304588   61354 node_ready.go:38] duration metric: took 10.479351ms for node "default-k8s-diff-port-702201" to be "Ready" ...
	I0912 23:07:24.304599   61354 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 23:07:24.310618   61354 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-702201" in "kube-system" namespace to be "Ready" ...
	I0912 23:07:24.359086   61354 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0912 23:07:24.390490   61354 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0912 23:07:24.409964   61354 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0912 23:07:24.409990   61354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0912 23:07:24.445852   61354 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0912 23:07:24.445880   61354 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0912 23:07:24.502567   61354 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0912 23:07:24.502591   61354 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0912 23:07:24.578857   61354 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
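
	The apply above installs the metrics-server APIService, Deployment, RBAC and Service in one shot. In this run the addon's image points at fake.domain (see the "Using image fake.domain/registry.k8s.io/echoserver:1.4" line above), so the pod is not expected to become ready; against a working image, the addon would be verified as sketched here, assuming the default-k8s-diff-port-702201 context:

	    # Wait for the deployment to become available, then query the metrics API.
	    kubectl --context default-k8s-diff-port-702201 -n kube-system rollout status deployment/metrics-server
	    kubectl --context default-k8s-diff-port-702201 top nodes
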
	I0912 23:07:25.348387   61354 main.go:141] libmachine: Making call to close driver server
	I0912 23:07:25.348415   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .Close
	I0912 23:07:25.348715   61354 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:07:25.348732   61354 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:07:25.348740   61354 main.go:141] libmachine: Making call to close driver server
	I0912 23:07:25.348748   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .Close
	I0912 23:07:25.348766   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | Closing plugin on server side
	I0912 23:07:25.348869   61354 main.go:141] libmachine: Making call to close driver server
	I0912 23:07:25.348880   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .Close
	I0912 23:07:25.349007   61354 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:07:25.349022   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | Closing plugin on server side
	I0912 23:07:25.349026   61354 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:07:25.349181   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | Closing plugin on server side
	I0912 23:07:25.349209   61354 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:07:25.349216   61354 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:07:25.349224   61354 main.go:141] libmachine: Making call to close driver server
	I0912 23:07:25.349231   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .Close
	I0912 23:07:25.349497   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | Closing plugin on server side
	I0912 23:07:25.349513   61354 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:07:25.349520   61354 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:07:25.377320   61354 main.go:141] libmachine: Making call to close driver server
	I0912 23:07:25.377345   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .Close
	I0912 23:07:25.377662   61354 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:07:25.377683   61354 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:07:25.377685   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | Closing plugin on server side
	I0912 23:07:25.851960   61354 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.273059994s)
	I0912 23:07:25.852019   61354 main.go:141] libmachine: Making call to close driver server
	I0912 23:07:25.852037   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .Close
	I0912 23:07:25.852373   61354 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:07:25.852398   61354 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:07:25.852408   61354 main.go:141] libmachine: Making call to close driver server
	I0912 23:07:25.852417   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .Close
	I0912 23:07:25.852671   61354 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:07:25.852690   61354 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:07:25.852701   61354 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-702201"
	I0912 23:07:25.854523   61354 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0912 23:07:25.855764   61354 addons.go:510] duration metric: took 1.775274823s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0912 23:07:26.343219   61354 pod_ready.go:103] pod "etcd-default-k8s-diff-port-702201" in "kube-system" namespace has status "Ready":"False"
	I0912 23:07:26.817338   61354 pod_ready.go:93] pod "etcd-default-k8s-diff-port-702201" in "kube-system" namespace has status "Ready":"True"
	I0912 23:07:26.817361   61354 pod_ready.go:82] duration metric: took 2.506720235s for pod "etcd-default-k8s-diff-port-702201" in "kube-system" namespace to be "Ready" ...
	I0912 23:07:26.817371   61354 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-702201" in "kube-system" namespace to be "Ready" ...
	I0912 23:07:28.823968   61354 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-702201" in "kube-system" namespace has status "Ready":"False"
	I0912 23:07:31.324504   61354 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-702201" in "kube-system" namespace has status "Ready":"False"
	I0912 23:07:33.824198   61354 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-702201" in "kube-system" namespace has status "Ready":"True"
	I0912 23:07:33.824218   61354 pod_ready.go:82] duration metric: took 7.006841754s for pod "kube-apiserver-default-k8s-diff-port-702201" in "kube-system" namespace to be "Ready" ...
	I0912 23:07:33.824228   61354 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-702201" in "kube-system" namespace to be "Ready" ...
	I0912 23:07:33.829882   61354 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-702201" in "kube-system" namespace has status "Ready":"True"
	I0912 23:07:33.829903   61354 pod_ready.go:82] duration metric: took 5.668963ms for pod "kube-controller-manager-default-k8s-diff-port-702201" in "kube-system" namespace to be "Ready" ...
	I0912 23:07:33.829912   61354 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-702201" in "kube-system" namespace to be "Ready" ...
	I0912 23:07:33.834773   61354 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-702201" in "kube-system" namespace has status "Ready":"True"
	I0912 23:07:33.834796   61354 pod_ready.go:82] duration metric: took 4.8776ms for pod "kube-scheduler-default-k8s-diff-port-702201" in "kube-system" namespace to be "Ready" ...
	I0912 23:07:33.834805   61354 pod_ready.go:39] duration metric: took 9.530195098s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 23:07:33.834819   61354 api_server.go:52] waiting for apiserver process to appear ...
	I0912 23:07:33.834864   61354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:07:33.850650   61354 api_server.go:72] duration metric: took 9.770155376s to wait for apiserver process to appear ...
	I0912 23:07:33.850671   61354 api_server.go:88] waiting for apiserver healthz status ...
	I0912 23:07:33.850686   61354 api_server.go:253] Checking apiserver healthz at https://192.168.39.214:8444/healthz ...
	I0912 23:07:33.855112   61354 api_server.go:279] https://192.168.39.214:8444/healthz returned 200:
	ok
	I0912 23:07:33.856195   61354 api_server.go:141] control plane version: v1.31.1
	I0912 23:07:33.856213   61354 api_server.go:131] duration metric: took 5.535983ms to wait for apiserver health ...
	I0912 23:07:33.856220   61354 system_pods.go:43] waiting for kube-system pods to appear ...
	I0912 23:07:33.861385   61354 system_pods.go:59] 9 kube-system pods found
	I0912 23:07:33.861415   61354 system_pods.go:61] "coredns-7c65d6cfc9-f5spz" [6a0f69e9-66eb-4e59-a173-1d6f638e2211] Running
	I0912 23:07:33.861422   61354 system_pods.go:61] "coredns-7c65d6cfc9-qhbgf" [0af4199f-b09c-4ab8-8170-b8941d3ece7a] Running
	I0912 23:07:33.861429   61354 system_pods.go:61] "etcd-default-k8s-diff-port-702201" [d8d2e9bb-c8de-4aac-9373-ac9b6d3ec96a] Running
	I0912 23:07:33.861435   61354 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-702201" [7c26cd67-e192-4e8c-a3e1-e7e76a87fae4] Running
	I0912 23:07:33.861440   61354 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-702201" [53553f06-02d5-4603-8418-6bf2ff7b6a25] Running
	I0912 23:07:33.861451   61354 system_pods.go:61] "kube-proxy-mv8ws" [51cb20c3-8445-4ce9-8484-5138f3d0ed57] Running
	I0912 23:07:33.861457   61354 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-702201" [cc25c635-37f2-4186-b5ea-958e95fc4ab2] Running
	I0912 23:07:33.861466   61354 system_pods.go:61] "metrics-server-6867b74b74-w2dvn" [778a4742-5b80-4485-956e-8f169e6dcf8f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0912 23:07:33.861476   61354 system_pods.go:61] "storage-provisioner" [66bc6f77-b774-4478-80d0-a1027802e179] Running
	I0912 23:07:33.861486   61354 system_pods.go:74] duration metric: took 5.260046ms to wait for pod list to return data ...
	I0912 23:07:33.861497   61354 default_sa.go:34] waiting for default service account to be created ...
	I0912 23:07:33.864254   61354 default_sa.go:45] found service account: "default"
	I0912 23:07:33.864272   61354 default_sa.go:55] duration metric: took 2.766344ms for default service account to be created ...
	I0912 23:07:33.864280   61354 system_pods.go:116] waiting for k8s-apps to be running ...
	I0912 23:07:33.869281   61354 system_pods.go:86] 9 kube-system pods found
	I0912 23:07:33.869310   61354 system_pods.go:89] "coredns-7c65d6cfc9-f5spz" [6a0f69e9-66eb-4e59-a173-1d6f638e2211] Running
	I0912 23:07:33.869315   61354 system_pods.go:89] "coredns-7c65d6cfc9-qhbgf" [0af4199f-b09c-4ab8-8170-b8941d3ece7a] Running
	I0912 23:07:33.869320   61354 system_pods.go:89] "etcd-default-k8s-diff-port-702201" [d8d2e9bb-c8de-4aac-9373-ac9b6d3ec96a] Running
	I0912 23:07:33.869324   61354 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-702201" [7c26cd67-e192-4e8c-a3e1-e7e76a87fae4] Running
	I0912 23:07:33.869328   61354 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-702201" [53553f06-02d5-4603-8418-6bf2ff7b6a25] Running
	I0912 23:07:33.869332   61354 system_pods.go:89] "kube-proxy-mv8ws" [51cb20c3-8445-4ce9-8484-5138f3d0ed57] Running
	I0912 23:07:33.869335   61354 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-702201" [cc25c635-37f2-4186-b5ea-958e95fc4ab2] Running
	I0912 23:07:33.869341   61354 system_pods.go:89] "metrics-server-6867b74b74-w2dvn" [778a4742-5b80-4485-956e-8f169e6dcf8f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0912 23:07:33.869349   61354 system_pods.go:89] "storage-provisioner" [66bc6f77-b774-4478-80d0-a1027802e179] Running
	I0912 23:07:33.869362   61354 system_pods.go:126] duration metric: took 5.073128ms to wait for k8s-apps to be running ...
	I0912 23:07:33.869371   61354 system_svc.go:44] waiting for kubelet service to be running ....
	I0912 23:07:33.869410   61354 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 23:07:33.885244   61354 system_svc.go:56] duration metric: took 15.863852ms WaitForService to wait for kubelet
	I0912 23:07:33.885284   61354 kubeadm.go:582] duration metric: took 9.804792247s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 23:07:33.885302   61354 node_conditions.go:102] verifying NodePressure condition ...
	I0912 23:07:33.889009   61354 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0912 23:07:33.889041   61354 node_conditions.go:123] node cpu capacity is 2
	I0912 23:07:33.889054   61354 node_conditions.go:105] duration metric: took 3.746289ms to run NodePressure ...
	I0912 23:07:33.889069   61354 start.go:241] waiting for startup goroutines ...
	I0912 23:07:33.889079   61354 start.go:246] waiting for cluster config update ...
	I0912 23:07:33.889092   61354 start.go:255] writing updated cluster config ...
	I0912 23:07:33.889427   61354 ssh_runner.go:195] Run: rm -f paused
	I0912 23:07:33.940577   61354 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0912 23:07:33.942471   61354 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-702201" cluster and "default" namespace by default
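A minimal way to confirm that the metrics-server addon applied in the run above actually becomes usable, assuming the kubectl context matches the profile name in the log and the upstream label k8s-app=metrics-server, is:

	# check the metrics-server pod and its APIService registration (illustrative only)
	kubectl --context default-k8s-diff-port-702201 -n kube-system get pods -l k8s-app=metrics-server
	kubectl --context default-k8s-diff-port-702201 get apiservices v1beta1.metrics.k8s.io
	# once the APIService reports Available=True, resource metrics should be served
	kubectl --context default-k8s-diff-port-702201 top nodes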
	I0912 23:07:47.603025   62386 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0912 23:07:47.603235   62386 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0912 23:07:47.604779   62386 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0912 23:07:47.604883   62386 kubeadm.go:310] [preflight] Running pre-flight checks
	I0912 23:07:47.605084   62386 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0912 23:07:47.605337   62386 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0912 23:07:47.605566   62386 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0912 23:07:47.605831   62386 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0912 23:07:47.607788   62386 out.go:235]   - Generating certificates and keys ...
	I0912 23:07:47.607900   62386 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0912 23:07:47.608013   62386 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0912 23:07:47.608164   62386 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0912 23:07:47.608343   62386 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0912 23:07:47.608510   62386 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0912 23:07:47.608593   62386 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0912 23:07:47.608669   62386 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0912 23:07:47.608742   62386 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0912 23:07:47.608833   62386 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0912 23:07:47.608899   62386 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0912 23:07:47.608932   62386 kubeadm.go:310] [certs] Using the existing "sa" key
	I0912 23:07:47.608991   62386 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0912 23:07:47.609042   62386 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0912 23:07:47.609118   62386 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0912 23:07:47.609216   62386 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0912 23:07:47.609310   62386 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0912 23:07:47.609448   62386 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0912 23:07:47.609540   62386 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0912 23:07:47.609604   62386 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0912 23:07:47.609731   62386 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0912 23:07:47.611516   62386 out.go:235]   - Booting up control plane ...
	I0912 23:07:47.611622   62386 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0912 23:07:47.611724   62386 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0912 23:07:47.611811   62386 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0912 23:07:47.611912   62386 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0912 23:07:47.612092   62386 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0912 23:07:47.612156   62386 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0912 23:07:47.612234   62386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0912 23:07:47.612485   62386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0912 23:07:47.612557   62386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0912 23:07:47.612746   62386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0912 23:07:47.612836   62386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0912 23:07:47.613060   62386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0912 23:07:47.613145   62386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0912 23:07:47.613347   62386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0912 23:07:47.613406   62386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0912 23:07:47.613573   62386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0912 23:07:47.613583   62386 kubeadm.go:310] 
	I0912 23:07:47.613646   62386 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0912 23:07:47.613700   62386 kubeadm.go:310] 		timed out waiting for the condition
	I0912 23:07:47.613712   62386 kubeadm.go:310] 
	I0912 23:07:47.613756   62386 kubeadm.go:310] 	This error is likely caused by:
	I0912 23:07:47.613804   62386 kubeadm.go:310] 		- The kubelet is not running
	I0912 23:07:47.613912   62386 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0912 23:07:47.613924   62386 kubeadm.go:310] 
	I0912 23:07:47.614027   62386 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0912 23:07:47.614062   62386 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0912 23:07:47.614110   62386 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0912 23:07:47.614123   62386 kubeadm.go:310] 
	I0912 23:07:47.614256   62386 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0912 23:07:47.614381   62386 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0912 23:07:47.614393   62386 kubeadm.go:310] 
	I0912 23:07:47.614480   62386 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0912 23:07:47.614626   62386 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0912 23:07:47.614724   62386 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0912 23:07:47.614825   62386 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0912 23:07:47.614854   62386 kubeadm.go:310] 
	W0912 23:07:47.614957   62386 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
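The kubeadm failure above repeatedly points at the kubelet never answering on 127.0.0.1:10248; the checks it suggests can be run on the node (commands and socket path quoted from the output, sudo added for a non-root SSH user):

	# inspect the kubelet service and its recent journal entries
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet
	# list any control-plane containers CRI-O started, then read the logs of a failing one
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID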
	
	I0912 23:07:47.615000   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0912 23:07:48.085695   62386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 23:07:48.100416   62386 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0912 23:07:48.109607   62386 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0912 23:07:48.109635   62386 kubeadm.go:157] found existing configuration files:
	
	I0912 23:07:48.109686   62386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0912 23:07:48.118174   62386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0912 23:07:48.118235   62386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0912 23:07:48.127100   62386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0912 23:07:48.135945   62386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0912 23:07:48.136006   62386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0912 23:07:48.145057   62386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0912 23:07:48.153832   62386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0912 23:07:48.153899   62386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0912 23:07:48.163261   62386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0912 23:07:48.172155   62386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0912 23:07:48.172208   62386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0912 23:07:48.181592   62386 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0912 23:07:48.253671   62386 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0912 23:07:48.253728   62386 kubeadm.go:310] [preflight] Running pre-flight checks
	I0912 23:07:48.394463   62386 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0912 23:07:48.394622   62386 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0912 23:07:48.394773   62386 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0912 23:07:48.581336   62386 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0912 23:07:48.583286   62386 out.go:235]   - Generating certificates and keys ...
	I0912 23:07:48.583391   62386 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0912 23:07:48.583461   62386 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0912 23:07:48.583576   62386 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0912 23:07:48.583668   62386 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0912 23:07:48.583751   62386 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0912 23:07:48.583830   62386 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0912 23:07:48.583935   62386 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0912 23:07:48.584060   62386 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0912 23:07:48.584176   62386 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0912 23:07:48.584291   62386 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0912 23:07:48.584349   62386 kubeadm.go:310] [certs] Using the existing "sa" key
	I0912 23:07:48.584433   62386 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0912 23:07:48.823726   62386 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0912 23:07:49.148359   62386 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0912 23:07:49.679842   62386 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0912 23:07:50.116403   62386 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0912 23:07:50.137409   62386 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0912 23:07:50.137512   62386 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0912 23:07:50.137586   62386 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0912 23:07:50.279387   62386 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0912 23:07:50.281202   62386 out.go:235]   - Booting up control plane ...
	I0912 23:07:50.281311   62386 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0912 23:07:50.284914   62386 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0912 23:07:50.285938   62386 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0912 23:07:50.286646   62386 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0912 23:07:50.288744   62386 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0912 23:08:30.291301   62386 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0912 23:08:30.291387   62386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0912 23:08:30.291586   62386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0912 23:08:35.292084   62386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0912 23:08:35.292299   62386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0912 23:08:45.293141   62386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0912 23:08:45.293363   62386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0912 23:09:05.293977   62386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0912 23:09:05.294218   62386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0912 23:09:45.292498   62386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0912 23:09:45.292713   62386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0912 23:09:45.292752   62386 kubeadm.go:310] 
	I0912 23:09:45.292839   62386 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0912 23:09:45.292884   62386 kubeadm.go:310] 		timed out waiting for the condition
	I0912 23:09:45.292892   62386 kubeadm.go:310] 
	I0912 23:09:45.292944   62386 kubeadm.go:310] 	This error is likely caused by:
	I0912 23:09:45.292998   62386 kubeadm.go:310] 		- The kubelet is not running
	I0912 23:09:45.293153   62386 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0912 23:09:45.293165   62386 kubeadm.go:310] 
	I0912 23:09:45.293277   62386 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0912 23:09:45.293333   62386 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0912 23:09:45.293361   62386 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0912 23:09:45.293378   62386 kubeadm.go:310] 
	I0912 23:09:45.293528   62386 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0912 23:09:45.293668   62386 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0912 23:09:45.293679   62386 kubeadm.go:310] 
	I0912 23:09:45.293840   62386 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0912 23:09:45.293962   62386 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0912 23:09:45.294033   62386 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0912 23:09:45.294142   62386 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0912 23:09:45.294155   62386 kubeadm.go:310] 
	I0912 23:09:45.294801   62386 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0912 23:09:45.294914   62386 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0912 23:09:45.295004   62386 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0912 23:09:45.295097   62386 kubeadm.go:394] duration metric: took 7m57.408601522s to StartCluster
	I0912 23:09:45.295168   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:09:45.295233   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:09:45.336726   62386 cri.go:89] found id: ""
	I0912 23:09:45.336767   62386 logs.go:276] 0 containers: []
	W0912 23:09:45.336777   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:09:45.336785   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:09:45.336847   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:09:45.374528   62386 cri.go:89] found id: ""
	I0912 23:09:45.374555   62386 logs.go:276] 0 containers: []
	W0912 23:09:45.374576   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:09:45.374584   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:09:45.374649   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:09:45.409321   62386 cri.go:89] found id: ""
	I0912 23:09:45.409462   62386 logs.go:276] 0 containers: []
	W0912 23:09:45.409497   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:09:45.409508   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:09:45.409582   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:09:45.442204   62386 cri.go:89] found id: ""
	I0912 23:09:45.442228   62386 logs.go:276] 0 containers: []
	W0912 23:09:45.442238   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:09:45.442279   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:09:45.442339   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:09:45.478874   62386 cri.go:89] found id: ""
	I0912 23:09:45.478897   62386 logs.go:276] 0 containers: []
	W0912 23:09:45.478904   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:09:45.478909   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:09:45.478961   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:09:45.520162   62386 cri.go:89] found id: ""
	I0912 23:09:45.520191   62386 logs.go:276] 0 containers: []
	W0912 23:09:45.520199   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:09:45.520205   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:09:45.520251   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:09:45.551580   62386 cri.go:89] found id: ""
	I0912 23:09:45.551611   62386 logs.go:276] 0 containers: []
	W0912 23:09:45.551622   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:09:45.551629   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:09:45.551693   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:09:45.585468   62386 cri.go:89] found id: ""
	I0912 23:09:45.585498   62386 logs.go:276] 0 containers: []
	W0912 23:09:45.585505   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:09:45.585514   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:09:45.585525   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:09:45.640731   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:09:45.640782   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:09:45.656797   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:09:45.656833   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:09:45.735064   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:09:45.735083   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:09:45.735100   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:09:45.848695   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:09:45.848739   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0912 23:09:45.907495   62386 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0912 23:09:45.907561   62386 out.go:270] * 
	W0912 23:09:45.907628   62386 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0912 23:09:45.907646   62386 out.go:270] * 
	W0912 23:09:45.908494   62386 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0912 23:09:45.911502   62386 out.go:201] 
	W0912 23:09:45.912387   62386 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0912 23:09:45.912424   62386 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0912 23:09:45.912442   62386 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0912 23:09:45.913632   62386 out.go:201] 
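As a hedged illustration of the suggestion printed in the output above (not commands that were actually run in this job), the affected cluster could be recreated with the recommended kubelet cgroup driver and the kubelet and runtime state inspected afterwards. The profile name is a placeholder, since the failing start in this block does not show it:

        # Illustrative only; <profile> stands in for the affected minikube profile.
        minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd
        # If the control plane still fails to come up, check the kubelet first,
        # then the control-plane containers, as the kubeadm output above advises.
        sudo journalctl -xeu kubelet
        sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause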
	
	
	==> CRI-O <==
	Sep 12 23:16:26 no-preload-380092 crio[704]: time="2024-09-12 23:16:26.861467840Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726182986861446860,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=943071cd-f70b-4623-a63c-c46bcfa8b930 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 23:16:26 no-preload-380092 crio[704]: time="2024-09-12 23:16:26.862063768Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c0b46fda-6d1d-4763-92dc-2905e4c6981f name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 23:16:26 no-preload-380092 crio[704]: time="2024-09-12 23:16:26.862118273Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c0b46fda-6d1d-4763-92dc-2905e4c6981f name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 23:16:26 no-preload-380092 crio[704]: time="2024-09-12 23:16:26.862354106Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3d117ed77ba5f8ccc22fe496a5f719999fd9e893623ba84686634f7b92c09713,PodSandboxId:88a25c57dc5657c04a7eefc946b1a9f50aca508e69469eb9cf99c0b62934957b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726182210356961708,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f173a1f6-3772-4f08-8e40-2215cc9d2878,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e47feca4846d958586814293099955bcc8353124c34ec4bde8012da2a0564bf3,PodSandboxId:b3b07d8fb160c889b6c8bff184a5c37a69d0f4fcadb25c2858d711ac86ffb972,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1726182190213252979,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3d6e0a88-c74b-4cce-b218-5f7cdb45fc70,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e59d289c9afefef6dd746277d09550e5c7d708d7c1ba9f74b2dd91300f807189,PodSandboxId:75dad9f5541516fbf87a8c6de9e222111e9e4a3ca4b5e8d16e98a9d2f4124940,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726182186838585504,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-twck7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fb00aff-8a30-4634-a804-1419eabfe727,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c4807559910184befc0b6a9ccb40b1cb9d4997f6b1405b76ba67049ce730f37,PodSandboxId:566addd15dd3e980d75c0a8ea07a3a85983efac22937a4610e066d0c3629c849,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726182179476007740,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z4rcx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d17caa2e-d0fe-45e8-a9
6c-d1cc1b55e665,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d40483dfc659456939d7965c1ecf1113c1c38e26fe985bf19f26a0123206ea3a,PodSandboxId:88a25c57dc5657c04a7eefc946b1a9f50aca508e69469eb9cf99c0b62934957b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726182179439405223,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f173a1f6-3772-4f08-8e40-2215cc9d28
78,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb473fa0b2d91e95a8716ca3d1bad44b431a3dfbfae11984657c16c7392b58f0,PodSandboxId:bc4e3cf733a3ead6997642f5626c451468c996cff98a0085021a02e15161622a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726182179188564101,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-380092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d1efe73ad279e8
ddd7a8b93f476624,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35282e97473f2f870f5366d221b9a40e0a456b6fd44fffe321c9fe160448cc29,PodSandboxId:9b9c63eaf40efa04bbffb16c44659876175e166a2f14a629e990220fd1036e9a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726182170642979462,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-380092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5b8dbad2d5a7cd172ad5c2fef02d4f2,},Annotations:map[string]s
tring{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c73944a510413add414948e1c8fd8d71ef732d91b7ddc96646b7ba376f81416,PodSandboxId:febe058d23f6428b75af213b96a1101fcf865f3cdac48508a664d37b9bee26e3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726182160125464160,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-380092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a09dcc580279d4b8f7494570bf7f82a,},Annotations:map[string]string{io.kube
rnetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00f124dff0f77cf23377830c9dc5e2a8676d1a40ebf70730705d537ba7a4b8d3,PodSandboxId:febe058d23f6428b75af213b96a1101fcf865f3cdac48508a664d37b9bee26e3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726182138690142656,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-380092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a09dcc580279d4b8f7494570bf7f82a,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:635fd2c2a6dd2558d88447c857c1c59b9cf6883a458ab9f62fbcd48fc94048f7,PodSandboxId:bc4e3cf733a3ead6997642f5626c451468c996cff98a0085021a02e15161622a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726182138634173526,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-380092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d1efe73ad279e8ddd7a8b93f476624,},Annotations:map[string]string{io.kuber
netes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3187fdef2bd318bb335be822a4e75cfd0ac02a49607d8e398ab18102a83437ec,PodSandboxId:cde6a78bdfb6c3b4e3629acccff0cc9698404a5fda13b87c5643c62d19cad503,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726182138606250790,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-380092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a3192af1ac01d559c47e957931bf1bc,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c0b46fda-6d1d-4763-92dc-2905e4c6981f name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 23:16:26 no-preload-380092 crio[704]: time="2024-09-12 23:16:26.904319178Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c32bb124-c605-4613-9bd0-cd089ec9ffca name=/runtime.v1.RuntimeService/Version
	Sep 12 23:16:26 no-preload-380092 crio[704]: time="2024-09-12 23:16:26.904627001Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c32bb124-c605-4613-9bd0-cd089ec9ffca name=/runtime.v1.RuntimeService/Version
	Sep 12 23:16:26 no-preload-380092 crio[704]: time="2024-09-12 23:16:26.905900744Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d293442a-1636-4f8d-9632-31ad3250c0db name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 23:16:26 no-preload-380092 crio[704]: time="2024-09-12 23:16:26.906236413Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726182986906214657,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d293442a-1636-4f8d-9632-31ad3250c0db name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 23:16:26 no-preload-380092 crio[704]: time="2024-09-12 23:16:26.907214223Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a5c8a3b2-7c0c-4adf-a26e-df18ba7c33b3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 23:16:26 no-preload-380092 crio[704]: time="2024-09-12 23:16:26.907828972Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a5c8a3b2-7c0c-4adf-a26e-df18ba7c33b3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 23:16:26 no-preload-380092 crio[704]: time="2024-09-12 23:16:26.908350005Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3d117ed77ba5f8ccc22fe496a5f719999fd9e893623ba84686634f7b92c09713,PodSandboxId:88a25c57dc5657c04a7eefc946b1a9f50aca508e69469eb9cf99c0b62934957b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726182210356961708,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f173a1f6-3772-4f08-8e40-2215cc9d2878,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e47feca4846d958586814293099955bcc8353124c34ec4bde8012da2a0564bf3,PodSandboxId:b3b07d8fb160c889b6c8bff184a5c37a69d0f4fcadb25c2858d711ac86ffb972,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1726182190213252979,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3d6e0a88-c74b-4cce-b218-5f7cdb45fc70,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e59d289c9afefef6dd746277d09550e5c7d708d7c1ba9f74b2dd91300f807189,PodSandboxId:75dad9f5541516fbf87a8c6de9e222111e9e4a3ca4b5e8d16e98a9d2f4124940,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726182186838585504,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-twck7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fb00aff-8a30-4634-a804-1419eabfe727,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c4807559910184befc0b6a9ccb40b1cb9d4997f6b1405b76ba67049ce730f37,PodSandboxId:566addd15dd3e980d75c0a8ea07a3a85983efac22937a4610e066d0c3629c849,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726182179476007740,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z4rcx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d17caa2e-d0fe-45e8-a9
6c-d1cc1b55e665,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d40483dfc659456939d7965c1ecf1113c1c38e26fe985bf19f26a0123206ea3a,PodSandboxId:88a25c57dc5657c04a7eefc946b1a9f50aca508e69469eb9cf99c0b62934957b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726182179439405223,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f173a1f6-3772-4f08-8e40-2215cc9d28
78,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb473fa0b2d91e95a8716ca3d1bad44b431a3dfbfae11984657c16c7392b58f0,PodSandboxId:bc4e3cf733a3ead6997642f5626c451468c996cff98a0085021a02e15161622a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726182179188564101,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-380092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d1efe73ad279e8
ddd7a8b93f476624,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35282e97473f2f870f5366d221b9a40e0a456b6fd44fffe321c9fe160448cc29,PodSandboxId:9b9c63eaf40efa04bbffb16c44659876175e166a2f14a629e990220fd1036e9a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726182170642979462,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-380092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5b8dbad2d5a7cd172ad5c2fef02d4f2,},Annotations:map[string]s
tring{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c73944a510413add414948e1c8fd8d71ef732d91b7ddc96646b7ba376f81416,PodSandboxId:febe058d23f6428b75af213b96a1101fcf865f3cdac48508a664d37b9bee26e3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726182160125464160,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-380092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a09dcc580279d4b8f7494570bf7f82a,},Annotations:map[string]string{io.kube
rnetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00f124dff0f77cf23377830c9dc5e2a8676d1a40ebf70730705d537ba7a4b8d3,PodSandboxId:febe058d23f6428b75af213b96a1101fcf865f3cdac48508a664d37b9bee26e3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726182138690142656,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-380092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a09dcc580279d4b8f7494570bf7f82a,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:635fd2c2a6dd2558d88447c857c1c59b9cf6883a458ab9f62fbcd48fc94048f7,PodSandboxId:bc4e3cf733a3ead6997642f5626c451468c996cff98a0085021a02e15161622a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726182138634173526,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-380092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d1efe73ad279e8ddd7a8b93f476624,},Annotations:map[string]string{io.kuber
netes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3187fdef2bd318bb335be822a4e75cfd0ac02a49607d8e398ab18102a83437ec,PodSandboxId:cde6a78bdfb6c3b4e3629acccff0cc9698404a5fda13b87c5643c62d19cad503,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726182138606250790,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-380092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a3192af1ac01d559c47e957931bf1bc,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a5c8a3b2-7c0c-4adf-a26e-df18ba7c33b3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 23:16:26 no-preload-380092 crio[704]: time="2024-09-12 23:16:26.954095090Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9d4bc0dc-3548-4d0c-bcac-a591751b6523 name=/runtime.v1.RuntimeService/Version
	Sep 12 23:16:26 no-preload-380092 crio[704]: time="2024-09-12 23:16:26.954171683Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9d4bc0dc-3548-4d0c-bcac-a591751b6523 name=/runtime.v1.RuntimeService/Version
	Sep 12 23:16:26 no-preload-380092 crio[704]: time="2024-09-12 23:16:26.955068147Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=db9b6e6c-9923-4c73-b119-eb65f7b5a480 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 23:16:26 no-preload-380092 crio[704]: time="2024-09-12 23:16:26.955411841Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726182986955370058,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=db9b6e6c-9923-4c73-b119-eb65f7b5a480 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 23:16:26 no-preload-380092 crio[704]: time="2024-09-12 23:16:26.956021091Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6d1a4e8d-5ee1-4422-80ce-e21141b0f4b5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 23:16:26 no-preload-380092 crio[704]: time="2024-09-12 23:16:26.956118887Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6d1a4e8d-5ee1-4422-80ce-e21141b0f4b5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 23:16:26 no-preload-380092 crio[704]: time="2024-09-12 23:16:26.956679027Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3d117ed77ba5f8ccc22fe496a5f719999fd9e893623ba84686634f7b92c09713,PodSandboxId:88a25c57dc5657c04a7eefc946b1a9f50aca508e69469eb9cf99c0b62934957b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726182210356961708,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f173a1f6-3772-4f08-8e40-2215cc9d2878,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e47feca4846d958586814293099955bcc8353124c34ec4bde8012da2a0564bf3,PodSandboxId:b3b07d8fb160c889b6c8bff184a5c37a69d0f4fcadb25c2858d711ac86ffb972,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1726182190213252979,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3d6e0a88-c74b-4cce-b218-5f7cdb45fc70,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e59d289c9afefef6dd746277d09550e5c7d708d7c1ba9f74b2dd91300f807189,PodSandboxId:75dad9f5541516fbf87a8c6de9e222111e9e4a3ca4b5e8d16e98a9d2f4124940,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726182186838585504,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-twck7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fb00aff-8a30-4634-a804-1419eabfe727,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c4807559910184befc0b6a9ccb40b1cb9d4997f6b1405b76ba67049ce730f37,PodSandboxId:566addd15dd3e980d75c0a8ea07a3a85983efac22937a4610e066d0c3629c849,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726182179476007740,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z4rcx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d17caa2e-d0fe-45e8-a9
6c-d1cc1b55e665,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d40483dfc659456939d7965c1ecf1113c1c38e26fe985bf19f26a0123206ea3a,PodSandboxId:88a25c57dc5657c04a7eefc946b1a9f50aca508e69469eb9cf99c0b62934957b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726182179439405223,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f173a1f6-3772-4f08-8e40-2215cc9d28
78,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb473fa0b2d91e95a8716ca3d1bad44b431a3dfbfae11984657c16c7392b58f0,PodSandboxId:bc4e3cf733a3ead6997642f5626c451468c996cff98a0085021a02e15161622a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726182179188564101,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-380092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d1efe73ad279e8
ddd7a8b93f476624,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35282e97473f2f870f5366d221b9a40e0a456b6fd44fffe321c9fe160448cc29,PodSandboxId:9b9c63eaf40efa04bbffb16c44659876175e166a2f14a629e990220fd1036e9a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726182170642979462,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-380092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5b8dbad2d5a7cd172ad5c2fef02d4f2,},Annotations:map[string]s
tring{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c73944a510413add414948e1c8fd8d71ef732d91b7ddc96646b7ba376f81416,PodSandboxId:febe058d23f6428b75af213b96a1101fcf865f3cdac48508a664d37b9bee26e3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726182160125464160,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-380092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a09dcc580279d4b8f7494570bf7f82a,},Annotations:map[string]string{io.kube
rnetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00f124dff0f77cf23377830c9dc5e2a8676d1a40ebf70730705d537ba7a4b8d3,PodSandboxId:febe058d23f6428b75af213b96a1101fcf865f3cdac48508a664d37b9bee26e3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726182138690142656,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-380092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a09dcc580279d4b8f7494570bf7f82a,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:635fd2c2a6dd2558d88447c857c1c59b9cf6883a458ab9f62fbcd48fc94048f7,PodSandboxId:bc4e3cf733a3ead6997642f5626c451468c996cff98a0085021a02e15161622a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726182138634173526,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-380092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d1efe73ad279e8ddd7a8b93f476624,},Annotations:map[string]string{io.kuber
netes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3187fdef2bd318bb335be822a4e75cfd0ac02a49607d8e398ab18102a83437ec,PodSandboxId:cde6a78bdfb6c3b4e3629acccff0cc9698404a5fda13b87c5643c62d19cad503,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726182138606250790,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-380092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a3192af1ac01d559c47e957931bf1bc,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6d1a4e8d-5ee1-4422-80ce-e21141b0f4b5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 23:16:26 no-preload-380092 crio[704]: time="2024-09-12 23:16:26.994054356Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=053a8b1a-3adf-4cb2-9cf6-80fc816b9a5d name=/runtime.v1.RuntimeService/Version
	Sep 12 23:16:26 no-preload-380092 crio[704]: time="2024-09-12 23:16:26.994185614Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=053a8b1a-3adf-4cb2-9cf6-80fc816b9a5d name=/runtime.v1.RuntimeService/Version
	Sep 12 23:16:26 no-preload-380092 crio[704]: time="2024-09-12 23:16:26.995468974Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fc41e6dc-4ca5-4532-a19d-1fa1fc0bd762 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 23:16:26 no-preload-380092 crio[704]: time="2024-09-12 23:16:26.996025507Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726182986995993519,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fc41e6dc-4ca5-4532-a19d-1fa1fc0bd762 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 23:16:26 no-preload-380092 crio[704]: time="2024-09-12 23:16:26.996859847Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6521c421-59d9-4098-87a7-3ac55d172311 name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 23:16:26 no-preload-380092 crio[704]: time="2024-09-12 23:16:26.996919055Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6521c421-59d9-4098-87a7-3ac55d172311 name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 23:16:26 no-preload-380092 crio[704]: time="2024-09-12 23:16:26.997142314Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3d117ed77ba5f8ccc22fe496a5f719999fd9e893623ba84686634f7b92c09713,PodSandboxId:88a25c57dc5657c04a7eefc946b1a9f50aca508e69469eb9cf99c0b62934957b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726182210356961708,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f173a1f6-3772-4f08-8e40-2215cc9d2878,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e47feca4846d958586814293099955bcc8353124c34ec4bde8012da2a0564bf3,PodSandboxId:b3b07d8fb160c889b6c8bff184a5c37a69d0f4fcadb25c2858d711ac86ffb972,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1726182190213252979,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3d6e0a88-c74b-4cce-b218-5f7cdb45fc70,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e59d289c9afefef6dd746277d09550e5c7d708d7c1ba9f74b2dd91300f807189,PodSandboxId:75dad9f5541516fbf87a8c6de9e222111e9e4a3ca4b5e8d16e98a9d2f4124940,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726182186838585504,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-twck7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fb00aff-8a30-4634-a804-1419eabfe727,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c4807559910184befc0b6a9ccb40b1cb9d4997f6b1405b76ba67049ce730f37,PodSandboxId:566addd15dd3e980d75c0a8ea07a3a85983efac22937a4610e066d0c3629c849,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726182179476007740,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z4rcx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d17caa2e-d0fe-45e8-a9
6c-d1cc1b55e665,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d40483dfc659456939d7965c1ecf1113c1c38e26fe985bf19f26a0123206ea3a,PodSandboxId:88a25c57dc5657c04a7eefc946b1a9f50aca508e69469eb9cf99c0b62934957b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726182179439405223,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f173a1f6-3772-4f08-8e40-2215cc9d28
78,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb473fa0b2d91e95a8716ca3d1bad44b431a3dfbfae11984657c16c7392b58f0,PodSandboxId:bc4e3cf733a3ead6997642f5626c451468c996cff98a0085021a02e15161622a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726182179188564101,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-380092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d1efe73ad279e8
ddd7a8b93f476624,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35282e97473f2f870f5366d221b9a40e0a456b6fd44fffe321c9fe160448cc29,PodSandboxId:9b9c63eaf40efa04bbffb16c44659876175e166a2f14a629e990220fd1036e9a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726182170642979462,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-380092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5b8dbad2d5a7cd172ad5c2fef02d4f2,},Annotations:map[string]s
tring{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c73944a510413add414948e1c8fd8d71ef732d91b7ddc96646b7ba376f81416,PodSandboxId:febe058d23f6428b75af213b96a1101fcf865f3cdac48508a664d37b9bee26e3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726182160125464160,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-380092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a09dcc580279d4b8f7494570bf7f82a,},Annotations:map[string]string{io.kube
rnetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00f124dff0f77cf23377830c9dc5e2a8676d1a40ebf70730705d537ba7a4b8d3,PodSandboxId:febe058d23f6428b75af213b96a1101fcf865f3cdac48508a664d37b9bee26e3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726182138690142656,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-380092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a09dcc580279d4b8f7494570bf7f82a,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:635fd2c2a6dd2558d88447c857c1c59b9cf6883a458ab9f62fbcd48fc94048f7,PodSandboxId:bc4e3cf733a3ead6997642f5626c451468c996cff98a0085021a02e15161622a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726182138634173526,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-380092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d1efe73ad279e8ddd7a8b93f476624,},Annotations:map[string]string{io.kuber
netes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3187fdef2bd318bb335be822a4e75cfd0ac02a49607d8e398ab18102a83437ec,PodSandboxId:cde6a78bdfb6c3b4e3629acccff0cc9698404a5fda13b87c5643c62d19cad503,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726182138606250790,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-380092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a3192af1ac01d559c47e957931bf1bc,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6521c421-59d9-4098-87a7-3ac55d172311 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3d117ed77ba5f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       3                   88a25c57dc565       storage-provisioner
	e47feca4846d9       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   b3b07d8fb160c       busybox
	e59d289c9afef       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      13 minutes ago      Running             coredns                   1                   75dad9f554151       coredns-7c65d6cfc9-twck7
	4c48075599101       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      13 minutes ago      Running             kube-proxy                1                   566addd15dd3e       kube-proxy-z4rcx
	d40483dfc6594       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       2                   88a25c57dc565       storage-provisioner
	eb473fa0b2d91       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      13 minutes ago      Running             kube-controller-manager   2                   bc4e3cf733a3e       kube-controller-manager-no-preload-380092
	35282e97473f2       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      13 minutes ago      Running             etcd                      1                   9b9c63eaf40ef       etcd-no-preload-380092
	3c73944a51041       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      13 minutes ago      Running             kube-apiserver            2                   febe058d23f64       kube-apiserver-no-preload-380092
	00f124dff0f77       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      14 minutes ago      Exited              kube-apiserver            1                   febe058d23f64       kube-apiserver-no-preload-380092
	635fd2c2a6dd2       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      14 minutes ago      Exited              kube-controller-manager   1                   bc4e3cf733a3e       kube-controller-manager-no-preload-380092
	3187fdef2bd31       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      14 minutes ago      Running             kube-scheduler            1                   cde6a78bdfb6c       kube-scheduler-no-preload-380092
	
	
	==> coredns [e59d289c9afefef6dd746277d09550e5c7d708d7c1ba9f74b2dd91300f807189] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:35582 - 58395 "HINFO IN 7798790501937056755.3744919700464143285. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012134309s
	
	
	==> describe nodes <==
	Name:               no-preload-380092
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-380092
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f6bc674a17941874d4e5b792b09c1791d30622b8
	                    minikube.k8s.io/name=no-preload-380092
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_12T22_56_54_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 12 Sep 2024 22:56:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-380092
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 12 Sep 2024 23:16:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 12 Sep 2024 23:13:31 +0000   Thu, 12 Sep 2024 22:56:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 12 Sep 2024 23:13:31 +0000   Thu, 12 Sep 2024 22:56:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 12 Sep 2024 23:13:31 +0000   Thu, 12 Sep 2024 22:56:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 12 Sep 2024 23:13:31 +0000   Thu, 12 Sep 2024 23:03:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.253
	  Hostname:    no-preload-380092
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0b588c397551428f813ee867d317e221
	  System UUID:                0b588c39-7551-428f-813e-e867d317e221
	  Boot ID:                    2c55225c-09f7-400c-8d96-cd46f6eb1084
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 coredns-7c65d6cfc9-twck7                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     19m
	  kube-system                 etcd-no-preload-380092                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         19m
	  kube-system                 kube-apiserver-no-preload-380092             250m (12%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-no-preload-380092    200m (10%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-z4rcx                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-no-preload-380092             100m (5%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 metrics-server-6867b74b74-4v7f5              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         18m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 19m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     19m                kubelet          Node no-preload-380092 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  19m                kubelet          Node no-preload-380092 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m                kubelet          Node no-preload-380092 status is now: NodeHasNoDiskPressure
	  Normal  NodeReady                19m                kubelet          Node no-preload-380092 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           19m                node-controller  Node no-preload-380092 event: Registered Node no-preload-380092 in Controller
	  Normal  Starting                 14m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)  kubelet          Node no-preload-380092 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)  kubelet          Node no-preload-380092 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)  kubelet          Node no-preload-380092 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node no-preload-380092 event: Registered Node no-preload-380092 in Controller
	
	
	==> dmesg <==
	[Sep12 23:01] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052672] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036850] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.942555] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.806223] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.364826] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.422725] systemd-fstab-generator[627]: Ignoring "noauto" option for root device
	[  +0.057848] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.071599] systemd-fstab-generator[639]: Ignoring "noauto" option for root device
	[  +0.216811] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[  +0.120267] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +0.298945] systemd-fstab-generator[695]: Ignoring "noauto" option for root device
	[Sep12 23:02] systemd-fstab-generator[1226]: Ignoring "noauto" option for root device
	[  +0.061075] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.993787] systemd-fstab-generator[1347]: Ignoring "noauto" option for root device
	[  +4.140445] kauditd_printk_skb: 87 callbacks suppressed
	[Sep12 23:03] systemd-fstab-generator[2082]: Ignoring "noauto" option for root device
	[  +2.373199] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.972776] kauditd_printk_skb: 25 callbacks suppressed
	
	
	==> etcd [35282e97473f2f870f5366d221b9a40e0a456b6fd44fffe321c9fe160448cc29] <==
	{"level":"info","ts":"2024-09-12T23:02:50.797069Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-12T23:02:50.798922Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-12T23:02:50.799130Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"d2e91a6b86102115","initial-advertise-peer-urls":["https://192.168.50.253:2380"],"listen-peer-urls":["https://192.168.50.253:2380"],"advertise-client-urls":["https://192.168.50.253:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.253:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-12T23:02:50.799208Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-12T23:02:50.799390Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.50.253:2380"}
	{"level":"info","ts":"2024-09-12T23:02:50.799410Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.50.253:2380"}
	{"level":"info","ts":"2024-09-12T23:02:52.383930Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d2e91a6b86102115 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-12T23:02:52.383988Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d2e91a6b86102115 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-12T23:02:52.384027Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d2e91a6b86102115 received MsgPreVoteResp from d2e91a6b86102115 at term 2"}
	{"level":"info","ts":"2024-09-12T23:02:52.384042Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d2e91a6b86102115 became candidate at term 3"}
	{"level":"info","ts":"2024-09-12T23:02:52.384048Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d2e91a6b86102115 received MsgVoteResp from d2e91a6b86102115 at term 3"}
	{"level":"info","ts":"2024-09-12T23:02:52.384066Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d2e91a6b86102115 became leader at term 3"}
	{"level":"info","ts":"2024-09-12T23:02:52.384074Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d2e91a6b86102115 elected leader d2e91a6b86102115 at term 3"}
	{"level":"info","ts":"2024-09-12T23:02:52.386732Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"d2e91a6b86102115","local-member-attributes":"{Name:no-preload-380092 ClientURLs:[https://192.168.50.253:2379]}","request-path":"/0/members/d2e91a6b86102115/attributes","cluster-id":"83a31a18a9a6a5be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-12T23:02:52.386745Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-12T23:02:52.386869Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-12T23:02:52.387953Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-12T23:02:52.388088Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-12T23:02:52.389031Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-12T23:02:52.389110Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.253:2379"}
	{"level":"info","ts":"2024-09-12T23:02:52.389232Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-12T23:02:52.389254Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-12T23:12:57.255033Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":863}
	{"level":"info","ts":"2024-09-12T23:12:57.264439Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":863,"took":"9.087041ms","hash":3567789701,"current-db-size-bytes":2756608,"current-db-size":"2.8 MB","current-db-size-in-use-bytes":2756608,"current-db-size-in-use":"2.8 MB"}
	{"level":"info","ts":"2024-09-12T23:12:57.264509Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3567789701,"revision":863,"compact-revision":-1}
	
	
	==> kernel <==
	 23:16:27 up 14 min,  0 users,  load average: 0.03, 0.15, 0.14
	Linux no-preload-380092 5.10.207 #1 SMP Thu Sep 12 19:03:33 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [00f124dff0f77cf23377830c9dc5e2a8676d1a40ebf70730705d537ba7a4b8d3] <==
	I0912 23:02:18.960952       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0912 23:02:19.331162       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	W0912 23:02:19.339708       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 23:02:19.340367       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0912 23:02:19.367591       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0912 23:02:19.374745       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0912 23:02:19.376550       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0912 23:02:19.376863       1 instance.go:232] Using reconciler: lease
	W0912 23:02:19.379719       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 23:02:20.340789       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 23:02:20.340851       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 23:02:20.380336       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 23:02:21.703220       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 23:02:21.988583       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 23:02:22.058333       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 23:02:23.981238       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 23:02:24.285985       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 23:02:25.038974       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 23:02:27.721483       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 23:02:29.182817       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 23:02:29.185391       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 23:02:33.754598       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 23:02:34.852824       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 23:02:36.339368       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	F0912 23:02:39.378001       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [3c73944a510413add414948e1c8fd8d71ef732d91b7ddc96646b7ba376f81416] <==
	E0912 23:12:59.576252       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0912 23:12:59.576335       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0912 23:12:59.577485       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0912 23:12:59.577543       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0912 23:13:59.578017       1 handler_proxy.go:99] no RequestInfo found in the context
	E0912 23:13:59.578074       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0912 23:13:59.578294       1 handler_proxy.go:99] no RequestInfo found in the context
	E0912 23:13:59.578379       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0912 23:13:59.579219       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0912 23:13:59.580440       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0912 23:15:59.579796       1 handler_proxy.go:99] no RequestInfo found in the context
	E0912 23:15:59.580132       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0912 23:15:59.581199       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0912 23:15:59.581301       1 handler_proxy.go:99] no RequestInfo found in the context
	E0912 23:15:59.581417       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0912 23:15:59.582588       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [635fd2c2a6dd2558d88447c857c1c59b9cf6883a458ab9f62fbcd48fc94048f7] <==
	I0912 23:02:19.302013       1 serving.go:386] Generated self-signed cert in-memory
	I0912 23:02:19.806963       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I0912 23:02:19.807056       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0912 23:02:19.808585       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0912 23:02:19.808653       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0912 23:02:19.808779       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0912 23:02:19.808984       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0912 23:02:58.503407       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: forbidden: User \"system:kube-controller-manager\" cannot get path \"/healthz\""
	
	
	==> kube-controller-manager [eb473fa0b2d91e95a8716ca3d1bad44b431a3dfbfae11984657c16c7392b58f0] <==
	E0912 23:11:02.444285       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0912 23:11:02.946016       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0912 23:11:32.452191       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0912 23:11:32.953936       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0912 23:12:02.458831       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0912 23:12:02.961360       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0912 23:12:32.466315       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0912 23:12:32.969315       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0912 23:13:02.472802       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0912 23:13:02.977770       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0912 23:13:31.413887       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-380092"
	E0912 23:13:32.478234       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0912 23:13:32.987475       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0912 23:14:02.484675       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0912 23:14:02.995485       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0912 23:14:27.051440       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="303.335µs"
	E0912 23:14:32.492484       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0912 23:14:33.002905       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0912 23:14:39.050817       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="200.323µs"
	E0912 23:15:02.499300       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0912 23:15:03.010381       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0912 23:15:32.505453       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0912 23:15:33.020306       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0912 23:16:02.512029       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0912 23:16:03.027951       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [4c4807559910184befc0b6a9ccb40b1cb9d4997f6b1405b76ba67049ce730f37] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0912 23:03:00.348025       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0912 23:03:00.379307       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.253"]
	E0912 23:03:00.379468       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0912 23:03:00.459620       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0912 23:03:00.459714       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0912 23:03:00.459749       1 server_linux.go:169] "Using iptables Proxier"
	I0912 23:03:00.462665       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0912 23:03:00.463202       1 server.go:483] "Version info" version="v1.31.1"
	I0912 23:03:00.463255       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0912 23:03:00.487251       1 config.go:199] "Starting service config controller"
	I0912 23:03:00.488361       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0912 23:03:00.489041       1 config.go:105] "Starting endpoint slice config controller"
	I0912 23:03:00.489877       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0912 23:03:00.490144       1 config.go:328] "Starting node config controller"
	I0912 23:03:00.490197       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0912 23:03:00.590617       1 shared_informer.go:320] Caches are synced for node config
	I0912 23:03:00.590661       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0912 23:03:00.590760       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [3187fdef2bd318bb335be822a4e75cfd0ac02a49607d8e398ab18102a83437ec] <==
	W0912 23:02:58.508680       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0912 23:02:58.521626       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0912 23:02:58.508766       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0912 23:02:58.521991       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0912 23:02:58.508828       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0912 23:02:58.522143       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0912 23:02:58.508894       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0912 23:02:58.522434       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0912 23:02:58.508958       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0912 23:02:58.522889       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0912 23:02:58.509014       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0912 23:02:58.523644       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0912 23:02:58.509074       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0912 23:02:58.523816       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0912 23:02:58.509145       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0912 23:02:58.523943       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0912 23:02:58.509198       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0912 23:02:58.524259       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0912 23:02:58.509253       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0912 23:02:58.524361       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0912 23:02:58.509319       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0912 23:02:58.524468       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0912 23:02:58.509450       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0912 23:02:58.524500       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0912 23:03:00.099613       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 12 23:15:18 no-preload-380092 kubelet[1354]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 12 23:15:18 no-preload-380092 kubelet[1354]: E0912 23:15:18.222478    1354 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726182918221998437,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 23:15:18 no-preload-380092 kubelet[1354]: E0912 23:15:18.222508    1354 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726182918221998437,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 23:15:25 no-preload-380092 kubelet[1354]: E0912 23:15:25.032176    1354 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-4v7f5" podUID="10c8c536-9ca6-4e75-96f2-7324f3d3d379"
	Sep 12 23:15:28 no-preload-380092 kubelet[1354]: E0912 23:15:28.224284    1354 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726182928223976605,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 23:15:28 no-preload-380092 kubelet[1354]: E0912 23:15:28.224317    1354 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726182928223976605,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 23:15:37 no-preload-380092 kubelet[1354]: E0912 23:15:37.032285    1354 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-4v7f5" podUID="10c8c536-9ca6-4e75-96f2-7324f3d3d379"
	Sep 12 23:15:38 no-preload-380092 kubelet[1354]: E0912 23:15:38.226634    1354 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726182938226252644,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 23:15:38 no-preload-380092 kubelet[1354]: E0912 23:15:38.226665    1354 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726182938226252644,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 23:15:48 no-preload-380092 kubelet[1354]: E0912 23:15:48.230714    1354 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726182948228254252,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 23:15:48 no-preload-380092 kubelet[1354]: E0912 23:15:48.231221    1354 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726182948228254252,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 23:15:52 no-preload-380092 kubelet[1354]: E0912 23:15:52.031836    1354 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-4v7f5" podUID="10c8c536-9ca6-4e75-96f2-7324f3d3d379"
	Sep 12 23:15:58 no-preload-380092 kubelet[1354]: E0912 23:15:58.234000    1354 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726182958233319209,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 23:15:58 no-preload-380092 kubelet[1354]: E0912 23:15:58.234026    1354 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726182958233319209,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 23:16:07 no-preload-380092 kubelet[1354]: E0912 23:16:07.032293    1354 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-4v7f5" podUID="10c8c536-9ca6-4e75-96f2-7324f3d3d379"
	Sep 12 23:16:08 no-preload-380092 kubelet[1354]: E0912 23:16:08.235878    1354 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726182968235226359,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 23:16:08 no-preload-380092 kubelet[1354]: E0912 23:16:08.236236    1354 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726182968235226359,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 23:16:18 no-preload-380092 kubelet[1354]: E0912 23:16:18.047444    1354 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 12 23:16:18 no-preload-380092 kubelet[1354]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 12 23:16:18 no-preload-380092 kubelet[1354]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 12 23:16:18 no-preload-380092 kubelet[1354]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 12 23:16:18 no-preload-380092 kubelet[1354]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 12 23:16:18 no-preload-380092 kubelet[1354]: E0912 23:16:18.242047    1354 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726182978241828401,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 23:16:18 no-preload-380092 kubelet[1354]: E0912 23:16:18.242071    1354 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726182978241828401,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 23:16:20 no-preload-380092 kubelet[1354]: E0912 23:16:20.033711    1354 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-4v7f5" podUID="10c8c536-9ca6-4e75-96f2-7324f3d3d379"
	
	
	==> storage-provisioner [3d117ed77ba5f8ccc22fe496a5f719999fd9e893623ba84686634f7b92c09713] <==
	I0912 23:03:30.442505       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0912 23:03:30.457600       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0912 23:03:30.457808       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0912 23:03:47.858919       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0912 23:03:47.859070       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-380092_9ae72ac6-a0ac-4b5c-a75c-7b86ec689983!
	I0912 23:03:47.864390       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3827ca0e-7f06-42b4-b440-3352dbbaadc3", APIVersion:"v1", ResourceVersion:"645", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-380092_9ae72ac6-a0ac-4b5c-a75c-7b86ec689983 became leader
	I0912 23:03:47.960117       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-380092_9ae72ac6-a0ac-4b5c-a75c-7b86ec689983!
	
	
	==> storage-provisioner [d40483dfc659456939d7965c1ecf1113c1c38e26fe985bf19f26a0123206ea3a] <==
	I0912 23:02:59.842121       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0912 23:03:29.846488       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-380092 -n no-preload-380092
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-380092 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-4v7f5
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-380092 describe pod metrics-server-6867b74b74-4v7f5
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-380092 describe pod metrics-server-6867b74b74-4v7f5: exit status 1 (63.612215ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-4v7f5" not found

** /stderr **
helpers_test.go:279: kubectl --context no-preload-380092 describe pod metrics-server-6867b74b74-4v7f5: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.34s)

x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-702201 -n default-k8s-diff-port-702201
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-09-12 23:16:34.458475792 +0000 UTC m=+6461.306858718
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-702201 -n default-k8s-diff-port-702201
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-702201 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-702201 logs -n 25: (2.127349362s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p embed-certs-378112            | embed-certs-378112           | jenkins | v1.34.0 | 12 Sep 24 22:54 UTC | 12 Sep 24 22:54 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-378112                                  | embed-certs-378112           | jenkins | v1.34.0 | 12 Sep 24 22:54 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-837491             | newest-cni-837491            | jenkins | v1.34.0 | 12 Sep 24 22:55 UTC | 12 Sep 24 22:55 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-837491                                   | newest-cni-837491            | jenkins | v1.34.0 | 12 Sep 24 22:55 UTC | 12 Sep 24 22:55 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-837491                  | newest-cni-837491            | jenkins | v1.34.0 | 12 Sep 24 22:55 UTC | 12 Sep 24 22:55 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-837491 --memory=2200 --alsologtostderr   | newest-cni-837491            | jenkins | v1.34.0 | 12 Sep 24 22:55 UTC | 12 Sep 24 22:55 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| image   | newest-cni-837491 image list                           | newest-cni-837491            | jenkins | v1.34.0 | 12 Sep 24 22:55 UTC | 12 Sep 24 22:55 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-837491                                   | newest-cni-837491            | jenkins | v1.34.0 | 12 Sep 24 22:55 UTC | 12 Sep 24 22:55 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-837491                                   | newest-cni-837491            | jenkins | v1.34.0 | 12 Sep 24 22:55 UTC | 12 Sep 24 22:55 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-837491                                   | newest-cni-837491            | jenkins | v1.34.0 | 12 Sep 24 22:55 UTC | 12 Sep 24 22:55 UTC |
	| delete  | -p newest-cni-837491                                   | newest-cni-837491            | jenkins | v1.34.0 | 12 Sep 24 22:55 UTC | 12 Sep 24 22:55 UTC |
	| delete  | -p                                                     | disable-driver-mounts-457722 | jenkins | v1.34.0 | 12 Sep 24 22:55 UTC | 12 Sep 24 22:55 UTC |
	|         | disable-driver-mounts-457722                           |                              |         |         |                     |                     |
	| start   | -p no-preload-380092                                   | no-preload-380092            | jenkins | v1.34.0 | 12 Sep 24 22:55 UTC | 12 Sep 24 22:57 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-702201       | default-k8s-diff-port-702201 | jenkins | v1.34.0 | 12 Sep 24 22:56 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-702201 | jenkins | v1.34.0 | 12 Sep 24 22:56 UTC | 12 Sep 24 23:07 UTC |
	|         | default-k8s-diff-port-702201                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-642238        | old-k8s-version-642238       | jenkins | v1.34.0 | 12 Sep 24 22:56 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-378112                 | embed-certs-378112           | jenkins | v1.34.0 | 12 Sep 24 22:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-378112                                  | embed-certs-378112           | jenkins | v1.34.0 | 12 Sep 24 22:57 UTC | 12 Sep 24 23:05 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-380092             | no-preload-380092            | jenkins | v1.34.0 | 12 Sep 24 22:57 UTC | 12 Sep 24 22:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-380092                                   | no-preload-380092            | jenkins | v1.34.0 | 12 Sep 24 22:57 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-642238                              | old-k8s-version-642238       | jenkins | v1.34.0 | 12 Sep 24 22:58 UTC | 12 Sep 24 22:58 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-642238             | old-k8s-version-642238       | jenkins | v1.34.0 | 12 Sep 24 22:58 UTC | 12 Sep 24 22:58 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-642238                              | old-k8s-version-642238       | jenkins | v1.34.0 | 12 Sep 24 22:58 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-380092                  | no-preload-380092            | jenkins | v1.34.0 | 12 Sep 24 23:00 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-380092                                   | no-preload-380092            | jenkins | v1.34.0 | 12 Sep 24 23:00 UTC | 12 Sep 24 23:07 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/12 23:00:21
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0912 23:00:21.889769   62943 out.go:345] Setting OutFile to fd 1 ...
	I0912 23:00:21.889990   62943 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 23:00:21.889999   62943 out.go:358] Setting ErrFile to fd 2...
	I0912 23:00:21.890003   62943 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 23:00:21.890181   62943 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-5891/.minikube/bin
	I0912 23:00:21.890675   62943 out.go:352] Setting JSON to false
	I0912 23:00:21.891538   62943 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6164,"bootTime":1726175858,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0912 23:00:21.891596   62943 start.go:139] virtualization: kvm guest
	I0912 23:00:21.894002   62943 out.go:177] * [no-preload-380092] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0912 23:00:21.895257   62943 notify.go:220] Checking for updates...
	I0912 23:00:21.895266   62943 out.go:177]   - MINIKUBE_LOCATION=19616
	I0912 23:00:21.896598   62943 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 23:00:21.898297   62943 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19616-5891/kubeconfig
	I0912 23:00:21.899605   62943 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19616-5891/.minikube
	I0912 23:00:21.900705   62943 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0912 23:00:21.901754   62943 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 23:00:21.903264   62943 config.go:182] Loaded profile config "no-preload-380092": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 23:00:21.903642   62943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:00:21.903699   62943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:00:21.918497   62943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39967
	I0912 23:00:21.918953   62943 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:00:21.919516   62943 main.go:141] libmachine: Using API Version  1
	I0912 23:00:21.919536   62943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:00:21.919831   62943 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:00:21.920002   62943 main.go:141] libmachine: (no-preload-380092) Calling .DriverName
	I0912 23:00:21.920213   62943 driver.go:394] Setting default libvirt URI to qemu:///system
	I0912 23:00:21.920527   62943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:00:21.920570   62943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:00:21.935755   62943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39641
	I0912 23:00:21.936135   62943 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:00:21.936625   62943 main.go:141] libmachine: Using API Version  1
	I0912 23:00:21.936643   62943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:00:21.936958   62943 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:00:21.937168   62943 main.go:141] libmachine: (no-preload-380092) Calling .DriverName
	I0912 23:00:21.971089   62943 out.go:177] * Using the kvm2 driver based on existing profile
	I0912 23:00:21.972555   62943 start.go:297] selected driver: kvm2
	I0912 23:00:21.972578   62943 start.go:901] validating driver "kvm2" against &{Name:no-preload-380092 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-380092 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.253 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 23:00:21.972702   62943 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 23:00:21.973408   62943 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 23:00:21.973490   62943 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19616-5891/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0912 23:00:21.988802   62943 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0912 23:00:21.989203   62943 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 23:00:21.989290   62943 cni.go:84] Creating CNI manager for ""
	I0912 23:00:21.989305   62943 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0912 23:00:21.989357   62943 start.go:340] cluster config:
	{Name:no-preload-380092 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-380092 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.253 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 23:00:21.989504   62943 iso.go:125] acquiring lock: {Name:mk3ec3c4afd4210b7425f6425f55e7f581d9a5a9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 23:00:21.991829   62943 out.go:177] * Starting "no-preload-380092" primary control-plane node in "no-preload-380092" cluster
	I0912 23:00:20.185851   61354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.214:22: connect: no route to host
	I0912 23:00:21.993075   62943 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0912 23:00:21.993194   62943 profile.go:143] Saving config to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/no-preload-380092/config.json ...
	I0912 23:00:21.993282   62943 cache.go:107] acquiring lock: {Name:mk132f7515993883658c6f8f8c277c05a18c2bcb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 23:00:21.993282   62943 cache.go:107] acquiring lock: {Name:mkbf0dc68d9098b66db2e6425e6a1c64daedf32d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 23:00:21.993308   62943 cache.go:107] acquiring lock: {Name:mkb2372a7853b8fee762991ee2019645e77be1f6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 23:00:21.993360   62943 cache.go:115] /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 exists
	I0912 23:00:21.993376   62943 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.1" -> "/home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1" took 102.242µs
	I0912 23:00:21.993387   62943 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.1 -> /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 succeeded
	I0912 23:00:21.993346   62943 cache.go:107] acquiring lock: {Name:mkd3ef79aab2589c236ea8b2933d7ed6f90a65ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 23:00:21.993393   62943 cache.go:115] /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 exists
	I0912 23:00:21.993376   62943 cache.go:107] acquiring lock: {Name:mk1d88a2deb95bcad015d500fc00ce4b81f27038 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 23:00:21.993405   62943 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.1" -> "/home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1" took 112.903µs
	I0912 23:00:21.993415   62943 cache.go:115] /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 exists
	I0912 23:00:21.993421   62943 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.1 -> /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 succeeded
	I0912 23:00:21.993424   62943 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.1" -> "/home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1" took 90.812µs
	I0912 23:00:21.993432   62943 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.1 -> /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 succeeded
	I0912 23:00:21.993403   62943 cache.go:107] acquiring lock: {Name:mk9c879437d533fd75b73d75524fea14942316d9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 23:00:21.993435   62943 start.go:360] acquireMachinesLock for no-preload-380092: {Name:mkbb0a9e58b1349e86a63b6069c42d4248d92c3b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 23:00:21.993452   62943 cache.go:115] /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 exists
	I0912 23:00:21.993472   62943 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10" took 97.778µs
	I0912 23:00:21.993486   62943 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 succeeded
	I0912 23:00:21.993474   62943 cache.go:107] acquiring lock: {Name:mkd1cb269a32e304848dd20e7b275430f4a6b15a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 23:00:21.993496   62943 cache.go:115] /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0912 23:00:21.993526   62943 cache.go:115] /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 exists
	I0912 23:00:21.993545   62943 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0" took 179.269µs
	I0912 23:00:21.993568   62943 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0912 23:00:21.993520   62943 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 236.598µs
	I0912 23:00:21.993587   62943 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0912 23:00:21.993522   62943 cache.go:107] acquiring lock: {Name:mka5c76f3028cb928e97cce42a012066ced2727d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 23:00:21.993569   62943 cache.go:115] /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 exists
	I0912 23:00:21.993642   62943 cache.go:115] /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0912 23:00:21.993651   62943 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3" took 162.198µs
	I0912 23:00:21.993648   62943 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.1" -> "/home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1" took 220.493µs
	I0912 23:00:21.993662   62943 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0912 23:00:21.993668   62943 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.1 -> /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 succeeded
	I0912 23:00:21.993687   62943 cache.go:87] Successfully saved all images to host disk.
	I0912 23:00:26.265938   61354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.214:22: connect: no route to host
	I0912 23:00:29.337872   61354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.214:22: connect: no route to host
	I0912 23:00:35.417928   61354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.214:22: connect: no route to host
	I0912 23:00:38.489932   61354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.214:22: connect: no route to host
	I0912 23:00:44.569877   61354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.214:22: connect: no route to host
	I0912 23:00:47.641914   61354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.214:22: connect: no route to host
	I0912 23:00:53.721910   61354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.214:22: connect: no route to host
	I0912 23:00:56.793972   61354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.214:22: connect: no route to host
	I0912 23:00:59.798765   61904 start.go:364] duration metric: took 3m43.915954079s to acquireMachinesLock for "embed-certs-378112"
	I0912 23:00:59.798812   61904 start.go:96] Skipping create...Using existing machine configuration
	I0912 23:00:59.798822   61904 fix.go:54] fixHost starting: 
	I0912 23:00:59.799124   61904 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:00:59.799159   61904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:00:59.814494   61904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41585
	I0912 23:00:59.815035   61904 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:00:59.815500   61904 main.go:141] libmachine: Using API Version  1
	I0912 23:00:59.815519   61904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:00:59.815820   61904 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:00:59.815997   61904 main.go:141] libmachine: (embed-certs-378112) Calling .DriverName
	I0912 23:00:59.816114   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetState
	I0912 23:00:59.817884   61904 fix.go:112] recreateIfNeeded on embed-certs-378112: state=Stopped err=<nil>
	I0912 23:00:59.817912   61904 main.go:141] libmachine: (embed-certs-378112) Calling .DriverName
	W0912 23:00:59.818088   61904 fix.go:138] unexpected machine state, will restart: <nil>
	I0912 23:00:59.820071   61904 out.go:177] * Restarting existing kvm2 VM for "embed-certs-378112" ...
	I0912 23:00:59.821271   61904 main.go:141] libmachine: (embed-certs-378112) Calling .Start
	I0912 23:00:59.821455   61904 main.go:141] libmachine: (embed-certs-378112) Ensuring networks are active...
	I0912 23:00:59.822528   61904 main.go:141] libmachine: (embed-certs-378112) Ensuring network default is active
	I0912 23:00:59.822941   61904 main.go:141] libmachine: (embed-certs-378112) Ensuring network mk-embed-certs-378112 is active
	I0912 23:00:59.823348   61904 main.go:141] libmachine: (embed-certs-378112) Getting domain xml...
	I0912 23:00:59.824031   61904 main.go:141] libmachine: (embed-certs-378112) Creating domain...
	I0912 23:00:59.796296   61354 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0912 23:00:59.796341   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetMachineName
	I0912 23:00:59.796635   61354 buildroot.go:166] provisioning hostname "default-k8s-diff-port-702201"
	I0912 23:00:59.796660   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetMachineName
	I0912 23:00:59.796845   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHHostname
	I0912 23:00:59.798593   61354 machine.go:96] duration metric: took 4m34.624878077s to provisionDockerMachine
	I0912 23:00:59.798633   61354 fix.go:56] duration metric: took 4m34.652510972s for fixHost
	I0912 23:00:59.798640   61354 start.go:83] releasing machines lock for "default-k8s-diff-port-702201", held for 4m34.652554084s
	W0912 23:00:59.798663   61354 start.go:714] error starting host: provision: host is not running
	W0912 23:00:59.798748   61354 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0912 23:00:59.798762   61354 start.go:729] Will try again in 5 seconds ...
	I0912 23:01:01.051149   61904 main.go:141] libmachine: (embed-certs-378112) Waiting to get IP...
	I0912 23:01:01.051945   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:01.052463   61904 main.go:141] libmachine: (embed-certs-378112) DBG | unable to find current IP address of domain embed-certs-378112 in network mk-embed-certs-378112
	I0912 23:01:01.052494   61904 main.go:141] libmachine: (embed-certs-378112) DBG | I0912 23:01:01.052421   63128 retry.go:31] will retry after 247.962572ms: waiting for machine to come up
	I0912 23:01:01.302159   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:01.302677   61904 main.go:141] libmachine: (embed-certs-378112) DBG | unable to find current IP address of domain embed-certs-378112 in network mk-embed-certs-378112
	I0912 23:01:01.302706   61904 main.go:141] libmachine: (embed-certs-378112) DBG | I0912 23:01:01.302624   63128 retry.go:31] will retry after 354.212029ms: waiting for machine to come up
	I0912 23:01:01.658402   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:01.658880   61904 main.go:141] libmachine: (embed-certs-378112) DBG | unable to find current IP address of domain embed-certs-378112 in network mk-embed-certs-378112
	I0912 23:01:01.658923   61904 main.go:141] libmachine: (embed-certs-378112) DBG | I0912 23:01:01.658848   63128 retry.go:31] will retry after 461.984481ms: waiting for machine to come up
	I0912 23:01:02.122592   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:02.122981   61904 main.go:141] libmachine: (embed-certs-378112) DBG | unable to find current IP address of domain embed-certs-378112 in network mk-embed-certs-378112
	I0912 23:01:02.123015   61904 main.go:141] libmachine: (embed-certs-378112) DBG | I0912 23:01:02.122930   63128 retry.go:31] will retry after 404.928951ms: waiting for machine to come up
	I0912 23:01:02.529423   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:02.529906   61904 main.go:141] libmachine: (embed-certs-378112) DBG | unable to find current IP address of domain embed-certs-378112 in network mk-embed-certs-378112
	I0912 23:01:02.529932   61904 main.go:141] libmachine: (embed-certs-378112) DBG | I0912 23:01:02.529856   63128 retry.go:31] will retry after 684.912015ms: waiting for machine to come up
	I0912 23:01:03.216924   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:03.217408   61904 main.go:141] libmachine: (embed-certs-378112) DBG | unable to find current IP address of domain embed-certs-378112 in network mk-embed-certs-378112
	I0912 23:01:03.217433   61904 main.go:141] libmachine: (embed-certs-378112) DBG | I0912 23:01:03.217357   63128 retry.go:31] will retry after 765.507778ms: waiting for machine to come up
	I0912 23:01:03.984272   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:03.984787   61904 main.go:141] libmachine: (embed-certs-378112) DBG | unable to find current IP address of domain embed-certs-378112 in network mk-embed-certs-378112
	I0912 23:01:03.984820   61904 main.go:141] libmachine: (embed-certs-378112) DBG | I0912 23:01:03.984726   63128 retry.go:31] will retry after 1.048709598s: waiting for machine to come up
	I0912 23:01:05.035381   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:05.035885   61904 main.go:141] libmachine: (embed-certs-378112) DBG | unable to find current IP address of domain embed-certs-378112 in network mk-embed-certs-378112
	I0912 23:01:05.035925   61904 main.go:141] libmachine: (embed-certs-378112) DBG | I0912 23:01:05.035809   63128 retry.go:31] will retry after 1.488143245s: waiting for machine to come up
	I0912 23:01:04.800694   61354 start.go:360] acquireMachinesLock for default-k8s-diff-port-702201: {Name:mkbb0a9e58b1349e86a63b6069c42d4248d92c3b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 23:01:06.526483   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:06.526858   61904 main.go:141] libmachine: (embed-certs-378112) DBG | unable to find current IP address of domain embed-certs-378112 in network mk-embed-certs-378112
	I0912 23:01:06.526896   61904 main.go:141] libmachine: (embed-certs-378112) DBG | I0912 23:01:06.526800   63128 retry.go:31] will retry after 1.272485972s: waiting for machine to come up
	I0912 23:01:07.801588   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:07.802071   61904 main.go:141] libmachine: (embed-certs-378112) DBG | unable to find current IP address of domain embed-certs-378112 in network mk-embed-certs-378112
	I0912 23:01:07.802103   61904 main.go:141] libmachine: (embed-certs-378112) DBG | I0912 23:01:07.802022   63128 retry.go:31] will retry after 1.559805672s: waiting for machine to come up
	I0912 23:01:09.363156   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:09.363662   61904 main.go:141] libmachine: (embed-certs-378112) DBG | unable to find current IP address of domain embed-certs-378112 in network mk-embed-certs-378112
	I0912 23:01:09.363683   61904 main.go:141] libmachine: (embed-certs-378112) DBG | I0912 23:01:09.363611   63128 retry.go:31] will retry after 1.893092295s: waiting for machine to come up
	I0912 23:01:11.258694   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:11.259346   61904 main.go:141] libmachine: (embed-certs-378112) DBG | unable to find current IP address of domain embed-certs-378112 in network mk-embed-certs-378112
	I0912 23:01:11.259376   61904 main.go:141] libmachine: (embed-certs-378112) DBG | I0912 23:01:11.259304   63128 retry.go:31] will retry after 3.533141843s: waiting for machine to come up
	I0912 23:01:14.796948   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:14.797444   61904 main.go:141] libmachine: (embed-certs-378112) DBG | unable to find current IP address of domain embed-certs-378112 in network mk-embed-certs-378112
	I0912 23:01:14.797468   61904 main.go:141] libmachine: (embed-certs-378112) DBG | I0912 23:01:14.797389   63128 retry.go:31] will retry after 3.889332888s: waiting for machine to come up
	I0912 23:01:19.958932   62386 start.go:364] duration metric: took 3m0.532494588s to acquireMachinesLock for "old-k8s-version-642238"
	I0912 23:01:19.958994   62386 start.go:96] Skipping create...Using existing machine configuration
	I0912 23:01:19.959005   62386 fix.go:54] fixHost starting: 
	I0912 23:01:19.959383   62386 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:01:19.959418   62386 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:01:19.976721   62386 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46263
	I0912 23:01:19.977134   62386 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:01:19.977648   62386 main.go:141] libmachine: Using API Version  1
	I0912 23:01:19.977673   62386 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:01:19.977988   62386 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:01:19.978166   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .DriverName
	I0912 23:01:19.978325   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetState
	I0912 23:01:19.979909   62386 fix.go:112] recreateIfNeeded on old-k8s-version-642238: state=Stopped err=<nil>
	I0912 23:01:19.979934   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .DriverName
	W0912 23:01:19.980079   62386 fix.go:138] unexpected machine state, will restart: <nil>
	I0912 23:01:19.982289   62386 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-642238" ...
	I0912 23:01:18.690761   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:18.691185   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has current primary IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:18.691206   61904 main.go:141] libmachine: (embed-certs-378112) Found IP for machine: 192.168.72.96
	I0912 23:01:18.691218   61904 main.go:141] libmachine: (embed-certs-378112) Reserving static IP address...
	I0912 23:01:18.691614   61904 main.go:141] libmachine: (embed-certs-378112) Reserved static IP address: 192.168.72.96
	I0912 23:01:18.691642   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "embed-certs-378112", mac: "52:54:00:71:b2:49", ip: "192.168.72.96"} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:18.691654   61904 main.go:141] libmachine: (embed-certs-378112) Waiting for SSH to be available...
	I0912 23:01:18.691678   61904 main.go:141] libmachine: (embed-certs-378112) DBG | skip adding static IP to network mk-embed-certs-378112 - found existing host DHCP lease matching {name: "embed-certs-378112", mac: "52:54:00:71:b2:49", ip: "192.168.72.96"}
	I0912 23:01:18.691690   61904 main.go:141] libmachine: (embed-certs-378112) DBG | Getting to WaitForSSH function...
	I0912 23:01:18.693747   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:18.694054   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:18.694077   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:18.694273   61904 main.go:141] libmachine: (embed-certs-378112) DBG | Using SSH client type: external
	I0912 23:01:18.694300   61904 main.go:141] libmachine: (embed-certs-378112) DBG | Using SSH private key: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/embed-certs-378112/id_rsa (-rw-------)
	I0912 23:01:18.694330   61904 main.go:141] libmachine: (embed-certs-378112) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.96 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19616-5891/.minikube/machines/embed-certs-378112/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0912 23:01:18.694345   61904 main.go:141] libmachine: (embed-certs-378112) DBG | About to run SSH command:
	I0912 23:01:18.694358   61904 main.go:141] libmachine: (embed-certs-378112) DBG | exit 0
	I0912 23:01:18.821647   61904 main.go:141] libmachine: (embed-certs-378112) DBG | SSH cmd err, output: <nil>: 
	I0912 23:01:18.822074   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetConfigRaw
	I0912 23:01:18.822765   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetIP
	I0912 23:01:18.825154   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:18.825481   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:18.825510   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:18.825842   61904 profile.go:143] Saving config to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/embed-certs-378112/config.json ...
	I0912 23:01:18.826026   61904 machine.go:93] provisionDockerMachine start ...
	I0912 23:01:18.826043   61904 main.go:141] libmachine: (embed-certs-378112) Calling .DriverName
	I0912 23:01:18.826248   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHHostname
	I0912 23:01:18.828540   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:18.828878   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:18.828906   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:18.829009   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHPort
	I0912 23:01:18.829224   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHKeyPath
	I0912 23:01:18.829429   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHKeyPath
	I0912 23:01:18.829555   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHUsername
	I0912 23:01:18.829750   61904 main.go:141] libmachine: Using SSH client type: native
	I0912 23:01:18.829926   61904 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.72.96 22 <nil> <nil>}
	I0912 23:01:18.829937   61904 main.go:141] libmachine: About to run SSH command:
	hostname
	I0912 23:01:18.941789   61904 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0912 23:01:18.941824   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetMachineName
	I0912 23:01:18.942076   61904 buildroot.go:166] provisioning hostname "embed-certs-378112"
	I0912 23:01:18.942099   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetMachineName
	I0912 23:01:18.942278   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHHostname
	I0912 23:01:18.944880   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:18.945173   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:18.945221   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:18.945347   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHPort
	I0912 23:01:18.945525   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHKeyPath
	I0912 23:01:18.945733   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHKeyPath
	I0912 23:01:18.945913   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHUsername
	I0912 23:01:18.946125   61904 main.go:141] libmachine: Using SSH client type: native
	I0912 23:01:18.946330   61904 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.72.96 22 <nil> <nil>}
	I0912 23:01:18.946350   61904 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-378112 && echo "embed-certs-378112" | sudo tee /etc/hostname
	I0912 23:01:19.071180   61904 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-378112
	
	I0912 23:01:19.071207   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHHostname
	I0912 23:01:19.074121   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.074553   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:19.074583   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.074803   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHPort
	I0912 23:01:19.075004   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHKeyPath
	I0912 23:01:19.075175   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHKeyPath
	I0912 23:01:19.075319   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHUsername
	I0912 23:01:19.075472   61904 main.go:141] libmachine: Using SSH client type: native
	I0912 23:01:19.075691   61904 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.72.96 22 <nil> <nil>}
	I0912 23:01:19.075710   61904 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-378112' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-378112/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-378112' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0912 23:01:19.198049   61904 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0912 23:01:19.198081   61904 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19616-5891/.minikube CaCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19616-5891/.minikube}
	I0912 23:01:19.198131   61904 buildroot.go:174] setting up certificates
	I0912 23:01:19.198140   61904 provision.go:84] configureAuth start
	I0912 23:01:19.198153   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetMachineName
	I0912 23:01:19.198461   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetIP
	I0912 23:01:19.201194   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.201504   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:19.201532   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.201729   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHHostname
	I0912 23:01:19.204100   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.204538   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:19.204562   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.204706   61904 provision.go:143] copyHostCerts
	I0912 23:01:19.204767   61904 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem, removing ...
	I0912 23:01:19.204782   61904 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem
	I0912 23:01:19.204851   61904 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem (1082 bytes)
	I0912 23:01:19.204951   61904 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem, removing ...
	I0912 23:01:19.204960   61904 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem
	I0912 23:01:19.204985   61904 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem (1123 bytes)
	I0912 23:01:19.205045   61904 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem, removing ...
	I0912 23:01:19.205053   61904 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem
	I0912 23:01:19.205076   61904 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem (1679 bytes)
	I0912 23:01:19.205132   61904 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem org=jenkins.embed-certs-378112 san=[127.0.0.1 192.168.72.96 embed-certs-378112 localhost minikube]
	I0912 23:01:19.311879   61904 provision.go:177] copyRemoteCerts
	I0912 23:01:19.311937   61904 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0912 23:01:19.311962   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHHostname
	I0912 23:01:19.314423   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.314821   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:19.314858   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.315029   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHPort
	I0912 23:01:19.315191   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHKeyPath
	I0912 23:01:19.315357   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHUsername
	I0912 23:01:19.315485   61904 sshutil.go:53] new ssh client: &{IP:192.168.72.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/embed-certs-378112/id_rsa Username:docker}
	I0912 23:01:19.399171   61904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0912 23:01:19.423218   61904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0912 23:01:19.446073   61904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0912 23:01:19.468351   61904 provision.go:87] duration metric: took 270.179029ms to configureAuth
	I0912 23:01:19.468380   61904 buildroot.go:189] setting minikube options for container-runtime
	I0912 23:01:19.468543   61904 config.go:182] Loaded profile config "embed-certs-378112": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 23:01:19.468609   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHHostname
	I0912 23:01:19.471457   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.471829   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:19.471857   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.472057   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHPort
	I0912 23:01:19.472257   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHKeyPath
	I0912 23:01:19.472438   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHKeyPath
	I0912 23:01:19.472614   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHUsername
	I0912 23:01:19.472756   61904 main.go:141] libmachine: Using SSH client type: native
	I0912 23:01:19.472915   61904 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.72.96 22 <nil> <nil>}
	I0912 23:01:19.472928   61904 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0912 23:01:19.710250   61904 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0912 23:01:19.710278   61904 machine.go:96] duration metric: took 884.238347ms to provisionDockerMachine
	I0912 23:01:19.710298   61904 start.go:293] postStartSetup for "embed-certs-378112" (driver="kvm2")
	I0912 23:01:19.710310   61904 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0912 23:01:19.710324   61904 main.go:141] libmachine: (embed-certs-378112) Calling .DriverName
	I0912 23:01:19.710640   61904 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0912 23:01:19.710668   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHHostname
	I0912 23:01:19.713442   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.713731   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:19.713759   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.713948   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHPort
	I0912 23:01:19.714180   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHKeyPath
	I0912 23:01:19.714347   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHUsername
	I0912 23:01:19.714491   61904 sshutil.go:53] new ssh client: &{IP:192.168.72.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/embed-certs-378112/id_rsa Username:docker}
	I0912 23:01:19.800949   61904 ssh_runner.go:195] Run: cat /etc/os-release
	I0912 23:01:19.805072   61904 info.go:137] Remote host: Buildroot 2023.02.9
	I0912 23:01:19.805103   61904 filesync.go:126] Scanning /home/jenkins/minikube-integration/19616-5891/.minikube/addons for local assets ...
	I0912 23:01:19.805212   61904 filesync.go:126] Scanning /home/jenkins/minikube-integration/19616-5891/.minikube/files for local assets ...
	I0912 23:01:19.805309   61904 filesync.go:149] local asset: /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem -> 130832.pem in /etc/ssl/certs
	I0912 23:01:19.805449   61904 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0912 23:01:19.815070   61904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem --> /etc/ssl/certs/130832.pem (1708 bytes)
	I0912 23:01:19.839585   61904 start.go:296] duration metric: took 129.271232ms for postStartSetup
	I0912 23:01:19.839634   61904 fix.go:56] duration metric: took 20.040811123s for fixHost
	I0912 23:01:19.839656   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHHostname
	I0912 23:01:19.843048   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.843354   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:19.843385   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.843547   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHPort
	I0912 23:01:19.843755   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHKeyPath
	I0912 23:01:19.843933   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHKeyPath
	I0912 23:01:19.844078   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHUsername
	I0912 23:01:19.844257   61904 main.go:141] libmachine: Using SSH client type: native
	I0912 23:01:19.844432   61904 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.72.96 22 <nil> <nil>}
	I0912 23:01:19.844443   61904 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0912 23:01:19.958747   61904 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726182079.929826480
	
	I0912 23:01:19.958771   61904 fix.go:216] guest clock: 1726182079.929826480
	I0912 23:01:19.958779   61904 fix.go:229] Guest: 2024-09-12 23:01:19.92982648 +0000 UTC Remote: 2024-09-12 23:01:19.839638734 +0000 UTC m=+244.095238395 (delta=90.187746ms)
	I0912 23:01:19.958826   61904 fix.go:200] guest clock delta is within tolerance: 90.187746ms
	I0912 23:01:19.958832   61904 start.go:83] releasing machines lock for "embed-certs-378112", held for 20.160038696s
	I0912 23:01:19.958866   61904 main.go:141] libmachine: (embed-certs-378112) Calling .DriverName
	I0912 23:01:19.959202   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetIP
	I0912 23:01:19.962158   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.962528   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:19.962562   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.962743   61904 main.go:141] libmachine: (embed-certs-378112) Calling .DriverName
	I0912 23:01:19.963246   61904 main.go:141] libmachine: (embed-certs-378112) Calling .DriverName
	I0912 23:01:19.963421   61904 main.go:141] libmachine: (embed-certs-378112) Calling .DriverName
	I0912 23:01:19.963518   61904 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0912 23:01:19.963564   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHHostname
	I0912 23:01:19.963703   61904 ssh_runner.go:195] Run: cat /version.json
	I0912 23:01:19.963766   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHHostname
	I0912 23:01:19.966317   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.966517   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.966692   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:19.966723   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.966921   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHPort
	I0912 23:01:19.966977   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:19.967023   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.967100   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHKeyPath
	I0912 23:01:19.967191   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHPort
	I0912 23:01:19.967268   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHUsername
	I0912 23:01:19.967332   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHKeyPath
	I0912 23:01:19.967395   61904 sshutil.go:53] new ssh client: &{IP:192.168.72.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/embed-certs-378112/id_rsa Username:docker}
	I0912 23:01:19.967439   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHUsername
	I0912 23:01:19.967594   61904 sshutil.go:53] new ssh client: &{IP:192.168.72.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/embed-certs-378112/id_rsa Username:docker}
	I0912 23:01:20.054413   61904 ssh_runner.go:195] Run: systemctl --version
	I0912 23:01:20.087300   61904 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0912 23:01:20.235085   61904 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0912 23:01:20.240843   61904 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0912 23:01:20.240922   61904 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0912 23:01:20.256317   61904 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0912 23:01:20.256341   61904 start.go:495] detecting cgroup driver to use...
	I0912 23:01:20.256411   61904 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0912 23:01:20.271684   61904 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0912 23:01:20.285491   61904 docker.go:217] disabling cri-docker service (if available) ...
	I0912 23:01:20.285562   61904 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0912 23:01:20.298889   61904 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0912 23:01:20.314455   61904 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0912 23:01:20.438483   61904 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0912 23:01:20.594684   61904 docker.go:233] disabling docker service ...
	I0912 23:01:20.594761   61904 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0912 23:01:20.609090   61904 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0912 23:01:20.624440   61904 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0912 23:01:20.747699   61904 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0912 23:01:20.899726   61904 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0912 23:01:20.914107   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0912 23:01:20.933523   61904 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0912 23:01:20.933599   61904 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:20.946067   61904 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0912 23:01:20.946129   61904 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:20.957575   61904 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:20.968759   61904 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:20.980280   61904 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0912 23:01:20.991281   61904 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:21.002926   61904 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:21.021743   61904 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:21.032256   61904 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0912 23:01:21.041783   61904 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0912 23:01:21.041853   61904 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0912 23:01:21.054605   61904 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0912 23:01:21.064411   61904 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 23:01:21.198195   61904 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0912 23:01:21.289923   61904 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0912 23:01:21.290018   61904 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0912 23:01:21.294505   61904 start.go:563] Will wait 60s for crictl version
	I0912 23:01:21.294572   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:01:21.297928   61904 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0912 23:01:21.335650   61904 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0912 23:01:21.335734   61904 ssh_runner.go:195] Run: crio --version
	I0912 23:01:21.364876   61904 ssh_runner.go:195] Run: crio --version
	I0912 23:01:21.395463   61904 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0912 23:01:19.983746   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .Start
	I0912 23:01:19.983971   62386 main.go:141] libmachine: (old-k8s-version-642238) Ensuring networks are active...
	I0912 23:01:19.984890   62386 main.go:141] libmachine: (old-k8s-version-642238) Ensuring network default is active
	I0912 23:01:19.985345   62386 main.go:141] libmachine: (old-k8s-version-642238) Ensuring network mk-old-k8s-version-642238 is active
	I0912 23:01:19.985788   62386 main.go:141] libmachine: (old-k8s-version-642238) Getting domain xml...
	I0912 23:01:19.986827   62386 main.go:141] libmachine: (old-k8s-version-642238) Creating domain...
	I0912 23:01:21.258792   62386 main.go:141] libmachine: (old-k8s-version-642238) Waiting to get IP...
	I0912 23:01:21.259838   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:21.260300   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 23:01:21.260434   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 23:01:21.260300   63267 retry.go:31] will retry after 272.429869ms: waiting for machine to come up
	I0912 23:01:21.534713   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:21.535102   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 23:01:21.535131   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 23:01:21.535060   63267 retry.go:31] will retry after 352.031053ms: waiting for machine to come up
	I0912 23:01:21.888724   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:21.889235   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 23:01:21.889260   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 23:01:21.889212   63267 retry.go:31] will retry after 405.51409ms: waiting for machine to come up
	I0912 23:01:22.296746   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:22.297242   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 23:01:22.297286   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 23:01:22.297190   63267 retry.go:31] will retry after 607.76308ms: waiting for machine to come up
	I0912 23:01:22.907030   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:22.907784   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 23:01:22.907824   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 23:01:22.907659   63267 retry.go:31] will retry after 692.773261ms: waiting for machine to come up
	I0912 23:01:23.602242   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:23.602679   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 23:01:23.602701   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 23:01:23.602642   63267 retry.go:31] will retry after 591.018151ms: waiting for machine to come up
	I0912 23:01:24.195571   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:24.196100   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 23:01:24.196130   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 23:01:24.196046   63267 retry.go:31] will retry after 1.185264475s: waiting for machine to come up
	I0912 23:01:21.396852   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetIP
	I0912 23:01:21.400018   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:21.400456   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:21.400488   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:21.400730   61904 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0912 23:01:21.404606   61904 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0912 23:01:21.416408   61904 kubeadm.go:883] updating cluster {Name:embed-certs-378112 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-378112 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.96 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0912 23:01:21.416529   61904 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0912 23:01:21.416571   61904 ssh_runner.go:195] Run: sudo crictl images --output json
	I0912 23:01:21.449799   61904 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0912 23:01:21.449860   61904 ssh_runner.go:195] Run: which lz4
	I0912 23:01:21.453658   61904 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0912 23:01:21.457641   61904 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0912 23:01:21.457676   61904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0912 23:01:22.735022   61904 crio.go:462] duration metric: took 1.281408113s to copy over tarball
	I0912 23:01:22.735128   61904 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0912 23:01:24.783893   61904 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.048732092s)
	I0912 23:01:24.783935   61904 crio.go:469] duration metric: took 2.048876223s to extract the tarball
	I0912 23:01:24.783945   61904 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0912 23:01:24.820170   61904 ssh_runner.go:195] Run: sudo crictl images --output json
	I0912 23:01:24.866833   61904 crio.go:514] all images are preloaded for cri-o runtime.
	I0912 23:01:24.866861   61904 cache_images.go:84] Images are preloaded, skipping loading
	I0912 23:01:24.866870   61904 kubeadm.go:934] updating node { 192.168.72.96 8443 v1.31.1 crio true true} ...
	I0912 23:01:24.866990   61904 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-378112 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.96
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-378112 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0912 23:01:24.867073   61904 ssh_runner.go:195] Run: crio config
	I0912 23:01:24.912893   61904 cni.go:84] Creating CNI manager for ""
	I0912 23:01:24.912924   61904 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0912 23:01:24.912940   61904 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0912 23:01:24.912967   61904 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.96 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-378112 NodeName:embed-certs-378112 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.96"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.96 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0912 23:01:24.913155   61904 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.96
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-378112"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.96
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.96"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0912 23:01:24.913230   61904 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0912 23:01:24.922946   61904 binaries.go:44] Found k8s binaries, skipping transfer
	I0912 23:01:24.923013   61904 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0912 23:01:24.932931   61904 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0912 23:01:24.949482   61904 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0912 23:01:24.965877   61904 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0912 23:01:24.983125   61904 ssh_runner.go:195] Run: grep 192.168.72.96	control-plane.minikube.internal$ /etc/hosts
	I0912 23:01:24.987056   61904 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.96	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0912 23:01:24.998939   61904 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 23:01:25.113496   61904 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0912 23:01:25.129703   61904 certs.go:68] Setting up /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/embed-certs-378112 for IP: 192.168.72.96
	I0912 23:01:25.129726   61904 certs.go:194] generating shared ca certs ...
	I0912 23:01:25.129741   61904 certs.go:226] acquiring lock for ca certs: {Name:mk5e19f2c2757edc3fcb6b43a39efee0885b349c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 23:01:25.129971   61904 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.key
	I0912 23:01:25.130086   61904 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.key
	I0912 23:01:25.130110   61904 certs.go:256] generating profile certs ...
	I0912 23:01:25.130237   61904 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/embed-certs-378112/client.key
	I0912 23:01:25.130340   61904 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/embed-certs-378112/apiserver.key.dbbe0c1f
	I0912 23:01:25.130407   61904 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/embed-certs-378112/proxy-client.key
	I0912 23:01:25.130579   61904 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083.pem (1338 bytes)
	W0912 23:01:25.130626   61904 certs.go:480] ignoring /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083_empty.pem, impossibly tiny 0 bytes
	I0912 23:01:25.130651   61904 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem (1679 bytes)
	I0912 23:01:25.130703   61904 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem (1082 bytes)
	I0912 23:01:25.130745   61904 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem (1123 bytes)
	I0912 23:01:25.130792   61904 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem (1679 bytes)
	I0912 23:01:25.130860   61904 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem (1708 bytes)
	I0912 23:01:25.131603   61904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0912 23:01:25.176163   61904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0912 23:01:25.220174   61904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0912 23:01:25.265831   61904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0912 23:01:25.296965   61904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/embed-certs-378112/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0912 23:01:25.321038   61904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/embed-certs-378112/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0912 23:01:25.345231   61904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/embed-certs-378112/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0912 23:01:25.369171   61904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/embed-certs-378112/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0912 23:01:25.394204   61904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083.pem --> /usr/share/ca-certificates/13083.pem (1338 bytes)
	I0912 23:01:25.417915   61904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem --> /usr/share/ca-certificates/130832.pem (1708 bytes)
	I0912 23:01:25.442303   61904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0912 23:01:25.465565   61904 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0912 23:01:25.482722   61904 ssh_runner.go:195] Run: openssl version
	I0912 23:01:25.488448   61904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130832.pem && ln -fs /usr/share/ca-certificates/130832.pem /etc/ssl/certs/130832.pem"
	I0912 23:01:25.499394   61904 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130832.pem
	I0912 23:01:25.503818   61904 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 12 21:47 /usr/share/ca-certificates/130832.pem
	I0912 23:01:25.503891   61904 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130832.pem
	I0912 23:01:25.509382   61904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130832.pem /etc/ssl/certs/3ec20f2e.0"
	I0912 23:01:25.519646   61904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0912 23:01:25.530205   61904 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0912 23:01:25.534926   61904 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 12 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0912 23:01:25.534995   61904 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0912 23:01:25.540498   61904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0912 23:01:25.551236   61904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13083.pem && ln -fs /usr/share/ca-certificates/13083.pem /etc/ssl/certs/13083.pem"
	I0912 23:01:25.561851   61904 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13083.pem
	I0912 23:01:25.566492   61904 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 12 21:47 /usr/share/ca-certificates/13083.pem
	I0912 23:01:25.566560   61904 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13083.pem
	I0912 23:01:25.572221   61904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13083.pem /etc/ssl/certs/51391683.0"
	I0912 23:01:25.582775   61904 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0912 23:01:25.587274   61904 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0912 23:01:25.593126   61904 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0912 23:01:25.598929   61904 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0912 23:01:25.604590   61904 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0912 23:01:25.610344   61904 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0912 23:01:25.615931   61904 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0912 23:01:25.621575   61904 kubeadm.go:392] StartCluster: {Name:embed-certs-378112 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-378112 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.96 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 23:01:25.621708   61904 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0912 23:01:25.621771   61904 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0912 23:01:25.659165   61904 cri.go:89] found id: ""
	I0912 23:01:25.659225   61904 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0912 23:01:25.670718   61904 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0912 23:01:25.670740   61904 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0912 23:01:25.670812   61904 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0912 23:01:25.680672   61904 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0912 23:01:25.681705   61904 kubeconfig.go:125] found "embed-certs-378112" server: "https://192.168.72.96:8443"
	I0912 23:01:25.683693   61904 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0912 23:01:25.693765   61904 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.96
	I0912 23:01:25.693795   61904 kubeadm.go:1160] stopping kube-system containers ...
	I0912 23:01:25.693805   61904 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0912 23:01:25.693874   61904 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0912 23:01:25.728800   61904 cri.go:89] found id: ""
	I0912 23:01:25.728879   61904 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0912 23:01:25.744949   61904 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0912 23:01:25.754735   61904 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0912 23:01:25.754756   61904 kubeadm.go:157] found existing configuration files:
	
	I0912 23:01:25.754820   61904 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0912 23:01:25.763678   61904 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0912 23:01:25.763740   61904 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0912 23:01:25.772744   61904 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0912 23:01:25.383446   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:25.383892   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 23:01:25.383912   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 23:01:25.383847   63267 retry.go:31] will retry after 1.399744787s: waiting for machine to come up
	I0912 23:01:26.785939   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:26.786489   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 23:01:26.786520   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 23:01:26.786425   63267 retry.go:31] will retry after 1.336566382s: waiting for machine to come up
	I0912 23:01:28.124647   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:28.125141   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 23:01:28.125172   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 23:01:28.125087   63267 retry.go:31] will retry after 1.527292388s: waiting for machine to come up
	I0912 23:01:25.782080   61904 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0912 23:01:25.782143   61904 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0912 23:01:25.791585   61904 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0912 23:01:25.801238   61904 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0912 23:01:25.801315   61904 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0912 23:01:25.810819   61904 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0912 23:01:25.819786   61904 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0912 23:01:25.819888   61904 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0912 23:01:25.829135   61904 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0912 23:01:25.838572   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:01:25.944339   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:01:26.566348   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:01:26.771125   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:01:26.859227   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
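	The restart path above re-runs individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated /var/tmp/minikube/kubeadm.yaml rather than performing a full kubeadm init. A minimal sketch of how one could confirm on the node that each phase produced its expected artifacts, assuming the standard kubeadm paths:

	    # certificates written by the certs phase
	    sudo ls /etc/kubernetes/pki/apiserver.crt /etc/kubernetes/pki/etcd/server.crt
	    # kubeconfigs written by the kubeconfig phase (the same files the stale-config check above looked for)
	    sudo ls /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	    # static pod manifests written by the control-plane and etcd phases
	    sudo ls /etc/kubernetes/manifests/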
	I0912 23:01:26.946762   61904 api_server.go:52] waiting for apiserver process to appear ...
	I0912 23:01:26.946884   61904 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:27.447964   61904 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:27.947775   61904 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:28.447415   61904 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:28.947184   61904 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:28.963513   61904 api_server.go:72] duration metric: took 2.016750981s to wait for apiserver process to appear ...
	I0912 23:01:28.963554   61904 api_server.go:88] waiting for apiserver healthz status ...
	I0912 23:01:28.963577   61904 api_server.go:253] Checking apiserver healthz at https://192.168.72.96:8443/healthz ...
	I0912 23:01:28.964155   61904 api_server.go:269] stopped: https://192.168.72.96:8443/healthz: Get "https://192.168.72.96:8443/healthz": dial tcp 192.168.72.96:8443: connect: connection refused
	I0912 23:01:29.463718   61904 api_server.go:253] Checking apiserver healthz at https://192.168.72.96:8443/healthz ...
	I0912 23:01:31.369513   61904 api_server.go:279] https://192.168.72.96:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0912 23:01:31.369555   61904 api_server.go:103] status: https://192.168.72.96:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0912 23:01:31.369571   61904 api_server.go:253] Checking apiserver healthz at https://192.168.72.96:8443/healthz ...
	I0912 23:01:31.423901   61904 api_server.go:279] https://192.168.72.96:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0912 23:01:31.423936   61904 api_server.go:103] status: https://192.168.72.96:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0912 23:01:31.464148   61904 api_server.go:253] Checking apiserver healthz at https://192.168.72.96:8443/healthz ...
	I0912 23:01:31.469495   61904 api_server.go:279] https://192.168.72.96:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0912 23:01:31.469522   61904 api_server.go:103] status: https://192.168.72.96:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0912 23:01:31.963894   61904 api_server.go:253] Checking apiserver healthz at https://192.168.72.96:8443/healthz ...
	I0912 23:01:31.972640   61904 api_server.go:279] https://192.168.72.96:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0912 23:01:31.972671   61904 api_server.go:103] status: https://192.168.72.96:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0912 23:01:32.463809   61904 api_server.go:253] Checking apiserver healthz at https://192.168.72.96:8443/healthz ...
	I0912 23:01:32.475603   61904 api_server.go:279] https://192.168.72.96:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0912 23:01:32.475640   61904 api_server.go:103] status: https://192.168.72.96:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0912 23:01:32.964250   61904 api_server.go:253] Checking apiserver healthz at https://192.168.72.96:8443/healthz ...
	I0912 23:01:32.968710   61904 api_server.go:279] https://192.168.72.96:8443/healthz returned 200:
	ok
	I0912 23:01:32.975414   61904 api_server.go:141] control plane version: v1.31.1
	I0912 23:01:32.975442   61904 api_server.go:131] duration metric: took 4.011879751s to wait for apiserver health ...
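	The healthz progression above (connection refused, then 403 for the anonymous probe, then 500 while bootstrap-roles and the system priority classes are still being created, then 200) can be reproduced by hand against the same endpoint. A hedged sketch, with the IP/port taken from the log and the on-node kubeconfig path taken from the addon-apply commands further down:

	    # unauthenticated probe; a 403 here is expected until bootstrap RBAC exists
	    curl -k "https://192.168.72.96:8443/healthz?verbose"
	    # authenticated probe from inside the VM, showing the per-check [+]/[-] detail
	    sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl get --raw '/healthz?verbose'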
	I0912 23:01:32.975451   61904 cni.go:84] Creating CNI manager for ""
	I0912 23:01:32.975456   61904 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0912 23:01:32.977249   61904 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0912 23:01:29.654841   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:29.655236   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 23:01:29.655264   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 23:01:29.655183   63267 retry.go:31] will retry after 2.34568858s: waiting for machine to come up
	I0912 23:01:32.002617   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:32.003211   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 23:01:32.003242   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 23:01:32.003150   63267 retry.go:31] will retry after 2.273120763s: waiting for machine to come up
	I0912 23:01:34.279665   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:34.280098   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 23:01:34.280122   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 23:01:34.280064   63267 retry.go:31] will retry after 3.937702941s: waiting for machine to come up
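	The retry loop above is libmachine waiting for the freshly started VM to acquire a DHCP lease on the libvirt network; until the guest requests an address there is nothing to report for its MAC. The equivalent manual checks, using the domain and network names from the log, would be roughly:

	    # leases handed out on the minikube-created network
	    virsh net-dhcp-leases mk-old-k8s-version-642238
	    # current interface address of the domain, if it has one yet
	    virsh domifaddr old-k8s-version-642238 --source lease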
	I0912 23:01:32.978610   61904 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0912 23:01:32.994079   61904 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
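	With the kvm2 driver and the crio runtime there is no driver-supplied CNI, so a plain bridge CNI is configured by copying a single generated conflist (496 bytes per the line above) into /etc/cni/net.d. A quick sanity check of that configuration on the node, assuming the standard CNI paths:

	    # the generated bridge conflist
	    sudo cat /etc/cni/net.d/1-k8s.conflist
	    # the plugin binaries the conflist refers to (bridge, host-local, portmap, ...) must exist here
	    ls /opt/cni/bin/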
	I0912 23:01:33.042253   61904 system_pods.go:43] waiting for kube-system pods to appear ...
	I0912 23:01:33.052323   61904 system_pods.go:59] 8 kube-system pods found
	I0912 23:01:33.052361   61904 system_pods.go:61] "coredns-7c65d6cfc9-m8t6h" [93c63198-ebd2-4e88-9be8-912425b1eb84] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0912 23:01:33.052369   61904 system_pods.go:61] "etcd-embed-certs-378112" [cc716756-abda-447a-ad36-bfc89c129bdf] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0912 23:01:33.052376   61904 system_pods.go:61] "kube-apiserver-embed-certs-378112" [039a7348-41bf-481f-9218-3ea0c2ff1373] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0912 23:01:33.052387   61904 system_pods.go:61] "kube-controller-manager-embed-certs-378112" [9bcb8af0-6e4b-405a-94a1-5be70d737cfa] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0912 23:01:33.052396   61904 system_pods.go:61] "kube-proxy-fvbbq" [b172754e-bb5a-40ba-a9be-a7632081defc] Running
	I0912 23:01:33.052406   61904 system_pods.go:61] "kube-scheduler-embed-certs-378112" [f7cb022f-6c15-4c70-916f-39313199effe] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0912 23:01:33.052418   61904 system_pods.go:61] "metrics-server-6867b74b74-kvpqz" [04e47cfd-bada-4cbd-8792-db4edebfb282] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0912 23:01:33.052426   61904 system_pods.go:61] "storage-provisioner" [a1840d2a-8e08-4fa2-9ed5-ac96fb0baf4d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0912 23:01:33.052438   61904 system_pods.go:74] duration metric: took 10.162234ms to wait for pod list to return data ...
	I0912 23:01:33.052448   61904 node_conditions.go:102] verifying NodePressure condition ...
	I0912 23:01:33.060217   61904 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0912 23:01:33.060263   61904 node_conditions.go:123] node cpu capacity is 2
	I0912 23:01:33.060284   61904 node_conditions.go:105] duration metric: took 7.831444ms to run NodePressure ...
	I0912 23:01:33.060338   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:01:33.331554   61904 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0912 23:01:33.337181   61904 kubeadm.go:739] kubelet initialised
	I0912 23:01:33.337202   61904 kubeadm.go:740] duration metric: took 5.622367ms waiting for restarted kubelet to initialise ...
	I0912 23:01:33.337209   61904 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 23:01:33.342427   61904 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-m8t6h" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:33.346602   61904 pod_ready.go:98] node "embed-certs-378112" hosting pod "coredns-7c65d6cfc9-m8t6h" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-378112" has status "Ready":"False"
	I0912 23:01:33.346624   61904 pod_ready.go:82] duration metric: took 4.167981ms for pod "coredns-7c65d6cfc9-m8t6h" in "kube-system" namespace to be "Ready" ...
	E0912 23:01:33.346635   61904 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-378112" hosting pod "coredns-7c65d6cfc9-m8t6h" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-378112" has status "Ready":"False"
	I0912 23:01:33.346643   61904 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-378112" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:33.350240   61904 pod_ready.go:98] node "embed-certs-378112" hosting pod "etcd-embed-certs-378112" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-378112" has status "Ready":"False"
	I0912 23:01:33.350258   61904 pod_ready.go:82] duration metric: took 3.605305ms for pod "etcd-embed-certs-378112" in "kube-system" namespace to be "Ready" ...
	E0912 23:01:33.350267   61904 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-378112" hosting pod "etcd-embed-certs-378112" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-378112" has status "Ready":"False"
	I0912 23:01:33.350274   61904 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-378112" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:33.353756   61904 pod_ready.go:98] node "embed-certs-378112" hosting pod "kube-apiserver-embed-certs-378112" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-378112" has status "Ready":"False"
	I0912 23:01:33.353775   61904 pod_ready.go:82] duration metric: took 3.492388ms for pod "kube-apiserver-embed-certs-378112" in "kube-system" namespace to be "Ready" ...
	E0912 23:01:33.353785   61904 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-378112" hosting pod "kube-apiserver-embed-certs-378112" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-378112" has status "Ready":"False"
	I0912 23:01:33.353792   61904 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-378112" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:33.445529   61904 pod_ready.go:98] node "embed-certs-378112" hosting pod "kube-controller-manager-embed-certs-378112" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-378112" has status "Ready":"False"
	I0912 23:01:33.445574   61904 pod_ready.go:82] duration metric: took 91.770466ms for pod "kube-controller-manager-embed-certs-378112" in "kube-system" namespace to be "Ready" ...
	E0912 23:01:33.445588   61904 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-378112" hosting pod "kube-controller-manager-embed-certs-378112" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-378112" has status "Ready":"False"
	I0912 23:01:33.445597   61904 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-fvbbq" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:33.845443   61904 pod_ready.go:98] node "embed-certs-378112" hosting pod "kube-proxy-fvbbq" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-378112" has status "Ready":"False"
	I0912 23:01:33.845470   61904 pod_ready.go:82] duration metric: took 399.864816ms for pod "kube-proxy-fvbbq" in "kube-system" namespace to be "Ready" ...
	E0912 23:01:33.845479   61904 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-378112" hosting pod "kube-proxy-fvbbq" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-378112" has status "Ready":"False"
	I0912 23:01:33.845484   61904 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-378112" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:34.245943   61904 pod_ready.go:98] node "embed-certs-378112" hosting pod "kube-scheduler-embed-certs-378112" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-378112" has status "Ready":"False"
	I0912 23:01:34.245969   61904 pod_ready.go:82] duration metric: took 400.478543ms for pod "kube-scheduler-embed-certs-378112" in "kube-system" namespace to be "Ready" ...
	E0912 23:01:34.245979   61904 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-378112" hosting pod "kube-scheduler-embed-certs-378112" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-378112" has status "Ready":"False"
	I0912 23:01:34.245985   61904 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:34.651801   61904 pod_ready.go:98] node "embed-certs-378112" hosting pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-378112" has status "Ready":"False"
	I0912 23:01:34.651826   61904 pod_ready.go:82] duration metric: took 405.832705ms for pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace to be "Ready" ...
	E0912 23:01:34.651836   61904 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-378112" hosting pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-378112" has status "Ready":"False"
	I0912 23:01:34.651843   61904 pod_ready.go:39] duration metric: took 1.314625851s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
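	Every WaitExtra check above is skipped for the same reason: immediately after the kubelet restart the node object still reports Ready=False, so per-pod readiness is not evaluated yet. From the test host the same state could be inspected with something like the following, assuming the kubeconfig context carries the profile name as minikube normally sets it:

	    kubectl --context embed-certs-378112 get nodes
	    kubectl --context embed-certs-378112 get pods -n kube-system -o wide
	    kubectl --context embed-certs-378112 describe node embed-certs-378112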
	I0912 23:01:34.651859   61904 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0912 23:01:34.665332   61904 ops.go:34] apiserver oom_adj: -16
	I0912 23:01:34.665357   61904 kubeadm.go:597] duration metric: took 8.994610882s to restartPrimaryControlPlane
	I0912 23:01:34.665366   61904 kubeadm.go:394] duration metric: took 9.043796768s to StartCluster
	I0912 23:01:34.665381   61904 settings.go:142] acquiring lock: {Name:mk9c957feafb8d7ccd833ad0c106ef81ecfe5ba8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 23:01:34.665454   61904 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19616-5891/kubeconfig
	I0912 23:01:34.667036   61904 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/kubeconfig: {Name:mkffb46c3e9d2b8baebc7237b48bf41bccf1a52c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 23:01:34.667262   61904 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.96 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0912 23:01:34.667363   61904 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0912 23:01:34.667450   61904 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-378112"
	I0912 23:01:34.667468   61904 config.go:182] Loaded profile config "embed-certs-378112": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 23:01:34.667476   61904 addons.go:69] Setting default-storageclass=true in profile "embed-certs-378112"
	I0912 23:01:34.667543   61904 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-378112"
	I0912 23:01:34.667520   61904 addons.go:69] Setting metrics-server=true in profile "embed-certs-378112"
	I0912 23:01:34.667609   61904 addons.go:234] Setting addon metrics-server=true in "embed-certs-378112"
	W0912 23:01:34.667624   61904 addons.go:243] addon metrics-server should already be in state true
	I0912 23:01:34.667661   61904 host.go:66] Checking if "embed-certs-378112" exists ...
	I0912 23:01:34.667490   61904 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-378112"
	W0912 23:01:34.667710   61904 addons.go:243] addon storage-provisioner should already be in state true
	I0912 23:01:34.667778   61904 host.go:66] Checking if "embed-certs-378112" exists ...
	I0912 23:01:34.667994   61904 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:01:34.668049   61904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:01:34.668138   61904 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:01:34.668155   61904 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:01:34.668171   61904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:01:34.668180   61904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:01:34.670091   61904 out.go:177] * Verifying Kubernetes components...
	I0912 23:01:34.671777   61904 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 23:01:34.683876   61904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37413
	I0912 23:01:34.684025   61904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37371
	I0912 23:01:34.684434   61904 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:01:34.684541   61904 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:01:34.684995   61904 main.go:141] libmachine: Using API Version  1
	I0912 23:01:34.685014   61904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:01:34.685118   61904 main.go:141] libmachine: Using API Version  1
	I0912 23:01:34.685140   61904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:01:34.685468   61904 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:01:34.685468   61904 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:01:34.685668   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetState
	I0912 23:01:34.686104   61904 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:01:34.686156   61904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:01:34.688211   61904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39067
	I0912 23:01:34.688607   61904 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:01:34.689047   61904 addons.go:234] Setting addon default-storageclass=true in "embed-certs-378112"
	W0912 23:01:34.689066   61904 addons.go:243] addon default-storageclass should already be in state true
	I0912 23:01:34.689091   61904 host.go:66] Checking if "embed-certs-378112" exists ...
	I0912 23:01:34.689116   61904 main.go:141] libmachine: Using API Version  1
	I0912 23:01:34.689146   61904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:01:34.689478   61904 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:01:34.689501   61904 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:01:34.689511   61904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:01:34.690057   61904 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:01:34.690083   61904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:01:34.702965   61904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40825
	I0912 23:01:34.703535   61904 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:01:34.704131   61904 main.go:141] libmachine: Using API Version  1
	I0912 23:01:34.704151   61904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:01:34.704178   61904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39229
	I0912 23:01:34.704481   61904 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:01:34.704684   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetState
	I0912 23:01:34.704684   61904 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:01:34.705101   61904 main.go:141] libmachine: Using API Version  1
	I0912 23:01:34.705122   61904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:01:34.705413   61904 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:01:34.705561   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetState
	I0912 23:01:34.706872   61904 main.go:141] libmachine: (embed-certs-378112) Calling .DriverName
	I0912 23:01:34.707279   61904 main.go:141] libmachine: (embed-certs-378112) Calling .DriverName
	I0912 23:01:34.708583   61904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36665
	I0912 23:01:34.708752   61904 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 23:01:34.708828   61904 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0912 23:01:34.708966   61904 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:01:34.709420   61904 main.go:141] libmachine: Using API Version  1
	I0912 23:01:34.709442   61904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:01:34.709901   61904 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:01:34.710348   61904 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:01:34.710352   61904 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0912 23:01:34.710368   61904 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0912 23:01:34.710382   61904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:01:34.710397   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHHostname
	I0912 23:01:34.710705   61904 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0912 23:01:34.713777   61904 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0912 23:01:34.713809   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHHostname
	I0912 23:01:34.717857   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:34.718160   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:34.718335   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:34.718358   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:34.718442   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:34.718473   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:34.718651   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHPort
	I0912 23:01:34.718727   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHPort
	I0912 23:01:34.718812   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHKeyPath
	I0912 23:01:34.718866   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHKeyPath
	I0912 23:01:34.718988   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHUsername
	I0912 23:01:34.719039   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHUsername
	I0912 23:01:34.719144   61904 sshutil.go:53] new ssh client: &{IP:192.168.72.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/embed-certs-378112/id_rsa Username:docker}
	I0912 23:01:34.719169   61904 sshutil.go:53] new ssh client: &{IP:192.168.72.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/embed-certs-378112/id_rsa Username:docker}
	I0912 23:01:34.730675   61904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39163
	I0912 23:01:34.731210   61904 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:01:34.731901   61904 main.go:141] libmachine: Using API Version  1
	I0912 23:01:34.731934   61904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:01:34.732317   61904 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:01:34.732493   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetState
	I0912 23:01:34.734338   61904 main.go:141] libmachine: (embed-certs-378112) Calling .DriverName
	I0912 23:01:34.734601   61904 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0912 23:01:34.734615   61904 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0912 23:01:34.734637   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHHostname
	I0912 23:01:34.737958   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:34.738401   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:34.738429   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:34.738637   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHPort
	I0912 23:01:34.738823   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHKeyPath
	I0912 23:01:34.739015   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHUsername
	I0912 23:01:34.739166   61904 sshutil.go:53] new ssh client: &{IP:192.168.72.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/embed-certs-378112/id_rsa Username:docker}
	I0912 23:01:34.873510   61904 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0912 23:01:34.891329   61904 node_ready.go:35] waiting up to 6m0s for node "embed-certs-378112" to be "Ready" ...
	I0912 23:01:34.991135   61904 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0912 23:01:34.991169   61904 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0912 23:01:35.007241   61904 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0912 23:01:35.018684   61904 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0912 23:01:35.018712   61904 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0912 23:01:35.028842   61904 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0912 23:01:35.047693   61904 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0912 23:01:35.047720   61904 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0912 23:01:35.101399   61904 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0912 23:01:36.046822   61904 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.03953394s)
	I0912 23:01:36.046851   61904 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.017977641s)
	I0912 23:01:36.046882   61904 main.go:141] libmachine: Making call to close driver server
	I0912 23:01:36.046889   61904 main.go:141] libmachine: Making call to close driver server
	I0912 23:01:36.046900   61904 main.go:141] libmachine: (embed-certs-378112) Calling .Close
	I0912 23:01:36.046901   61904 main.go:141] libmachine: (embed-certs-378112) Calling .Close
	I0912 23:01:36.047207   61904 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:01:36.047221   61904 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:01:36.047230   61904 main.go:141] libmachine: Making call to close driver server
	I0912 23:01:36.047237   61904 main.go:141] libmachine: (embed-certs-378112) Calling .Close
	I0912 23:01:36.047269   61904 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:01:36.047280   61904 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:01:36.047312   61904 main.go:141] libmachine: Making call to close driver server
	I0912 23:01:36.047378   61904 main.go:141] libmachine: (embed-certs-378112) Calling .Close
	I0912 23:01:36.047577   61904 main.go:141] libmachine: (embed-certs-378112) DBG | Closing plugin on server side
	I0912 23:01:36.047624   61904 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:01:36.047637   61904 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:01:36.047639   61904 main.go:141] libmachine: (embed-certs-378112) DBG | Closing plugin on server side
	I0912 23:01:36.047691   61904 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:01:36.047705   61904 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:01:36.055732   61904 main.go:141] libmachine: Making call to close driver server
	I0912 23:01:36.055751   61904 main.go:141] libmachine: (embed-certs-378112) Calling .Close
	I0912 23:01:36.056018   61904 main.go:141] libmachine: (embed-certs-378112) DBG | Closing plugin on server side
	I0912 23:01:36.056072   61904 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:01:36.056085   61904 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:01:36.062586   61904 main.go:141] libmachine: Making call to close driver server
	I0912 23:01:36.062612   61904 main.go:141] libmachine: (embed-certs-378112) Calling .Close
	I0912 23:01:36.062906   61904 main.go:141] libmachine: (embed-certs-378112) DBG | Closing plugin on server side
	I0912 23:01:36.062920   61904 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:01:36.062936   61904 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:01:36.062955   61904 main.go:141] libmachine: Making call to close driver server
	I0912 23:01:36.062979   61904 main.go:141] libmachine: (embed-certs-378112) Calling .Close
	I0912 23:01:36.063225   61904 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:01:36.063243   61904 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:01:36.063254   61904 addons.go:475] Verifying addon metrics-server=true in "embed-certs-378112"
	I0912 23:01:36.065321   61904 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
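	Whether metrics-server becomes usable after this point depends on its APIService turning Available; note that the image logged for the metrics-server addon above is fake.domain/registry.k8s.io/echoserver:1.4, so the deployment is unlikely to become ready in this run. A hedged way to check the addon state by hand (v1beta1.metrics.k8s.io is the APIService metrics-server registers; context name assumed to match the profile):

	    kubectl --context embed-certs-378112 -n kube-system get deploy metrics-server
	    kubectl --context embed-certs-378112 get apiservices v1beta1.metrics.k8s.io
	    kubectl --context embed-certs-378112 top nodes   # only works once the metrics API is Available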
	I0912 23:01:38.221947   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.222408   62386 main.go:141] libmachine: (old-k8s-version-642238) Found IP for machine: 192.168.61.69
	I0912 23:01:38.222437   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has current primary IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.222447   62386 main.go:141] libmachine: (old-k8s-version-642238) Reserving static IP address...
	I0912 23:01:38.222943   62386 main.go:141] libmachine: (old-k8s-version-642238) Reserved static IP address: 192.168.61.69
	I0912 23:01:38.222983   62386 main.go:141] libmachine: (old-k8s-version-642238) Waiting for SSH to be available...
	I0912 23:01:38.223007   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "old-k8s-version-642238", mac: "52:54:00:75:cb:57", ip: "192.168.61.69"} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:38.223057   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | skip adding static IP to network mk-old-k8s-version-642238 - found existing host DHCP lease matching {name: "old-k8s-version-642238", mac: "52:54:00:75:cb:57", ip: "192.168.61.69"}
	I0912 23:01:38.223079   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | Getting to WaitForSSH function...
	I0912 23:01:38.225720   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.226121   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:38.226155   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.226286   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | Using SSH client type: external
	I0912 23:01:38.226308   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | Using SSH private key: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/old-k8s-version-642238/id_rsa (-rw-------)
	I0912 23:01:38.226341   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.69 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19616-5891/.minikube/machines/old-k8s-version-642238/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0912 23:01:38.226357   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | About to run SSH command:
	I0912 23:01:38.226368   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | exit 0
	I0912 23:01:38.357945   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | SSH cmd err, output: <nil>: 
	I0912 23:01:38.358320   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetConfigRaw
	I0912 23:01:38.358887   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetIP
	I0912 23:01:38.361728   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.362098   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:38.362133   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.362372   62386 profile.go:143] Saving config to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238/config.json ...
	I0912 23:01:38.362640   62386 machine.go:93] provisionDockerMachine start ...
	I0912 23:01:38.362663   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .DriverName
	I0912 23:01:38.362897   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHHostname
	I0912 23:01:38.365251   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.365627   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:38.365656   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.365798   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHPort
	I0912 23:01:38.365969   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 23:01:38.366123   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 23:01:38.366251   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHUsername
	I0912 23:01:38.366468   62386 main.go:141] libmachine: Using SSH client type: native
	I0912 23:01:38.366691   62386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.61.69 22 <nil> <nil>}
	I0912 23:01:38.366707   62386 main.go:141] libmachine: About to run SSH command:
	hostname
	I0912 23:01:38.477548   62386 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0912 23:01:38.477575   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetMachineName
	I0912 23:01:38.477818   62386 buildroot.go:166] provisioning hostname "old-k8s-version-642238"
	I0912 23:01:38.477843   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetMachineName
	I0912 23:01:38.478029   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHHostname
	I0912 23:01:38.480368   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.480660   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:38.480683   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.480802   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHPort
	I0912 23:01:38.480981   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 23:01:38.481142   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 23:01:38.481287   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHUsername
	I0912 23:01:38.481630   62386 main.go:141] libmachine: Using SSH client type: native
	I0912 23:01:38.481846   62386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.61.69 22 <nil> <nil>}
	I0912 23:01:38.481864   62386 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-642238 && echo "old-k8s-version-642238" | sudo tee /etc/hostname
	I0912 23:01:38.606686   62386 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-642238
	
	I0912 23:01:38.606721   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHHostname
	I0912 23:01:38.609331   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.609682   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:38.609705   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.609867   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHPort
	I0912 23:01:38.610071   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 23:01:38.610297   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 23:01:38.610463   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHUsername
	I0912 23:01:38.610792   62386 main.go:141] libmachine: Using SSH client type: native
	I0912 23:01:38.610974   62386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.61.69 22 <nil> <nil>}
	I0912 23:01:38.610991   62386 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-642238' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-642238/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-642238' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0912 23:01:38.729561   62386 main.go:141] libmachine: SSH cmd err, output: <nil>: 
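	The /etc/hosts patch above is idempotent: it leaves the file alone when the hostname is already present, otherwise it rewrites the existing 127.0.1.1 entry or appends one. Below is a minimal Go sketch of the same logic over an in-memory hosts file; the patchHosts helper is hypothetical, not minikube's actual code, which runs the shell snippet over SSH.

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// patchHosts mirrors the shell snippet above: if hostname already appears at
// the end of some line, the content is returned unchanged; otherwise an
// existing 127.0.1.1 line is rewritten, or a new entry is appended.
// Hypothetical helper for illustration, not minikube's implementation.
func patchHosts(content, hostname string) string {
	hasName := regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(hostname) + `$`)
	if hasName.MatchString(content) {
		return content
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	entry := "127.0.1.1 " + hostname
	if loopback.MatchString(content) {
		return loopback.ReplaceAllString(content, entry)
	}
	if !strings.HasSuffix(content, "\n") {
		content += "\n"
	}
	return content + entry + "\n"
}

func main() {
	hosts := "127.0.0.1 localhost\n127.0.1.1 minikube\n"
	fmt.Print(patchHosts(hosts, "old-k8s-version-642238"))
}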
	I0912 23:01:38.729588   62386 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19616-5891/.minikube CaCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19616-5891/.minikube}
	I0912 23:01:38.729664   62386 buildroot.go:174] setting up certificates
	I0912 23:01:38.729674   62386 provision.go:84] configureAuth start
	I0912 23:01:38.729686   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetMachineName
	I0912 23:01:38.729945   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetIP
	I0912 23:01:38.732718   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.733269   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:38.733302   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.733481   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHHostname
	I0912 23:01:38.735610   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.735925   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:38.735950   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.736074   62386 provision.go:143] copyHostCerts
	I0912 23:01:38.736129   62386 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem, removing ...
	I0912 23:01:38.736142   62386 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem
	I0912 23:01:38.736197   62386 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem (1082 bytes)
	I0912 23:01:38.736293   62386 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem, removing ...
	I0912 23:01:38.736306   62386 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem
	I0912 23:01:38.736330   62386 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem (1123 bytes)
	I0912 23:01:38.736390   62386 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem, removing ...
	I0912 23:01:38.736397   62386 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem
	I0912 23:01:38.736413   62386 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem (1679 bytes)
	I0912 23:01:38.736460   62386 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-642238 san=[127.0.0.1 192.168.61.69 localhost minikube old-k8s-version-642238]
	I0912 23:01:38.940760   62386 provision.go:177] copyRemoteCerts
	I0912 23:01:38.940819   62386 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0912 23:01:38.940846   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHHostname
	I0912 23:01:38.943954   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.944274   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:38.944304   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.944479   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHPort
	I0912 23:01:38.944688   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 23:01:38.944884   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHUsername
	I0912 23:01:38.945023   62386 sshutil.go:53] new ssh client: &{IP:192.168.61.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/old-k8s-version-642238/id_rsa Username:docker}
	I0912 23:01:39.032396   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0912 23:01:39.055559   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0912 23:01:39.081979   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0912 23:01:39.108245   62386 provision.go:87] duration metric: took 378.558125ms to configureAuth
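	configureAuth above regenerates the machine's server certificate with SANs covering the loopback address, the machine IP, and its hostname aliases (san=[127.0.0.1 192.168.61.69 localhost minikube old-k8s-version-642238]). The following is a rough, self-signed crypto/x509 sketch of building a certificate with that SAN list; minikube actually signs the server cert with its own CA key, and everything here beyond the logged SANs is an illustrative assumption.

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// SAN list taken from the log line above; the self-signed flow is a sketch only.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-642238"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "old-k8s-version-642238"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.69")},
	}
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Printf("generated self-signed server cert, %d bytes DER\n", len(der))
}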
	I0912 23:01:39.108276   62386 buildroot.go:189] setting minikube options for container-runtime
	I0912 23:01:39.108456   62386 config.go:182] Loaded profile config "old-k8s-version-642238": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0912 23:01:39.108515   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHHostname
	I0912 23:01:39.111321   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:39.111737   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:39.111759   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:39.111956   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHPort
	I0912 23:01:39.112175   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 23:01:39.112399   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 23:01:39.112552   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHUsername
	I0912 23:01:39.112721   62386 main.go:141] libmachine: Using SSH client type: native
	I0912 23:01:39.112939   62386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.61.69 22 <nil> <nil>}
	I0912 23:01:39.112955   62386 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0912 23:01:39.582214   62943 start.go:364] duration metric: took 1m17.588760987s to acquireMachinesLock for "no-preload-380092"
	I0912 23:01:39.582282   62943 start.go:96] Skipping create...Using existing machine configuration
	I0912 23:01:39.582290   62943 fix.go:54] fixHost starting: 
	I0912 23:01:39.582684   62943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:01:39.582733   62943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:01:39.598752   62943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39263
	I0912 23:01:39.599113   62943 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:01:39.599558   62943 main.go:141] libmachine: Using API Version  1
	I0912 23:01:39.599578   62943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:01:39.599939   62943 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:01:39.600128   62943 main.go:141] libmachine: (no-preload-380092) Calling .DriverName
	I0912 23:01:39.600299   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetState
	I0912 23:01:39.601919   62943 fix.go:112] recreateIfNeeded on no-preload-380092: state=Stopped err=<nil>
	I0912 23:01:39.601948   62943 main.go:141] libmachine: (no-preload-380092) Calling .DriverName
	W0912 23:01:39.602105   62943 fix.go:138] unexpected machine state, will restart: <nil>
	I0912 23:01:39.604113   62943 out.go:177] * Restarting existing kvm2 VM for "no-preload-380092" ...
	I0912 23:01:36.066914   61904 addons.go:510] duration metric: took 1.399549943s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0912 23:01:36.894531   61904 node_ready.go:53] node "embed-certs-378112" has status "Ready":"False"
	I0912 23:01:38.895084   61904 node_ready.go:53] node "embed-certs-378112" has status "Ready":"False"
	I0912 23:01:39.333662   62386 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0912 23:01:39.333695   62386 machine.go:96] duration metric: took 971.039233ms to provisionDockerMachine
	I0912 23:01:39.333712   62386 start.go:293] postStartSetup for "old-k8s-version-642238" (driver="kvm2")
	I0912 23:01:39.333728   62386 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0912 23:01:39.333755   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .DriverName
	I0912 23:01:39.334078   62386 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0912 23:01:39.334110   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHHostname
	I0912 23:01:39.336759   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:39.337144   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:39.337185   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:39.337326   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHPort
	I0912 23:01:39.337492   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 23:01:39.337649   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHUsername
	I0912 23:01:39.337757   62386 sshutil.go:53] new ssh client: &{IP:192.168.61.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/old-k8s-version-642238/id_rsa Username:docker}
	I0912 23:01:39.424344   62386 ssh_runner.go:195] Run: cat /etc/os-release
	I0912 23:01:39.428560   62386 info.go:137] Remote host: Buildroot 2023.02.9
	I0912 23:01:39.428586   62386 filesync.go:126] Scanning /home/jenkins/minikube-integration/19616-5891/.minikube/addons for local assets ...
	I0912 23:01:39.428651   62386 filesync.go:126] Scanning /home/jenkins/minikube-integration/19616-5891/.minikube/files for local assets ...
	I0912 23:01:39.428720   62386 filesync.go:149] local asset: /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem -> 130832.pem in /etc/ssl/certs
	I0912 23:01:39.428822   62386 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0912 23:01:39.438578   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem --> /etc/ssl/certs/130832.pem (1708 bytes)
	I0912 23:01:39.466955   62386 start.go:296] duration metric: took 133.228748ms for postStartSetup
	I0912 23:01:39.466993   62386 fix.go:56] duration metric: took 19.507989112s for fixHost
	I0912 23:01:39.467011   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHHostname
	I0912 23:01:39.469732   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:39.470141   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:39.470177   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:39.470446   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHPort
	I0912 23:01:39.470662   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 23:01:39.470820   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 23:01:39.470952   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHUsername
	I0912 23:01:39.471079   62386 main.go:141] libmachine: Using SSH client type: native
	I0912 23:01:39.471234   62386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.61.69 22 <nil> <nil>}
	I0912 23:01:39.471243   62386 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0912 23:01:39.582078   62386 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726182099.559242358
	
	I0912 23:01:39.582101   62386 fix.go:216] guest clock: 1726182099.559242358
	I0912 23:01:39.582108   62386 fix.go:229] Guest: 2024-09-12 23:01:39.559242358 +0000 UTC Remote: 2024-09-12 23:01:39.466996536 +0000 UTC m=+200.180679357 (delta=92.245822ms)
	I0912 23:01:39.582148   62386 fix.go:200] guest clock delta is within tolerance: 92.245822ms
	I0912 23:01:39.582153   62386 start.go:83] releasing machines lock for "old-k8s-version-642238", held for 19.623187273s
	I0912 23:01:39.582177   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .DriverName
	I0912 23:01:39.582449   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetIP
	I0912 23:01:39.585170   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:39.585556   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:39.585595   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:39.585770   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .DriverName
	I0912 23:01:39.586282   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .DriverName
	I0912 23:01:39.586471   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .DriverName
	I0912 23:01:39.586548   62386 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0912 23:01:39.586590   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHHostname
	I0912 23:01:39.586706   62386 ssh_runner.go:195] Run: cat /version.json
	I0912 23:01:39.586734   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHHostname
	I0912 23:01:39.589355   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:39.589769   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:39.589802   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:39.589824   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:39.589990   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHPort
	I0912 23:01:39.590163   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 23:01:39.590229   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:39.590258   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:39.590331   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHUsername
	I0912 23:01:39.590413   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHPort
	I0912 23:01:39.590491   62386 sshutil.go:53] new ssh client: &{IP:192.168.61.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/old-k8s-version-642238/id_rsa Username:docker}
	I0912 23:01:39.590525   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 23:01:39.590621   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHUsername
	I0912 23:01:39.590717   62386 sshutil.go:53] new ssh client: &{IP:192.168.61.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/old-k8s-version-642238/id_rsa Username:docker}
	I0912 23:01:39.709188   62386 ssh_runner.go:195] Run: systemctl --version
	I0912 23:01:39.714703   62386 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0912 23:01:39.867112   62386 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0912 23:01:39.874818   62386 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0912 23:01:39.874897   62386 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0912 23:01:39.894532   62386 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0912 23:01:39.894558   62386 start.go:495] detecting cgroup driver to use...
	I0912 23:01:39.894611   62386 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0912 23:01:39.911715   62386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0912 23:01:39.927113   62386 docker.go:217] disabling cri-docker service (if available) ...
	I0912 23:01:39.927181   62386 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0912 23:01:39.946720   62386 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0912 23:01:39.966602   62386 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0912 23:01:40.132813   62386 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0912 23:01:40.318613   62386 docker.go:233] disabling docker service ...
	I0912 23:01:40.318764   62386 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0912 23:01:40.337557   62386 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0912 23:01:40.355312   62386 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0912 23:01:40.507081   62386 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0912 23:01:40.623129   62386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0912 23:01:40.637980   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0912 23:01:40.658137   62386 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0912 23:01:40.658197   62386 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:40.672985   62386 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0912 23:01:40.673041   62386 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:40.687684   62386 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:40.699586   62386 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:40.711468   62386 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0912 23:01:40.722380   62386 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0912 23:01:40.733057   62386 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0912 23:01:40.733126   62386 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0912 23:01:40.748577   62386 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0912 23:01:40.758735   62386 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 23:01:40.883686   62386 ssh_runner.go:195] Run: sudo systemctl restart crio
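	The CRI-O reconfiguration above is a handful of in-place sed substitutions on /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup) followed by systemctl daemon-reload and a crio restart. Below is a pure-Go sketch of the equivalent string rewrites on the file contents; the rewriteCrioConf helper is hypothetical, since minikube performs these edits remotely with sed.

package main

import (
	"fmt"
	"regexp"
)

// rewriteCrioConf applies the same substitutions as the sed commands in the
// log: force a pause image and a cgroup manager, drop any existing
// conmon_cgroup line, then re-add conmon_cgroup = "pod" after cgroup_manager.
// Sketch only.
func rewriteCrioConf(conf, pauseImage, cgroupMgr string) string {
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", pauseImage))
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, fmt.Sprintf("cgroup_manager = %q", cgroupMgr))
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
		ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")
	return conf
}

func main() {
	conf := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.9\"\n" +
		"[crio.runtime]\nconmon_cgroup = \"system.slice\"\ncgroup_manager = \"systemd\"\n"
	fmt.Print(rewriteCrioConf(conf, "registry.k8s.io/pause:3.2", "cgroupfs"))
}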
	I0912 23:01:40.977996   62386 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0912 23:01:40.978065   62386 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0912 23:01:40.984192   62386 start.go:563] Will wait 60s for crictl version
	I0912 23:01:40.984257   62386 ssh_runner.go:195] Run: which crictl
	I0912 23:01:40.988379   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0912 23:01:41.027758   62386 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0912 23:01:41.027855   62386 ssh_runner.go:195] Run: crio --version
	I0912 23:01:41.057198   62386 ssh_runner.go:195] Run: crio --version
	I0912 23:01:41.091414   62386 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0912 23:01:39.605199   62943 main.go:141] libmachine: (no-preload-380092) Calling .Start
	I0912 23:01:39.605356   62943 main.go:141] libmachine: (no-preload-380092) Ensuring networks are active...
	I0912 23:01:39.606295   62943 main.go:141] libmachine: (no-preload-380092) Ensuring network default is active
	I0912 23:01:39.606540   62943 main.go:141] libmachine: (no-preload-380092) Ensuring network mk-no-preload-380092 is active
	I0912 23:01:39.606902   62943 main.go:141] libmachine: (no-preload-380092) Getting domain xml...
	I0912 23:01:39.607582   62943 main.go:141] libmachine: (no-preload-380092) Creating domain...
	I0912 23:01:40.958156   62943 main.go:141] libmachine: (no-preload-380092) Waiting to get IP...
	I0912 23:01:40.959304   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:40.959775   62943 main.go:141] libmachine: (no-preload-380092) DBG | unable to find current IP address of domain no-preload-380092 in network mk-no-preload-380092
	I0912 23:01:40.959848   62943 main.go:141] libmachine: (no-preload-380092) DBG | I0912 23:01:40.959761   63470 retry.go:31] will retry after 260.507819ms: waiting for machine to come up
	I0912 23:01:41.222360   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:41.222860   62943 main.go:141] libmachine: (no-preload-380092) DBG | unable to find current IP address of domain no-preload-380092 in network mk-no-preload-380092
	I0912 23:01:41.222897   62943 main.go:141] libmachine: (no-preload-380092) DBG | I0912 23:01:41.222793   63470 retry.go:31] will retry after 325.875384ms: waiting for machine to come up
	I0912 23:01:41.550174   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:41.550617   62943 main.go:141] libmachine: (no-preload-380092) DBG | unable to find current IP address of domain no-preload-380092 in network mk-no-preload-380092
	I0912 23:01:41.550642   62943 main.go:141] libmachine: (no-preload-380092) DBG | I0912 23:01:41.550563   63470 retry.go:31] will retry after 466.239328ms: waiting for machine to come up
	I0912 23:01:41.092686   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetIP
	I0912 23:01:41.096196   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:41.096806   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:41.096843   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:41.097167   62386 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0912 23:01:41.101509   62386 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0912 23:01:41.115914   62386 kubeadm.go:883] updating cluster {Name:old-k8s-version-642238 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-642238 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.69 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0912 23:01:41.116230   62386 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0912 23:01:41.116327   62386 ssh_runner.go:195] Run: sudo crictl images --output json
	I0912 23:01:41.164309   62386 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0912 23:01:41.164389   62386 ssh_runner.go:195] Run: which lz4
	I0912 23:01:41.168669   62386 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0912 23:01:41.172973   62386 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0912 23:01:41.173008   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0912 23:01:42.662843   62386 crio.go:462] duration metric: took 1.494204864s to copy over tarball
	I0912 23:01:42.662921   62386 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0912 23:01:40.895957   61904 node_ready.go:53] node "embed-certs-378112" has status "Ready":"False"
	I0912 23:01:41.896265   61904 node_ready.go:49] node "embed-certs-378112" has status "Ready":"True"
	I0912 23:01:41.896293   61904 node_ready.go:38] duration metric: took 7.004932553s for node "embed-certs-378112" to be "Ready" ...
	I0912 23:01:41.896304   61904 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 23:01:41.903665   61904 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-m8t6h" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:41.911837   61904 pod_ready.go:93] pod "coredns-7c65d6cfc9-m8t6h" in "kube-system" namespace has status "Ready":"True"
	I0912 23:01:41.911862   61904 pod_ready.go:82] duration metric: took 8.168974ms for pod "coredns-7c65d6cfc9-m8t6h" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:41.911875   61904 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-378112" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:41.920007   61904 pod_ready.go:93] pod "etcd-embed-certs-378112" in "kube-system" namespace has status "Ready":"True"
	I0912 23:01:41.920032   61904 pod_ready.go:82] duration metric: took 8.150491ms for pod "etcd-embed-certs-378112" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:41.920044   61904 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-378112" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:43.928585   61904 pod_ready.go:103] pod "kube-apiserver-embed-certs-378112" in "kube-system" namespace has status "Ready":"False"
	I0912 23:01:42.018082   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:42.018505   62943 main.go:141] libmachine: (no-preload-380092) DBG | unable to find current IP address of domain no-preload-380092 in network mk-no-preload-380092
	I0912 23:01:42.018534   62943 main.go:141] libmachine: (no-preload-380092) DBG | I0912 23:01:42.018465   63470 retry.go:31] will retry after 538.2428ms: waiting for machine to come up
	I0912 23:01:42.558175   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:42.558612   62943 main.go:141] libmachine: (no-preload-380092) DBG | unable to find current IP address of domain no-preload-380092 in network mk-no-preload-380092
	I0912 23:01:42.558649   62943 main.go:141] libmachine: (no-preload-380092) DBG | I0912 23:01:42.558579   63470 retry.go:31] will retry after 653.024741ms: waiting for machine to come up
	I0912 23:01:43.213349   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:43.213963   62943 main.go:141] libmachine: (no-preload-380092) DBG | unable to find current IP address of domain no-preload-380092 in network mk-no-preload-380092
	I0912 23:01:43.213991   62943 main.go:141] libmachine: (no-preload-380092) DBG | I0912 23:01:43.213926   63470 retry.go:31] will retry after 936.091256ms: waiting for machine to come up
	I0912 23:01:44.152459   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:44.152892   62943 main.go:141] libmachine: (no-preload-380092) DBG | unable to find current IP address of domain no-preload-380092 in network mk-no-preload-380092
	I0912 23:01:44.152931   62943 main.go:141] libmachine: (no-preload-380092) DBG | I0912 23:01:44.152841   63470 retry.go:31] will retry after 947.677491ms: waiting for machine to come up
	I0912 23:01:45.102330   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:45.102777   62943 main.go:141] libmachine: (no-preload-380092) DBG | unable to find current IP address of domain no-preload-380092 in network mk-no-preload-380092
	I0912 23:01:45.102803   62943 main.go:141] libmachine: (no-preload-380092) DBG | I0912 23:01:45.102730   63470 retry.go:31] will retry after 1.076341568s: waiting for machine to come up
	I0912 23:01:46.181138   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:46.181600   62943 main.go:141] libmachine: (no-preload-380092) DBG | unable to find current IP address of domain no-preload-380092 in network mk-no-preload-380092
	I0912 23:01:46.181659   62943 main.go:141] libmachine: (no-preload-380092) DBG | I0912 23:01:46.181529   63470 retry.go:31] will retry after 1.256599307s: waiting for machine to come up
	I0912 23:01:45.728604   62386 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.065648968s)
	I0912 23:01:45.728636   62386 crio.go:469] duration metric: took 3.065759694s to extract the tarball
	I0912 23:01:45.728646   62386 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0912 23:01:45.770020   62386 ssh_runner.go:195] Run: sudo crictl images --output json
	I0912 23:01:45.803238   62386 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0912 23:01:45.803263   62386 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0912 23:01:45.803356   62386 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0912 23:01:45.803393   62386 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0912 23:01:45.803411   62386 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0912 23:01:45.803433   62386 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0912 23:01:45.803482   62386 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0912 23:01:45.803487   62386 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0912 23:01:45.803358   62386 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 23:01:45.803456   62386 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0912 23:01:45.805495   62386 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0912 23:01:45.805522   62386 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0912 23:01:45.805549   62386 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0912 23:01:45.805538   62386 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0912 23:01:45.805583   62386 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0912 23:01:45.805500   62386 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0912 23:01:45.805498   62386 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 23:01:45.805503   62386 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0912 23:01:46.036001   62386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0912 23:01:46.053248   62386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0912 23:01:46.053339   62386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0912 23:01:46.055973   62386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0912 23:01:46.070206   62386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0912 23:01:46.079999   62386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0912 23:01:46.109937   62386 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0912 23:01:46.109989   62386 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0912 23:01:46.110039   62386 ssh_runner.go:195] Run: which crictl
	I0912 23:01:46.162798   62386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0912 23:01:46.224302   62386 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0912 23:01:46.224345   62386 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0912 23:01:46.224375   62386 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0912 23:01:46.224392   62386 ssh_runner.go:195] Run: which crictl
	I0912 23:01:46.224413   62386 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0912 23:01:46.224418   62386 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0912 23:01:46.224452   62386 ssh_runner.go:195] Run: which crictl
	I0912 23:01:46.224451   62386 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0912 23:01:46.224495   62386 ssh_runner.go:195] Run: which crictl
	I0912 23:01:46.224510   62386 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0912 23:01:46.224529   62386 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0912 23:01:46.224551   62386 ssh_runner.go:195] Run: which crictl
	I0912 23:01:46.243459   62386 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0912 23:01:46.243561   62386 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0912 23:01:46.243584   62386 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0912 23:01:46.243596   62386 ssh_runner.go:195] Run: which crictl
	I0912 23:01:46.243619   62386 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0912 23:01:46.243648   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0912 23:01:46.243658   62386 ssh_runner.go:195] Run: which crictl
	I0912 23:01:46.243619   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0912 23:01:46.243504   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0912 23:01:46.243737   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0912 23:01:46.243786   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0912 23:01:46.347085   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0912 23:01:46.347138   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0912 23:01:46.347184   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0912 23:01:46.354548   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0912 23:01:46.354548   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0912 23:01:46.354623   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0912 23:01:46.354658   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0912 23:01:46.490548   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0912 23:01:46.490655   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0912 23:01:46.490664   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0912 23:01:46.519541   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0912 23:01:46.519572   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0912 23:01:46.519583   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0912 23:01:46.519631   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0912 23:01:46.650941   62386 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0912 23:01:46.651102   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0912 23:01:46.651115   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0912 23:01:46.665864   62386 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0912 23:01:46.669346   62386 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0912 23:01:46.669393   62386 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0912 23:01:46.669433   62386 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0912 23:01:46.713909   62386 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0912 23:01:46.713928   62386 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0912 23:01:46.947952   62386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 23:01:47.093308   62386 cache_images.go:92] duration metric: took 1.29002863s to LoadCachedImages
	W0912 23:01:47.093414   62386 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I0912 23:01:47.093432   62386 kubeadm.go:934] updating node { 192.168.61.69 8443 v1.20.0 crio true true} ...
	I0912 23:01:47.093567   62386 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-642238 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.69
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-642238 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0912 23:01:47.093677   62386 ssh_runner.go:195] Run: crio config
	I0912 23:01:47.140625   62386 cni.go:84] Creating CNI manager for ""
	I0912 23:01:47.140651   62386 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0912 23:01:47.140665   62386 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0912 23:01:47.140683   62386 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.69 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-642238 NodeName:old-k8s-version-642238 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.69"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.69 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0912 23:01:47.140848   62386 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.69
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-642238"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.69
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.69"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
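	The kubeadm config written above is one multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that is scp'd to /var/tmp/minikube/kubeadm.yaml.new. Below is a hedged sketch of walking such a stream with gopkg.in/yaml.v3 and printing each document's kind plus the pinned kubernetesVersion; the file name and field choices are assumptions for illustration.

package main

import (
	"errors"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("kubeadm.yaml") // e.g. a local copy of the generated config
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break // end of the multi-document stream
			}
			panic(err)
		}
		fmt.Printf("kind=%v", doc["kind"])
		if v, ok := doc["kubernetesVersion"]; ok {
			fmt.Printf(" kubernetesVersion=%v", v)
		}
		if n, ok := doc["clusterName"]; ok {
			fmt.Printf(" clusterName=%v", n)
		}
		fmt.Println()
	}
}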
	
	I0912 23:01:47.140918   62386 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0912 23:01:47.151096   62386 binaries.go:44] Found k8s binaries, skipping transfer
	I0912 23:01:47.151174   62386 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0912 23:01:47.161100   62386 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0912 23:01:47.178267   62386 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0912 23:01:47.196468   62386 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0912 23:01:47.215215   62386 ssh_runner.go:195] Run: grep 192.168.61.69	control-plane.minikube.internal$ /etc/hosts
	I0912 23:01:47.219835   62386 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.69	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0912 23:01:47.234386   62386 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 23:01:47.374152   62386 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0912 23:01:47.394130   62386 certs.go:68] Setting up /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238 for IP: 192.168.61.69
	I0912 23:01:47.394155   62386 certs.go:194] generating shared ca certs ...
	I0912 23:01:47.394174   62386 certs.go:226] acquiring lock for ca certs: {Name:mk5e19f2c2757edc3fcb6b43a39efee0885b349c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 23:01:47.394399   62386 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.key
	I0912 23:01:47.394459   62386 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.key
	I0912 23:01:47.394474   62386 certs.go:256] generating profile certs ...
	I0912 23:01:47.394591   62386 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238/client.key
	I0912 23:01:47.394663   62386 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238/apiserver.key.fcb0a37b
	I0912 23:01:47.394713   62386 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238/proxy-client.key
	I0912 23:01:47.394881   62386 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083.pem (1338 bytes)
	W0912 23:01:47.394922   62386 certs.go:480] ignoring /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083_empty.pem, impossibly tiny 0 bytes
	I0912 23:01:47.394936   62386 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem (1679 bytes)
	I0912 23:01:47.394980   62386 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem (1082 bytes)
	I0912 23:01:47.395016   62386 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem (1123 bytes)
	I0912 23:01:47.395050   62386 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem (1679 bytes)
	I0912 23:01:47.395103   62386 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem (1708 bytes)
	I0912 23:01:47.396058   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0912 23:01:47.436356   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0912 23:01:47.470442   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0912 23:01:47.496440   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0912 23:01:47.522541   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0912 23:01:47.547406   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0912 23:01:47.575687   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0912 23:01:47.602110   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0912 23:01:47.628233   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0912 23:01:47.659161   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083.pem --> /usr/share/ca-certificates/13083.pem (1338 bytes)
	I0912 23:01:47.698813   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem --> /usr/share/ca-certificates/130832.pem (1708 bytes)
	I0912 23:01:47.722494   62386 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0912 23:01:47.739479   62386 ssh_runner.go:195] Run: openssl version
	I0912 23:01:47.745476   62386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130832.pem && ln -fs /usr/share/ca-certificates/130832.pem /etc/ssl/certs/130832.pem"
	I0912 23:01:47.756396   62386 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130832.pem
	I0912 23:01:47.760904   62386 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 12 21:47 /usr/share/ca-certificates/130832.pem
	I0912 23:01:47.760983   62386 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130832.pem
	I0912 23:01:47.767122   62386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130832.pem /etc/ssl/certs/3ec20f2e.0"
	I0912 23:01:47.778372   62386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0912 23:01:47.789359   62386 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0912 23:01:47.794138   62386 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 12 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0912 23:01:47.794205   62386 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0912 23:01:47.799780   62386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0912 23:01:47.810735   62386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13083.pem && ln -fs /usr/share/ca-certificates/13083.pem /etc/ssl/certs/13083.pem"
	I0912 23:01:47.821361   62386 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13083.pem
	I0912 23:01:47.825785   62386 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 12 21:47 /usr/share/ca-certificates/13083.pem
	I0912 23:01:47.825848   62386 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13083.pem
	I0912 23:01:47.832591   62386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13083.pem /etc/ssl/certs/51391683.0"
	I0912 23:01:47.844637   62386 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0912 23:01:47.849313   62386 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0912 23:01:47.855337   62386 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0912 23:01:47.861492   62386 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0912 23:01:47.868028   62386 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0912 23:01:47.874215   62386 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0912 23:01:47.880279   62386 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
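
	The openssl x509 -noout -in <cert> -checkend 86400 runs above verify that each control-plane certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit would trigger certificate regeneration. A minimal sketch of the equivalent check in Go, assuming a PEM-encoded certificate path is passed as the first command-line argument:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		// Equivalent of: openssl x509 -noout -in <cert> -checkend 86400
		data, err := os.ReadFile(os.Args[1])
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// The certificate must remain valid for at least another 24 hours.
		deadline := time.Now().Add(86400 * time.Second)
		if cert.NotAfter.Before(deadline) {
			fmt.Println("Certificate will expire")
			os.Exit(1)
		}
		fmt.Println("Certificate will not expire")
	}
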
	I0912 23:01:47.886478   62386 kubeadm.go:392] StartCluster: {Name:old-k8s-version-642238 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-642238 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.69 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 23:01:47.886579   62386 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0912 23:01:47.886665   62386 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0912 23:01:47.929887   62386 cri.go:89] found id: ""
	I0912 23:01:47.929965   62386 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0912 23:01:47.940988   62386 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0912 23:01:47.941014   62386 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0912 23:01:47.941071   62386 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0912 23:01:47.951357   62386 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0912 23:01:47.952314   62386 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-642238" does not appear in /home/jenkins/minikube-integration/19616-5891/kubeconfig
	I0912 23:01:47.952929   62386 kubeconfig.go:62] /home/jenkins/minikube-integration/19616-5891/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-642238" cluster setting kubeconfig missing "old-k8s-version-642238" context setting]
	I0912 23:01:47.953869   62386 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/kubeconfig: {Name:mkffb46c3e9d2b8baebc7237b48bf41bccf1a52c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 23:01:47.961244   62386 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0912 23:01:47.973427   62386 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.69
	I0912 23:01:47.973462   62386 kubeadm.go:1160] stopping kube-system containers ...
	I0912 23:01:47.973476   62386 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0912 23:01:47.973530   62386 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0912 23:01:48.008401   62386 cri.go:89] found id: ""
	I0912 23:01:48.008479   62386 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0912 23:01:48.024605   62386 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0912 23:01:48.034256   62386 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0912 23:01:48.034282   62386 kubeadm.go:157] found existing configuration files:
	
	I0912 23:01:48.034341   62386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0912 23:01:48.043468   62386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0912 23:01:48.043533   62386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0912 23:01:48.053241   62386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0912 23:01:48.062653   62386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0912 23:01:48.062728   62386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0912 23:01:48.073213   62386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0912 23:01:48.085060   62386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0912 23:01:48.085136   62386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0912 23:01:48.095722   62386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0912 23:01:48.105099   62386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0912 23:01:48.105169   62386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0912 23:01:48.114362   62386 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0912 23:01:48.123856   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:01:48.250258   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:01:48.824441   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:01:49.045340   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:01:49.151009   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:01:49.245161   62386 api_server.go:52] waiting for apiserver process to appear ...
	I0912 23:01:49.245239   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
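
	The sudo pgrep -xnf kube-apiserver.*minikube.* runs that repeat below are a simple poll-until-found loop (roughly every 500 ms) waiting for the apiserver process to appear after the init phases. A minimal sketch of that polling pattern, assuming pgrep is available on the host; the function name and timeout are illustrative:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForAPIServer polls pgrep until the kube-apiserver process appears or the
	// timeout elapses, mirroring the repeated Run lines in the log.
	func waitForAPIServer(timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			// pgrep exits 0 only when a matching process exists.
			if err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
	}

	func main() {
		if err := waitForAPIServer(2 * time.Minute); err != nil {
			fmt.Println(err)
		}
	}
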
	I0912 23:01:45.927266   61904 pod_ready.go:93] pod "kube-apiserver-embed-certs-378112" in "kube-system" namespace has status "Ready":"True"
	I0912 23:01:45.927293   61904 pod_ready.go:82] duration metric: took 4.007240345s for pod "kube-apiserver-embed-certs-378112" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:45.927307   61904 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-378112" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:46.456083   61904 pod_ready.go:93] pod "kube-controller-manager-embed-certs-378112" in "kube-system" namespace has status "Ready":"True"
	I0912 23:01:46.456111   61904 pod_ready.go:82] duration metric: took 528.7947ms for pod "kube-controller-manager-embed-certs-378112" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:46.456125   61904 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fvbbq" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:46.461632   61904 pod_ready.go:93] pod "kube-proxy-fvbbq" in "kube-system" namespace has status "Ready":"True"
	I0912 23:01:46.461659   61904 pod_ready.go:82] duration metric: took 5.526604ms for pod "kube-proxy-fvbbq" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:46.461673   61904 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-378112" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:46.467128   61904 pod_ready.go:93] pod "kube-scheduler-embed-certs-378112" in "kube-system" namespace has status "Ready":"True"
	I0912 23:01:46.467160   61904 pod_ready.go:82] duration metric: took 5.477201ms for pod "kube-scheduler-embed-certs-378112" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:46.467174   61904 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:48.474736   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:01:50.474846   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
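
	The pod_ready.go lines above poll each kube-system pod until its Ready condition reports True; the metrics-server pod here never becomes Ready, which is what eventually times these tests out. A minimal sketch of that readiness check using client-go, assuming a kubeconfig path; the path and pod name are taken from the log for illustration only:

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podIsReady returns true when the pod's Ready condition is True.
	func podIsReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19616-5891/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "metrics-server-6867b74b74-kvpqz", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Println("Ready:", podIsReady(pod))
	}
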
	I0912 23:01:47.439687   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:47.440281   62943 main.go:141] libmachine: (no-preload-380092) DBG | unable to find current IP address of domain no-preload-380092 in network mk-no-preload-380092
	I0912 23:01:47.440312   62943 main.go:141] libmachine: (no-preload-380092) DBG | I0912 23:01:47.440140   63470 retry.go:31] will retry after 1.600662248s: waiting for machine to come up
	I0912 23:01:49.042962   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:49.043536   62943 main.go:141] libmachine: (no-preload-380092) DBG | unable to find current IP address of domain no-preload-380092 in network mk-no-preload-380092
	I0912 23:01:49.043569   62943 main.go:141] libmachine: (no-preload-380092) DBG | I0912 23:01:49.043481   63470 retry.go:31] will retry after 2.53148931s: waiting for machine to come up
	I0912 23:01:51.577526   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:51.578022   62943 main.go:141] libmachine: (no-preload-380092) DBG | unable to find current IP address of domain no-preload-380092 in network mk-no-preload-380092
	I0912 23:01:51.578139   62943 main.go:141] libmachine: (no-preload-380092) DBG | I0912 23:01:51.577965   63470 retry.go:31] will retry after 2.603355474s: waiting for machine to come up
	I0912 23:01:49.745632   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:50.245841   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:50.746368   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:51.245741   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:51.745708   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:52.246143   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:52.745402   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:53.245790   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:53.745965   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:54.246368   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:52.973232   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:01:54.974788   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:01:54.183119   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:54.183702   62943 main.go:141] libmachine: (no-preload-380092) DBG | unable to find current IP address of domain no-preload-380092 in network mk-no-preload-380092
	I0912 23:01:54.183745   62943 main.go:141] libmachine: (no-preload-380092) DBG | I0912 23:01:54.183655   63470 retry.go:31] will retry after 2.867321114s: waiting for machine to come up
	I0912 23:01:58.698415   61354 start.go:364] duration metric: took 53.897667909s to acquireMachinesLock for "default-k8s-diff-port-702201"
	I0912 23:01:58.698489   61354 start.go:96] Skipping create...Using existing machine configuration
	I0912 23:01:58.698499   61354 fix.go:54] fixHost starting: 
	I0912 23:01:58.698908   61354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:01:58.698938   61354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:01:58.716203   61354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42739
	I0912 23:01:58.716658   61354 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:01:58.717117   61354 main.go:141] libmachine: Using API Version  1
	I0912 23:01:58.717141   61354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:01:58.717489   61354 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:01:58.717717   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .DriverName
	I0912 23:01:58.717873   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetState
	I0912 23:01:58.719787   61354 fix.go:112] recreateIfNeeded on default-k8s-diff-port-702201: state=Stopped err=<nil>
	I0912 23:01:58.719810   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .DriverName
	W0912 23:01:58.719957   61354 fix.go:138] unexpected machine state, will restart: <nil>
	I0912 23:01:58.723531   61354 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-702201" ...
	I0912 23:01:54.745915   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:55.245740   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:55.745435   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:56.245679   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:56.745309   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:57.246032   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:57.745362   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:58.245409   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:58.745470   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:59.245307   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:57.052229   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:57.052788   62943 main.go:141] libmachine: (no-preload-380092) Found IP for machine: 192.168.50.253
	I0912 23:01:57.052816   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has current primary IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:57.052822   62943 main.go:141] libmachine: (no-preload-380092) Reserving static IP address...
	I0912 23:01:57.053251   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "no-preload-380092", mac: "52:54:00:d6:80:d3", ip: "192.168.50.253"} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:01:57.053275   62943 main.go:141] libmachine: (no-preload-380092) Reserved static IP address: 192.168.50.253
	I0912 23:01:57.053285   62943 main.go:141] libmachine: (no-preload-380092) DBG | skip adding static IP to network mk-no-preload-380092 - found existing host DHCP lease matching {name: "no-preload-380092", mac: "52:54:00:d6:80:d3", ip: "192.168.50.253"}
	I0912 23:01:57.053299   62943 main.go:141] libmachine: (no-preload-380092) DBG | Getting to WaitForSSH function...
	I0912 23:01:57.053330   62943 main.go:141] libmachine: (no-preload-380092) Waiting for SSH to be available...
	I0912 23:01:57.055927   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:57.056326   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:01:57.056407   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:57.056569   62943 main.go:141] libmachine: (no-preload-380092) DBG | Using SSH client type: external
	I0912 23:01:57.056583   62943 main.go:141] libmachine: (no-preload-380092) DBG | Using SSH private key: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/no-preload-380092/id_rsa (-rw-------)
	I0912 23:01:57.056610   62943 main.go:141] libmachine: (no-preload-380092) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.253 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19616-5891/.minikube/machines/no-preload-380092/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0912 23:01:57.056622   62943 main.go:141] libmachine: (no-preload-380092) DBG | About to run SSH command:
	I0912 23:01:57.056631   62943 main.go:141] libmachine: (no-preload-380092) DBG | exit 0
	I0912 23:01:57.181479   62943 main.go:141] libmachine: (no-preload-380092) DBG | SSH cmd err, output: <nil>: 
	I0912 23:01:57.181842   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetConfigRaw
	I0912 23:01:57.182453   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetIP
	I0912 23:01:57.185257   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:57.185670   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:01:57.185709   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:57.185982   62943 profile.go:143] Saving config to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/no-preload-380092/config.json ...
	I0912 23:01:57.186232   62943 machine.go:93] provisionDockerMachine start ...
	I0912 23:01:57.186254   62943 main.go:141] libmachine: (no-preload-380092) Calling .DriverName
	I0912 23:01:57.186468   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHHostname
	I0912 23:01:57.188948   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:57.189336   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:01:57.189385   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:57.189533   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHPort
	I0912 23:01:57.189705   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHKeyPath
	I0912 23:01:57.189834   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHKeyPath
	I0912 23:01:57.189954   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHUsername
	I0912 23:01:57.190111   62943 main.go:141] libmachine: Using SSH client type: native
	I0912 23:01:57.190349   62943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.50.253 22 <nil> <nil>}
	I0912 23:01:57.190367   62943 main.go:141] libmachine: About to run SSH command:
	hostname
	I0912 23:01:57.293765   62943 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0912 23:01:57.293791   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetMachineName
	I0912 23:01:57.294045   62943 buildroot.go:166] provisioning hostname "no-preload-380092"
	I0912 23:01:57.294078   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetMachineName
	I0912 23:01:57.294327   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHHostname
	I0912 23:01:57.297031   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:57.297414   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:01:57.297437   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:57.297661   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHPort
	I0912 23:01:57.297840   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHKeyPath
	I0912 23:01:57.298018   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHKeyPath
	I0912 23:01:57.298210   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHUsername
	I0912 23:01:57.298412   62943 main.go:141] libmachine: Using SSH client type: native
	I0912 23:01:57.298635   62943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.50.253 22 <nil> <nil>}
	I0912 23:01:57.298655   62943 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-380092 && echo "no-preload-380092" | sudo tee /etc/hostname
	I0912 23:01:57.421188   62943 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-380092
	
	I0912 23:01:57.421215   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHHostname
	I0912 23:01:57.424496   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:57.424928   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:01:57.424965   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:57.425156   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHPort
	I0912 23:01:57.425396   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHKeyPath
	I0912 23:01:57.425591   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHKeyPath
	I0912 23:01:57.425761   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHUsername
	I0912 23:01:57.425948   62943 main.go:141] libmachine: Using SSH client type: native
	I0912 23:01:57.426157   62943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.50.253 22 <nil> <nil>}
	I0912 23:01:57.426183   62943 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-380092' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-380092/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-380092' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0912 23:01:57.537580   62943 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0912 23:01:57.537607   62943 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19616-5891/.minikube CaCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19616-5891/.minikube}
	I0912 23:01:57.537674   62943 buildroot.go:174] setting up certificates
	I0912 23:01:57.537683   62943 provision.go:84] configureAuth start
	I0912 23:01:57.537694   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetMachineName
	I0912 23:01:57.537951   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetIP
	I0912 23:01:57.540791   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:57.541288   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:01:57.541315   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:57.541519   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHHostname
	I0912 23:01:57.544027   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:57.544410   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:01:57.544430   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:57.544605   62943 provision.go:143] copyHostCerts
	I0912 23:01:57.544677   62943 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem, removing ...
	I0912 23:01:57.544694   62943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem
	I0912 23:01:57.544757   62943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem (1679 bytes)
	I0912 23:01:57.544880   62943 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem, removing ...
	I0912 23:01:57.544892   62943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem
	I0912 23:01:57.544919   62943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem (1082 bytes)
	I0912 23:01:57.545011   62943 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem, removing ...
	I0912 23:01:57.545020   62943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem
	I0912 23:01:57.545048   62943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem (1123 bytes)
	I0912 23:01:57.545127   62943 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem org=jenkins.no-preload-380092 san=[127.0.0.1 192.168.50.253 localhost minikube no-preload-380092]
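
	provision.go:117 above issues a machine server certificate signed by the minikube CA with the listed subject alternative names (127.0.0.1, the VM IP, localhost, minikube, no-preload-380092). A minimal, self-signed sketch of producing a certificate with SANs using only the Go standard library; a real implementation would sign with the CA key rather than self-sign, and all names below are illustrative:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Key pair for the server certificate.
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-380092"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// Subject alternative names, as in the san=[...] list logged above.
			DNSNames:    []string{"localhost", "minikube", "no-preload-380092"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.253")},
		}
		// Self-signed for brevity: the template doubles as the parent certificate.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
			panic(err)
		}
	}
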
	I0912 23:01:58.077226   62943 provision.go:177] copyRemoteCerts
	I0912 23:01:58.077299   62943 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0912 23:01:58.077350   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHHostname
	I0912 23:01:58.080045   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:58.080404   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:01:58.080433   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:58.080691   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHPort
	I0912 23:01:58.080930   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHKeyPath
	I0912 23:01:58.081101   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHUsername
	I0912 23:01:58.081281   62943 sshutil.go:53] new ssh client: &{IP:192.168.50.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/no-preload-380092/id_rsa Username:docker}
	I0912 23:01:58.164075   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0912 23:01:58.188273   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0912 23:01:58.211076   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0912 23:01:58.233745   62943 provision.go:87] duration metric: took 695.915392ms to configureAuth
	I0912 23:01:58.233788   62943 buildroot.go:189] setting minikube options for container-runtime
	I0912 23:01:58.233964   62943 config.go:182] Loaded profile config "no-preload-380092": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 23:01:58.234061   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHHostname
	I0912 23:01:58.236576   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:58.236915   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:01:58.236948   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:58.237165   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHPort
	I0912 23:01:58.237453   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHKeyPath
	I0912 23:01:58.237666   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHKeyPath
	I0912 23:01:58.237848   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHUsername
	I0912 23:01:58.238014   62943 main.go:141] libmachine: Using SSH client type: native
	I0912 23:01:58.238172   62943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.50.253 22 <nil> <nil>}
	I0912 23:01:58.238187   62943 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0912 23:01:58.461160   62943 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0912 23:01:58.461185   62943 machine.go:96] duration metric: took 1.274940476s to provisionDockerMachine
	I0912 23:01:58.461196   62943 start.go:293] postStartSetup for "no-preload-380092" (driver="kvm2")
	I0912 23:01:58.461206   62943 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0912 23:01:58.461220   62943 main.go:141] libmachine: (no-preload-380092) Calling .DriverName
	I0912 23:01:58.461531   62943 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0912 23:01:58.461560   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHHostname
	I0912 23:01:58.464374   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:58.464862   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:01:58.464892   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:58.465044   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHPort
	I0912 23:01:58.465280   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHKeyPath
	I0912 23:01:58.465462   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHUsername
	I0912 23:01:58.465639   62943 sshutil.go:53] new ssh client: &{IP:192.168.50.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/no-preload-380092/id_rsa Username:docker}
	I0912 23:01:58.553080   62943 ssh_runner.go:195] Run: cat /etc/os-release
	I0912 23:01:58.557294   62943 info.go:137] Remote host: Buildroot 2023.02.9
	I0912 23:01:58.557319   62943 filesync.go:126] Scanning /home/jenkins/minikube-integration/19616-5891/.minikube/addons for local assets ...
	I0912 23:01:58.557395   62943 filesync.go:126] Scanning /home/jenkins/minikube-integration/19616-5891/.minikube/files for local assets ...
	I0912 23:01:58.557494   62943 filesync.go:149] local asset: /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem -> 130832.pem in /etc/ssl/certs
	I0912 23:01:58.557647   62943 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0912 23:01:58.566823   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem --> /etc/ssl/certs/130832.pem (1708 bytes)
	I0912 23:01:58.590357   62943 start.go:296] duration metric: took 129.147272ms for postStartSetup
	I0912 23:01:58.590401   62943 fix.go:56] duration metric: took 19.008109979s for fixHost
	I0912 23:01:58.590425   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHHostname
	I0912 23:01:58.593131   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:58.593490   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:01:58.593519   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:58.593693   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHPort
	I0912 23:01:58.593894   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHKeyPath
	I0912 23:01:58.594075   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHKeyPath
	I0912 23:01:58.594242   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHUsername
	I0912 23:01:58.594415   62943 main.go:141] libmachine: Using SSH client type: native
	I0912 23:01:58.594612   62943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.50.253 22 <nil> <nil>}
	I0912 23:01:58.594625   62943 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0912 23:01:58.698233   62943 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726182118.655051061
	
	I0912 23:01:58.698261   62943 fix.go:216] guest clock: 1726182118.655051061
	I0912 23:01:58.698271   62943 fix.go:229] Guest: 2024-09-12 23:01:58.655051061 +0000 UTC Remote: 2024-09-12 23:01:58.590406505 +0000 UTC m=+96.733899188 (delta=64.644556ms)
	I0912 23:01:58.698327   62943 fix.go:200] guest clock delta is within tolerance: 64.644556ms
	I0912 23:01:58.698333   62943 start.go:83] releasing machines lock for "no-preload-380092", held for 19.116080043s
	I0912 23:01:58.698358   62943 main.go:141] libmachine: (no-preload-380092) Calling .DriverName
	I0912 23:01:58.698635   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetIP
	I0912 23:01:58.701676   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:58.702052   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:01:58.702088   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:58.702329   62943 main.go:141] libmachine: (no-preload-380092) Calling .DriverName
	I0912 23:01:58.702865   62943 main.go:141] libmachine: (no-preload-380092) Calling .DriverName
	I0912 23:01:58.703120   62943 main.go:141] libmachine: (no-preload-380092) Calling .DriverName
	I0912 23:01:58.703279   62943 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0912 23:01:58.703337   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHHostname
	I0912 23:01:58.703392   62943 ssh_runner.go:195] Run: cat /version.json
	I0912 23:01:58.703419   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHHostname
	I0912 23:01:58.706149   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:58.706381   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:58.706704   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:01:58.706773   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:58.706785   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHPort
	I0912 23:01:58.706804   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:01:58.706831   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:58.706976   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHKeyPath
	I0912 23:01:58.707009   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHPort
	I0912 23:01:58.707142   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHUsername
	I0912 23:01:58.707308   62943 sshutil.go:53] new ssh client: &{IP:192.168.50.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/no-preload-380092/id_rsa Username:docker}
	I0912 23:01:58.707323   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHKeyPath
	I0912 23:01:58.707505   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHUsername
	I0912 23:01:58.707644   62943 sshutil.go:53] new ssh client: &{IP:192.168.50.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/no-preload-380092/id_rsa Username:docker}
	I0912 23:01:58.822704   62943 ssh_runner.go:195] Run: systemctl --version
	I0912 23:01:58.828592   62943 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0912 23:01:58.970413   62943 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0912 23:01:58.976303   62943 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0912 23:01:58.976384   62943 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0912 23:01:58.991593   62943 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0912 23:01:58.991628   62943 start.go:495] detecting cgroup driver to use...
	I0912 23:01:58.991695   62943 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0912 23:01:59.007839   62943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0912 23:01:59.021107   62943 docker.go:217] disabling cri-docker service (if available) ...
	I0912 23:01:59.021176   62943 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0912 23:01:59.038570   62943 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0912 23:01:59.055392   62943 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0912 23:01:59.183649   62943 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0912 23:01:59.364825   62943 docker.go:233] disabling docker service ...
	I0912 23:01:59.364889   62943 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0912 23:01:59.382320   62943 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0912 23:01:59.397405   62943 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0912 23:01:59.528989   62943 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0912 23:01:59.653994   62943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0912 23:01:59.671437   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0912 23:01:59.693024   62943 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0912 23:01:59.693088   62943 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:59.704385   62943 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0912 23:01:59.704451   62943 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:59.715304   62943 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:59.726058   62943 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:59.736746   62943 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0912 23:01:59.749178   62943 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:59.761776   62943 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:59.779863   62943 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:59.790713   62943 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0912 23:01:59.801023   62943 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0912 23:01:59.801093   62943 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0912 23:01:59.815237   62943 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0912 23:01:59.825967   62943 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 23:01:59.952175   62943 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0912 23:02:00.050201   62943 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0912 23:02:00.050334   62943 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0912 23:02:00.055275   62943 start.go:563] Will wait 60s for crictl version
	I0912 23:02:00.055338   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:02:00.060075   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0912 23:02:00.100842   62943 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0912 23:02:00.100932   62943 ssh_runner.go:195] Run: crio --version
	I0912 23:02:00.127399   62943 ssh_runner.go:195] Run: crio --version
	I0912 23:02:00.161143   62943 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
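[editor's note] The block above shows the CRI-O preparation step tolerating a failed sysctl probe for net.bridge.bridge-nf-call-iptables (exit 255, "No such file or directory") and falling back to loading the br_netfilter module before enabling IPv4 forwarding. A minimal Go sketch of that check-then-modprobe pattern, run locally instead of through ssh_runner and with a hypothetical run helper; this is illustrative, not minikube's actual code:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // run executes a command and returns its combined output plus any error.
    func run(name string, args ...string) (string, error) {
        out, err := exec.Command(name, args...).CombinedOutput()
        return string(out), err
    }

    func main() {
        // Probe the bridge-netfilter sysctl; a missing /proc entry just means
        // the br_netfilter module is not loaded yet, which is tolerated.
        if _, err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
            fmt.Println("netfilter probe failed, loading br_netfilter:", err)
            if _, err := run("sudo", "modprobe", "br_netfilter"); err != nil {
                fmt.Println("modprobe br_netfilter failed:", err)
            }
        }
        // Enable IPv4 forwarding, as the log does with `echo 1 > /proc/sys/net/ipv4/ip_forward`.
        if _, err := run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward"); err != nil {
            fmt.Println("enabling ip_forward failed:", err)
        }
    }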
	I0912 23:01:57.474156   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:01:59.474331   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:00.162519   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetIP
	I0912 23:02:00.165323   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:02:00.165776   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:02:00.165806   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:02:00.166046   62943 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0912 23:02:00.170494   62943 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0912 23:02:00.186142   62943 kubeadm.go:883] updating cluster {Name:no-preload-380092 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-380092 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.253 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0912 23:02:00.186296   62943 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0912 23:02:00.186348   62943 ssh_runner.go:195] Run: sudo crictl images --output json
	I0912 23:02:00.221527   62943 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0912 23:02:00.221550   62943 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0912 23:02:00.221607   62943 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 23:02:00.221619   62943 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0912 23:02:00.221679   62943 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0912 23:02:00.221679   62943 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0912 23:02:00.221699   62943 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0912 23:02:00.221661   62943 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0912 23:02:00.221763   62943 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0912 23:02:00.221763   62943 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0912 23:02:00.223203   62943 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0912 23:02:00.223215   62943 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 23:02:00.223269   62943 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0912 23:02:00.223278   62943 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0912 23:02:00.223286   62943 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0912 23:02:00.223208   62943 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0912 23:02:00.223363   62943 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0912 23:02:00.223381   62943 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0912 23:02:00.451698   62943 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I0912 23:02:00.459278   62943 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I0912 23:02:00.459739   62943 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0912 23:02:00.463935   62943 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I0912 23:02:00.464136   62943 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I0912 23:02:00.468507   62943 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I0912 23:02:00.503388   62943 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0912 23:02:00.536792   62943 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I0912 23:02:00.536840   62943 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I0912 23:02:00.536897   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:02:00.599938   62943 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I0912 23:02:00.599985   62943 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I0912 23:02:00.600030   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:02:00.683783   62943 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0912 23:02:00.683826   62943 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0912 23:02:00.683852   62943 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I0912 23:02:00.683872   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:02:00.683883   62943 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I0912 23:02:00.683908   62943 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I0912 23:02:00.683939   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:02:00.683950   62943 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I0912 23:02:00.683886   62943 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I0912 23:02:00.683984   62943 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0912 23:02:00.684075   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:02:00.684008   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:02:00.736368   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0912 23:02:00.736438   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0912 23:02:00.736522   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0912 23:02:00.736549   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0912 23:02:00.736597   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0912 23:02:00.736620   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0912 23:02:00.864642   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0912 23:02:00.864677   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0912 23:02:00.864802   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0912 23:02:00.864856   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0912 23:02:00.869964   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0912 23:02:00.869998   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0912 23:02:00.996762   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0912 23:02:00.999239   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0912 23:02:00.999239   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0912 23:02:01.000760   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0912 23:02:01.000846   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0912 23:02:01.000895   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0912 23:02:01.101860   62943 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I0912 23:02:01.102057   62943 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0912 23:02:01.132743   62943 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0912 23:02:01.132926   62943 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0912 23:02:01.134809   62943 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I0912 23:02:01.134911   62943 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0912 23:02:01.135089   62943 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I0912 23:02:01.135167   62943 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I0912 23:02:01.143459   62943 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0912 23:02:01.143487   62943 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I0912 23:02:01.143503   62943 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0912 23:02:01.143510   62943 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0912 23:02:01.143549   62943 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0912 23:02:01.143584   62943 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0912 23:02:01.143584   62943 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I0912 23:02:01.147907   62943 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I0912 23:02:01.147935   62943 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I0912 23:02:01.148079   62943 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I0912 23:02:01.312549   62943 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
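[editor's note] Because no preload tarball matched Kubernetes v1.31.1 on crio, the run above falls back to LoadCachedImages: each required image is inspected in the runtime, removed if the expected digest is absent, its tarball copy is skipped when already present on the guest ("copy: skipping ... (exists)"), and it is then loaded with sudo podman load -i. A rough Go sketch of that per-image loop; the local paths come from the log, but the structure is illustrative rather than minikube's actual implementation:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
    )

    func main() {
        // Directory on the guest where cached image tarballs are staged, per the log.
        cacheDir := "/var/lib/minikube/images"
        images := []string{
            "kube-scheduler_v1.31.1",
            "kube-proxy_v1.31.1",
            "kube-apiserver_v1.31.1",
            "kube-controller-manager_v1.31.1",
            "coredns_v1.11.3",
            "etcd_3.5.15-0",
            "storage-provisioner_v5",
        }
        for _, img := range images {
            tar := filepath.Join(cacheDir, img)
            // The copy step is skipped when the tarball already exists (the
            // "copy: skipping ... (exists)" lines); here we only verify presence.
            if _, err := os.Stat(tar); err != nil {
                fmt.Println("missing cached tarball:", tar)
                continue
            }
            // Load the tarball into CRI-O's image store, as `sudo podman load -i` does above.
            out, err := exec.Command("sudo", "podman", "load", "-i", tar).CombinedOutput()
            if err != nil {
                fmt.Printf("podman load %s failed: %v\n%s", tar, err, out)
            }
        }
    }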
	I0912 23:01:58.724795   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .Start
	I0912 23:01:58.724966   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Ensuring networks are active...
	I0912 23:01:58.725864   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Ensuring network default is active
	I0912 23:01:58.726231   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Ensuring network mk-default-k8s-diff-port-702201 is active
	I0912 23:01:58.726766   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Getting domain xml...
	I0912 23:01:58.727695   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Creating domain...
	I0912 23:02:00.060410   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting to get IP...
	I0912 23:02:00.061559   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:00.062006   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | unable to find current IP address of domain default-k8s-diff-port-702201 in network mk-default-k8s-diff-port-702201
	I0912 23:02:00.062101   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | I0912 23:02:00.061997   63646 retry.go:31] will retry after 232.302394ms: waiting for machine to come up
	I0912 23:02:00.295568   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:00.296234   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | unable to find current IP address of domain default-k8s-diff-port-702201 in network mk-default-k8s-diff-port-702201
	I0912 23:02:00.296288   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | I0912 23:02:00.296094   63646 retry.go:31] will retry after 304.721087ms: waiting for machine to come up
	I0912 23:02:00.602956   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:00.603436   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | unable to find current IP address of domain default-k8s-diff-port-702201 in network mk-default-k8s-diff-port-702201
	I0912 23:02:00.603464   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | I0912 23:02:00.603396   63646 retry.go:31] will retry after 370.621505ms: waiting for machine to come up
	I0912 23:02:00.975924   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:00.976418   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | unable to find current IP address of domain default-k8s-diff-port-702201 in network mk-default-k8s-diff-port-702201
	I0912 23:02:00.976452   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | I0912 23:02:00.976376   63646 retry.go:31] will retry after 454.623859ms: waiting for machine to come up
	I0912 23:02:01.433257   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:01.434024   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | unable to find current IP address of domain default-k8s-diff-port-702201 in network mk-default-k8s-diff-port-702201
	I0912 23:02:01.434056   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | I0912 23:02:01.433971   63646 retry.go:31] will retry after 726.658127ms: waiting for machine to come up
	I0912 23:02:02.162016   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:02.162562   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | unable to find current IP address of domain default-k8s-diff-port-702201 in network mk-default-k8s-diff-port-702201
	I0912 23:02:02.162592   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | I0912 23:02:02.162501   63646 retry.go:31] will retry after 756.903624ms: waiting for machine to come up
	I0912 23:01:59.746112   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:00.246227   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:00.745742   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:01.245741   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:01.746355   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:02.245345   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:02.745752   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:03.246089   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:03.745811   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:04.245382   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:01.474545   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:03.975249   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:03.307790   62943 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (2.164213632s)
	I0912 23:02:03.307822   62943 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I0912 23:02:03.307845   62943 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I0912 23:02:03.307869   62943 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3: (2.164220532s)
	I0912 23:02:03.307903   62943 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I0912 23:02:03.307906   62943 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I0912 23:02:03.307944   62943 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0: (2.164339277s)
	I0912 23:02:03.307963   62943 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0912 23:02:03.307999   62943 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.995423487s)
	I0912 23:02:03.308043   62943 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0912 23:02:03.308076   62943 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 23:02:03.308128   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:02:03.312883   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 23:02:05.481118   62943 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (2.173175236s)
	I0912 23:02:05.481159   62943 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I0912 23:02:05.481192   62943 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0912 23:02:05.481239   62943 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.168321222s)
	I0912 23:02:05.481245   62943 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0912 23:02:05.481303   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 23:02:05.516667   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 23:02:02.921557   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:02.922010   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | unable to find current IP address of domain default-k8s-diff-port-702201 in network mk-default-k8s-diff-port-702201
	I0912 23:02:02.922036   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | I0912 23:02:02.921968   63646 retry.go:31] will retry after 850.274218ms: waiting for machine to come up
	I0912 23:02:03.774125   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:03.774603   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | unable to find current IP address of domain default-k8s-diff-port-702201 in network mk-default-k8s-diff-port-702201
	I0912 23:02:03.774637   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | I0912 23:02:03.774549   63646 retry.go:31] will retry after 1.117484339s: waiting for machine to come up
	I0912 23:02:04.893960   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:04.894645   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | unable to find current IP address of domain default-k8s-diff-port-702201 in network mk-default-k8s-diff-port-702201
	I0912 23:02:04.894671   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | I0912 23:02:04.894572   63646 retry.go:31] will retry after 1.705444912s: waiting for machine to come up
	I0912 23:02:06.602765   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:06.603347   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | unable to find current IP address of domain default-k8s-diff-port-702201 in network mk-default-k8s-diff-port-702201
	I0912 23:02:06.603371   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | I0912 23:02:06.603270   63646 retry.go:31] will retry after 2.06008552s: waiting for machine to come up
	I0912 23:02:04.745649   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:05.245909   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:05.745777   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:06.245432   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:06.745472   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:07.245763   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:07.745416   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:08.245886   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:08.745493   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:09.246056   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:06.474009   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:08.474804   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:07.476441   62943 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (1.995147485s)
	I0912 23:02:07.476474   62943 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I0912 23:02:07.476497   62943 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0912 23:02:07.476545   62943 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0912 23:02:07.476556   62943 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.959857575s)
	I0912 23:02:07.476602   62943 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0912 23:02:07.476685   62943 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0912 23:02:09.332759   62943 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.856180957s)
	I0912 23:02:09.332804   62943 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I0912 23:02:09.332853   62943 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I0912 23:02:09.332762   62943 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.856053866s)
	I0912 23:02:09.332909   62943 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I0912 23:02:09.332947   62943 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0912 23:02:11.397888   62943 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.064939833s)
	I0912 23:02:11.397926   62943 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I0912 23:02:11.397954   62943 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0912 23:02:11.397992   62943 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0912 23:02:08.665520   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:08.666071   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | unable to find current IP address of domain default-k8s-diff-port-702201 in network mk-default-k8s-diff-port-702201
	I0912 23:02:08.666102   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | I0912 23:02:08.666014   63646 retry.go:31] will retry after 2.158544571s: waiting for machine to come up
	I0912 23:02:10.826850   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:10.827354   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | unable to find current IP address of domain default-k8s-diff-port-702201 in network mk-default-k8s-diff-port-702201
	I0912 23:02:10.827382   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | I0912 23:02:10.827290   63646 retry.go:31] will retry after 3.518596305s: waiting for machine to come up
	I0912 23:02:09.746171   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:10.246283   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:10.745675   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:11.245560   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:11.745384   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:12.245631   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:12.745749   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:13.245487   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:13.745849   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:14.245391   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:10.975044   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:13.473831   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:15.474321   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:14.664970   62943 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.266950326s)
	I0912 23:02:14.665018   62943 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0912 23:02:14.665063   62943 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0912 23:02:14.665138   62943 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0912 23:02:15.516503   62943 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0912 23:02:15.516549   62943 cache_images.go:123] Successfully loaded all cached images
	I0912 23:02:15.516556   62943 cache_images.go:92] duration metric: took 15.294994067s to LoadCachedImages
	I0912 23:02:15.516574   62943 kubeadm.go:934] updating node { 192.168.50.253 8443 v1.31.1 crio true true} ...
	I0912 23:02:15.516716   62943 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-380092 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.253
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-380092 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0912 23:02:15.516811   62943 ssh_runner.go:195] Run: crio config
	I0912 23:02:15.570588   62943 cni.go:84] Creating CNI manager for ""
	I0912 23:02:15.570610   62943 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0912 23:02:15.570621   62943 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0912 23:02:15.570649   62943 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.253 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-380092 NodeName:no-preload-380092 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.253"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.253 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0912 23:02:15.570809   62943 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.253
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-380092"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.253
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.253"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0912 23:02:15.570887   62943 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0912 23:02:15.581208   62943 binaries.go:44] Found k8s binaries, skipping transfer
	I0912 23:02:15.581272   62943 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0912 23:02:15.590463   62943 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0912 23:02:15.606240   62943 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0912 23:02:15.621579   62943 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0912 23:02:15.639566   62943 ssh_runner.go:195] Run: grep 192.168.50.253	control-plane.minikube.internal$ /etc/hosts
	I0912 23:02:15.643207   62943 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.253	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
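[editor's note] The pair of commands above is an idempotent /etc/hosts update: any existing control-plane.minikube.internal line is filtered out with grep -v, the fresh mapping is appended, and the temp file is copied back with sudo cp. A small Go equivalent of that rewrite, purely illustrative (minikube runs the bash one-liner over ssh_runner as shown):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        const hostsFile = "/etc/hosts"
        const entry = "192.168.50.253\tcontrol-plane.minikube.internal"

        data, err := os.ReadFile(hostsFile)
        if err != nil {
            fmt.Println("read failed:", err)
            return
        }
        // Drop any stale line for the control-plane alias, then append the fresh one,
        // mirroring the `{ grep -v ...; echo ...; } > /tmp/h.$$; sudo cp ...` sequence.
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
                kept = append(kept, line)
            }
        }
        kept = append(kept, entry)
        if err := os.WriteFile(hostsFile, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
            fmt.Println("write failed:", err)
        }
    }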
	I0912 23:02:15.654813   62943 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 23:02:15.767367   62943 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0912 23:02:15.784468   62943 certs.go:68] Setting up /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/no-preload-380092 for IP: 192.168.50.253
	I0912 23:02:15.784500   62943 certs.go:194] generating shared ca certs ...
	I0912 23:02:15.784523   62943 certs.go:226] acquiring lock for ca certs: {Name:mk5e19f2c2757edc3fcb6b43a39efee0885b349c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 23:02:15.784717   62943 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.key
	I0912 23:02:15.784811   62943 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.key
	I0912 23:02:15.784828   62943 certs.go:256] generating profile certs ...
	I0912 23:02:15.784946   62943 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/no-preload-380092/client.key
	I0912 23:02:15.785034   62943 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/no-preload-380092/apiserver.key.718f72e7
	I0912 23:02:15.785092   62943 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/no-preload-380092/proxy-client.key
	I0912 23:02:15.785295   62943 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083.pem (1338 bytes)
	W0912 23:02:15.785345   62943 certs.go:480] ignoring /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083_empty.pem, impossibly tiny 0 bytes
	I0912 23:02:15.785362   62943 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem (1679 bytes)
	I0912 23:02:15.785407   62943 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem (1082 bytes)
	I0912 23:02:15.785446   62943 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem (1123 bytes)
	I0912 23:02:15.785485   62943 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem (1679 bytes)
	I0912 23:02:15.785553   62943 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem (1708 bytes)
	I0912 23:02:15.786473   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0912 23:02:15.832614   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0912 23:02:15.867891   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0912 23:02:15.899262   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0912 23:02:15.930427   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/no-preload-380092/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0912 23:02:15.970193   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/no-preload-380092/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0912 23:02:15.995317   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/no-preload-380092/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0912 23:02:16.019282   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/no-preload-380092/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0912 23:02:16.042121   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem --> /usr/share/ca-certificates/130832.pem (1708 bytes)
	I0912 23:02:16.065744   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0912 23:02:16.088894   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083.pem --> /usr/share/ca-certificates/13083.pem (1338 bytes)
	I0912 23:02:16.111041   62943 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0912 23:02:16.127119   62943 ssh_runner.go:195] Run: openssl version
	I0912 23:02:16.132754   62943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130832.pem && ln -fs /usr/share/ca-certificates/130832.pem /etc/ssl/certs/130832.pem"
	I0912 23:02:16.142933   62943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130832.pem
	I0912 23:02:16.147311   62943 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 12 21:47 /usr/share/ca-certificates/130832.pem
	I0912 23:02:16.147367   62943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130832.pem
	I0912 23:02:16.152734   62943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130832.pem /etc/ssl/certs/3ec20f2e.0"
	I0912 23:02:16.163131   62943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0912 23:02:16.173390   62943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0912 23:02:16.177785   62943 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 12 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0912 23:02:16.177842   62943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0912 23:02:16.183047   62943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0912 23:02:16.192890   62943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13083.pem && ln -fs /usr/share/ca-certificates/13083.pem /etc/ssl/certs/13083.pem"
	I0912 23:02:16.202818   62943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13083.pem
	I0912 23:02:16.206815   62943 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 12 21:47 /usr/share/ca-certificates/13083.pem
	I0912 23:02:16.206871   62943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13083.pem
	I0912 23:02:16.212049   62943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13083.pem /etc/ssl/certs/51391683.0"
	I0912 23:02:16.222224   62943 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0912 23:02:16.226504   62943 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0912 23:02:16.232090   62943 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0912 23:02:16.237380   62943 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0912 23:02:16.243024   62943 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0912 23:02:16.248333   62943 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0912 23:02:16.258745   62943 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
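[editor's note] The openssl x509 -noout -in ... -checkend 86400 runs above verify that none of the existing control-plane certificates expire within the next 24 hours, which is why no regeneration happens. An equivalent check in plain Go; the file paths are the ones from the log, and the helper is an illustrative stand-in for the openssl invocation, not minikube's code:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires within d,
    // mirroring `openssl x509 -checkend <seconds>`.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        certs := []string{
            "/var/lib/minikube/certs/apiserver-etcd-client.crt",
            "/var/lib/minikube/certs/apiserver-kubelet-client.crt",
            "/var/lib/minikube/certs/etcd/server.crt",
            "/var/lib/minikube/certs/etcd/healthcheck-client.crt",
            "/var/lib/minikube/certs/etcd/peer.crt",
            "/var/lib/minikube/certs/front-proxy-client.crt",
        }
        for _, c := range certs {
            soon, err := expiresWithin(c, 24*time.Hour)
            if err != nil {
                fmt.Println(c, "error:", err)
                continue
            }
            if soon {
                fmt.Println(c, "expires within 24h, would need regeneration")
            }
        }
    }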
	I0912 23:02:16.274068   62943 kubeadm.go:392] StartCluster: {Name:no-preload-380092 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-380092 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.253 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 23:02:16.274168   62943 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0912 23:02:16.274216   62943 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0912 23:02:16.323688   62943 cri.go:89] found id: ""
	I0912 23:02:16.323751   62943 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0912 23:02:16.335130   62943 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0912 23:02:16.335152   62943 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0912 23:02:16.335192   62943 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0912 23:02:16.346285   62943 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0912 23:02:16.347271   62943 kubeconfig.go:125] found "no-preload-380092" server: "https://192.168.50.253:8443"
	I0912 23:02:16.349217   62943 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0912 23:02:16.360266   62943 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.253
	I0912 23:02:16.360308   62943 kubeadm.go:1160] stopping kube-system containers ...
	I0912 23:02:16.360319   62943 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0912 23:02:16.360361   62943 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0912 23:02:16.398876   62943 cri.go:89] found id: ""
	I0912 23:02:16.398942   62943 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0912 23:02:16.418893   62943 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0912 23:02:16.430531   62943 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0912 23:02:16.430558   62943 kubeadm.go:157] found existing configuration files:
	
	I0912 23:02:16.430602   62943 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0912 23:02:16.441036   62943 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0912 23:02:16.441093   62943 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0912 23:02:16.452768   62943 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0912 23:02:16.463317   62943 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0912 23:02:16.463394   62943 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0912 23:02:16.473412   62943 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0912 23:02:16.482470   62943 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0912 23:02:16.482530   62943 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0912 23:02:16.494488   62943 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0912 23:02:16.503873   62943 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0912 23:02:16.503955   62943 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0912 23:02:16.513052   62943 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0912 23:02:16.522738   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:02:16.630286   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
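The restart path above rebuilds the control plane in place: each kubeconfig under /etc/kubernetes is checked for the expected control-plane endpoint and removed when it does not match (here the files are simply missing), the freshly rendered kubeadm.yaml is moved into place, and only the certs and kubeconfig phases of kubeadm init are re-run. A condensed sketch of that sequence, with the endpoint, paths and version taken from this run:

    # Drop kubeconfigs that do not point at the expected endpoint, then regenerate them.
    ENDPOINT="https://control-plane.minikube.internal:8443"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "$ENDPOINT" "/etc/kubernetes/$f" 2>/dev/null || sudo rm -f "/etc/kubernetes/$f"
    done
    sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
    # Re-run only the phases needed to restore what was removed.
    sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" \
      kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" \
      kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml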
	I0912 23:02:14.347758   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:14.348342   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | unable to find current IP address of domain default-k8s-diff-port-702201 in network mk-default-k8s-diff-port-702201
	I0912 23:02:14.348365   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | I0912 23:02:14.348276   63646 retry.go:31] will retry after 2.993143621s: waiting for machine to come up
	I0912 23:02:14.745599   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:15.245719   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:15.745787   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:16.245959   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:16.746271   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:17.245414   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:17.745343   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:18.246080   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:18.746025   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:19.245751   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:17.343758   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.344408   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Found IP for machine: 192.168.39.214
	I0912 23:02:17.344443   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has current primary IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.344453   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Reserving static IP address...
	I0912 23:02:17.344817   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Reserved static IP address: 192.168.39.214
	I0912 23:02:17.344848   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-702201", mac: "52:54:00:b4:fd:fb", ip: "192.168.39.214"} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:02:17.344857   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for SSH to be available...
	I0912 23:02:17.344886   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | skip adding static IP to network mk-default-k8s-diff-port-702201 - found existing host DHCP lease matching {name: "default-k8s-diff-port-702201", mac: "52:54:00:b4:fd:fb", ip: "192.168.39.214"}
	I0912 23:02:17.344903   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | Getting to WaitForSSH function...
	I0912 23:02:17.347627   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.348094   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:02:17.348128   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.348236   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | Using SSH client type: external
	I0912 23:02:17.348296   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | Using SSH private key: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/default-k8s-diff-port-702201/id_rsa (-rw-------)
	I0912 23:02:17.348330   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.214 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19616-5891/.minikube/machines/default-k8s-diff-port-702201/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0912 23:02:17.348353   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | About to run SSH command:
	I0912 23:02:17.348363   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | exit 0
	I0912 23:02:17.474375   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | SSH cmd err, output: <nil>: 
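The WaitForSSH step above shells out to the system ssh client with a throwaway known-hosts file and runs "exit 0" until the guest answers. A rough equivalent of that probe, reusing the host, port, key and options shown in the log:

    HOST=192.168.39.214
    KEY=/home/jenkins/minikube-integration/19616-5891/.minikube/machines/default-k8s-diff-port-702201/id_rsa
    # Keep trying until sshd inside the VM accepts the connection.
    until ssh -F /dev/null \
          -o ConnectionAttempts=3 -o ConnectTimeout=10 \
          -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
          -o PasswordAuthentication=no -o IdentitiesOnly=yes \
          -i "$KEY" -p 22 "docker@$HOST" exit 0; do
      sleep 2
    done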
	I0912 23:02:17.474757   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetConfigRaw
	I0912 23:02:17.475391   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetIP
	I0912 23:02:17.478041   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.478557   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:02:17.478590   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.478791   61354 profile.go:143] Saving config to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/default-k8s-diff-port-702201/config.json ...
	I0912 23:02:17.479064   61354 machine.go:93] provisionDockerMachine start ...
	I0912 23:02:17.479087   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .DriverName
	I0912 23:02:17.479317   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHHostname
	I0912 23:02:17.482167   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.482584   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:02:17.482616   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.482805   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHPort
	I0912 23:02:17.482996   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHKeyPath
	I0912 23:02:17.483163   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHKeyPath
	I0912 23:02:17.483287   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHUsername
	I0912 23:02:17.483443   61354 main.go:141] libmachine: Using SSH client type: native
	I0912 23:02:17.483653   61354 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0912 23:02:17.483669   61354 main.go:141] libmachine: About to run SSH command:
	hostname
	I0912 23:02:17.590238   61354 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0912 23:02:17.590267   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetMachineName
	I0912 23:02:17.590549   61354 buildroot.go:166] provisioning hostname "default-k8s-diff-port-702201"
	I0912 23:02:17.590588   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetMachineName
	I0912 23:02:17.590766   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHHostname
	I0912 23:02:17.593804   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.594267   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:02:17.594320   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.594542   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHPort
	I0912 23:02:17.594761   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHKeyPath
	I0912 23:02:17.594956   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHKeyPath
	I0912 23:02:17.595111   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHUsername
	I0912 23:02:17.595333   61354 main.go:141] libmachine: Using SSH client type: native
	I0912 23:02:17.595575   61354 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0912 23:02:17.595591   61354 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-702201 && echo "default-k8s-diff-port-702201" | sudo tee /etc/hostname
	I0912 23:02:17.720928   61354 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-702201
	
	I0912 23:02:17.720961   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHHostname
	I0912 23:02:17.724174   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.724499   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:02:17.724522   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.724682   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHPort
	I0912 23:02:17.724847   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHKeyPath
	I0912 23:02:17.725026   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHKeyPath
	I0912 23:02:17.725199   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHUsername
	I0912 23:02:17.725350   61354 main.go:141] libmachine: Using SSH client type: native
	I0912 23:02:17.725528   61354 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0912 23:02:17.725550   61354 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-702201' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-702201/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-702201' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0912 23:02:17.842216   61354 main.go:141] libmachine: SSH cmd err, output: <nil>: 
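The two SSH commands above set the guest hostname and make it resolvable locally. Combined into one sketch, with the hostname taken from this profile:

    NAME=default-k8s-diff-port-702201
    # Set the runtime hostname and persist it.
    sudo hostname "$NAME" && echo "$NAME" | sudo tee /etc/hostname
    # Reuse an existing 127.0.1.1 entry if there is one, otherwise append a new line.
    if ! grep -q "\s$NAME$" /etc/hosts; then
      if grep -q '^127.0.1.1\s' /etc/hosts; then
        sudo sed -i "s/^127.0.1.1\s.*/127.0.1.1 $NAME/" /etc/hosts
      else
        echo "127.0.1.1 $NAME" | sudo tee -a /etc/hosts
      fi
    fi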
	I0912 23:02:17.842250   61354 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19616-5891/.minikube CaCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19616-5891/.minikube}
	I0912 23:02:17.842274   61354 buildroot.go:174] setting up certificates
	I0912 23:02:17.842289   61354 provision.go:84] configureAuth start
	I0912 23:02:17.842306   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetMachineName
	I0912 23:02:17.842597   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetIP
	I0912 23:02:17.845935   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.846372   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:02:17.846401   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.846546   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHHostname
	I0912 23:02:17.849376   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.849937   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:02:17.849971   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.850152   61354 provision.go:143] copyHostCerts
	I0912 23:02:17.850232   61354 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem, removing ...
	I0912 23:02:17.850253   61354 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem
	I0912 23:02:17.850356   61354 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem (1679 bytes)
	I0912 23:02:17.850448   61354 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem, removing ...
	I0912 23:02:17.850457   61354 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem
	I0912 23:02:17.850477   61354 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem (1082 bytes)
	I0912 23:02:17.850529   61354 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem, removing ...
	I0912 23:02:17.850537   61354 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem
	I0912 23:02:17.850555   61354 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem (1123 bytes)
	I0912 23:02:17.850601   61354 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-702201 san=[127.0.0.1 192.168.39.214 default-k8s-diff-port-702201 localhost minikube]
	I0912 23:02:17.911340   61354 provision.go:177] copyRemoteCerts
	I0912 23:02:17.911392   61354 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0912 23:02:17.911413   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHHostname
	I0912 23:02:17.914514   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.914937   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:02:17.914969   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.915250   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHPort
	I0912 23:02:17.915449   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHKeyPath
	I0912 23:02:17.915648   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHUsername
	I0912 23:02:17.915800   61354 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/default-k8s-diff-port-702201/id_rsa Username:docker}
	I0912 23:02:18.003351   61354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0912 23:02:18.032117   61354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0912 23:02:18.057665   61354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0912 23:02:18.084003   61354 provision.go:87] duration metric: took 241.697336ms to configureAuth
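configureAuth regenerates the machine's server certificate on the host (with the SANs listed above) and copies the CA and server key pair into /etc/docker on the guest. minikube performs the copy over its own SSH client; the plain scp/ssh stand-in below is only an illustration of the same transfer, with paths from this run:

    M=/home/jenkins/minikube-integration/19616-5891/.minikube
    KEY=$M/machines/default-k8s-diff-port-702201/id_rsa
    for f in certs/ca.pem machines/server.pem machines/server-key.pem; do
      scp -o StrictHostKeyChecking=no -i "$KEY" "$M/$f" "docker@192.168.39.214:/tmp/$(basename "$f")"
    done
    # Move the PEMs into place with root privileges on the guest.
    ssh -o StrictHostKeyChecking=no -i "$KEY" docker@192.168.39.214 \
      'sudo mkdir -p /etc/docker && sudo mv /tmp/*.pem /etc/docker/'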
	I0912 23:02:18.084043   61354 buildroot.go:189] setting minikube options for container-runtime
	I0912 23:02:18.084256   61354 config.go:182] Loaded profile config "default-k8s-diff-port-702201": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 23:02:18.084379   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHHostname
	I0912 23:02:18.087408   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:18.087786   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:02:18.087813   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:18.088070   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHPort
	I0912 23:02:18.088263   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHKeyPath
	I0912 23:02:18.088441   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHKeyPath
	I0912 23:02:18.088576   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHUsername
	I0912 23:02:18.088706   61354 main.go:141] libmachine: Using SSH client type: native
	I0912 23:02:18.088874   61354 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0912 23:02:18.088893   61354 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0912 23:02:18.308716   61354 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0912 23:02:18.308743   61354 machine.go:96] duration metric: took 829.664034ms to provisionDockerMachine
	I0912 23:02:18.308753   61354 start.go:293] postStartSetup for "default-k8s-diff-port-702201" (driver="kvm2")
	I0912 23:02:18.308765   61354 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0912 23:02:18.308780   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .DriverName
	I0912 23:02:18.309119   61354 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0912 23:02:18.309156   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHHostname
	I0912 23:02:18.311782   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:18.312112   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:02:18.312138   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:18.312258   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHPort
	I0912 23:02:18.312429   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHKeyPath
	I0912 23:02:18.312562   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHUsername
	I0912 23:02:18.312686   61354 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/default-k8s-diff-port-702201/id_rsa Username:docker}
	I0912 23:02:18.400164   61354 ssh_runner.go:195] Run: cat /etc/os-release
	I0912 23:02:18.404437   61354 info.go:137] Remote host: Buildroot 2023.02.9
	I0912 23:02:18.404465   61354 filesync.go:126] Scanning /home/jenkins/minikube-integration/19616-5891/.minikube/addons for local assets ...
	I0912 23:02:18.404539   61354 filesync.go:126] Scanning /home/jenkins/minikube-integration/19616-5891/.minikube/files for local assets ...
	I0912 23:02:18.404634   61354 filesync.go:149] local asset: /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem -> 130832.pem in /etc/ssl/certs
	I0912 23:02:18.404748   61354 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0912 23:02:18.414148   61354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem --> /etc/ssl/certs/130832.pem (1708 bytes)
	I0912 23:02:18.438745   61354 start.go:296] duration metric: took 129.977307ms for postStartSetup
	I0912 23:02:18.438815   61354 fix.go:56] duration metric: took 19.740295621s for fixHost
	I0912 23:02:18.438839   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHHostname
	I0912 23:02:18.441655   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:18.442034   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:02:18.442063   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:18.442229   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHPort
	I0912 23:02:18.442424   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHKeyPath
	I0912 23:02:18.442637   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHKeyPath
	I0912 23:02:18.442782   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHUsername
	I0912 23:02:18.442983   61354 main.go:141] libmachine: Using SSH client type: native
	I0912 23:02:18.443140   61354 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0912 23:02:18.443150   61354 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0912 23:02:18.550399   61354 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726182138.510495585
	
	I0912 23:02:18.550429   61354 fix.go:216] guest clock: 1726182138.510495585
	I0912 23:02:18.550460   61354 fix.go:229] Guest: 2024-09-12 23:02:18.510495585 +0000 UTC Remote: 2024-09-12 23:02:18.438824041 +0000 UTC m=+356.198385709 (delta=71.671544ms)
	I0912 23:02:18.550493   61354 fix.go:200] guest clock delta is within tolerance: 71.671544ms
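The clock check above runs "date +%s.%N" inside the guest and compares it with the host's wall clock at the moment the command returned; here the drift is about 72ms, well within tolerance. A minimal version of the same comparison (bc is used for the float subtraction; key path reused from this run):

    KEY=/home/jenkins/minikube-integration/19616-5891/.minikube/machines/default-k8s-diff-port-702201/id_rsa
    GUEST=$(ssh -o StrictHostKeyChecking=no -i "$KEY" docker@192.168.39.214 'date +%s.%N')
    HOST=$(date +%s.%N)
    # A positive delta means the guest clock is ahead of the host.
    echo "guest-host clock delta: $(echo "$GUEST - $HOST" | bc)s"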
	I0912 23:02:18.550501   61354 start.go:83] releasing machines lock for "default-k8s-diff-port-702201", held for 19.852037366s
	I0912 23:02:18.550549   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .DriverName
	I0912 23:02:18.550842   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetIP
	I0912 23:02:18.553957   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:18.554416   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:02:18.554450   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:18.554624   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .DriverName
	I0912 23:02:18.555224   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .DriverName
	I0912 23:02:18.555446   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .DriverName
	I0912 23:02:18.555554   61354 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0912 23:02:18.555597   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHHostname
	I0912 23:02:18.555718   61354 ssh_runner.go:195] Run: cat /version.json
	I0912 23:02:18.555753   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHHostname
	I0912 23:02:18.558797   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:18.558822   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:18.559205   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:02:18.559236   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:18.559283   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:02:18.559300   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:18.559532   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHPort
	I0912 23:02:18.559538   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHPort
	I0912 23:02:18.559735   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHKeyPath
	I0912 23:02:18.559736   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHKeyPath
	I0912 23:02:18.559921   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHUsername
	I0912 23:02:18.560042   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHUsername
	I0912 23:02:18.560109   61354 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/default-k8s-diff-port-702201/id_rsa Username:docker}
	I0912 23:02:18.560199   61354 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/default-k8s-diff-port-702201/id_rsa Username:docker}
	I0912 23:02:18.672716   61354 ssh_runner.go:195] Run: systemctl --version
	I0912 23:02:18.681305   61354 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0912 23:02:18.833032   61354 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0912 23:02:18.838723   61354 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0912 23:02:18.838800   61354 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0912 23:02:18.854769   61354 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0912 23:02:18.854796   61354 start.go:495] detecting cgroup driver to use...
	I0912 23:02:18.854867   61354 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0912 23:02:18.872157   61354 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0912 23:02:18.887144   61354 docker.go:217] disabling cri-docker service (if available) ...
	I0912 23:02:18.887199   61354 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0912 23:02:18.901811   61354 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0912 23:02:18.920495   61354 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0912 23:02:19.060252   61354 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0912 23:02:19.211418   61354 docker.go:233] disabling docker service ...
	I0912 23:02:19.211492   61354 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0912 23:02:19.226829   61354 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0912 23:02:19.240390   61354 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0912 23:02:19.398676   61354 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0912 23:02:19.539078   61354 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
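Before CRI-O is configured, the run makes sure no other runtime owns the node: containerd is stopped, and the cri-dockerd and Docker units are stopped, disabled and masked. The same sequence as plain systemctl calls:

    # Hand the node over to CRI-O only.
    sudo systemctl stop -f containerd || true
    sudo systemctl stop -f cri-docker.socket cri-docker.service || true
    sudo systemctl disable cri-docker.socket
    sudo systemctl mask cri-docker.service
    sudo systemctl stop -f docker.socket docker.service || true
    sudo systemctl disable docker.socket
    sudo systemctl mask docker.service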
	I0912 23:02:19.552847   61354 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0912 23:02:19.574121   61354 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0912 23:02:19.574198   61354 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:02:19.585231   61354 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0912 23:02:19.585298   61354 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:02:19.596560   61354 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:02:19.606732   61354 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:02:19.620125   61354 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0912 23:02:19.635153   61354 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:02:19.648779   61354 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:02:19.666387   61354 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:02:19.680339   61354 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0912 23:02:19.693115   61354 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0912 23:02:19.693193   61354 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0912 23:02:19.710075   61354 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0912 23:02:19.722305   61354 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 23:02:19.855658   61354 ssh_runner.go:195] Run: sudo systemctl restart crio
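The crictl.yaml write and the chain of sed edits above configure CRI-O for this cluster: crictl is pointed at CRI-O's socket, the pause image and cgroup driver are pinned, conmon is moved into the pod cgroup, unprivileged low ports are allowed, and the kernel prerequisites are enabled before the runtime restarts. Collected into one script, with values copied from the log:

    # Point crictl at CRI-O.
    printf '%s\n' 'runtime-endpoint: unix:///var/run/crio/crio.sock' | sudo tee /etc/crictl.yaml
    CONF=/etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' "$CONF"
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
    sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
    # Allow pods to bind low ports without extra privileges.
    sudo grep -q '^ *default_sysctls' "$CONF" || \
      sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$CONF"
    sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$CONF"
    # Kernel prerequisites, then restart the runtime.
    sudo modprobe br_netfilter
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
    sudo systemctl daemon-reload
    sudo systemctl restart crio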
	I0912 23:02:19.958871   61354 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0912 23:02:19.958934   61354 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0912 23:02:19.964103   61354 start.go:563] Will wait 60s for crictl version
	I0912 23:02:19.964174   61354 ssh_runner.go:195] Run: which crictl
	I0912 23:02:19.968265   61354 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0912 23:02:20.006530   61354 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0912 23:02:20.006608   61354 ssh_runner.go:195] Run: crio --version
	I0912 23:02:20.034570   61354 ssh_runner.go:195] Run: crio --version
	I0912 23:02:20.065312   61354 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0912 23:02:17.474542   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:19.975107   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:17.616860   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:02:17.845456   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:02:17.916359   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:02:18.000828   62943 api_server.go:52] waiting for apiserver process to appear ...
	I0912 23:02:18.000924   62943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:18.501381   62943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:19.001136   62943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:19.017346   62943 api_server.go:72] duration metric: took 1.016512434s to wait for apiserver process to appear ...
	I0912 23:02:19.017382   62943 api_server.go:88] waiting for apiserver healthz status ...
	I0912 23:02:19.017453   62943 api_server.go:253] Checking apiserver healthz at https://192.168.50.253:8443/healthz ...
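With certificates and kubeconfigs restored, the remaining kubeadm phases recreate the kubelet bootstrap and the static pod manifests for the control plane and etcd, after which the run polls for the apiserver process and its /healthz endpoint (as the pgrep loop and the healthz checks in this log show). A sketch of those steps; the curl loop is a simplified stand-in for the Go HTTP check, with the endpoint and version from this run:

    sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml
    # Wait for the apiserver process, then for a healthy /healthz.
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do sleep 0.5; done
    until curl -ksf https://192.168.50.253:8443/healthz >/dev/null; do sleep 1; done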
	I0912 23:02:20.066529   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetIP
	I0912 23:02:20.069310   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:20.069719   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:02:20.069748   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:20.070001   61354 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0912 23:02:20.074059   61354 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0912 23:02:20.085892   61354 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-702201 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-702201 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.214 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0912 23:02:20.086016   61354 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0912 23:02:20.086054   61354 ssh_runner.go:195] Run: sudo crictl images --output json
	I0912 23:02:20.130495   61354 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0912 23:02:20.130570   61354 ssh_runner.go:195] Run: which lz4
	I0912 23:02:20.134677   61354 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0912 23:02:20.138918   61354 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0912 23:02:20.138956   61354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0912 23:02:21.380259   61354 crio.go:462] duration metric: took 1.245620408s to copy over tarball
	I0912 23:02:21.380357   61354 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
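The preload check above asks CRI-O for its image list; because the expected kube images are missing, the ~388MB preloaded tarball is copied into the VM and unpacked under /var, which populates the image store in one step. Roughly as follows; the JSON check is simplified to a grep here, and the tarball is assumed to have already been copied to /preloaded.tar.lz4 as in the log:

    if ! sudo crictl images --output json | grep -q 'registry.k8s.io/kube-apiserver:v1.31.1'; then
      # Unpack images and related state directly into /var, then clean up.
      sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
      sudo rm -f /preloaded.tar.lz4
    fi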
	I0912 23:02:19.745707   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:20.246273   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:20.746109   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:21.246160   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:21.745863   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:22.245390   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:22.745716   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:23.245475   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:23.746069   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:24.245487   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:22.474250   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:24.974136   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:24.018305   62943 api_server.go:269] stopped: https://192.168.50.253:8443/healthz: Get "https://192.168.50.253:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 23:02:24.018354   62943 api_server.go:253] Checking apiserver healthz at https://192.168.50.253:8443/healthz ...
	I0912 23:02:23.453059   61354 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.072658804s)
	I0912 23:02:23.453094   61354 crio.go:469] duration metric: took 2.072807363s to extract the tarball
	I0912 23:02:23.453102   61354 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0912 23:02:23.492566   61354 ssh_runner.go:195] Run: sudo crictl images --output json
	I0912 23:02:23.535129   61354 crio.go:514] all images are preloaded for cri-o runtime.
	I0912 23:02:23.535152   61354 cache_images.go:84] Images are preloaded, skipping loading
	I0912 23:02:23.535160   61354 kubeadm.go:934] updating node { 192.168.39.214 8444 v1.31.1 crio true true} ...
	I0912 23:02:23.535251   61354 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-702201 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.214
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-702201 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0912 23:02:23.535311   61354 ssh_runner.go:195] Run: crio config
	I0912 23:02:23.586110   61354 cni.go:84] Creating CNI manager for ""
	I0912 23:02:23.586128   61354 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0912 23:02:23.586137   61354 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0912 23:02:23.586156   61354 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.214 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-702201 NodeName:default-k8s-diff-port-702201 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.214"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.214 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0912 23:02:23.586280   61354 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.214
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-702201"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.214
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.214"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0912 23:02:23.586337   61354 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0912 23:02:23.595675   61354 binaries.go:44] Found k8s binaries, skipping transfer
	I0912 23:02:23.595744   61354 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0912 23:02:23.605126   61354 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0912 23:02:23.621542   61354 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0912 23:02:23.637919   61354 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0912 23:02:23.654869   61354 ssh_runner.go:195] Run: grep 192.168.39.214	control-plane.minikube.internal$ /etc/hosts
	I0912 23:02:23.658860   61354 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.214	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0912 23:02:23.670648   61354 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 23:02:23.787949   61354 ssh_runner.go:195] Run: sudo systemctl start kubelet
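The generated kubelet unit (shown a few lines above) and its kubeadm drop-in are copied into systemd's search paths, the control-plane name is pinned in /etc/hosts, and kubelet is started. The equivalent steps on the guest, with the IP from this profile; the unit contents themselves are rendered in memory and copied over SSH in the real flow:

    sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
    # (10-kubeadm.conf, kubelet.service and kubeadm.yaml.new are copied in here.)
    grep -q 'control-plane.minikube.internal' /etc/hosts || \
      echo '192.168.39.214 control-plane.minikube.internal' | sudo tee -a /etc/hosts
    sudo systemctl daemon-reload
    sudo systemctl start kubelet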
	I0912 23:02:23.804668   61354 certs.go:68] Setting up /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/default-k8s-diff-port-702201 for IP: 192.168.39.214
	I0912 23:02:23.804697   61354 certs.go:194] generating shared ca certs ...
	I0912 23:02:23.804718   61354 certs.go:226] acquiring lock for ca certs: {Name:mk5e19f2c2757edc3fcb6b43a39efee0885b349c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 23:02:23.804937   61354 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.key
	I0912 23:02:23.804998   61354 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.key
	I0912 23:02:23.805012   61354 certs.go:256] generating profile certs ...
	I0912 23:02:23.805110   61354 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/default-k8s-diff-port-702201/client.key
	I0912 23:02:23.805184   61354 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/default-k8s-diff-port-702201/apiserver.key.9ca3177b
	I0912 23:02:23.805231   61354 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/default-k8s-diff-port-702201/proxy-client.key
	I0912 23:02:23.805379   61354 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083.pem (1338 bytes)
	W0912 23:02:23.805411   61354 certs.go:480] ignoring /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083_empty.pem, impossibly tiny 0 bytes
	I0912 23:02:23.805420   61354 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem (1679 bytes)
	I0912 23:02:23.805449   61354 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem (1082 bytes)
	I0912 23:02:23.805480   61354 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem (1123 bytes)
	I0912 23:02:23.805519   61354 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem (1679 bytes)
	I0912 23:02:23.805574   61354 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem (1708 bytes)
	I0912 23:02:23.806196   61354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0912 23:02:23.834789   61354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0912 23:02:23.863030   61354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0912 23:02:23.890538   61354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0912 23:02:23.923946   61354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/default-k8s-diff-port-702201/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0912 23:02:23.952990   61354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/default-k8s-diff-port-702201/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0912 23:02:23.984025   61354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/default-k8s-diff-port-702201/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0912 23:02:24.013727   61354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/default-k8s-diff-port-702201/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0912 23:02:24.038060   61354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem --> /usr/share/ca-certificates/130832.pem (1708 bytes)
	I0912 23:02:24.061285   61354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0912 23:02:24.085128   61354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083.pem --> /usr/share/ca-certificates/13083.pem (1338 bytes)
	I0912 23:02:24.110174   61354 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0912 23:02:24.127185   61354 ssh_runner.go:195] Run: openssl version
	I0912 23:02:24.133215   61354 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0912 23:02:24.144390   61354 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0912 23:02:24.149357   61354 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 12 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0912 23:02:24.149432   61354 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0912 23:02:24.155228   61354 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0912 23:02:24.167254   61354 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13083.pem && ln -fs /usr/share/ca-certificates/13083.pem /etc/ssl/certs/13083.pem"
	I0912 23:02:24.178264   61354 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13083.pem
	I0912 23:02:24.183163   61354 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 12 21:47 /usr/share/ca-certificates/13083.pem
	I0912 23:02:24.183216   61354 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13083.pem
	I0912 23:02:24.188891   61354 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13083.pem /etc/ssl/certs/51391683.0"
	I0912 23:02:24.199682   61354 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130832.pem && ln -fs /usr/share/ca-certificates/130832.pem /etc/ssl/certs/130832.pem"
	I0912 23:02:24.210810   61354 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130832.pem
	I0912 23:02:24.215244   61354 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 12 21:47 /usr/share/ca-certificates/130832.pem
	I0912 23:02:24.215321   61354 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130832.pem
	I0912 23:02:24.221160   61354 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130832.pem /etc/ssl/certs/3ec20f2e.0"
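
Each PEM installed under /usr/share/ca-certificates gets a companion symlink in /etc/ssl/certs named after its OpenSSL subject hash (b5213941.0, 51391683.0 and 3ec20f2e.0 above), which is how OpenSSL locates trusted CAs. A minimal Go sketch of the same hash-and-symlink step, shelling out to openssl exactly as the log does; the paths are illustrative and this is not minikube's own code:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkCert installs a subject-hash symlink for pem under certsDir, mirroring
	// the `openssl x509 -hash -noout -in` plus `ln -fs` sequence in the log.
	func linkCert(pem, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join(certsDir, hash+".0")
		_ = os.Remove(link) // replace any stale link, like ln -fs
		return os.Symlink(pem, link)
	}

	func main() {
		// Illustrative path; on the test VM the certs live under /usr/share/ca-certificates.
		if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
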
	I0912 23:02:24.232246   61354 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0912 23:02:24.236796   61354 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0912 23:02:24.243930   61354 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0912 23:02:24.250402   61354 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0912 23:02:24.256470   61354 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0912 23:02:24.262495   61354 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0912 23:02:24.268433   61354 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
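
`openssl x509 -checkend 86400` exits non-zero when the certificate expires within the next 24 hours, which is the freshness test applied to each control-plane cert above. An equivalent check written against Go's crypto/x509, shown only as a sketch of the same idea:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the first certificate in the PEM file at path
	// expires within d, the condition `openssl x509 -checkend <seconds>` tests.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		// Path is illustrative; the log checks the apiserver, etcd and front-proxy client certs.
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		fmt.Println(soon, err)
	}
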
	I0912 23:02:24.274410   61354 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-702201 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-702201 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.214 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 23:02:24.274499   61354 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0912 23:02:24.274574   61354 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0912 23:02:24.315011   61354 cri.go:89] found id: ""
	I0912 23:02:24.315073   61354 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0912 23:02:24.325319   61354 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0912 23:02:24.325341   61354 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0912 23:02:24.325384   61354 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0912 23:02:24.335529   61354 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0912 23:02:24.336936   61354 kubeconfig.go:125] found "default-k8s-diff-port-702201" server: "https://192.168.39.214:8444"
	I0912 23:02:24.340116   61354 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0912 23:02:24.350831   61354 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.214
	I0912 23:02:24.350869   61354 kubeadm.go:1160] stopping kube-system containers ...
	I0912 23:02:24.350883   61354 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0912 23:02:24.350974   61354 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0912 23:02:24.393329   61354 cri.go:89] found id: ""
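
The cri.go lines enumerate containers through crictl, filtering on the io.kubernetes.pod.namespace label; an empty ID list here simply means no kube-system containers are running yet. A small Go sketch of that listing call, assuming crictl is available on the node's PATH:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listKubeSystemContainers returns the container IDs crictl reports for pods in the
	// kube-system namespace, matching the `crictl ps -a --quiet --label ...` call in the log.
	func listKubeSystemContainers() ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			return nil, err
		}
		// One ID per line; empty output means no matching containers were found.
		return strings.Fields(string(out)), nil
	}

	func main() {
		ids, err := listKubeSystemContainers()
		fmt.Println(len(ids), "containers found, err:", err)
	}
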
	I0912 23:02:24.393405   61354 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0912 23:02:24.410979   61354 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0912 23:02:24.423185   61354 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0912 23:02:24.423201   61354 kubeadm.go:157] found existing configuration files:
	
	I0912 23:02:24.423243   61354 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0912 23:02:24.434365   61354 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0912 23:02:24.434424   61354 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0912 23:02:24.444193   61354 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0912 23:02:24.453990   61354 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0912 23:02:24.454047   61354 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0912 23:02:24.464493   61354 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0912 23:02:24.475213   61354 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0912 23:02:24.475290   61354 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0912 23:02:24.484665   61354 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0912 23:02:24.493882   61354 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0912 23:02:24.493943   61354 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0912 23:02:24.503337   61354 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0912 23:02:24.513303   61354 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:02:24.620334   61354 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:02:25.379199   61354 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:02:25.605374   61354 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:02:25.689838   61354 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
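
Because existing configuration files were found, the restart path re-runs individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the cached v1.31.1 binaries instead of a full kubeadm init. A hedged Go sketch of driving that same phase sequence; it is only an illustration of the commands shown above, not minikube's bootstrapper code:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Phase order taken from the log above; the PATH prefix points kubeadm at the
		// cached v1.31.1 binaries inside the VM. Running this elsewhere is illustrative only.
		phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
		for _, p := range phases {
			cmd := "sudo env PATH=/var/lib/minikube/binaries/v1.31.1:$PATH kubeadm init phase " + p +
				" --config /var/tmp/minikube/kubeadm.yaml"
			if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
				fmt.Printf("phase %q failed: %v\n%s\n", p, err, out)
				return
			}
		}
	}
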
	I0912 23:02:25.787873   61354 api_server.go:52] waiting for apiserver process to appear ...
	I0912 23:02:25.787952   61354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:26.288869   61354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:26.788863   61354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
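
The "waiting for apiserver process" step is a simple 500 ms poll of pgrep until a kube-apiserver process matching the minikube config appears. A minimal Go sketch of the same loop:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForAPIServerProcess mirrors the pgrep loop in the log: it returns once
	// `pgrep -xnf kube-apiserver.*minikube.*` matches a process or the timeout expires.
	func waitForAPIServerProcess(timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			// pgrep exits 0 when at least one process matched.
			if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
	}

	func main() {
		fmt.Println(waitForAPIServerProcess(2 * time.Minute))
	}
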
	I0912 23:02:24.746085   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:25.245836   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:25.745805   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:26.246312   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:26.745772   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:27.245309   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:27.745530   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:28.245792   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:28.745917   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:29.245542   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:27.474741   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:29.974093   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:29.019453   62943 api_server.go:269] stopped: https://192.168.50.253:8443/healthz: Get "https://192.168.50.253:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 23:02:29.019501   62943 api_server.go:253] Checking apiserver healthz at https://192.168.50.253:8443/healthz ...
	I0912 23:02:27.288650   61354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:27.788577   61354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:27.803146   61354 api_server.go:72] duration metric: took 2.015269708s to wait for apiserver process to appear ...
	I0912 23:02:27.803175   61354 api_server.go:88] waiting for apiserver healthz status ...
	I0912 23:02:27.803196   61354 api_server.go:253] Checking apiserver healthz at https://192.168.39.214:8444/healthz ...
	I0912 23:02:27.803838   61354 api_server.go:269] stopped: https://192.168.39.214:8444/healthz: Get "https://192.168.39.214:8444/healthz": dial tcp 192.168.39.214:8444: connect: connection refused
	I0912 23:02:28.304001   61354 api_server.go:253] Checking apiserver healthz at https://192.168.39.214:8444/healthz ...
	I0912 23:02:30.918251   61354 api_server.go:279] https://192.168.39.214:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0912 23:02:30.918285   61354 api_server.go:103] status: https://192.168.39.214:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0912 23:02:30.918300   61354 api_server.go:253] Checking apiserver healthz at https://192.168.39.214:8444/healthz ...
	I0912 23:02:30.985245   61354 api_server.go:279] https://192.168.39.214:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0912 23:02:30.985276   61354 api_server.go:103] status: https://192.168.39.214:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0912 23:02:31.303790   61354 api_server.go:253] Checking apiserver healthz at https://192.168.39.214:8444/healthz ...
	I0912 23:02:31.309221   61354 api_server.go:279] https://192.168.39.214:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0912 23:02:31.309255   61354 api_server.go:103] status: https://192.168.39.214:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0912 23:02:31.803907   61354 api_server.go:253] Checking apiserver healthz at https://192.168.39.214:8444/healthz ...
	I0912 23:02:31.808683   61354 api_server.go:279] https://192.168.39.214:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0912 23:02:31.808708   61354 api_server.go:103] status: https://192.168.39.214:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0912 23:02:32.303720   61354 api_server.go:253] Checking apiserver healthz at https://192.168.39.214:8444/healthz ...
	I0912 23:02:32.309378   61354 api_server.go:279] https://192.168.39.214:8444/healthz returned 200:
	ok
	I0912 23:02:32.318177   61354 api_server.go:141] control plane version: v1.31.1
	I0912 23:02:32.318207   61354 api_server.go:131] duration metric: took 4.515025163s to wait for apiserver health ...
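
The healthz probe above walks through the usual restart sequence: connection refused while nothing is listening, 403 for the unauthenticated probe, 500 while post-start hooks such as rbac/bootstrap-roles are still pending, and finally 200. A minimal Go sketch of one such probe against the endpoint from the log (skipping TLS verification is only for illustration; a real client would load the minikubeCA certificate instead):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// probeHealthz performs one unauthenticated GET against /healthz, the same request
	// the log shows transitioning 403 -> 500 -> 200 as the apiserver finishes starting.
	func probeHealthz(url string) (int, string, error) {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get(url)
		if err != nil {
			return 0, "", err
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		return resp.StatusCode, string(body), nil
	}

	func main() {
		code, body, err := probeHealthz("https://192.168.39.214:8444/healthz")
		fmt.Println(code, err)
		fmt.Println(body)
	}
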
	I0912 23:02:32.318217   61354 cni.go:84] Creating CNI manager for ""
	I0912 23:02:32.318225   61354 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0912 23:02:32.319660   61354 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0912 23:02:29.746186   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:30.245501   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:30.745636   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:31.245440   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:31.745457   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:32.246318   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:32.745369   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:33.246152   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:33.746183   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:34.245452   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:31.974622   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:34.473549   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:34.019784   62943 api_server.go:269] stopped: https://192.168.50.253:8443/healthz: Get "https://192.168.50.253:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 23:02:34.019838   62943 api_server.go:253] Checking apiserver healthz at https://192.168.50.253:8443/healthz ...
	I0912 23:02:32.320695   61354 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0912 23:02:32.338749   61354 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0912 23:02:32.369921   61354 system_pods.go:43] waiting for kube-system pods to appear ...
	I0912 23:02:32.385934   61354 system_pods.go:59] 8 kube-system pods found
	I0912 23:02:32.385966   61354 system_pods.go:61] "coredns-7c65d6cfc9-ffms7" [d341bfb6-115b-4a9b-8ee5-ac0f6e0cf97a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0912 23:02:32.385986   61354 system_pods.go:61] "etcd-default-k8s-diff-port-702201" [c0c55fa9-3c65-4299-a1bb-59a55585a525] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0912 23:02:32.385996   61354 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-702201" [bf79734c-4cbc-4924-9358-f0196b357303] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0912 23:02:32.386007   61354 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-702201" [92a6ae59-ae75-4c08-a7dc-a77841be564b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0912 23:02:32.386019   61354 system_pods.go:61] "kube-proxy-x8hg2" [ef603b08-213d-4edb-85e6-e8b91f8fbbba] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0912 23:02:32.386027   61354 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-702201" [10021400-9446-46f6-aff0-e3eb3c0be96a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0912 23:02:32.386041   61354 system_pods.go:61] "metrics-server-6867b74b74-q5vlk" [d6719976-8c0c-444f-a1ea-dd3bdb0d5707] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0912 23:02:32.386051   61354 system_pods.go:61] "storage-provisioner" [6fdb298d-7e96-4cbb-b755-d866514e44b9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0912 23:02:32.386063   61354 system_pods.go:74] duration metric: took 16.120876ms to wait for pod list to return data ...
	I0912 23:02:32.386074   61354 node_conditions.go:102] verifying NodePressure condition ...
	I0912 23:02:32.391917   61354 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0912 23:02:32.391949   61354 node_conditions.go:123] node cpu capacity is 2
	I0912 23:02:32.391961   61354 node_conditions.go:105] duration metric: took 5.88075ms to run NodePressure ...
	I0912 23:02:32.391981   61354 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:02:32.671906   61354 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0912 23:02:32.677468   61354 kubeadm.go:739] kubelet initialised
	I0912 23:02:32.677494   61354 kubeadm.go:740] duration metric: took 5.561384ms waiting for restarted kubelet to initialise ...
	I0912 23:02:32.677503   61354 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 23:02:32.682823   61354 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-ffms7" in "kube-system" namespace to be "Ready" ...
	I0912 23:02:34.689536   61354 pod_ready.go:103] pod "coredns-7c65d6cfc9-ffms7" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:36.689748   61354 pod_ready.go:103] pod "coredns-7c65d6cfc9-ffms7" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:34.746241   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:35.246108   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:35.746087   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:36.245732   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:36.745659   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:37.245760   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:37.746137   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:38.245355   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:38.745905   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:39.246196   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:36.976523   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:39.473513   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:39.020907   62943 api_server.go:269] stopped: https://192.168.50.253:8443/healthz: Get "https://192.168.50.253:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 23:02:39.020949   62943 api_server.go:253] Checking apiserver healthz at https://192.168.50.253:8443/healthz ...
	I0912 23:02:39.398775   62943 api_server.go:269] stopped: https://192.168.50.253:8443/healthz: Get "https://192.168.50.253:8443/healthz": read tcp 192.168.50.1:34338->192.168.50.253:8443: read: connection reset by peer
	I0912 23:02:39.518000   62943 api_server.go:253] Checking apiserver healthz at https://192.168.50.253:8443/healthz ...
	I0912 23:02:39.518572   62943 api_server.go:269] stopped: https://192.168.50.253:8443/healthz: Get "https://192.168.50.253:8443/healthz": dial tcp 192.168.50.253:8443: connect: connection refused
	I0912 23:02:40.018526   62943 api_server.go:253] Checking apiserver healthz at https://192.168.50.253:8443/healthz ...
	I0912 23:02:40.019085   62943 api_server.go:269] stopped: https://192.168.50.253:8443/healthz: Get "https://192.168.50.253:8443/healthz": dial tcp 192.168.50.253:8443: connect: connection refused
	I0912 23:02:40.518456   62943 api_server.go:253] Checking apiserver healthz at https://192.168.50.253:8443/healthz ...
	I0912 23:02:37.692070   61354 pod_ready.go:93] pod "coredns-7c65d6cfc9-ffms7" in "kube-system" namespace has status "Ready":"True"
	I0912 23:02:37.692105   61354 pod_ready.go:82] duration metric: took 5.009256797s for pod "coredns-7c65d6cfc9-ffms7" in "kube-system" namespace to be "Ready" ...
	I0912 23:02:37.692119   61354 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-702201" in "kube-system" namespace to be "Ready" ...
	I0912 23:02:39.703004   61354 pod_ready.go:93] pod "etcd-default-k8s-diff-port-702201" in "kube-system" namespace has status "Ready":"True"
	I0912 23:02:39.703029   61354 pod_ready.go:82] duration metric: took 2.010902876s for pod "etcd-default-k8s-diff-port-702201" in "kube-system" namespace to be "Ready" ...
	I0912 23:02:39.703038   61354 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-702201" in "kube-system" namespace to be "Ready" ...
	I0912 23:02:41.709956   61354 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-702201" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:39.745643   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:40.245485   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:40.745582   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:41.245599   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:41.746339   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:42.246155   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:42.746334   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:43.245368   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:43.745371   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:44.246050   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:41.473779   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:43.475011   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:45.519472   62943 api_server.go:269] stopped: https://192.168.50.253:8443/healthz: Get "https://192.168.50.253:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 23:02:45.519513   62943 api_server.go:253] Checking apiserver healthz at https://192.168.50.253:8443/healthz ...
	I0912 23:02:44.210871   61354 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-702201" in "kube-system" namespace has status "Ready":"True"
	I0912 23:02:44.210896   61354 pod_ready.go:82] duration metric: took 4.507851295s for pod "kube-apiserver-default-k8s-diff-port-702201" in "kube-system" namespace to be "Ready" ...
	I0912 23:02:44.210905   61354 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-702201" in "kube-system" namespace to be "Ready" ...
	I0912 23:02:44.216677   61354 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-702201" in "kube-system" namespace has status "Ready":"True"
	I0912 23:02:44.216698   61354 pod_ready.go:82] duration metric: took 5.785493ms for pod "kube-controller-manager-default-k8s-diff-port-702201" in "kube-system" namespace to be "Ready" ...
	I0912 23:02:44.216708   61354 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-x8hg2" in "kube-system" namespace to be "Ready" ...
	I0912 23:02:44.220720   61354 pod_ready.go:93] pod "kube-proxy-x8hg2" in "kube-system" namespace has status "Ready":"True"
	I0912 23:02:44.220744   61354 pod_ready.go:82] duration metric: took 4.031371ms for pod "kube-proxy-x8hg2" in "kube-system" namespace to be "Ready" ...
	I0912 23:02:44.220753   61354 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-702201" in "kube-system" namespace to be "Ready" ...
	I0912 23:02:45.727199   61354 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-702201" in "kube-system" namespace has status "Ready":"True"
	I0912 23:02:45.727226   61354 pod_ready.go:82] duration metric: took 1.506465715s for pod "kube-scheduler-default-k8s-diff-port-702201" in "kube-system" namespace to be "Ready" ...
	I0912 23:02:45.727238   61354 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace to be "Ready" ...
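
The pod_ready lines poll each system-critical pod until its Ready condition reports True, with a 4m0s cap per pod; metrics-server is the one that keeps reporting False, consistent with its registry being overridden to fake.domain in the StartCluster config above. A hedged client-go sketch of the same readiness check, with the pod name and kubeconfig path copied from the log purely for illustration:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the pod has condition Ready=True, the check behind
	// the pod_ready.go lines in the log.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
				"metrics-server-6867b74b74-q5vlk", metav1.GetOptions{})
			if err == nil && isPodReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for pod to become Ready")
	}
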
	I0912 23:02:44.746354   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:45.245964   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:45.745631   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:46.246314   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:46.745483   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:47.245554   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:47.746311   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:48.246160   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:48.745999   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:49.246000   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:02:49.246093   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:02:49.286022   62386 cri.go:89] found id: ""
	I0912 23:02:49.286052   62386 logs.go:276] 0 containers: []
	W0912 23:02:49.286063   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:02:49.286070   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:02:49.286121   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:02:49.320469   62386 cri.go:89] found id: ""
	I0912 23:02:49.320508   62386 logs.go:276] 0 containers: []
	W0912 23:02:49.320527   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:02:49.320535   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:02:49.320635   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:02:45.973431   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:47.973882   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:49.974075   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:50.520522   62943 api_server.go:269] stopped: https://192.168.50.253:8443/healthz: Get "https://192.168.50.253:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 23:02:50.520570   62943 api_server.go:253] Checking apiserver healthz at https://192.168.50.253:8443/healthz ...
	I0912 23:02:47.732861   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:49.735642   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:52.232946   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:49.355651   62386 cri.go:89] found id: ""
	I0912 23:02:49.355682   62386 logs.go:276] 0 containers: []
	W0912 23:02:49.355694   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:02:49.355702   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:02:49.355757   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:02:49.387928   62386 cri.go:89] found id: ""
	I0912 23:02:49.387956   62386 logs.go:276] 0 containers: []
	W0912 23:02:49.387966   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:02:49.387980   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:02:49.388042   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:02:49.421154   62386 cri.go:89] found id: ""
	I0912 23:02:49.421184   62386 logs.go:276] 0 containers: []
	W0912 23:02:49.421192   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:02:49.421198   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:02:49.421258   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:02:49.460122   62386 cri.go:89] found id: ""
	I0912 23:02:49.460147   62386 logs.go:276] 0 containers: []
	W0912 23:02:49.460154   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:02:49.460159   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:02:49.460204   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:02:49.493113   62386 cri.go:89] found id: ""
	I0912 23:02:49.493136   62386 logs.go:276] 0 containers: []
	W0912 23:02:49.493144   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:02:49.493150   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:02:49.493196   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:02:49.525750   62386 cri.go:89] found id: ""
	I0912 23:02:49.525773   62386 logs.go:276] 0 containers: []
	W0912 23:02:49.525780   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:02:49.525790   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:02:49.525800   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:02:49.578720   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:02:49.578757   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:02:49.591483   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:02:49.591510   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:02:49.711769   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:02:49.711836   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:02:49.711854   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:02:49.792569   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:02:49.792620   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:02:52.333723   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:52.346359   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:02:52.346428   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:02:52.379990   62386 cri.go:89] found id: ""
	I0912 23:02:52.380017   62386 logs.go:276] 0 containers: []
	W0912 23:02:52.380025   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:02:52.380032   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:02:52.380089   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:02:52.413963   62386 cri.go:89] found id: ""
	I0912 23:02:52.413994   62386 logs.go:276] 0 containers: []
	W0912 23:02:52.414002   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:02:52.414007   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:02:52.414064   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:02:52.463982   62386 cri.go:89] found id: ""
	I0912 23:02:52.464012   62386 logs.go:276] 0 containers: []
	W0912 23:02:52.464024   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:02:52.464031   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:02:52.464119   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:02:52.497797   62386 cri.go:89] found id: ""
	I0912 23:02:52.497830   62386 logs.go:276] 0 containers: []
	W0912 23:02:52.497840   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:02:52.497848   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:02:52.497914   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:02:52.531946   62386 cri.go:89] found id: ""
	I0912 23:02:52.531974   62386 logs.go:276] 0 containers: []
	W0912 23:02:52.531982   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:02:52.531987   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:02:52.532036   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:02:52.563802   62386 cri.go:89] found id: ""
	I0912 23:02:52.563837   62386 logs.go:276] 0 containers: []
	W0912 23:02:52.563846   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:02:52.563859   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:02:52.563914   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:02:52.597408   62386 cri.go:89] found id: ""
	I0912 23:02:52.597437   62386 logs.go:276] 0 containers: []
	W0912 23:02:52.597447   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:02:52.597457   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:02:52.597529   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:02:52.634991   62386 cri.go:89] found id: ""
	I0912 23:02:52.635026   62386 logs.go:276] 0 containers: []
	W0912 23:02:52.635037   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:02:52.635049   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:02:52.635061   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:02:52.711072   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:02:52.711112   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:02:52.755335   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:02:52.755359   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:02:52.806660   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:02:52.806694   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:02:52.819718   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:02:52.819751   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:02:52.897247   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:02:52.474466   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:54.974351   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:55.520831   62943 api_server.go:269] stopped: https://192.168.50.253:8443/healthz: Get "https://192.168.50.253:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 23:02:55.520879   62943 api_server.go:253] Checking apiserver healthz at https://192.168.50.253:8443/healthz ...
	I0912 23:02:54.233244   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:56.234057   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:55.398028   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:55.411839   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:02:55.411920   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:02:55.446367   62386 cri.go:89] found id: ""
	I0912 23:02:55.446402   62386 logs.go:276] 0 containers: []
	W0912 23:02:55.446414   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:02:55.446421   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:02:55.446489   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:02:55.481672   62386 cri.go:89] found id: ""
	I0912 23:02:55.481696   62386 logs.go:276] 0 containers: []
	W0912 23:02:55.481704   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:02:55.481709   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:02:55.481766   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:02:55.517577   62386 cri.go:89] found id: ""
	I0912 23:02:55.517628   62386 logs.go:276] 0 containers: []
	W0912 23:02:55.517640   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:02:55.517651   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:02:55.517724   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:02:55.553526   62386 cri.go:89] found id: ""
	I0912 23:02:55.553554   62386 logs.go:276] 0 containers: []
	W0912 23:02:55.553565   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:02:55.553572   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:02:55.553659   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:02:55.585628   62386 cri.go:89] found id: ""
	I0912 23:02:55.585658   62386 logs.go:276] 0 containers: []
	W0912 23:02:55.585666   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:02:55.585673   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:02:55.585729   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:02:55.619504   62386 cri.go:89] found id: ""
	I0912 23:02:55.619529   62386 logs.go:276] 0 containers: []
	W0912 23:02:55.619537   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:02:55.619543   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:02:55.619612   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:02:55.652478   62386 cri.go:89] found id: ""
	I0912 23:02:55.652505   62386 logs.go:276] 0 containers: []
	W0912 23:02:55.652513   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:02:55.652519   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:02:55.652571   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:02:55.685336   62386 cri.go:89] found id: ""
	I0912 23:02:55.685367   62386 logs.go:276] 0 containers: []
	W0912 23:02:55.685378   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:02:55.685389   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:02:55.685405   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:02:55.766786   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:02:55.766820   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:02:55.805897   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:02:55.805921   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:02:55.858536   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:02:55.858578   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:02:55.872300   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:02:55.872330   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:02:55.940023   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:02:58.440335   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:58.454063   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:02:58.454146   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:02:58.495390   62386 cri.go:89] found id: ""
	I0912 23:02:58.495418   62386 logs.go:276] 0 containers: []
	W0912 23:02:58.495429   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:02:58.495436   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:02:58.495491   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:02:58.533323   62386 cri.go:89] found id: ""
	I0912 23:02:58.533361   62386 logs.go:276] 0 containers: []
	W0912 23:02:58.533369   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:02:58.533374   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:02:58.533426   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:02:58.570749   62386 cri.go:89] found id: ""
	I0912 23:02:58.570772   62386 logs.go:276] 0 containers: []
	W0912 23:02:58.570779   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:02:58.570785   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:02:58.570838   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:02:58.602812   62386 cri.go:89] found id: ""
	I0912 23:02:58.602841   62386 logs.go:276] 0 containers: []
	W0912 23:02:58.602852   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:02:58.602861   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:02:58.602920   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:02:58.641837   62386 cri.go:89] found id: ""
	I0912 23:02:58.641868   62386 logs.go:276] 0 containers: []
	W0912 23:02:58.641875   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:02:58.641881   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:02:58.641951   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:02:58.679411   62386 cri.go:89] found id: ""
	I0912 23:02:58.679437   62386 logs.go:276] 0 containers: []
	W0912 23:02:58.679444   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:02:58.679449   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:02:58.679495   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:02:58.715666   62386 cri.go:89] found id: ""
	I0912 23:02:58.715693   62386 logs.go:276] 0 containers: []
	W0912 23:02:58.715701   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:02:58.715707   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:02:58.715765   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:02:58.750345   62386 cri.go:89] found id: ""
	I0912 23:02:58.750367   62386 logs.go:276] 0 containers: []
	W0912 23:02:58.750375   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:02:58.750383   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:02:58.750395   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:02:58.803683   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:02:58.803722   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:02:58.819479   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:02:58.819512   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:02:58.939708   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:02:58.939733   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:02:58.939752   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:02:59.031209   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:02:59.031241   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:02:58.535050   62943 api_server.go:279] https://192.168.50.253:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0912 23:02:58.535080   62943 api_server.go:103] status: https://192.168.50.253:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0912 23:02:58.535094   62943 api_server.go:253] Checking apiserver healthz at https://192.168.50.253:8443/healthz ...
	I0912 23:02:58.552759   62943 api_server.go:279] https://192.168.50.253:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0912 23:02:58.552792   62943 api_server.go:103] status: https://192.168.50.253:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0912 23:02:59.018401   62943 api_server.go:253] Checking apiserver healthz at https://192.168.50.253:8443/healthz ...
	I0912 23:02:59.026830   62943 api_server.go:279] https://192.168.50.253:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0912 23:02:59.026861   62943 api_server.go:103] status: https://192.168.50.253:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0912 23:02:59.518413   62943 api_server.go:253] Checking apiserver healthz at https://192.168.50.253:8443/healthz ...
	I0912 23:02:59.523435   62943 api_server.go:279] https://192.168.50.253:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0912 23:02:59.523469   62943 api_server.go:103] status: https://192.168.50.253:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0912 23:03:00.018452   62943 api_server.go:253] Checking apiserver healthz at https://192.168.50.253:8443/healthz ...
	I0912 23:03:00.023786   62943 api_server.go:279] https://192.168.50.253:8443/healthz returned 200:
	ok
	I0912 23:03:00.033543   62943 api_server.go:141] control plane version: v1.31.1
	I0912 23:03:00.033575   62943 api_server.go:131] duration metric: took 41.016185943s to wait for apiserver health ...
	I0912 23:03:00.033585   62943 cni.go:84] Creating CNI manager for ""
	I0912 23:03:00.033595   62943 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0912 23:03:00.035383   62943 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0912 23:02:56.975435   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:59.473968   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:00.036655   62943 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0912 23:03:00.051876   62943 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0912 23:03:00.082432   62943 system_pods.go:43] waiting for kube-system pods to appear ...
	I0912 23:03:00.101427   62943 system_pods.go:59] 8 kube-system pods found
	I0912 23:03:00.101465   62943 system_pods.go:61] "coredns-7c65d6cfc9-twck7" [2fb00aff-8a30-4634-a804-1419eabfe727] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0912 23:03:00.101477   62943 system_pods.go:61] "etcd-no-preload-380092" [69b6be54-dd29-47c7-b990-a64335dd6d7b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0912 23:03:00.101488   62943 system_pods.go:61] "kube-apiserver-no-preload-380092" [10ff70db-3c74-42ad-841d-d2241de4b98e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0912 23:03:00.101498   62943 system_pods.go:61] "kube-controller-manager-no-preload-380092" [6e91c5b2-36fc-404e-9f09-c1bc9da46774] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0912 23:03:00.101512   62943 system_pods.go:61] "kube-proxy-z4rcx" [d17caa2e-d0fe-45e8-a96c-d1cc1b55e665] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0912 23:03:00.101518   62943 system_pods.go:61] "kube-scheduler-no-preload-380092" [5c634cac-6b28-4757-ba85-891c4c2fa34e] Running
	I0912 23:03:00.101526   62943 system_pods.go:61] "metrics-server-6867b74b74-4v7f5" [10c8c536-9ca6-4e75-96f2-7324f3d3d379] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0912 23:03:00.101537   62943 system_pods.go:61] "storage-provisioner" [f173a1f6-3772-4f08-8e40-2215cc9d2878] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0912 23:03:00.101554   62943 system_pods.go:74] duration metric: took 19.092541ms to wait for pod list to return data ...
	I0912 23:03:00.101566   62943 node_conditions.go:102] verifying NodePressure condition ...
	I0912 23:03:00.105149   62943 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0912 23:03:00.105183   62943 node_conditions.go:123] node cpu capacity is 2
	I0912 23:03:00.105197   62943 node_conditions.go:105] duration metric: took 3.62458ms to run NodePressure ...
	I0912 23:03:00.105218   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:03:00.583613   62943 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0912 23:03:00.588976   62943 kubeadm.go:739] kubelet initialised
	I0912 23:03:00.589000   62943 kubeadm.go:740] duration metric: took 5.359605ms waiting for restarted kubelet to initialise ...
	I0912 23:03:00.589010   62943 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 23:03:00.598717   62943 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-twck7" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:00.619126   62943 pod_ready.go:98] node "no-preload-380092" hosting pod "coredns-7c65d6cfc9-twck7" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-380092" has status "Ready":"False"
	I0912 23:03:00.619153   62943 pod_ready.go:82] duration metric: took 20.405609ms for pod "coredns-7c65d6cfc9-twck7" in "kube-system" namespace to be "Ready" ...
	E0912 23:03:00.619162   62943 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-380092" hosting pod "coredns-7c65d6cfc9-twck7" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-380092" has status "Ready":"False"
	I0912 23:03:00.619169   62943 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-380092" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:00.628727   62943 pod_ready.go:98] node "no-preload-380092" hosting pod "etcd-no-preload-380092" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-380092" has status "Ready":"False"
	I0912 23:03:00.628766   62943 pod_ready.go:82] duration metric: took 9.588722ms for pod "etcd-no-preload-380092" in "kube-system" namespace to be "Ready" ...
	E0912 23:03:00.628778   62943 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-380092" hosting pod "etcd-no-preload-380092" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-380092" has status "Ready":"False"
	I0912 23:03:00.628786   62943 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-380092" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:00.638502   62943 pod_ready.go:98] node "no-preload-380092" hosting pod "kube-apiserver-no-preload-380092" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-380092" has status "Ready":"False"
	I0912 23:03:00.638531   62943 pod_ready.go:82] duration metric: took 9.737333ms for pod "kube-apiserver-no-preload-380092" in "kube-system" namespace to be "Ready" ...
	E0912 23:03:00.638545   62943 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-380092" hosting pod "kube-apiserver-no-preload-380092" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-380092" has status "Ready":"False"
	I0912 23:03:00.638554   62943 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-380092" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:00.644886   62943 pod_ready.go:98] node "no-preload-380092" hosting pod "kube-controller-manager-no-preload-380092" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-380092" has status "Ready":"False"
	I0912 23:03:00.644917   62943 pod_ready.go:82] duration metric: took 6.353295ms for pod "kube-controller-manager-no-preload-380092" in "kube-system" namespace to be "Ready" ...
	E0912 23:03:00.644928   62943 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-380092" hosting pod "kube-controller-manager-no-preload-380092" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-380092" has status "Ready":"False"
	I0912 23:03:00.644936   62943 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-z4rcx" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:00.987565   62943 pod_ready.go:98] node "no-preload-380092" hosting pod "kube-proxy-z4rcx" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-380092" has status "Ready":"False"
	I0912 23:03:00.987592   62943 pod_ready.go:82] duration metric: took 342.646574ms for pod "kube-proxy-z4rcx" in "kube-system" namespace to be "Ready" ...
	E0912 23:03:00.987605   62943 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-380092" hosting pod "kube-proxy-z4rcx" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-380092" has status "Ready":"False"
	I0912 23:03:00.987614   62943 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-380092" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:01.386942   62943 pod_ready.go:98] node "no-preload-380092" hosting pod "kube-scheduler-no-preload-380092" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-380092" has status "Ready":"False"
	I0912 23:03:01.386970   62943 pod_ready.go:82] duration metric: took 399.349066ms for pod "kube-scheduler-no-preload-380092" in "kube-system" namespace to be "Ready" ...
	E0912 23:03:01.386983   62943 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-380092" hosting pod "kube-scheduler-no-preload-380092" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-380092" has status "Ready":"False"
	I0912 23:03:01.386991   62943 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:01.787866   62943 pod_ready.go:98] node "no-preload-380092" hosting pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-380092" has status "Ready":"False"
	I0912 23:03:01.787897   62943 pod_ready.go:82] duration metric: took 400.896489ms for pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace to be "Ready" ...
	E0912 23:03:01.787906   62943 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-380092" hosting pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-380092" has status "Ready":"False"
	I0912 23:03:01.787913   62943 pod_ready.go:39] duration metric: took 1.198893167s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 23:03:01.787929   62943 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0912 23:03:01.803486   62943 ops.go:34] apiserver oom_adj: -16
	I0912 23:03:01.803507   62943 kubeadm.go:597] duration metric: took 45.468348317s to restartPrimaryControlPlane
	I0912 23:03:01.803518   62943 kubeadm.go:394] duration metric: took 45.529458545s to StartCluster
	I0912 23:03:01.803533   62943 settings.go:142] acquiring lock: {Name:mk9c957feafb8d7ccd833ad0c106ef81ecfe5ba8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 23:03:01.803615   62943 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19616-5891/kubeconfig
	I0912 23:03:01.806430   62943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/kubeconfig: {Name:mkffb46c3e9d2b8baebc7237b48bf41bccf1a52c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 23:03:01.806730   62943 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.253 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0912 23:03:01.806804   62943 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0912 23:03:01.806874   62943 addons.go:69] Setting storage-provisioner=true in profile "no-preload-380092"
	I0912 23:03:01.806898   62943 addons.go:69] Setting default-storageclass=true in profile "no-preload-380092"
	I0912 23:03:01.806914   62943 addons.go:69] Setting metrics-server=true in profile "no-preload-380092"
	I0912 23:03:01.806932   62943 addons.go:234] Setting addon metrics-server=true in "no-preload-380092"
	I0912 23:03:01.806937   62943 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-380092"
	W0912 23:03:01.806944   62943 addons.go:243] addon metrics-server should already be in state true
	I0912 23:03:01.806948   62943 config.go:182] Loaded profile config "no-preload-380092": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 23:03:01.806978   62943 host.go:66] Checking if "no-preload-380092" exists ...
	I0912 23:03:01.806909   62943 addons.go:234] Setting addon storage-provisioner=true in "no-preload-380092"
	W0912 23:03:01.806995   62943 addons.go:243] addon storage-provisioner should already be in state true
	I0912 23:03:01.807018   62943 host.go:66] Checking if "no-preload-380092" exists ...
	I0912 23:03:01.807284   62943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:03:01.807301   62943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:03:01.807309   62943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:03:01.807349   62943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:03:01.807363   62943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:03:01.807373   62943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:03:01.809540   62943 out.go:177] * Verifying Kubernetes components...
	I0912 23:03:01.810843   62943 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 23:03:01.824985   62943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32987
	I0912 23:03:01.825219   62943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45739
	I0912 23:03:01.825700   62943 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:03:01.826207   62943 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:03:01.826562   62943 main.go:141] libmachine: Using API Version  1
	I0912 23:03:01.826586   62943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:03:01.826737   62943 main.go:141] libmachine: Using API Version  1
	I0912 23:03:01.826759   62943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:03:01.826970   62943 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:03:01.827047   62943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35143
	I0912 23:03:01.827219   62943 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:03:01.827623   62943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:03:01.827668   62943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:03:01.827724   62943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:03:01.827752   62943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:03:01.827946   62943 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:03:01.828629   62943 main.go:141] libmachine: Using API Version  1
	I0912 23:03:01.828652   62943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:03:01.829143   62943 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:03:01.829336   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetState
	I0912 23:03:01.833298   62943 addons.go:234] Setting addon default-storageclass=true in "no-preload-380092"
	W0912 23:03:01.833320   62943 addons.go:243] addon default-storageclass should already be in state true
	I0912 23:03:01.833348   62943 host.go:66] Checking if "no-preload-380092" exists ...
	I0912 23:03:01.833737   62943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:03:01.833768   62943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:03:01.847465   62943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40485
	I0912 23:03:01.848132   62943 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:03:01.848218   62943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46487
	I0912 23:03:01.848635   62943 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:03:01.849006   62943 main.go:141] libmachine: Using API Version  1
	I0912 23:03:01.849024   62943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:03:01.849185   62943 main.go:141] libmachine: Using API Version  1
	I0912 23:03:01.849197   62943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:03:01.849589   62943 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:03:01.849756   62943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41723
	I0912 23:03:01.849909   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetState
	I0912 23:03:01.850287   62943 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:03:01.850375   62943 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:03:01.850446   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetState
	I0912 23:03:01.851043   62943 main.go:141] libmachine: Using API Version  1
	I0912 23:03:01.851061   62943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:03:01.851397   62943 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:03:01.851935   62943 main.go:141] libmachine: (no-preload-380092) Calling .DriverName
	I0912 23:03:01.852036   62943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:03:01.852082   62943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:03:01.852907   62943 main.go:141] libmachine: (no-preload-380092) Calling .DriverName
	I0912 23:03:01.854324   62943 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0912 23:03:01.855272   62943 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 23:03:01.856071   62943 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0912 23:03:01.856092   62943 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0912 23:03:01.856115   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHHostname
	I0912 23:03:01.857163   62943 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0912 23:03:01.857184   62943 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0912 23:03:01.857206   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHHostname
	I0912 23:03:01.861326   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:03:01.861344   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:03:01.861874   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:03:01.861894   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:03:01.862197   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHPort
	I0912 23:03:01.862292   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:03:01.862588   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:03:01.862627   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHKeyPath
	I0912 23:03:01.862668   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHPort
	I0912 23:03:01.862751   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHUsername
	I0912 23:03:01.862900   62943 sshutil.go:53] new ssh client: &{IP:192.168.50.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/no-preload-380092/id_rsa Username:docker}
	I0912 23:03:01.862917   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHKeyPath
	I0912 23:03:01.863057   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHUsername
	I0912 23:03:01.863161   62943 sshutil.go:53] new ssh client: &{IP:192.168.50.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/no-preload-380092/id_rsa Username:docker}
	I0912 23:03:01.872673   62943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42483
	I0912 23:03:01.873156   62943 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:03:01.873848   62943 main.go:141] libmachine: Using API Version  1
	I0912 23:03:01.873924   62943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:03:01.874438   62943 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:03:01.874719   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetState
	I0912 23:03:01.876928   62943 main.go:141] libmachine: (no-preload-380092) Calling .DriverName
	I0912 23:03:01.877226   62943 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0912 23:03:01.877252   62943 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0912 23:03:01.877268   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHHostname
	I0912 23:03:01.880966   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:03:01.881372   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:03:01.881399   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:03:01.881915   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHPort
	I0912 23:03:01.885353   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHKeyPath
	I0912 23:03:01.885585   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHUsername
	I0912 23:03:01.885765   62943 sshutil.go:53] new ssh client: &{IP:192.168.50.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/no-preload-380092/id_rsa Username:docker}
	I0912 23:02:58.234446   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:00.235816   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:02.035632   62943 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0912 23:03:02.065690   62943 node_ready.go:35] waiting up to 6m0s for node "no-preload-380092" to be "Ready" ...
	I0912 23:03:02.132250   62943 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0912 23:03:02.148150   62943 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0912 23:03:02.270629   62943 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0912 23:03:02.270652   62943 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0912 23:03:02.346093   62943 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0912 23:03:02.346119   62943 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0912 23:03:02.371110   62943 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0912 23:03:02.371133   62943 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0912 23:03:02.415856   62943 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0912 23:03:03.287692   62943 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.13950787s)
	I0912 23:03:03.287695   62943 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.155412179s)
	I0912 23:03:03.287752   62943 main.go:141] libmachine: Making call to close driver server
	I0912 23:03:03.287756   62943 main.go:141] libmachine: Making call to close driver server
	I0912 23:03:03.287764   62943 main.go:141] libmachine: (no-preload-380092) Calling .Close
	I0912 23:03:03.287769   62943 main.go:141] libmachine: (no-preload-380092) Calling .Close
	I0912 23:03:03.288100   62943 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:03:03.288115   62943 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:03:03.288124   62943 main.go:141] libmachine: Making call to close driver server
	I0912 23:03:03.288130   62943 main.go:141] libmachine: (no-preload-380092) Calling .Close
	I0912 23:03:03.288252   62943 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:03:03.288270   62943 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:03:03.288293   62943 main.go:141] libmachine: (no-preload-380092) DBG | Closing plugin on server side
	I0912 23:03:03.288297   62943 main.go:141] libmachine: Making call to close driver server
	I0912 23:03:03.288454   62943 main.go:141] libmachine: (no-preload-380092) Calling .Close
	I0912 23:03:03.288321   62943 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:03:03.288507   62943 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:03:03.288346   62943 main.go:141] libmachine: (no-preload-380092) DBG | Closing plugin on server side
	I0912 23:03:03.288671   62943 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:03:03.288682   62943 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:03:03.294958   62943 main.go:141] libmachine: Making call to close driver server
	I0912 23:03:03.294982   62943 main.go:141] libmachine: (no-preload-380092) Calling .Close
	I0912 23:03:03.295233   62943 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:03:03.295252   62943 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:03:03.295254   62943 main.go:141] libmachine: (no-preload-380092) DBG | Closing plugin on server side
	I0912 23:03:03.492450   62943 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.076542284s)
	I0912 23:03:03.492503   62943 main.go:141] libmachine: Making call to close driver server
	I0912 23:03:03.492516   62943 main.go:141] libmachine: (no-preload-380092) Calling .Close
	I0912 23:03:03.492830   62943 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:03:03.492855   62943 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:03:03.492866   62943 main.go:141] libmachine: Making call to close driver server
	I0912 23:03:03.492885   62943 main.go:141] libmachine: (no-preload-380092) Calling .Close
	I0912 23:03:03.493108   62943 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:03:03.493121   62943 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:03:03.493132   62943 addons.go:475] Verifying addon metrics-server=true in "no-preload-380092"
	I0912 23:03:03.495865   62943 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0912 23:03:01.578409   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:01.591929   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:01.592004   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:01.626295   62386 cri.go:89] found id: ""
	I0912 23:03:01.626327   62386 logs.go:276] 0 containers: []
	W0912 23:03:01.626339   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:01.626346   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:01.626406   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:01.660489   62386 cri.go:89] found id: ""
	I0912 23:03:01.660520   62386 logs.go:276] 0 containers: []
	W0912 23:03:01.660543   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:01.660563   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:01.660618   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:01.694378   62386 cri.go:89] found id: ""
	I0912 23:03:01.694401   62386 logs.go:276] 0 containers: []
	W0912 23:03:01.694408   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:01.694414   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:01.694467   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:01.733170   62386 cri.go:89] found id: ""
	I0912 23:03:01.733202   62386 logs.go:276] 0 containers: []
	W0912 23:03:01.733211   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:01.733237   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:01.733307   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:01.766419   62386 cri.go:89] found id: ""
	I0912 23:03:01.766449   62386 logs.go:276] 0 containers: []
	W0912 23:03:01.766457   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:01.766467   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:01.766530   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:01.802964   62386 cri.go:89] found id: ""
	I0912 23:03:01.802988   62386 logs.go:276] 0 containers: []
	W0912 23:03:01.802995   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:01.803001   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:01.803047   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:01.846231   62386 cri.go:89] found id: ""
	I0912 23:03:01.846257   62386 logs.go:276] 0 containers: []
	W0912 23:03:01.846268   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:01.846276   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:01.846340   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:01.889353   62386 cri.go:89] found id: ""
	I0912 23:03:01.889379   62386 logs.go:276] 0 containers: []
	W0912 23:03:01.889387   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:01.889396   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:01.889407   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:01.904850   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:01.904876   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:01.986288   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:01.986311   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:01.986328   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:02.070616   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:02.070646   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:02.111931   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:02.111959   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:01.474395   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:03.974266   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:03.497285   62943 addons.go:510] duration metric: took 1.690482366s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0912 23:03:04.069715   62943 node_ready.go:53] node "no-preload-380092" has status "Ready":"False"
	I0912 23:03:06.070086   62943 node_ready.go:53] node "no-preload-380092" has status "Ready":"False"
	I0912 23:03:02.734363   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:04.735355   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:07.235634   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:04.676429   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:04.689177   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:04.689240   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:04.721393   62386 cri.go:89] found id: ""
	I0912 23:03:04.721420   62386 logs.go:276] 0 containers: []
	W0912 23:03:04.721431   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:04.721437   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:04.721494   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:04.754239   62386 cri.go:89] found id: ""
	I0912 23:03:04.754270   62386 logs.go:276] 0 containers: []
	W0912 23:03:04.754281   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:04.754288   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:04.754340   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:04.787546   62386 cri.go:89] found id: ""
	I0912 23:03:04.787576   62386 logs.go:276] 0 containers: []
	W0912 23:03:04.787590   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:04.787597   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:04.787657   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:04.821051   62386 cri.go:89] found id: ""
	I0912 23:03:04.821141   62386 logs.go:276] 0 containers: []
	W0912 23:03:04.821151   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:04.821157   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:04.821210   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:04.853893   62386 cri.go:89] found id: ""
	I0912 23:03:04.853918   62386 logs.go:276] 0 containers: []
	W0912 23:03:04.853928   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:04.853935   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:04.854013   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:04.887798   62386 cri.go:89] found id: ""
	I0912 23:03:04.887832   62386 logs.go:276] 0 containers: []
	W0912 23:03:04.887843   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:04.887850   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:04.887911   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:04.921562   62386 cri.go:89] found id: ""
	I0912 23:03:04.921587   62386 logs.go:276] 0 containers: []
	W0912 23:03:04.921595   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:04.921600   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:04.921667   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:04.956794   62386 cri.go:89] found id: ""
	I0912 23:03:04.956828   62386 logs.go:276] 0 containers: []
	W0912 23:03:04.956836   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:04.956845   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:04.956856   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:04.993926   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:04.993956   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:05.045381   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:05.045425   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:05.058626   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:05.058665   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:05.128158   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:05.128187   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:05.128205   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:07.707336   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:07.720573   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:07.720646   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:07.756694   62386 cri.go:89] found id: ""
	I0912 23:03:07.756716   62386 logs.go:276] 0 containers: []
	W0912 23:03:07.756724   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:07.756730   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:07.756777   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:07.789255   62386 cri.go:89] found id: ""
	I0912 23:03:07.789286   62386 logs.go:276] 0 containers: []
	W0912 23:03:07.789295   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:07.789318   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:07.789405   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:07.822472   62386 cri.go:89] found id: ""
	I0912 23:03:07.822510   62386 logs.go:276] 0 containers: []
	W0912 23:03:07.822525   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:07.822534   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:07.822594   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:07.859070   62386 cri.go:89] found id: ""
	I0912 23:03:07.859102   62386 logs.go:276] 0 containers: []
	W0912 23:03:07.859114   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:07.859122   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:07.859190   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:07.895128   62386 cri.go:89] found id: ""
	I0912 23:03:07.895155   62386 logs.go:276] 0 containers: []
	W0912 23:03:07.895163   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:07.895169   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:07.895225   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:07.927397   62386 cri.go:89] found id: ""
	I0912 23:03:07.927425   62386 logs.go:276] 0 containers: []
	W0912 23:03:07.927435   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:07.927442   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:07.927506   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:07.965500   62386 cri.go:89] found id: ""
	I0912 23:03:07.965534   62386 logs.go:276] 0 containers: []
	W0912 23:03:07.965546   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:07.965555   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:07.965635   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:08.002921   62386 cri.go:89] found id: ""
	I0912 23:03:08.002952   62386 logs.go:276] 0 containers: []
	W0912 23:03:08.002964   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:08.002974   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:08.002989   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:08.054610   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:08.054646   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:08.071096   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:08.071127   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:08.145573   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:08.145603   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:08.145641   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:08.232606   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:08.232639   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:05.974395   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:08.473180   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:10.474725   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:08.076176   62943 node_ready.go:53] node "no-preload-380092" has status "Ready":"False"
	I0912 23:03:09.570274   62943 node_ready.go:49] node "no-preload-380092" has status "Ready":"True"
	I0912 23:03:09.570298   62943 node_ready.go:38] duration metric: took 7.504574956s for node "no-preload-380092" to be "Ready" ...
	I0912 23:03:09.570308   62943 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 23:03:09.576111   62943 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-twck7" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:09.581239   62943 pod_ready.go:93] pod "coredns-7c65d6cfc9-twck7" in "kube-system" namespace has status "Ready":"True"
	I0912 23:03:09.581261   62943 pod_ready.go:82] duration metric: took 5.122813ms for pod "coredns-7c65d6cfc9-twck7" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:09.581277   62943 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-380092" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:09.585918   62943 pod_ready.go:93] pod "etcd-no-preload-380092" in "kube-system" namespace has status "Ready":"True"
	I0912 23:03:09.585942   62943 pod_ready.go:82] duration metric: took 4.657444ms for pod "etcd-no-preload-380092" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:09.585951   62943 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-380092" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:09.591114   62943 pod_ready.go:93] pod "kube-apiserver-no-preload-380092" in "kube-system" namespace has status "Ready":"True"
	I0912 23:03:09.591136   62943 pod_ready.go:82] duration metric: took 5.179585ms for pod "kube-apiserver-no-preload-380092" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:09.591145   62943 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-380092" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:11.598000   62943 pod_ready.go:103] pod "kube-controller-manager-no-preload-380092" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:09.734628   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:12.233572   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:10.770737   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:10.783728   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:10.783803   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:10.818792   62386 cri.go:89] found id: ""
	I0912 23:03:10.818827   62386 logs.go:276] 0 containers: []
	W0912 23:03:10.818839   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:10.818847   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:10.818913   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:10.851711   62386 cri.go:89] found id: ""
	I0912 23:03:10.851738   62386 logs.go:276] 0 containers: []
	W0912 23:03:10.851750   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:10.851757   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:10.851817   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:10.886935   62386 cri.go:89] found id: ""
	I0912 23:03:10.886963   62386 logs.go:276] 0 containers: []
	W0912 23:03:10.886973   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:10.886979   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:10.887033   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:10.923175   62386 cri.go:89] found id: ""
	I0912 23:03:10.923201   62386 logs.go:276] 0 containers: []
	W0912 23:03:10.923208   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:10.923214   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:10.923261   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:10.959865   62386 cri.go:89] found id: ""
	I0912 23:03:10.959890   62386 logs.go:276] 0 containers: []
	W0912 23:03:10.959897   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:10.959902   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:10.959952   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:10.995049   62386 cri.go:89] found id: ""
	I0912 23:03:10.995079   62386 logs.go:276] 0 containers: []
	W0912 23:03:10.995090   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:10.995097   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:10.995156   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:11.030132   62386 cri.go:89] found id: ""
	I0912 23:03:11.030157   62386 logs.go:276] 0 containers: []
	W0912 23:03:11.030166   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:11.030173   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:11.030242   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:11.062899   62386 cri.go:89] found id: ""
	I0912 23:03:11.062928   62386 logs.go:276] 0 containers: []
	W0912 23:03:11.062936   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:11.062945   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:11.062956   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:11.116511   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:11.116546   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:11.131472   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:11.131504   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:11.202744   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:11.202765   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:11.202781   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:11.293973   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:11.294011   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:13.833125   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:13.846624   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:13.846737   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:13.881744   62386 cri.go:89] found id: ""
	I0912 23:03:13.881784   62386 logs.go:276] 0 containers: []
	W0912 23:03:13.881794   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:13.881802   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:13.881861   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:13.921678   62386 cri.go:89] found id: ""
	I0912 23:03:13.921703   62386 logs.go:276] 0 containers: []
	W0912 23:03:13.921713   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:13.921719   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:13.921778   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:13.960039   62386 cri.go:89] found id: ""
	I0912 23:03:13.960067   62386 logs.go:276] 0 containers: []
	W0912 23:03:13.960077   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:13.960084   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:13.960150   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:14.001255   62386 cri.go:89] found id: ""
	I0912 23:03:14.001281   62386 logs.go:276] 0 containers: []
	W0912 23:03:14.001293   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:14.001318   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:14.001374   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:14.037212   62386 cri.go:89] found id: ""
	I0912 23:03:14.037241   62386 logs.go:276] 0 containers: []
	W0912 23:03:14.037252   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:14.037259   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:14.037319   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:14.071538   62386 cri.go:89] found id: ""
	I0912 23:03:14.071574   62386 logs.go:276] 0 containers: []
	W0912 23:03:14.071582   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:14.071588   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:14.071639   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:14.105561   62386 cri.go:89] found id: ""
	I0912 23:03:14.105590   62386 logs.go:276] 0 containers: []
	W0912 23:03:14.105598   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:14.105604   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:14.105682   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:14.139407   62386 cri.go:89] found id: ""
	I0912 23:03:14.139432   62386 logs.go:276] 0 containers: []
	W0912 23:03:14.139440   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:14.139449   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:14.139463   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:14.195367   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:14.195402   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:14.208632   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:14.208656   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:14.283274   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:14.283292   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:14.283306   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:12.973716   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:15.473265   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:12.097813   62943 pod_ready.go:93] pod "kube-controller-manager-no-preload-380092" in "kube-system" namespace has status "Ready":"True"
	I0912 23:03:12.097844   62943 pod_ready.go:82] duration metric: took 2.506691651s for pod "kube-controller-manager-no-preload-380092" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:12.097858   62943 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-z4rcx" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:12.102303   62943 pod_ready.go:93] pod "kube-proxy-z4rcx" in "kube-system" namespace has status "Ready":"True"
	I0912 23:03:12.102332   62943 pod_ready.go:82] duration metric: took 4.465993ms for pod "kube-proxy-z4rcx" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:12.102344   62943 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-380092" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:12.370318   62943 pod_ready.go:93] pod "kube-scheduler-no-preload-380092" in "kube-system" namespace has status "Ready":"True"
	I0912 23:03:12.370342   62943 pod_ready.go:82] duration metric: took 267.990034ms for pod "kube-scheduler-no-preload-380092" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:12.370351   62943 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:14.377234   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:16.378403   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:14.234341   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:16.733799   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:14.361800   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:14.361839   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:16.900725   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:16.913987   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:16.914047   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:16.950481   62386 cri.go:89] found id: ""
	I0912 23:03:16.950505   62386 logs.go:276] 0 containers: []
	W0912 23:03:16.950513   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:16.950518   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:16.950574   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:16.985928   62386 cri.go:89] found id: ""
	I0912 23:03:16.985955   62386 logs.go:276] 0 containers: []
	W0912 23:03:16.985964   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:16.985969   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:16.986019   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:17.022383   62386 cri.go:89] found id: ""
	I0912 23:03:17.022408   62386 logs.go:276] 0 containers: []
	W0912 23:03:17.022419   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:17.022425   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:17.022483   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:17.060621   62386 cri.go:89] found id: ""
	I0912 23:03:17.060646   62386 logs.go:276] 0 containers: []
	W0912 23:03:17.060655   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:17.060661   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:17.060714   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:17.093465   62386 cri.go:89] found id: ""
	I0912 23:03:17.093496   62386 logs.go:276] 0 containers: []
	W0912 23:03:17.093507   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:17.093513   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:17.093562   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:17.127750   62386 cri.go:89] found id: ""
	I0912 23:03:17.127780   62386 logs.go:276] 0 containers: []
	W0912 23:03:17.127790   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:17.127796   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:17.127850   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:17.167000   62386 cri.go:89] found id: ""
	I0912 23:03:17.167033   62386 logs.go:276] 0 containers: []
	W0912 23:03:17.167042   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:17.167051   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:17.167114   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:17.201116   62386 cri.go:89] found id: ""
	I0912 23:03:17.201140   62386 logs.go:276] 0 containers: []
	W0912 23:03:17.201149   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:17.201160   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:17.201175   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:17.279890   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:17.279917   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:17.279930   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:17.362638   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:17.362682   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:17.402507   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:17.402538   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:17.456039   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:17.456072   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:17.473792   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:19.973369   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:18.877668   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:20.879319   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:19.233574   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:21.233847   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:19.970539   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:19.984338   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:19.984442   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:20.019006   62386 cri.go:89] found id: ""
	I0912 23:03:20.019036   62386 logs.go:276] 0 containers: []
	W0912 23:03:20.019047   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:20.019055   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:20.019115   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:20.051600   62386 cri.go:89] found id: ""
	I0912 23:03:20.051626   62386 logs.go:276] 0 containers: []
	W0912 23:03:20.051634   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:20.051640   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:20.051691   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:20.085770   62386 cri.go:89] found id: ""
	I0912 23:03:20.085792   62386 logs.go:276] 0 containers: []
	W0912 23:03:20.085799   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:20.085804   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:20.085852   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:20.118453   62386 cri.go:89] found id: ""
	I0912 23:03:20.118482   62386 logs.go:276] 0 containers: []
	W0912 23:03:20.118493   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:20.118501   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:20.118570   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:20.149794   62386 cri.go:89] found id: ""
	I0912 23:03:20.149824   62386 logs.go:276] 0 containers: []
	W0912 23:03:20.149835   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:20.149842   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:20.149889   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:20.187189   62386 cri.go:89] found id: ""
	I0912 23:03:20.187222   62386 logs.go:276] 0 containers: []
	W0912 23:03:20.187233   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:20.187239   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:20.187308   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:20.225488   62386 cri.go:89] found id: ""
	I0912 23:03:20.225517   62386 logs.go:276] 0 containers: []
	W0912 23:03:20.225525   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:20.225531   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:20.225593   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:20.263430   62386 cri.go:89] found id: ""
	I0912 23:03:20.263599   62386 logs.go:276] 0 containers: []
	W0912 23:03:20.263618   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:20.263633   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:20.263651   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:20.317633   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:20.317669   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:20.331121   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:20.331146   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:20.409078   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:20.409102   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:20.409114   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:20.485192   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:20.485226   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:23.024366   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:23.036837   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:23.036919   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:23.072034   62386 cri.go:89] found id: ""
	I0912 23:03:23.072068   62386 logs.go:276] 0 containers: []
	W0912 23:03:23.072080   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:23.072087   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:23.072151   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:23.105917   62386 cri.go:89] found id: ""
	I0912 23:03:23.105942   62386 logs.go:276] 0 containers: []
	W0912 23:03:23.105950   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:23.105956   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:23.106001   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:23.138601   62386 cri.go:89] found id: ""
	I0912 23:03:23.138631   62386 logs.go:276] 0 containers: []
	W0912 23:03:23.138643   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:23.138650   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:23.138700   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:23.173543   62386 cri.go:89] found id: ""
	I0912 23:03:23.173584   62386 logs.go:276] 0 containers: []
	W0912 23:03:23.173596   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:23.173606   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:23.173686   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:23.206143   62386 cri.go:89] found id: ""
	I0912 23:03:23.206171   62386 logs.go:276] 0 containers: []
	W0912 23:03:23.206182   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:23.206189   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:23.206258   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:23.241893   62386 cri.go:89] found id: ""
	I0912 23:03:23.241914   62386 logs.go:276] 0 containers: []
	W0912 23:03:23.241921   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:23.241927   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:23.241985   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:23.276885   62386 cri.go:89] found id: ""
	I0912 23:03:23.276937   62386 logs.go:276] 0 containers: []
	W0912 23:03:23.276946   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:23.276953   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:23.277004   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:23.311719   62386 cri.go:89] found id: ""
	I0912 23:03:23.311744   62386 logs.go:276] 0 containers: []
	W0912 23:03:23.311752   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:23.311759   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:23.311772   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:23.351581   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:23.351614   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:23.406831   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:23.406868   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:23.420716   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:23.420748   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:23.491298   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:23.491332   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:23.491347   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:22.474320   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:24.974016   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:23.377977   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:25.876937   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:23.235471   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:25.733684   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:26.075754   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:26.088671   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:26.088746   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:26.123263   62386 cri.go:89] found id: ""
	I0912 23:03:26.123289   62386 logs.go:276] 0 containers: []
	W0912 23:03:26.123298   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:26.123320   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:26.123380   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:26.156957   62386 cri.go:89] found id: ""
	I0912 23:03:26.156986   62386 logs.go:276] 0 containers: []
	W0912 23:03:26.156997   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:26.157004   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:26.157063   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:26.191697   62386 cri.go:89] found id: ""
	I0912 23:03:26.191749   62386 logs.go:276] 0 containers: []
	W0912 23:03:26.191774   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:26.191782   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:26.191841   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:26.223915   62386 cri.go:89] found id: ""
	I0912 23:03:26.223938   62386 logs.go:276] 0 containers: []
	W0912 23:03:26.223945   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:26.223951   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:26.224011   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:26.256467   62386 cri.go:89] found id: ""
	I0912 23:03:26.256494   62386 logs.go:276] 0 containers: []
	W0912 23:03:26.256505   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:26.256511   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:26.256587   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:26.288778   62386 cri.go:89] found id: ""
	I0912 23:03:26.288803   62386 logs.go:276] 0 containers: []
	W0912 23:03:26.288811   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:26.288816   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:26.288889   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:26.325717   62386 cri.go:89] found id: ""
	I0912 23:03:26.325745   62386 logs.go:276] 0 containers: []
	W0912 23:03:26.325755   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:26.325762   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:26.325829   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:26.359729   62386 cri.go:89] found id: ""
	I0912 23:03:26.359758   62386 logs.go:276] 0 containers: []
	W0912 23:03:26.359767   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:26.359780   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:26.359799   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:26.416414   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:26.416455   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:26.430440   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:26.430478   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:26.506980   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:26.507012   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:26.507043   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:26.583797   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:26.583846   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:29.122222   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:29.135287   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:29.135367   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:29.169020   62386 cri.go:89] found id: ""
	I0912 23:03:29.169043   62386 logs.go:276] 0 containers: []
	W0912 23:03:29.169051   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:29.169061   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:29.169114   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:29.201789   62386 cri.go:89] found id: ""
	I0912 23:03:29.201816   62386 logs.go:276] 0 containers: []
	W0912 23:03:29.201825   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:29.201831   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:29.201886   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:29.237011   62386 cri.go:89] found id: ""
	I0912 23:03:29.237031   62386 logs.go:276] 0 containers: []
	W0912 23:03:29.237038   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:29.237044   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:29.237100   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:29.275292   62386 cri.go:89] found id: ""
	I0912 23:03:29.275315   62386 logs.go:276] 0 containers: []
	W0912 23:03:29.275322   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:29.275328   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:29.275391   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:29.311927   62386 cri.go:89] found id: ""
	I0912 23:03:29.311954   62386 logs.go:276] 0 containers: []
	W0912 23:03:29.311961   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:29.311967   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:29.312020   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:26.974332   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:29.473816   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:27.877800   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:30.378675   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:27.735811   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:30.233647   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:32.233706   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:29.351411   62386 cri.go:89] found id: ""
	I0912 23:03:29.351441   62386 logs.go:276] 0 containers: []
	W0912 23:03:29.351452   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:29.351460   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:29.351520   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:29.386655   62386 cri.go:89] found id: ""
	I0912 23:03:29.386683   62386 logs.go:276] 0 containers: []
	W0912 23:03:29.386693   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:29.386700   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:29.386753   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:29.419722   62386 cri.go:89] found id: ""
	I0912 23:03:29.419752   62386 logs.go:276] 0 containers: []
	W0912 23:03:29.419762   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:29.419775   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:29.419789   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:29.474358   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:29.474396   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:29.488410   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:29.488437   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:29.554675   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:29.554701   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:29.554715   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:29.630647   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:29.630681   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:32.167614   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:32.180592   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:32.180669   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:32.213596   62386 cri.go:89] found id: ""
	I0912 23:03:32.213643   62386 logs.go:276] 0 containers: []
	W0912 23:03:32.213655   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:32.213663   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:32.213723   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:32.246790   62386 cri.go:89] found id: ""
	I0912 23:03:32.246824   62386 logs.go:276] 0 containers: []
	W0912 23:03:32.246836   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:32.246846   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:32.246910   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:32.289423   62386 cri.go:89] found id: ""
	I0912 23:03:32.289446   62386 logs.go:276] 0 containers: []
	W0912 23:03:32.289454   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:32.289459   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:32.289515   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:32.321515   62386 cri.go:89] found id: ""
	I0912 23:03:32.321542   62386 logs.go:276] 0 containers: []
	W0912 23:03:32.321555   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:32.321561   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:32.321637   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:32.354633   62386 cri.go:89] found id: ""
	I0912 23:03:32.354660   62386 logs.go:276] 0 containers: []
	W0912 23:03:32.354670   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:32.354675   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:32.354734   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:32.389692   62386 cri.go:89] found id: ""
	I0912 23:03:32.389717   62386 logs.go:276] 0 containers: []
	W0912 23:03:32.389725   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:32.389730   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:32.389782   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:32.423086   62386 cri.go:89] found id: ""
	I0912 23:03:32.423109   62386 logs.go:276] 0 containers: []
	W0912 23:03:32.423115   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:32.423121   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:32.423167   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:32.456145   62386 cri.go:89] found id: ""
	I0912 23:03:32.456173   62386 logs.go:276] 0 containers: []
	W0912 23:03:32.456184   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:32.456194   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:32.456213   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:32.468329   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:32.468354   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:32.535454   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:32.535480   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:32.535495   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:32.615219   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:32.615256   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:32.655380   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:32.655407   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
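The cycle above is one iteration of minikube's control-plane wait loop for the profile using the v1.20.0 binaries: no kube-apiserver process is found, each control-plane component is queried by name through crictl and comes back empty, and the kubelet, dmesg, describe-nodes, CRI-O and container-status logs are gathered before the next retry. As a rough manual sketch, assuming shell access to the node (for example via minikube ssh), the same checks the log shows can be re-run directly:

    sudo pgrep -xnf 'kube-apiserver.*minikube.*'       # is an apiserver process running at all?
    sudo crictl ps -a --quiet --name=kube-apiserver    # any apiserver container, in any state?
    sudo crictl ps -a --quiet --name=etcd              # same check for etcd
    sudo journalctl -u kubelet -n 400                  # kubelet logs: why control-plane pods are not coming up
    sudo journalctl -u crio -n 400                     # CRI-O runtime logs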
	I0912 23:03:31.473904   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:33.474104   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:32.876734   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:34.876831   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:36.877698   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:34.732792   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:36.733997   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:35.209155   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:35.223993   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:35.224074   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:35.260226   62386 cri.go:89] found id: ""
	I0912 23:03:35.260257   62386 logs.go:276] 0 containers: []
	W0912 23:03:35.260268   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:35.260275   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:35.260346   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:35.295762   62386 cri.go:89] found id: ""
	I0912 23:03:35.295790   62386 logs.go:276] 0 containers: []
	W0912 23:03:35.295801   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:35.295808   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:35.295873   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:35.329749   62386 cri.go:89] found id: ""
	I0912 23:03:35.329778   62386 logs.go:276] 0 containers: []
	W0912 23:03:35.329789   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:35.329796   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:35.329855   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:35.363051   62386 cri.go:89] found id: ""
	I0912 23:03:35.363082   62386 logs.go:276] 0 containers: []
	W0912 23:03:35.363091   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:35.363098   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:35.363156   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:35.399777   62386 cri.go:89] found id: ""
	I0912 23:03:35.399805   62386 logs.go:276] 0 containers: []
	W0912 23:03:35.399816   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:35.399823   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:35.399882   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:35.436380   62386 cri.go:89] found id: ""
	I0912 23:03:35.436409   62386 logs.go:276] 0 containers: []
	W0912 23:03:35.436419   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:35.436427   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:35.436489   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:35.474014   62386 cri.go:89] found id: ""
	I0912 23:03:35.474040   62386 logs.go:276] 0 containers: []
	W0912 23:03:35.474050   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:35.474057   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:35.474115   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:35.514579   62386 cri.go:89] found id: ""
	I0912 23:03:35.514606   62386 logs.go:276] 0 containers: []
	W0912 23:03:35.514615   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:35.514625   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:35.514636   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:35.566626   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:35.566665   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:35.581394   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:35.581421   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:35.653434   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:35.653465   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:35.653477   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:35.732486   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:35.732525   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:38.268409   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:38.281766   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:38.281833   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:38.315951   62386 cri.go:89] found id: ""
	I0912 23:03:38.315977   62386 logs.go:276] 0 containers: []
	W0912 23:03:38.315987   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:38.315994   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:38.316053   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:38.355249   62386 cri.go:89] found id: ""
	I0912 23:03:38.355279   62386 logs.go:276] 0 containers: []
	W0912 23:03:38.355289   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:38.355296   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:38.355365   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:38.392754   62386 cri.go:89] found id: ""
	I0912 23:03:38.392777   62386 logs.go:276] 0 containers: []
	W0912 23:03:38.392784   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:38.392790   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:38.392836   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:38.427406   62386 cri.go:89] found id: ""
	I0912 23:03:38.427434   62386 logs.go:276] 0 containers: []
	W0912 23:03:38.427442   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:38.427447   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:38.427497   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:38.473523   62386 cri.go:89] found id: ""
	I0912 23:03:38.473551   62386 logs.go:276] 0 containers: []
	W0912 23:03:38.473567   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:38.473575   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:38.473660   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:38.507184   62386 cri.go:89] found id: ""
	I0912 23:03:38.507217   62386 logs.go:276] 0 containers: []
	W0912 23:03:38.507228   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:38.507235   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:38.507297   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:38.541325   62386 cri.go:89] found id: ""
	I0912 23:03:38.541357   62386 logs.go:276] 0 containers: []
	W0912 23:03:38.541367   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:38.541374   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:38.541435   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:38.576839   62386 cri.go:89] found id: ""
	I0912 23:03:38.576866   62386 logs.go:276] 0 containers: []
	W0912 23:03:38.576877   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:38.576889   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:38.576906   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:38.613107   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:38.613138   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:38.667256   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:38.667300   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:38.681179   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:38.681210   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:38.750560   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:38.750584   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:38.750600   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:35.974072   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:37.974920   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:40.473150   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:39.376361   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:41.378062   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:38.734402   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:41.233881   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:41.327862   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:41.340904   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:41.340967   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:41.379282   62386 cri.go:89] found id: ""
	I0912 23:03:41.379301   62386 logs.go:276] 0 containers: []
	W0912 23:03:41.379309   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:41.379316   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:41.379366   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:41.412915   62386 cri.go:89] found id: ""
	I0912 23:03:41.412940   62386 logs.go:276] 0 containers: []
	W0912 23:03:41.412947   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:41.412954   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:41.413003   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:41.446824   62386 cri.go:89] found id: ""
	I0912 23:03:41.446851   62386 logs.go:276] 0 containers: []
	W0912 23:03:41.446861   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:41.446868   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:41.446929   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:41.483157   62386 cri.go:89] found id: ""
	I0912 23:03:41.483186   62386 logs.go:276] 0 containers: []
	W0912 23:03:41.483194   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:41.483200   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:41.483258   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:41.517751   62386 cri.go:89] found id: ""
	I0912 23:03:41.517783   62386 logs.go:276] 0 containers: []
	W0912 23:03:41.517794   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:41.517801   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:41.517865   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:41.551665   62386 cri.go:89] found id: ""
	I0912 23:03:41.551692   62386 logs.go:276] 0 containers: []
	W0912 23:03:41.551700   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:41.551706   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:41.551756   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:41.586401   62386 cri.go:89] found id: ""
	I0912 23:03:41.586437   62386 logs.go:276] 0 containers: []
	W0912 23:03:41.586447   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:41.586455   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:41.586518   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:41.621764   62386 cri.go:89] found id: ""
	I0912 23:03:41.621788   62386 logs.go:276] 0 containers: []
	W0912 23:03:41.621796   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:41.621806   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:41.621821   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:41.703663   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:41.703708   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:41.741813   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:41.741838   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:41.794237   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:41.794276   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:41.807194   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:41.807219   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:41.874328   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
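Each "failed describe nodes" block is the same symptom as the empty crictl listings: the bundled v1.20.0 kubectl is pointed at localhost:8443, but with no kube-apiserver container running nothing is listening there, so every request is refused. A minimal sketch of how to confirm that from inside the node (the ss check is an assumption of mine, not something the test runs):

    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
    sudo ss -ltn | grep -w 8443 || echo "nothing listening on 8443"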
	I0912 23:03:42.973710   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:44.973792   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:43.877009   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:46.376468   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:43.234202   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:45.733192   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:44.374745   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:44.389334   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:44.389414   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:44.427163   62386 cri.go:89] found id: ""
	I0912 23:03:44.427193   62386 logs.go:276] 0 containers: []
	W0912 23:03:44.427204   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:44.427214   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:44.427261   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:44.461483   62386 cri.go:89] found id: ""
	I0912 23:03:44.461516   62386 logs.go:276] 0 containers: []
	W0912 23:03:44.461526   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:44.461539   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:44.461603   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:44.499529   62386 cri.go:89] found id: ""
	I0912 23:03:44.499557   62386 logs.go:276] 0 containers: []
	W0912 23:03:44.499569   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:44.499576   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:44.499640   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:44.536827   62386 cri.go:89] found id: ""
	I0912 23:03:44.536859   62386 logs.go:276] 0 containers: []
	W0912 23:03:44.536871   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:44.536878   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:44.536927   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:44.574764   62386 cri.go:89] found id: ""
	I0912 23:03:44.574794   62386 logs.go:276] 0 containers: []
	W0912 23:03:44.574802   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:44.574808   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:44.574866   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:44.612491   62386 cri.go:89] found id: ""
	I0912 23:03:44.612524   62386 logs.go:276] 0 containers: []
	W0912 23:03:44.612537   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:44.612545   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:44.612618   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:44.651419   62386 cri.go:89] found id: ""
	I0912 23:03:44.651449   62386 logs.go:276] 0 containers: []
	W0912 23:03:44.651459   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:44.651466   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:44.651516   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:44.686635   62386 cri.go:89] found id: ""
	I0912 23:03:44.686665   62386 logs.go:276] 0 containers: []
	W0912 23:03:44.686674   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:44.686681   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:44.686693   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:44.738906   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:44.738938   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:44.752485   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:44.752512   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:44.831175   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:44.831205   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:44.831222   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:44.917405   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:44.917442   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:47.466262   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:47.479701   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:47.479758   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:47.514737   62386 cri.go:89] found id: ""
	I0912 23:03:47.514763   62386 logs.go:276] 0 containers: []
	W0912 23:03:47.514770   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:47.514776   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:47.514828   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:47.551163   62386 cri.go:89] found id: ""
	I0912 23:03:47.551195   62386 logs.go:276] 0 containers: []
	W0912 23:03:47.551207   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:47.551215   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:47.551276   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:47.585189   62386 cri.go:89] found id: ""
	I0912 23:03:47.585213   62386 logs.go:276] 0 containers: []
	W0912 23:03:47.585221   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:47.585226   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:47.585284   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:47.619831   62386 cri.go:89] found id: ""
	I0912 23:03:47.619855   62386 logs.go:276] 0 containers: []
	W0912 23:03:47.619863   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:47.619869   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:47.619914   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:47.652364   62386 cri.go:89] found id: ""
	I0912 23:03:47.652398   62386 logs.go:276] 0 containers: []
	W0912 23:03:47.652409   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:47.652417   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:47.652478   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:47.686796   62386 cri.go:89] found id: ""
	I0912 23:03:47.686828   62386 logs.go:276] 0 containers: []
	W0912 23:03:47.686837   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:47.686844   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:47.686902   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:47.718735   62386 cri.go:89] found id: ""
	I0912 23:03:47.718758   62386 logs.go:276] 0 containers: []
	W0912 23:03:47.718768   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:47.718776   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:47.718838   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:47.751880   62386 cri.go:89] found id: ""
	I0912 23:03:47.751917   62386 logs.go:276] 0 containers: []
	W0912 23:03:47.751929   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:47.751940   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:47.751972   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:47.821972   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:47.821995   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:47.822011   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:47.914569   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:47.914606   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:47.952931   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:47.952959   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:48.006294   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:48.006336   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:47.472805   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:49.474941   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:48.377557   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:50.877244   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:47.734734   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:50.233681   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:50.521664   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:50.535244   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:50.535319   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:50.572459   62386 cri.go:89] found id: ""
	I0912 23:03:50.572489   62386 logs.go:276] 0 containers: []
	W0912 23:03:50.572497   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:50.572504   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:50.572560   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:50.613752   62386 cri.go:89] found id: ""
	I0912 23:03:50.613784   62386 logs.go:276] 0 containers: []
	W0912 23:03:50.613793   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:50.613800   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:50.613859   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:50.669798   62386 cri.go:89] found id: ""
	I0912 23:03:50.669829   62386 logs.go:276] 0 containers: []
	W0912 23:03:50.669840   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:50.669845   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:50.669970   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:50.703629   62386 cri.go:89] found id: ""
	I0912 23:03:50.703669   62386 logs.go:276] 0 containers: []
	W0912 23:03:50.703682   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:50.703691   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:50.703752   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:50.743683   62386 cri.go:89] found id: ""
	I0912 23:03:50.743710   62386 logs.go:276] 0 containers: []
	W0912 23:03:50.743720   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:50.743728   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:50.743784   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:50.776387   62386 cri.go:89] found id: ""
	I0912 23:03:50.776416   62386 logs.go:276] 0 containers: []
	W0912 23:03:50.776428   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:50.776437   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:50.776494   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:50.810778   62386 cri.go:89] found id: ""
	I0912 23:03:50.810805   62386 logs.go:276] 0 containers: []
	W0912 23:03:50.810817   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:50.810825   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:50.810892   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:50.842488   62386 cri.go:89] found id: ""
	I0912 23:03:50.842510   62386 logs.go:276] 0 containers: []
	W0912 23:03:50.842518   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:50.842526   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:50.842542   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:50.895086   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:50.895124   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:50.908540   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:50.908586   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:50.976108   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:50.976138   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:50.976153   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:51.052291   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:51.052327   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:53.594005   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:53.606622   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:53.606706   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:53.641109   62386 cri.go:89] found id: ""
	I0912 23:03:53.641140   62386 logs.go:276] 0 containers: []
	W0912 23:03:53.641151   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:53.641159   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:53.641214   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:53.673336   62386 cri.go:89] found id: ""
	I0912 23:03:53.673358   62386 logs.go:276] 0 containers: []
	W0912 23:03:53.673366   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:53.673371   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:53.673417   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:53.707931   62386 cri.go:89] found id: ""
	I0912 23:03:53.707965   62386 logs.go:276] 0 containers: []
	W0912 23:03:53.707975   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:53.707982   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:53.708032   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:53.741801   62386 cri.go:89] found id: ""
	I0912 23:03:53.741832   62386 logs.go:276] 0 containers: []
	W0912 23:03:53.741840   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:53.741847   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:53.741898   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:53.775491   62386 cri.go:89] found id: ""
	I0912 23:03:53.775517   62386 logs.go:276] 0 containers: []
	W0912 23:03:53.775526   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:53.775533   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:53.775596   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:53.811802   62386 cri.go:89] found id: ""
	I0912 23:03:53.811832   62386 logs.go:276] 0 containers: []
	W0912 23:03:53.811843   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:53.811851   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:53.811916   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:53.844901   62386 cri.go:89] found id: ""
	I0912 23:03:53.844926   62386 logs.go:276] 0 containers: []
	W0912 23:03:53.844934   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:53.844939   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:53.844989   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:53.878342   62386 cri.go:89] found id: ""
	I0912 23:03:53.878363   62386 logs.go:276] 0 containers: []
	W0912 23:03:53.878370   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:53.878377   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:53.878387   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:53.935010   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:53.935053   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:53.948443   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:53.948474   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:54.020155   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:54.020178   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:54.020192   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:54.097113   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:54.097154   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:51.974178   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:54.473802   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:53.376802   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:55.377267   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:52.733232   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:54.734448   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:56.734623   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
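The interleaved pod_ready lines come from three other concurrently running profiles (PIDs 61904, 62943 and 61354), each polling a metrics-server pod that never reports Ready. A rough sketch of the same check done by hand, using one of the pod names from the log and assuming the matching kube context exists:

    kubectl --context <profile> -n kube-system get pods | grep metrics-server
    kubectl --context <profile> -n kube-system describe pod metrics-server-6867b74b74-kvpqz

Here <profile> stands for whichever minikube profile those tests created; it is a placeholder, not a value taken from this report.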
	I0912 23:03:56.633694   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:56.651731   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:56.651791   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:56.698155   62386 cri.go:89] found id: ""
	I0912 23:03:56.698184   62386 logs.go:276] 0 containers: []
	W0912 23:03:56.698194   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:56.698202   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:56.698263   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:56.730291   62386 cri.go:89] found id: ""
	I0912 23:03:56.730322   62386 logs.go:276] 0 containers: []
	W0912 23:03:56.730332   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:56.730340   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:56.730434   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:56.763099   62386 cri.go:89] found id: ""
	I0912 23:03:56.763123   62386 logs.go:276] 0 containers: []
	W0912 23:03:56.763133   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:56.763140   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:56.763201   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:56.796744   62386 cri.go:89] found id: ""
	I0912 23:03:56.796770   62386 logs.go:276] 0 containers: []
	W0912 23:03:56.796780   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:56.796787   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:56.796846   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:56.831809   62386 cri.go:89] found id: ""
	I0912 23:03:56.831839   62386 logs.go:276] 0 containers: []
	W0912 23:03:56.831851   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:56.831858   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:56.831927   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:56.867213   62386 cri.go:89] found id: ""
	I0912 23:03:56.867239   62386 logs.go:276] 0 containers: []
	W0912 23:03:56.867246   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:56.867252   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:56.867332   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:56.907242   62386 cri.go:89] found id: ""
	I0912 23:03:56.907270   62386 logs.go:276] 0 containers: []
	W0912 23:03:56.907279   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:56.907286   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:56.907399   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:56.941841   62386 cri.go:89] found id: ""
	I0912 23:03:56.941871   62386 logs.go:276] 0 containers: []
	W0912 23:03:56.941879   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:56.941888   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:56.941899   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:56.955468   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:56.955498   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:57.025069   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:57.025089   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:57.025101   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:57.109543   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:57.109579   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:57.150908   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:57.150932   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:56.473964   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:58.974245   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:57.377540   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:59.878300   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:59.233419   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:01.733916   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:59.700564   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:59.713097   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:59.713175   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:59.746662   62386 cri.go:89] found id: ""
	I0912 23:03:59.746684   62386 logs.go:276] 0 containers: []
	W0912 23:03:59.746694   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:59.746702   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:59.746760   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:59.780100   62386 cri.go:89] found id: ""
	I0912 23:03:59.780127   62386 logs.go:276] 0 containers: []
	W0912 23:03:59.780137   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:59.780144   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:59.780205   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:59.814073   62386 cri.go:89] found id: ""
	I0912 23:03:59.814103   62386 logs.go:276] 0 containers: []
	W0912 23:03:59.814115   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:59.814122   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:59.814170   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:59.849832   62386 cri.go:89] found id: ""
	I0912 23:03:59.849860   62386 logs.go:276] 0 containers: []
	W0912 23:03:59.849873   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:59.849881   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:59.849937   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:59.884644   62386 cri.go:89] found id: ""
	I0912 23:03:59.884674   62386 logs.go:276] 0 containers: []
	W0912 23:03:59.884685   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:59.884692   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:59.884757   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:59.922575   62386 cri.go:89] found id: ""
	I0912 23:03:59.922601   62386 logs.go:276] 0 containers: []
	W0912 23:03:59.922609   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:59.922615   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:59.922671   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:59.959405   62386 cri.go:89] found id: ""
	I0912 23:03:59.959454   62386 logs.go:276] 0 containers: []
	W0912 23:03:59.959467   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:59.959503   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:59.959572   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:59.992850   62386 cri.go:89] found id: ""
	I0912 23:03:59.992882   62386 logs.go:276] 0 containers: []
	W0912 23:03:59.992891   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:59.992898   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:59.992910   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:00.007112   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:00.007147   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:00.077737   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:00.077762   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:00.077777   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:00.156823   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:00.156860   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:00.194294   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:00.194388   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:02.746340   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:02.759723   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:02.759780   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:02.795753   62386 cri.go:89] found id: ""
	I0912 23:04:02.795778   62386 logs.go:276] 0 containers: []
	W0912 23:04:02.795787   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:02.795794   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:02.795849   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:02.830757   62386 cri.go:89] found id: ""
	I0912 23:04:02.830781   62386 logs.go:276] 0 containers: []
	W0912 23:04:02.830790   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:02.830797   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:02.830859   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:02.866266   62386 cri.go:89] found id: ""
	I0912 23:04:02.866301   62386 logs.go:276] 0 containers: []
	W0912 23:04:02.866312   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:02.866319   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:02.866373   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:02.900332   62386 cri.go:89] found id: ""
	I0912 23:04:02.900359   62386 logs.go:276] 0 containers: []
	W0912 23:04:02.900370   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:02.900377   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:02.900436   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:02.937687   62386 cri.go:89] found id: ""
	I0912 23:04:02.937718   62386 logs.go:276] 0 containers: []
	W0912 23:04:02.937729   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:02.937736   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:02.937806   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:02.972960   62386 cri.go:89] found id: ""
	I0912 23:04:02.972988   62386 logs.go:276] 0 containers: []
	W0912 23:04:02.972998   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:02.973006   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:02.973067   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:03.006621   62386 cri.go:89] found id: ""
	I0912 23:04:03.006649   62386 logs.go:276] 0 containers: []
	W0912 23:04:03.006658   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:03.006663   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:03.006711   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:03.042450   62386 cri.go:89] found id: ""
	I0912 23:04:03.042475   62386 logs.go:276] 0 containers: []
	W0912 23:04:03.042484   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:03.042501   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:03.042514   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:03.082657   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:03.082688   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:03.136570   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:03.136605   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:03.150359   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:03.150388   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:03.217419   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:03.217440   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:03.217452   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:01.473231   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:03.474382   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:05.475943   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:02.376721   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:04.376797   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:06.377573   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:03.734198   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:06.234489   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:05.795553   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:05.808126   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:05.808197   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:05.841031   62386 cri.go:89] found id: ""
	I0912 23:04:05.841059   62386 logs.go:276] 0 containers: []
	W0912 23:04:05.841071   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:05.841078   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:05.841137   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:05.875865   62386 cri.go:89] found id: ""
	I0912 23:04:05.875891   62386 logs.go:276] 0 containers: []
	W0912 23:04:05.875903   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:05.875910   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:05.875971   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:05.911317   62386 cri.go:89] found id: ""
	I0912 23:04:05.911340   62386 logs.go:276] 0 containers: []
	W0912 23:04:05.911361   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:05.911372   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:05.911433   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:05.946603   62386 cri.go:89] found id: ""
	I0912 23:04:05.946634   62386 logs.go:276] 0 containers: []
	W0912 23:04:05.946645   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:05.946652   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:05.946707   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:05.982041   62386 cri.go:89] found id: ""
	I0912 23:04:05.982077   62386 logs.go:276] 0 containers: []
	W0912 23:04:05.982089   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:05.982099   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:05.982196   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:06.015777   62386 cri.go:89] found id: ""
	I0912 23:04:06.015808   62386 logs.go:276] 0 containers: []
	W0912 23:04:06.015816   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:06.015822   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:06.015870   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:06.047613   62386 cri.go:89] found id: ""
	I0912 23:04:06.047642   62386 logs.go:276] 0 containers: []
	W0912 23:04:06.047650   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:06.047656   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:06.047711   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:06.082817   62386 cri.go:89] found id: ""
	I0912 23:04:06.082855   62386 logs.go:276] 0 containers: []
	W0912 23:04:06.082863   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:06.082874   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:06.082889   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:06.148350   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:06.148370   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:06.148382   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:06.227819   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:06.227861   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:06.267783   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:06.267811   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:06.319531   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:06.319567   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:08.833715   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:08.846391   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:08.846457   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:08.882798   62386 cri.go:89] found id: ""
	I0912 23:04:08.882827   62386 logs.go:276] 0 containers: []
	W0912 23:04:08.882834   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:08.882839   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:08.882885   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:08.919637   62386 cri.go:89] found id: ""
	I0912 23:04:08.919660   62386 logs.go:276] 0 containers: []
	W0912 23:04:08.919669   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:08.919677   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:08.919737   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:08.957181   62386 cri.go:89] found id: ""
	I0912 23:04:08.957226   62386 logs.go:276] 0 containers: []
	W0912 23:04:08.957235   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:08.957241   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:08.957300   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:08.994391   62386 cri.go:89] found id: ""
	I0912 23:04:08.994425   62386 logs.go:276] 0 containers: []
	W0912 23:04:08.994435   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:08.994450   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:08.994517   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:09.026229   62386 cri.go:89] found id: ""
	I0912 23:04:09.026253   62386 logs.go:276] 0 containers: []
	W0912 23:04:09.026261   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:09.026270   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:09.026331   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:09.063522   62386 cri.go:89] found id: ""
	I0912 23:04:09.063552   62386 logs.go:276] 0 containers: []
	W0912 23:04:09.063562   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:09.063570   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:09.063633   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:09.095532   62386 cri.go:89] found id: ""
	I0912 23:04:09.095561   62386 logs.go:276] 0 containers: []
	W0912 23:04:09.095571   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:09.095578   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:09.095638   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:09.129364   62386 cri.go:89] found id: ""
	I0912 23:04:09.129396   62386 logs.go:276] 0 containers: []
	W0912 23:04:09.129405   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:09.129416   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:09.129430   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:09.210628   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:09.210663   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:09.249058   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:09.249086   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:09.301317   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:09.301346   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:09.314691   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:09.314720   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 23:04:07.974160   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:10.473970   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:08.877389   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:11.376421   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:08.733271   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:10.737700   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	W0912 23:04:09.379506   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:11.879682   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:11.892758   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:11.892816   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:11.929514   62386 cri.go:89] found id: ""
	I0912 23:04:11.929560   62386 logs.go:276] 0 containers: []
	W0912 23:04:11.929572   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:11.929580   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:11.929663   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:11.972066   62386 cri.go:89] found id: ""
	I0912 23:04:11.972091   62386 logs.go:276] 0 containers: []
	W0912 23:04:11.972099   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:11.972104   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:11.972153   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:12.005454   62386 cri.go:89] found id: ""
	I0912 23:04:12.005483   62386 logs.go:276] 0 containers: []
	W0912 23:04:12.005493   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:12.005500   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:12.005573   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:12.042189   62386 cri.go:89] found id: ""
	I0912 23:04:12.042221   62386 logs.go:276] 0 containers: []
	W0912 23:04:12.042232   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:12.042239   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:12.042292   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:12.077239   62386 cri.go:89] found id: ""
	I0912 23:04:12.077268   62386 logs.go:276] 0 containers: []
	W0912 23:04:12.077276   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:12.077282   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:12.077340   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:12.112573   62386 cri.go:89] found id: ""
	I0912 23:04:12.112602   62386 logs.go:276] 0 containers: []
	W0912 23:04:12.112610   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:12.112616   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:12.112661   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:12.147124   62386 cri.go:89] found id: ""
	I0912 23:04:12.147149   62386 logs.go:276] 0 containers: []
	W0912 23:04:12.147157   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:12.147163   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:12.147224   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:12.182051   62386 cri.go:89] found id: ""
	I0912 23:04:12.182074   62386 logs.go:276] 0 containers: []
	W0912 23:04:12.182082   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:12.182090   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:12.182103   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:12.238070   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:12.238103   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:12.250913   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:12.250937   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:12.315420   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:12.315448   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:12.315465   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:12.397338   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:12.397379   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:12.974531   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:15.479539   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:13.377855   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:15.379901   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:13.233099   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:15.234506   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:14.936982   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:14.949955   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:14.950019   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:14.993284   62386 cri.go:89] found id: ""
	I0912 23:04:14.993317   62386 logs.go:276] 0 containers: []
	W0912 23:04:14.993327   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:14.993356   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:14.993421   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:15.028310   62386 cri.go:89] found id: ""
	I0912 23:04:15.028338   62386 logs.go:276] 0 containers: []
	W0912 23:04:15.028347   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:15.028352   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:15.028424   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:15.064436   62386 cri.go:89] found id: ""
	I0912 23:04:15.064472   62386 logs.go:276] 0 containers: []
	W0912 23:04:15.064482   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:15.064490   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:15.064552   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:15.101547   62386 cri.go:89] found id: ""
	I0912 23:04:15.101578   62386 logs.go:276] 0 containers: []
	W0912 23:04:15.101587   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:15.101595   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:15.101672   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:15.137534   62386 cri.go:89] found id: ""
	I0912 23:04:15.137559   62386 logs.go:276] 0 containers: []
	W0912 23:04:15.137567   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:15.137575   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:15.137670   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:15.172549   62386 cri.go:89] found id: ""
	I0912 23:04:15.172581   62386 logs.go:276] 0 containers: []
	W0912 23:04:15.172593   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:15.172601   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:15.172661   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:15.207894   62386 cri.go:89] found id: ""
	I0912 23:04:15.207921   62386 logs.go:276] 0 containers: []
	W0912 23:04:15.207931   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:15.207939   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:15.207998   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:15.243684   62386 cri.go:89] found id: ""
	I0912 23:04:15.243713   62386 logs.go:276] 0 containers: []
	W0912 23:04:15.243724   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:15.243733   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:15.243744   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:15.297907   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:15.297948   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:15.312119   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:15.312151   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:15.375781   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:15.375815   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:15.375830   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:15.455792   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:15.455853   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:17.996749   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:18.009868   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:18.009927   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:18.048233   62386 cri.go:89] found id: ""
	I0912 23:04:18.048262   62386 logs.go:276] 0 containers: []
	W0912 23:04:18.048273   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:18.048280   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:18.048340   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:18.082525   62386 cri.go:89] found id: ""
	I0912 23:04:18.082554   62386 logs.go:276] 0 containers: []
	W0912 23:04:18.082565   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:18.082572   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:18.082634   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:18.117691   62386 cri.go:89] found id: ""
	I0912 23:04:18.117721   62386 logs.go:276] 0 containers: []
	W0912 23:04:18.117731   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:18.117738   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:18.117799   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:18.151975   62386 cri.go:89] found id: ""
	I0912 23:04:18.152004   62386 logs.go:276] 0 containers: []
	W0912 23:04:18.152013   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:18.152019   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:18.152073   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:18.187028   62386 cri.go:89] found id: ""
	I0912 23:04:18.187058   62386 logs.go:276] 0 containers: []
	W0912 23:04:18.187069   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:18.187075   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:18.187127   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:18.221292   62386 cri.go:89] found id: ""
	I0912 23:04:18.221324   62386 logs.go:276] 0 containers: []
	W0912 23:04:18.221331   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:18.221337   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:18.221383   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:18.255445   62386 cri.go:89] found id: ""
	I0912 23:04:18.255471   62386 logs.go:276] 0 containers: []
	W0912 23:04:18.255479   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:18.255484   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:18.255533   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:18.289977   62386 cri.go:89] found id: ""
	I0912 23:04:18.290008   62386 logs.go:276] 0 containers: []
	W0912 23:04:18.290019   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:18.290030   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:18.290045   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:18.303351   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:18.303380   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:18.371085   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:18.371114   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:18.371128   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:18.448748   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:18.448791   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:18.490580   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:18.490605   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:17.973604   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:20.473541   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:17.878221   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:20.377651   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:17.733784   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:19.734292   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:22.232832   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:21.043479   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:21.056774   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:21.056834   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:21.089410   62386 cri.go:89] found id: ""
	I0912 23:04:21.089435   62386 logs.go:276] 0 containers: []
	W0912 23:04:21.089449   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:21.089460   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:21.089534   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:21.122922   62386 cri.go:89] found id: ""
	I0912 23:04:21.122954   62386 logs.go:276] 0 containers: []
	W0912 23:04:21.122964   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:21.122971   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:21.123025   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:21.157877   62386 cri.go:89] found id: ""
	I0912 23:04:21.157900   62386 logs.go:276] 0 containers: []
	W0912 23:04:21.157908   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:21.157914   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:21.157959   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:21.190953   62386 cri.go:89] found id: ""
	I0912 23:04:21.190983   62386 logs.go:276] 0 containers: []
	W0912 23:04:21.190994   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:21.191001   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:21.191050   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:21.225211   62386 cri.go:89] found id: ""
	I0912 23:04:21.225241   62386 logs.go:276] 0 containers: []
	W0912 23:04:21.225253   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:21.225260   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:21.225325   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:21.262459   62386 cri.go:89] found id: ""
	I0912 23:04:21.262486   62386 logs.go:276] 0 containers: []
	W0912 23:04:21.262497   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:21.262504   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:21.262578   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:21.296646   62386 cri.go:89] found id: ""
	I0912 23:04:21.296672   62386 logs.go:276] 0 containers: []
	W0912 23:04:21.296682   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:21.296687   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:21.296734   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:21.329911   62386 cri.go:89] found id: ""
	I0912 23:04:21.329933   62386 logs.go:276] 0 containers: []
	W0912 23:04:21.329939   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:21.329947   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:21.329958   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:21.371014   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:21.371043   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:21.419638   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:21.419671   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:21.433502   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:21.433533   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:21.502764   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:21.502787   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:21.502800   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:24.079800   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:24.094021   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:24.094099   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:24.128807   62386 cri.go:89] found id: ""
	I0912 23:04:24.128832   62386 logs.go:276] 0 containers: []
	W0912 23:04:24.128844   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:24.128851   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:24.128915   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:24.166381   62386 cri.go:89] found id: ""
	I0912 23:04:24.166409   62386 logs.go:276] 0 containers: []
	W0912 23:04:24.166416   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:24.166425   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:24.166481   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:24.202656   62386 cri.go:89] found id: ""
	I0912 23:04:24.202684   62386 logs.go:276] 0 containers: []
	W0912 23:04:24.202692   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:24.202699   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:24.202755   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:24.241177   62386 cri.go:89] found id: ""
	I0912 23:04:24.241204   62386 logs.go:276] 0 containers: []
	W0912 23:04:24.241212   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:24.241218   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:24.241274   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:24.278768   62386 cri.go:89] found id: ""
	I0912 23:04:24.278796   62386 logs.go:276] 0 containers: []
	W0912 23:04:24.278806   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:24.278813   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:24.278881   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:24.314429   62386 cri.go:89] found id: ""
	I0912 23:04:24.314456   62386 logs.go:276] 0 containers: []
	W0912 23:04:24.314466   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:24.314474   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:24.314540   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:22.972334   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:24.974435   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:22.877248   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:25.376758   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:24.233814   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:26.733537   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:24.352300   62386 cri.go:89] found id: ""
	I0912 23:04:24.352344   62386 logs.go:276] 0 containers: []
	W0912 23:04:24.352352   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:24.352357   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:24.352415   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:24.387465   62386 cri.go:89] found id: ""
	I0912 23:04:24.387496   62386 logs.go:276] 0 containers: []
	W0912 23:04:24.387503   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:24.387513   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:24.387526   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:24.437029   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:24.437061   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:24.450519   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:24.450555   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:24.516538   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:24.516566   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:24.516583   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:24.594321   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:24.594358   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:27.129976   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:27.142237   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:27.142293   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:27.173687   62386 cri.go:89] found id: ""
	I0912 23:04:27.173709   62386 logs.go:276] 0 containers: []
	W0912 23:04:27.173716   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:27.173721   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:27.173778   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:27.206078   62386 cri.go:89] found id: ""
	I0912 23:04:27.206099   62386 logs.go:276] 0 containers: []
	W0912 23:04:27.206107   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:27.206112   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:27.206156   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:27.238770   62386 cri.go:89] found id: ""
	I0912 23:04:27.238795   62386 logs.go:276] 0 containers: []
	W0912 23:04:27.238803   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:27.238808   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:27.238855   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:27.271230   62386 cri.go:89] found id: ""
	I0912 23:04:27.271262   62386 logs.go:276] 0 containers: []
	W0912 23:04:27.271273   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:27.271281   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:27.271351   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:27.304232   62386 cri.go:89] found id: ""
	I0912 23:04:27.304261   62386 logs.go:276] 0 containers: []
	W0912 23:04:27.304271   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:27.304278   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:27.304345   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:27.337542   62386 cri.go:89] found id: ""
	I0912 23:04:27.337571   62386 logs.go:276] 0 containers: []
	W0912 23:04:27.337586   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:27.337595   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:27.337668   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:27.369971   62386 cri.go:89] found id: ""
	I0912 23:04:27.369997   62386 logs.go:276] 0 containers: []
	W0912 23:04:27.370005   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:27.370012   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:27.370072   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:27.406844   62386 cri.go:89] found id: ""
	I0912 23:04:27.406868   62386 logs.go:276] 0 containers: []
	W0912 23:04:27.406875   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:27.406883   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:27.406894   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:27.493489   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:27.493524   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:27.530448   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:27.530481   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:27.585706   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:27.585744   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:27.599144   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:27.599177   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:27.672585   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:27.473942   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:29.474058   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:27.376867   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:29.377474   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:31.877233   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:29.234068   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:31.733528   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:30.173309   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:30.187957   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:30.188037   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:30.226373   62386 cri.go:89] found id: ""
	I0912 23:04:30.226400   62386 logs.go:276] 0 containers: []
	W0912 23:04:30.226407   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:30.226412   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:30.226469   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:30.257956   62386 cri.go:89] found id: ""
	I0912 23:04:30.257988   62386 logs.go:276] 0 containers: []
	W0912 23:04:30.257997   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:30.258002   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:30.258053   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:30.291091   62386 cri.go:89] found id: ""
	I0912 23:04:30.291119   62386 logs.go:276] 0 containers: []
	W0912 23:04:30.291127   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:30.291132   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:30.291181   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:30.323564   62386 cri.go:89] found id: ""
	I0912 23:04:30.323589   62386 logs.go:276] 0 containers: []
	W0912 23:04:30.323597   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:30.323603   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:30.323652   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:30.361971   62386 cri.go:89] found id: ""
	I0912 23:04:30.361996   62386 logs.go:276] 0 containers: []
	W0912 23:04:30.362005   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:30.362014   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:30.362081   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:30.396952   62386 cri.go:89] found id: ""
	I0912 23:04:30.396986   62386 logs.go:276] 0 containers: []
	W0912 23:04:30.396996   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:30.397001   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:30.397052   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:30.453785   62386 cri.go:89] found id: ""
	I0912 23:04:30.453812   62386 logs.go:276] 0 containers: []
	W0912 23:04:30.453820   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:30.453825   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:30.453870   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:30.494072   62386 cri.go:89] found id: ""
	I0912 23:04:30.494099   62386 logs.go:276] 0 containers: []
	W0912 23:04:30.494108   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:30.494115   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:30.494133   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:30.543153   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:30.543187   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:30.556204   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:30.556242   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:30.630856   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:30.630885   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:30.630902   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:30.710205   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:30.710239   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:33.248218   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:33.261421   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:33.261504   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:33.295691   62386 cri.go:89] found id: ""
	I0912 23:04:33.295718   62386 logs.go:276] 0 containers: []
	W0912 23:04:33.295729   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:33.295736   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:33.295796   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:33.328578   62386 cri.go:89] found id: ""
	I0912 23:04:33.328607   62386 logs.go:276] 0 containers: []
	W0912 23:04:33.328618   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:33.328626   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:33.328743   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:33.367991   62386 cri.go:89] found id: ""
	I0912 23:04:33.368018   62386 logs.go:276] 0 containers: []
	W0912 23:04:33.368034   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:33.368041   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:33.368101   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:33.402537   62386 cri.go:89] found id: ""
	I0912 23:04:33.402566   62386 logs.go:276] 0 containers: []
	W0912 23:04:33.402578   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:33.402588   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:33.402649   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:33.437175   62386 cri.go:89] found id: ""
	I0912 23:04:33.437199   62386 logs.go:276] 0 containers: []
	W0912 23:04:33.437206   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:33.437216   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:33.437275   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:33.475108   62386 cri.go:89] found id: ""
	I0912 23:04:33.475134   62386 logs.go:276] 0 containers: []
	W0912 23:04:33.475144   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:33.475151   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:33.475202   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:33.508612   62386 cri.go:89] found id: ""
	I0912 23:04:33.508649   62386 logs.go:276] 0 containers: []
	W0912 23:04:33.508659   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:33.508664   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:33.508713   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:33.543351   62386 cri.go:89] found id: ""
	I0912 23:04:33.543380   62386 logs.go:276] 0 containers: []
	W0912 23:04:33.543387   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:33.543395   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:33.543406   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:33.595649   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:33.595688   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:33.609181   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:33.609210   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:33.686761   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:33.686782   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:33.686796   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:33.767443   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:33.767478   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:31.474444   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:33.474510   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:34.376900   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:36.377015   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:33.734282   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:36.233730   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:36.310374   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:36.324182   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:36.324260   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:36.359642   62386 cri.go:89] found id: ""
	I0912 23:04:36.359670   62386 logs.go:276] 0 containers: []
	W0912 23:04:36.359677   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:36.359684   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:36.359744   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:36.392841   62386 cri.go:89] found id: ""
	I0912 23:04:36.392865   62386 logs.go:276] 0 containers: []
	W0912 23:04:36.392874   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:36.392887   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:36.392951   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:36.430323   62386 cri.go:89] found id: ""
	I0912 23:04:36.430354   62386 logs.go:276] 0 containers: []
	W0912 23:04:36.430365   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:36.430373   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:36.430436   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:36.466712   62386 cri.go:89] found id: ""
	I0912 23:04:36.466737   62386 logs.go:276] 0 containers: []
	W0912 23:04:36.466745   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:36.466750   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:36.466808   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:36.502506   62386 cri.go:89] found id: ""
	I0912 23:04:36.502537   62386 logs.go:276] 0 containers: []
	W0912 23:04:36.502548   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:36.502555   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:36.502624   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:36.536530   62386 cri.go:89] found id: ""
	I0912 23:04:36.536559   62386 logs.go:276] 0 containers: []
	W0912 23:04:36.536569   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:36.536577   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:36.536648   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:36.570519   62386 cri.go:89] found id: ""
	I0912 23:04:36.570555   62386 logs.go:276] 0 containers: []
	W0912 23:04:36.570565   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:36.570573   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:36.570631   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:36.606107   62386 cri.go:89] found id: ""
	I0912 23:04:36.606136   62386 logs.go:276] 0 containers: []
	W0912 23:04:36.606146   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:36.606157   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:36.606171   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:36.643105   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:36.643138   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:36.690911   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:36.690944   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:36.703970   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:36.703998   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:36.776158   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:36.776183   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:36.776199   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:35.973095   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:37.974153   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:40.473010   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:38.377221   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:40.877439   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:38.732826   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:40.734523   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:39.362032   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:39.375991   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:39.376090   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:39.412497   62386 cri.go:89] found id: ""
	I0912 23:04:39.412521   62386 logs.go:276] 0 containers: []
	W0912 23:04:39.412528   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:39.412534   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:39.412595   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:39.447783   62386 cri.go:89] found id: ""
	I0912 23:04:39.447807   62386 logs.go:276] 0 containers: []
	W0912 23:04:39.447815   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:39.447820   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:39.447886   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:39.483099   62386 cri.go:89] found id: ""
	I0912 23:04:39.483128   62386 logs.go:276] 0 containers: []
	W0912 23:04:39.483135   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:39.483143   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:39.483193   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:39.514898   62386 cri.go:89] found id: ""
	I0912 23:04:39.514932   62386 logs.go:276] 0 containers: []
	W0912 23:04:39.514941   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:39.514952   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:39.515033   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:39.546882   62386 cri.go:89] found id: ""
	I0912 23:04:39.546910   62386 logs.go:276] 0 containers: []
	W0912 23:04:39.546920   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:39.546927   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:39.546990   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:39.577899   62386 cri.go:89] found id: ""
	I0912 23:04:39.577929   62386 logs.go:276] 0 containers: []
	W0912 23:04:39.577939   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:39.577947   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:39.578006   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:39.613419   62386 cri.go:89] found id: ""
	I0912 23:04:39.613446   62386 logs.go:276] 0 containers: []
	W0912 23:04:39.613455   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:39.613461   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:39.613510   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:39.647661   62386 cri.go:89] found id: ""
	I0912 23:04:39.647694   62386 logs.go:276] 0 containers: []
	W0912 23:04:39.647708   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:39.647719   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:39.647733   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:39.696155   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:39.696190   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:39.709312   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:39.709342   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:39.778941   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:39.778968   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:39.778985   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:39.855991   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:39.856028   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:42.395179   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:42.408317   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:42.408449   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:42.441443   62386 cri.go:89] found id: ""
	I0912 23:04:42.441472   62386 logs.go:276] 0 containers: []
	W0912 23:04:42.441482   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:42.441489   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:42.441550   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:42.480655   62386 cri.go:89] found id: ""
	I0912 23:04:42.480678   62386 logs.go:276] 0 containers: []
	W0912 23:04:42.480685   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:42.480690   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:42.480734   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:42.513323   62386 cri.go:89] found id: ""
	I0912 23:04:42.513346   62386 logs.go:276] 0 containers: []
	W0912 23:04:42.513353   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:42.513359   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:42.513405   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:42.545696   62386 cri.go:89] found id: ""
	I0912 23:04:42.545715   62386 logs.go:276] 0 containers: []
	W0912 23:04:42.545723   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:42.545728   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:42.545775   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:42.584950   62386 cri.go:89] found id: ""
	I0912 23:04:42.584981   62386 logs.go:276] 0 containers: []
	W0912 23:04:42.584992   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:42.584999   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:42.585057   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:42.618434   62386 cri.go:89] found id: ""
	I0912 23:04:42.618468   62386 logs.go:276] 0 containers: []
	W0912 23:04:42.618481   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:42.618489   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:42.618557   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:42.665017   62386 cri.go:89] found id: ""
	I0912 23:04:42.665045   62386 logs.go:276] 0 containers: []
	W0912 23:04:42.665056   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:42.665064   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:42.665125   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:42.724365   62386 cri.go:89] found id: ""
	I0912 23:04:42.724389   62386 logs.go:276] 0 containers: []
	W0912 23:04:42.724399   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:42.724409   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:42.724422   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:42.762643   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:42.762671   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:42.815374   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:42.815417   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:42.829340   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:42.829376   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:42.901659   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:42.901690   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:42.901706   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:42.475194   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:44.973902   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:43.376849   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:45.378144   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:42.734908   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:45.234296   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:45.490536   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:45.504127   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:45.504191   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:45.537415   62386 cri.go:89] found id: ""
	I0912 23:04:45.537447   62386 logs.go:276] 0 containers: []
	W0912 23:04:45.537457   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:45.537464   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:45.537527   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:45.571342   62386 cri.go:89] found id: ""
	I0912 23:04:45.571384   62386 logs.go:276] 0 containers: []
	W0912 23:04:45.571404   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:45.571412   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:45.571471   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:45.608965   62386 cri.go:89] found id: ""
	I0912 23:04:45.608989   62386 logs.go:276] 0 containers: []
	W0912 23:04:45.608997   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:45.609002   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:45.609052   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:45.644770   62386 cri.go:89] found id: ""
	I0912 23:04:45.644798   62386 logs.go:276] 0 containers: []
	W0912 23:04:45.644806   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:45.644812   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:45.644859   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:45.678422   62386 cri.go:89] found id: ""
	I0912 23:04:45.678448   62386 logs.go:276] 0 containers: []
	W0912 23:04:45.678456   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:45.678462   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:45.678508   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:45.713808   62386 cri.go:89] found id: ""
	I0912 23:04:45.713831   62386 logs.go:276] 0 containers: []
	W0912 23:04:45.713838   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:45.713844   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:45.713891   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:45.747056   62386 cri.go:89] found id: ""
	I0912 23:04:45.747084   62386 logs.go:276] 0 containers: []
	W0912 23:04:45.747092   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:45.747097   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:45.747149   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:45.779787   62386 cri.go:89] found id: ""
	I0912 23:04:45.779809   62386 logs.go:276] 0 containers: []
	W0912 23:04:45.779817   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:45.779824   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:45.779835   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:45.833204   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:45.833239   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:45.846131   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:45.846159   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:45.923415   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:45.923435   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:45.923446   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:46.003597   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:46.003637   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:48.545043   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:48.560025   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:48.560085   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:48.599916   62386 cri.go:89] found id: ""
	I0912 23:04:48.599950   62386 logs.go:276] 0 containers: []
	W0912 23:04:48.599961   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:48.599969   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:48.600027   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:48.648909   62386 cri.go:89] found id: ""
	I0912 23:04:48.648938   62386 logs.go:276] 0 containers: []
	W0912 23:04:48.648946   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:48.648952   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:48.649010   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:48.693019   62386 cri.go:89] found id: ""
	I0912 23:04:48.693046   62386 logs.go:276] 0 containers: []
	W0912 23:04:48.693062   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:48.693081   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:48.693141   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:48.725778   62386 cri.go:89] found id: ""
	I0912 23:04:48.725811   62386 logs.go:276] 0 containers: []
	W0912 23:04:48.725822   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:48.725830   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:48.725891   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:48.760270   62386 cri.go:89] found id: ""
	I0912 23:04:48.760299   62386 logs.go:276] 0 containers: []
	W0912 23:04:48.760311   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:48.760318   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:48.760379   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:48.797235   62386 cri.go:89] found id: ""
	I0912 23:04:48.797264   62386 logs.go:276] 0 containers: []
	W0912 23:04:48.797275   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:48.797282   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:48.797348   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:48.834039   62386 cri.go:89] found id: ""
	I0912 23:04:48.834081   62386 logs.go:276] 0 containers: []
	W0912 23:04:48.834093   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:48.834100   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:48.834162   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:48.866681   62386 cri.go:89] found id: ""
	I0912 23:04:48.866704   62386 logs.go:276] 0 containers: []
	W0912 23:04:48.866712   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:48.866720   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:48.866731   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:48.917954   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:48.917999   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:48.931554   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:48.931582   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:49.008086   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:49.008115   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:49.008132   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:49.088699   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:49.088736   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:46.974115   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:49.475562   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:47.876644   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:49.877976   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:47.733587   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:50.232852   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:51.628564   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:51.643343   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:51.643445   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:51.680788   62386 cri.go:89] found id: ""
	I0912 23:04:51.680811   62386 logs.go:276] 0 containers: []
	W0912 23:04:51.680818   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:51.680824   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:51.680873   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:51.719793   62386 cri.go:89] found id: ""
	I0912 23:04:51.719822   62386 logs.go:276] 0 containers: []
	W0912 23:04:51.719835   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:51.719843   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:51.719909   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:51.756766   62386 cri.go:89] found id: ""
	I0912 23:04:51.756795   62386 logs.go:276] 0 containers: []
	W0912 23:04:51.756802   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:51.756808   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:51.756857   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:51.797758   62386 cri.go:89] found id: ""
	I0912 23:04:51.797781   62386 logs.go:276] 0 containers: []
	W0912 23:04:51.797789   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:51.797794   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:51.797844   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:51.830790   62386 cri.go:89] found id: ""
	I0912 23:04:51.830820   62386 logs.go:276] 0 containers: []
	W0912 23:04:51.830830   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:51.830837   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:51.830899   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:51.866782   62386 cri.go:89] found id: ""
	I0912 23:04:51.866806   62386 logs.go:276] 0 containers: []
	W0912 23:04:51.866813   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:51.866819   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:51.866874   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:51.902223   62386 cri.go:89] found id: ""
	I0912 23:04:51.902248   62386 logs.go:276] 0 containers: []
	W0912 23:04:51.902276   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:51.902284   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:51.902345   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:51.937029   62386 cri.go:89] found id: ""
	I0912 23:04:51.937057   62386 logs.go:276] 0 containers: []
	W0912 23:04:51.937064   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:51.937073   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:51.937084   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:51.987691   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:51.987727   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:52.001042   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:52.001067   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:52.076285   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:52.076305   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:52.076316   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:52.156087   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:52.156127   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:51.973991   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:53.974657   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:52.377379   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:54.877566   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:56.878413   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:52.734348   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:55.233890   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:54.692355   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:54.705180   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:54.705258   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:54.736125   62386 cri.go:89] found id: ""
	I0912 23:04:54.736150   62386 logs.go:276] 0 containers: []
	W0912 23:04:54.736158   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:54.736164   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:54.736216   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:54.768743   62386 cri.go:89] found id: ""
	I0912 23:04:54.768769   62386 logs.go:276] 0 containers: []
	W0912 23:04:54.768776   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:54.768781   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:54.768827   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:54.802867   62386 cri.go:89] found id: ""
	I0912 23:04:54.802894   62386 logs.go:276] 0 containers: []
	W0912 23:04:54.802902   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:54.802908   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:54.802959   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:54.836774   62386 cri.go:89] found id: ""
	I0912 23:04:54.836800   62386 logs.go:276] 0 containers: []
	W0912 23:04:54.836808   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:54.836813   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:54.836870   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:54.870694   62386 cri.go:89] found id: ""
	I0912 23:04:54.870716   62386 logs.go:276] 0 containers: []
	W0912 23:04:54.870724   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:54.870730   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:54.870785   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:54.903969   62386 cri.go:89] found id: ""
	I0912 23:04:54.904002   62386 logs.go:276] 0 containers: []
	W0912 23:04:54.904012   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:54.904020   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:54.904070   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:54.937720   62386 cri.go:89] found id: ""
	I0912 23:04:54.937744   62386 logs.go:276] 0 containers: []
	W0912 23:04:54.937751   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:54.937756   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:54.937802   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:54.971370   62386 cri.go:89] found id: ""
	I0912 23:04:54.971397   62386 logs.go:276] 0 containers: []
	W0912 23:04:54.971413   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:54.971427   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:54.971441   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:55.021066   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:55.021101   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:55.034026   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:55.034056   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:55.116939   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:55.116966   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:55.116983   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:55.196410   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:55.196445   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:57.733985   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:57.747006   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:57.747068   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:57.784442   62386 cri.go:89] found id: ""
	I0912 23:04:57.784473   62386 logs.go:276] 0 containers: []
	W0912 23:04:57.784486   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:57.784500   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:57.784571   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:57.818314   62386 cri.go:89] found id: ""
	I0912 23:04:57.818341   62386 logs.go:276] 0 containers: []
	W0912 23:04:57.818352   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:57.818359   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:57.818420   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:57.852881   62386 cri.go:89] found id: ""
	I0912 23:04:57.852914   62386 logs.go:276] 0 containers: []
	W0912 23:04:57.852925   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:57.852932   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:57.852993   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:57.894454   62386 cri.go:89] found id: ""
	I0912 23:04:57.894479   62386 logs.go:276] 0 containers: []
	W0912 23:04:57.894487   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:57.894493   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:57.894540   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:57.930013   62386 cri.go:89] found id: ""
	I0912 23:04:57.930041   62386 logs.go:276] 0 containers: []
	W0912 23:04:57.930051   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:57.930059   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:57.930120   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:57.970535   62386 cri.go:89] found id: ""
	I0912 23:04:57.970697   62386 logs.go:276] 0 containers: []
	W0912 23:04:57.970751   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:57.970763   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:57.970829   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:58.008102   62386 cri.go:89] found id: ""
	I0912 23:04:58.008132   62386 logs.go:276] 0 containers: []
	W0912 23:04:58.008145   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:58.008151   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:58.008232   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:58.043507   62386 cri.go:89] found id: ""
	I0912 23:04:58.043541   62386 logs.go:276] 0 containers: []
	W0912 23:04:58.043552   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:58.043563   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:58.043577   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:58.127231   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:58.127291   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:58.164444   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:58.164476   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:58.212622   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:58.212658   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:58.227517   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:58.227546   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:58.291876   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:56.474801   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:58.973083   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:59.378702   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:01.876871   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:57.735810   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:00.234854   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:00.792084   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:00.804976   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:00.805046   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:00.837560   62386 cri.go:89] found id: ""
	I0912 23:05:00.837596   62386 logs.go:276] 0 containers: []
	W0912 23:05:00.837606   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:00.837629   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:00.837692   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:00.871503   62386 cri.go:89] found id: ""
	I0912 23:05:00.871526   62386 logs.go:276] 0 containers: []
	W0912 23:05:00.871534   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:00.871539   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:00.871594   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:00.909215   62386 cri.go:89] found id: ""
	I0912 23:05:00.909245   62386 logs.go:276] 0 containers: []
	W0912 23:05:00.909256   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:00.909263   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:00.909337   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:00.947935   62386 cri.go:89] found id: ""
	I0912 23:05:00.947961   62386 logs.go:276] 0 containers: []
	W0912 23:05:00.947972   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:00.947979   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:00.948043   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:00.989659   62386 cri.go:89] found id: ""
	I0912 23:05:00.989694   62386 logs.go:276] 0 containers: []
	W0912 23:05:00.989707   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:00.989717   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:00.989780   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:01.027073   62386 cri.go:89] found id: ""
	I0912 23:05:01.027103   62386 logs.go:276] 0 containers: []
	W0912 23:05:01.027114   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:01.027129   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:01.027187   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:01.063620   62386 cri.go:89] found id: ""
	I0912 23:05:01.063649   62386 logs.go:276] 0 containers: []
	W0912 23:05:01.063672   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:01.063681   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:01.063751   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:01.102398   62386 cri.go:89] found id: ""
	I0912 23:05:01.102428   62386 logs.go:276] 0 containers: []
	W0912 23:05:01.102438   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:01.102449   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:01.102463   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:01.115558   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:01.115585   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:01.190303   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:01.190324   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:01.190337   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:01.272564   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:01.272611   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:01.311954   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:01.311981   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:03.864507   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:03.878613   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:03.878713   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:03.911466   62386 cri.go:89] found id: ""
	I0912 23:05:03.911495   62386 logs.go:276] 0 containers: []
	W0912 23:05:03.911504   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:03.911513   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:03.911592   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:03.945150   62386 cri.go:89] found id: ""
	I0912 23:05:03.945175   62386 logs.go:276] 0 containers: []
	W0912 23:05:03.945188   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:03.945196   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:03.945256   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:03.984952   62386 cri.go:89] found id: ""
	I0912 23:05:03.984984   62386 logs.go:276] 0 containers: []
	W0912 23:05:03.984994   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:03.985001   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:03.985067   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:04.030708   62386 cri.go:89] found id: ""
	I0912 23:05:04.030732   62386 logs.go:276] 0 containers: []
	W0912 23:05:04.030740   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:04.030746   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:04.030798   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:04.072189   62386 cri.go:89] found id: ""
	I0912 23:05:04.072213   62386 logs.go:276] 0 containers: []
	W0912 23:05:04.072221   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:04.072227   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:04.072273   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:04.105068   62386 cri.go:89] found id: ""
	I0912 23:05:04.105100   62386 logs.go:276] 0 containers: []
	W0912 23:05:04.105108   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:04.105114   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:04.105175   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:04.139063   62386 cri.go:89] found id: ""
	I0912 23:05:04.139094   62386 logs.go:276] 0 containers: []
	W0912 23:05:04.139102   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:04.139109   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:04.139172   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:04.175559   62386 cri.go:89] found id: ""
	I0912 23:05:04.175589   62386 logs.go:276] 0 containers: []
	W0912 23:05:04.175599   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:04.175610   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:04.175626   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:04.252495   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:04.252541   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:04.292236   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:04.292263   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:00.974816   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:03.473566   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:05.474006   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:04.377506   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:06.378058   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:02.733379   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:04.734050   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:07.234892   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:04.347335   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:04.347377   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:04.360641   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:04.360678   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:04.431032   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:06.931904   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:06.946367   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:06.946445   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:06.985760   62386 cri.go:89] found id: ""
	I0912 23:05:06.985788   62386 logs.go:276] 0 containers: []
	W0912 23:05:06.985796   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:06.985802   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:06.985852   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:07.020076   62386 cri.go:89] found id: ""
	I0912 23:05:07.020106   62386 logs.go:276] 0 containers: []
	W0912 23:05:07.020115   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:07.020120   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:07.020165   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:07.056374   62386 cri.go:89] found id: ""
	I0912 23:05:07.056408   62386 logs.go:276] 0 containers: []
	W0912 23:05:07.056417   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:07.056423   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:07.056479   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:07.091022   62386 cri.go:89] found id: ""
	I0912 23:05:07.091049   62386 logs.go:276] 0 containers: []
	W0912 23:05:07.091059   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:07.091067   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:07.091133   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:07.131604   62386 cri.go:89] found id: ""
	I0912 23:05:07.131631   62386 logs.go:276] 0 containers: []
	W0912 23:05:07.131641   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:07.131648   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:07.131708   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:07.164548   62386 cri.go:89] found id: ""
	I0912 23:05:07.164575   62386 logs.go:276] 0 containers: []
	W0912 23:05:07.164586   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:07.164593   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:07.164655   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:07.199147   62386 cri.go:89] found id: ""
	I0912 23:05:07.199169   62386 logs.go:276] 0 containers: []
	W0912 23:05:07.199176   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:07.199182   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:07.199245   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:07.231727   62386 cri.go:89] found id: ""
	I0912 23:05:07.231762   62386 logs.go:276] 0 containers: []
	W0912 23:05:07.231773   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:07.231788   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:07.231802   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:07.285773   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:07.285809   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:07.299926   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:07.299958   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:07.378838   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:07.378862   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:07.378876   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:07.459903   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:07.459939   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:07.475025   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:09.973692   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:08.877117   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:11.377274   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:09.732632   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:11.734119   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:09.999598   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:10.012258   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:10.012328   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:10.047975   62386 cri.go:89] found id: ""
	I0912 23:05:10.048002   62386 logs.go:276] 0 containers: []
	W0912 23:05:10.048011   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:10.048018   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:10.048074   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:10.081827   62386 cri.go:89] found id: ""
	I0912 23:05:10.081856   62386 logs.go:276] 0 containers: []
	W0912 23:05:10.081866   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:10.081872   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:10.081942   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:10.115594   62386 cri.go:89] found id: ""
	I0912 23:05:10.115625   62386 logs.go:276] 0 containers: []
	W0912 23:05:10.115635   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:10.115642   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:10.115692   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:10.147412   62386 cri.go:89] found id: ""
	I0912 23:05:10.147442   62386 logs.go:276] 0 containers: []
	W0912 23:05:10.147452   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:10.147460   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:10.147516   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:10.181118   62386 cri.go:89] found id: ""
	I0912 23:05:10.181147   62386 logs.go:276] 0 containers: []
	W0912 23:05:10.181157   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:10.181164   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:10.181228   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:10.214240   62386 cri.go:89] found id: ""
	I0912 23:05:10.214267   62386 logs.go:276] 0 containers: []
	W0912 23:05:10.214277   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:10.214284   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:10.214352   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:10.248497   62386 cri.go:89] found id: ""
	I0912 23:05:10.248522   62386 logs.go:276] 0 containers: []
	W0912 23:05:10.248530   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:10.248543   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:10.248610   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:10.280864   62386 cri.go:89] found id: ""
	I0912 23:05:10.280892   62386 logs.go:276] 0 containers: []
	W0912 23:05:10.280902   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:10.280913   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:10.280927   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:10.318517   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:10.318542   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:10.370087   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:10.370123   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:10.385213   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:10.385247   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:10.448226   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:10.448246   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:10.448257   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:13.027828   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:13.040546   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:13.040620   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:13.073501   62386 cri.go:89] found id: ""
	I0912 23:05:13.073525   62386 logs.go:276] 0 containers: []
	W0912 23:05:13.073533   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:13.073538   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:13.073584   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:13.105790   62386 cri.go:89] found id: ""
	I0912 23:05:13.105819   62386 logs.go:276] 0 containers: []
	W0912 23:05:13.105830   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:13.105836   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:13.105898   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:13.139307   62386 cri.go:89] found id: ""
	I0912 23:05:13.139331   62386 logs.go:276] 0 containers: []
	W0912 23:05:13.139338   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:13.139344   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:13.139403   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:13.171019   62386 cri.go:89] found id: ""
	I0912 23:05:13.171044   62386 logs.go:276] 0 containers: []
	W0912 23:05:13.171053   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:13.171060   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:13.171119   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:13.202372   62386 cri.go:89] found id: ""
	I0912 23:05:13.202412   62386 logs.go:276] 0 containers: []
	W0912 23:05:13.202423   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:13.202431   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:13.202481   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:13.234046   62386 cri.go:89] found id: ""
	I0912 23:05:13.234069   62386 logs.go:276] 0 containers: []
	W0912 23:05:13.234076   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:13.234083   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:13.234138   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:13.265577   62386 cri.go:89] found id: ""
	I0912 23:05:13.265604   62386 logs.go:276] 0 containers: []
	W0912 23:05:13.265632   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:13.265641   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:13.265696   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:13.303462   62386 cri.go:89] found id: ""
	I0912 23:05:13.303489   62386 logs.go:276] 0 containers: []
	W0912 23:05:13.303499   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:13.303521   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:13.303536   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:13.378844   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:13.378867   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:13.378883   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:13.464768   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:13.464806   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:13.502736   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:13.502764   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:13.553473   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:13.553503   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:12.473027   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:14.973842   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:13.876334   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:15.877134   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:14.234722   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:16.734222   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:16.067463   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:16.081169   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:16.081269   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:16.115663   62386 cri.go:89] found id: ""
	I0912 23:05:16.115688   62386 logs.go:276] 0 containers: []
	W0912 23:05:16.115696   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:16.115705   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:16.115761   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:16.153429   62386 cri.go:89] found id: ""
	I0912 23:05:16.153460   62386 logs.go:276] 0 containers: []
	W0912 23:05:16.153469   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:16.153476   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:16.153535   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:16.187935   62386 cri.go:89] found id: ""
	I0912 23:05:16.187957   62386 logs.go:276] 0 containers: []
	W0912 23:05:16.187965   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:16.187971   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:16.188029   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:16.221249   62386 cri.go:89] found id: ""
	I0912 23:05:16.221273   62386 logs.go:276] 0 containers: []
	W0912 23:05:16.221281   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:16.221287   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:16.221336   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:16.256441   62386 cri.go:89] found id: ""
	I0912 23:05:16.256466   62386 logs.go:276] 0 containers: []
	W0912 23:05:16.256474   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:16.256479   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:16.256546   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:16.290930   62386 cri.go:89] found id: ""
	I0912 23:05:16.290963   62386 logs.go:276] 0 containers: []
	W0912 23:05:16.290976   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:16.290985   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:16.291039   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:16.326665   62386 cri.go:89] found id: ""
	I0912 23:05:16.326689   62386 logs.go:276] 0 containers: []
	W0912 23:05:16.326697   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:16.326702   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:16.326749   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:16.365418   62386 cri.go:89] found id: ""
	I0912 23:05:16.365441   62386 logs.go:276] 0 containers: []
	W0912 23:05:16.365448   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:16.365458   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:16.365469   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:16.420003   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:16.420039   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:16.434561   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:16.434595   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:16.505201   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:16.505224   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:16.505295   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:16.584877   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:16.584914   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:19.121479   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:19.134519   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:19.134586   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:19.170401   62386 cri.go:89] found id: ""
	I0912 23:05:19.170433   62386 logs.go:276] 0 containers: []
	W0912 23:05:19.170444   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:19.170455   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:19.170530   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:19.204750   62386 cri.go:89] found id: ""
	I0912 23:05:19.204779   62386 logs.go:276] 0 containers: []
	W0912 23:05:19.204790   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:19.204797   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:19.204862   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:19.243938   62386 cri.go:89] found id: ""
	I0912 23:05:19.243966   62386 logs.go:276] 0 containers: []
	W0912 23:05:19.243975   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:19.243983   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:19.244041   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:19.284424   62386 cri.go:89] found id: ""
	I0912 23:05:19.284453   62386 logs.go:276] 0 containers: []
	W0912 23:05:19.284463   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:19.284469   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:19.284535   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:19.318962   62386 cri.go:89] found id: ""
	I0912 23:05:19.318990   62386 logs.go:276] 0 containers: []
	W0912 23:05:19.319000   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:19.319011   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:19.319068   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:17.474175   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:19.474829   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:18.376670   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:20.876863   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:19.234144   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:21.734549   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:19.356456   62386 cri.go:89] found id: ""
	I0912 23:05:19.356487   62386 logs.go:276] 0 containers: []
	W0912 23:05:19.356498   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:19.356505   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:19.356587   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:19.390344   62386 cri.go:89] found id: ""
	I0912 23:05:19.390369   62386 logs.go:276] 0 containers: []
	W0912 23:05:19.390377   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:19.390382   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:19.390429   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:19.425481   62386 cri.go:89] found id: ""
	I0912 23:05:19.425507   62386 logs.go:276] 0 containers: []
	W0912 23:05:19.425528   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:19.425536   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:19.425553   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:19.482051   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:19.482081   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:19.495732   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:19.495758   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:19.565385   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:19.565411   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:19.565428   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:19.640053   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:19.640084   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:22.179292   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:22.191905   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:22.191979   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:22.231402   62386 cri.go:89] found id: ""
	I0912 23:05:22.231429   62386 logs.go:276] 0 containers: []
	W0912 23:05:22.231439   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:22.231446   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:22.231501   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:22.265310   62386 cri.go:89] found id: ""
	I0912 23:05:22.265343   62386 logs.go:276] 0 containers: []
	W0912 23:05:22.265351   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:22.265356   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:22.265425   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:22.297487   62386 cri.go:89] found id: ""
	I0912 23:05:22.297516   62386 logs.go:276] 0 containers: []
	W0912 23:05:22.297532   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:22.297540   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:22.297598   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:22.335344   62386 cri.go:89] found id: ""
	I0912 23:05:22.335374   62386 logs.go:276] 0 containers: []
	W0912 23:05:22.335384   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:22.335391   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:22.335449   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:22.376379   62386 cri.go:89] found id: ""
	I0912 23:05:22.376404   62386 logs.go:276] 0 containers: []
	W0912 23:05:22.376413   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:22.376421   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:22.376484   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:22.416121   62386 cri.go:89] found id: ""
	I0912 23:05:22.416147   62386 logs.go:276] 0 containers: []
	W0912 23:05:22.416154   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:22.416160   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:22.416217   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:22.475037   62386 cri.go:89] found id: ""
	I0912 23:05:22.475114   62386 logs.go:276] 0 containers: []
	W0912 23:05:22.475127   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:22.475143   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:22.475207   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:22.509756   62386 cri.go:89] found id: ""
	I0912 23:05:22.509784   62386 logs.go:276] 0 containers: []
	W0912 23:05:22.509794   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:22.509804   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:22.509823   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:22.559071   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:22.559112   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:22.571951   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:22.571980   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:22.643017   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:22.643034   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:22.643045   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:22.728074   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:22.728113   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:21.475126   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:23.975217   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:22.876979   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:24.877525   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:26.879248   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:24.235855   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:26.734384   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:25.268293   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:25.281825   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:25.281906   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:25.315282   62386 cri.go:89] found id: ""
	I0912 23:05:25.315318   62386 logs.go:276] 0 containers: []
	W0912 23:05:25.315328   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:25.315336   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:25.315385   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:25.348647   62386 cri.go:89] found id: ""
	I0912 23:05:25.348679   62386 logs.go:276] 0 containers: []
	W0912 23:05:25.348690   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:25.348697   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:25.348758   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:25.382266   62386 cri.go:89] found id: ""
	I0912 23:05:25.382294   62386 logs.go:276] 0 containers: []
	W0912 23:05:25.382304   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:25.382311   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:25.382378   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:25.420016   62386 cri.go:89] found id: ""
	I0912 23:05:25.420044   62386 logs.go:276] 0 containers: []
	W0912 23:05:25.420056   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:25.420063   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:25.420126   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:25.456435   62386 cri.go:89] found id: ""
	I0912 23:05:25.456457   62386 logs.go:276] 0 containers: []
	W0912 23:05:25.456465   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:25.456470   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:25.456539   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:25.491658   62386 cri.go:89] found id: ""
	I0912 23:05:25.491715   62386 logs.go:276] 0 containers: []
	W0912 23:05:25.491729   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:25.491737   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:25.491790   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:25.526948   62386 cri.go:89] found id: ""
	I0912 23:05:25.526980   62386 logs.go:276] 0 containers: []
	W0912 23:05:25.526991   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:25.526998   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:25.527064   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:25.560291   62386 cri.go:89] found id: ""
	I0912 23:05:25.560323   62386 logs.go:276] 0 containers: []
	W0912 23:05:25.560345   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:25.560357   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:25.560372   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:25.612232   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:25.612276   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:25.626991   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:25.627028   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:25.695005   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:25.695038   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:25.695055   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:25.784310   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:25.784345   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:28.331410   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:28.343903   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:28.343967   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:28.380946   62386 cri.go:89] found id: ""
	I0912 23:05:28.380973   62386 logs.go:276] 0 containers: []
	W0912 23:05:28.380979   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:28.380985   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:28.381039   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:28.415013   62386 cri.go:89] found id: ""
	I0912 23:05:28.415042   62386 logs.go:276] 0 containers: []
	W0912 23:05:28.415052   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:28.415059   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:28.415120   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:28.451060   62386 cri.go:89] found id: ""
	I0912 23:05:28.451093   62386 logs.go:276] 0 containers: []
	W0912 23:05:28.451105   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:28.451113   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:28.451171   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:28.485664   62386 cri.go:89] found id: ""
	I0912 23:05:28.485693   62386 logs.go:276] 0 containers: []
	W0912 23:05:28.485704   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:28.485712   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:28.485774   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:28.520307   62386 cri.go:89] found id: ""
	I0912 23:05:28.520338   62386 logs.go:276] 0 containers: []
	W0912 23:05:28.520349   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:28.520359   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:28.520417   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:28.553111   62386 cri.go:89] found id: ""
	I0912 23:05:28.553139   62386 logs.go:276] 0 containers: []
	W0912 23:05:28.553147   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:28.553152   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:28.553208   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:28.586778   62386 cri.go:89] found id: ""
	I0912 23:05:28.586808   62386 logs.go:276] 0 containers: []
	W0912 23:05:28.586816   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:28.586822   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:28.586874   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:28.620760   62386 cri.go:89] found id: ""
	I0912 23:05:28.620784   62386 logs.go:276] 0 containers: []
	W0912 23:05:28.620791   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:28.620799   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:28.620811   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:28.701431   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:28.701481   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:28.741398   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:28.741431   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:28.793431   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:28.793469   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:28.809572   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:28.809600   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:28.894914   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:26.473222   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:28.474342   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:29.377090   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:31.378238   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:29.234479   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:31.734265   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:31.395663   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:31.408079   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:31.408160   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:31.445176   62386 cri.go:89] found id: ""
	I0912 23:05:31.445207   62386 logs.go:276] 0 containers: []
	W0912 23:05:31.445215   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:31.445221   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:31.445280   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:31.483446   62386 cri.go:89] found id: ""
	I0912 23:05:31.483472   62386 logs.go:276] 0 containers: []
	W0912 23:05:31.483480   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:31.483486   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:31.483544   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:31.519958   62386 cri.go:89] found id: ""
	I0912 23:05:31.519989   62386 logs.go:276] 0 containers: []
	W0912 23:05:31.519997   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:31.520003   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:31.520057   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:31.556719   62386 cri.go:89] found id: ""
	I0912 23:05:31.556748   62386 logs.go:276] 0 containers: []
	W0912 23:05:31.556759   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:31.556771   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:31.556832   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:31.596465   62386 cri.go:89] found id: ""
	I0912 23:05:31.596491   62386 logs.go:276] 0 containers: []
	W0912 23:05:31.596502   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:31.596508   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:31.596572   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:31.634562   62386 cri.go:89] found id: ""
	I0912 23:05:31.634592   62386 logs.go:276] 0 containers: []
	W0912 23:05:31.634601   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:31.634607   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:31.634665   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:31.669305   62386 cri.go:89] found id: ""
	I0912 23:05:31.669337   62386 logs.go:276] 0 containers: []
	W0912 23:05:31.669348   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:31.669356   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:31.669422   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:31.703081   62386 cri.go:89] found id: ""
	I0912 23:05:31.703111   62386 logs.go:276] 0 containers: []
	W0912 23:05:31.703121   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:31.703133   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:31.703148   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:31.742613   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:31.742635   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:31.797827   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:31.797872   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:31.811970   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:31.811999   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:31.888872   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:31.888896   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:31.888910   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:30.974024   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:32.974606   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:35.473280   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:33.876698   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:35.877749   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:33.734760   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:36.233363   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:34.469724   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:34.483511   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:34.483579   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:34.516198   62386 cri.go:89] found id: ""
	I0912 23:05:34.516222   62386 logs.go:276] 0 containers: []
	W0912 23:05:34.516229   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:34.516235   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:34.516301   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:34.550166   62386 cri.go:89] found id: ""
	I0912 23:05:34.550199   62386 logs.go:276] 0 containers: []
	W0912 23:05:34.550210   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:34.550218   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:34.550274   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:34.593361   62386 cri.go:89] found id: ""
	I0912 23:05:34.593401   62386 logs.go:276] 0 containers: []
	W0912 23:05:34.593412   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:34.593420   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:34.593483   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:34.639593   62386 cri.go:89] found id: ""
	I0912 23:05:34.639633   62386 logs.go:276] 0 containers: []
	W0912 23:05:34.639653   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:34.639661   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:34.639729   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:34.690382   62386 cri.go:89] found id: ""
	I0912 23:05:34.690410   62386 logs.go:276] 0 containers: []
	W0912 23:05:34.690417   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:34.690423   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:34.690483   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:34.727943   62386 cri.go:89] found id: ""
	I0912 23:05:34.727970   62386 logs.go:276] 0 containers: []
	W0912 23:05:34.727978   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:34.727983   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:34.728051   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:34.765558   62386 cri.go:89] found id: ""
	I0912 23:05:34.765586   62386 logs.go:276] 0 containers: []
	W0912 23:05:34.765593   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:34.765598   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:34.765663   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:34.801455   62386 cri.go:89] found id: ""
	I0912 23:05:34.801484   62386 logs.go:276] 0 containers: []
	W0912 23:05:34.801492   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:34.801500   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:34.801511   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:34.880260   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:34.880295   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:34.922827   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:34.922855   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:34.974609   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:34.974639   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:34.987945   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:34.987972   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:35.062008   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:37.562965   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:37.575149   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:37.575226   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:37.611980   62386 cri.go:89] found id: ""
	I0912 23:05:37.612014   62386 logs.go:276] 0 containers: []
	W0912 23:05:37.612026   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:37.612035   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:37.612102   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:37.645664   62386 cri.go:89] found id: ""
	I0912 23:05:37.645693   62386 logs.go:276] 0 containers: []
	W0912 23:05:37.645703   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:37.645711   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:37.645771   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:37.685333   62386 cri.go:89] found id: ""
	I0912 23:05:37.685356   62386 logs.go:276] 0 containers: []
	W0912 23:05:37.685364   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:37.685369   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:37.685428   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:37.719017   62386 cri.go:89] found id: ""
	I0912 23:05:37.719052   62386 logs.go:276] 0 containers: []
	W0912 23:05:37.719063   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:37.719071   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:37.719133   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:37.751534   62386 cri.go:89] found id: ""
	I0912 23:05:37.751569   62386 logs.go:276] 0 containers: []
	W0912 23:05:37.751579   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:37.751588   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:37.751647   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:37.785583   62386 cri.go:89] found id: ""
	I0912 23:05:37.785608   62386 logs.go:276] 0 containers: []
	W0912 23:05:37.785635   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:37.785642   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:37.785702   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:37.818396   62386 cri.go:89] found id: ""
	I0912 23:05:37.818428   62386 logs.go:276] 0 containers: []
	W0912 23:05:37.818438   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:37.818445   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:37.818504   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:37.853767   62386 cri.go:89] found id: ""
	I0912 23:05:37.853798   62386 logs.go:276] 0 containers: []
	W0912 23:05:37.853806   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:37.853814   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:37.853830   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:37.926273   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:37.926300   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:37.926315   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:38.014243   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:38.014279   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:38.052431   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:38.052455   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:38.103154   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:38.103188   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:37.972774   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:39.973976   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:37.878631   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:40.378366   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:38.234131   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:40.733727   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:40.617399   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:40.629412   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:40.629483   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:40.666668   62386 cri.go:89] found id: ""
	I0912 23:05:40.666693   62386 logs.go:276] 0 containers: []
	W0912 23:05:40.666700   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:40.666706   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:40.666751   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:40.697548   62386 cri.go:89] found id: ""
	I0912 23:05:40.697573   62386 logs.go:276] 0 containers: []
	W0912 23:05:40.697580   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:40.697585   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:40.697659   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:40.729426   62386 cri.go:89] found id: ""
	I0912 23:05:40.729450   62386 logs.go:276] 0 containers: []
	W0912 23:05:40.729458   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:40.729468   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:40.729517   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:40.766769   62386 cri.go:89] found id: ""
	I0912 23:05:40.766793   62386 logs.go:276] 0 containers: []
	W0912 23:05:40.766800   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:40.766804   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:40.766860   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:40.801523   62386 cri.go:89] found id: ""
	I0912 23:05:40.801550   62386 logs.go:276] 0 containers: []
	W0912 23:05:40.801557   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:40.801563   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:40.801641   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:40.839943   62386 cri.go:89] found id: ""
	I0912 23:05:40.839975   62386 logs.go:276] 0 containers: []
	W0912 23:05:40.839987   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:40.839993   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:40.840055   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:40.873231   62386 cri.go:89] found id: ""
	I0912 23:05:40.873260   62386 logs.go:276] 0 containers: []
	W0912 23:05:40.873268   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:40.873276   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:40.873325   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:40.920007   62386 cri.go:89] found id: ""
	I0912 23:05:40.920040   62386 logs.go:276] 0 containers: []
	W0912 23:05:40.920049   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:40.920057   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:40.920069   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:40.972684   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:40.972716   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:40.986768   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:40.986802   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:41.052454   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:41.052479   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:41.052494   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:41.133810   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:41.133850   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:43.672432   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:43.684493   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:43.684552   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:43.718130   62386 cri.go:89] found id: ""
	I0912 23:05:43.718155   62386 logs.go:276] 0 containers: []
	W0912 23:05:43.718163   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:43.718169   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:43.718228   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:43.751866   62386 cri.go:89] found id: ""
	I0912 23:05:43.751895   62386 logs.go:276] 0 containers: []
	W0912 23:05:43.751905   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:43.751912   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:43.751974   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:43.785544   62386 cri.go:89] found id: ""
	I0912 23:05:43.785571   62386 logs.go:276] 0 containers: []
	W0912 23:05:43.785583   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:43.785589   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:43.785664   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:43.820588   62386 cri.go:89] found id: ""
	I0912 23:05:43.820616   62386 logs.go:276] 0 containers: []
	W0912 23:05:43.820624   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:43.820630   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:43.820677   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:43.853567   62386 cri.go:89] found id: ""
	I0912 23:05:43.853600   62386 logs.go:276] 0 containers: []
	W0912 23:05:43.853631   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:43.853640   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:43.853696   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:43.888646   62386 cri.go:89] found id: ""
	I0912 23:05:43.888671   62386 logs.go:276] 0 containers: []
	W0912 23:05:43.888679   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:43.888684   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:43.888731   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:43.922563   62386 cri.go:89] found id: ""
	I0912 23:05:43.922596   62386 logs.go:276] 0 containers: []
	W0912 23:05:43.922607   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:43.922614   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:43.922667   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:43.956786   62386 cri.go:89] found id: ""
	I0912 23:05:43.956817   62386 logs.go:276] 0 containers: []
	W0912 23:05:43.956825   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:43.956834   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:43.956845   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:44.035351   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:44.035388   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:44.073301   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:44.073338   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:44.124754   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:44.124788   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:44.138899   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:44.138924   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:44.208682   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:42.474139   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:44.974214   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:42.876306   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:44.877310   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:46.878568   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:43.233358   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:45.233823   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:47.234529   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:46.709822   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:46.722782   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:46.722905   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:46.767512   62386 cri.go:89] found id: ""
	I0912 23:05:46.767537   62386 logs.go:276] 0 containers: []
	W0912 23:05:46.767545   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:46.767551   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:46.767603   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:46.812486   62386 cri.go:89] found id: ""
	I0912 23:05:46.812523   62386 logs.go:276] 0 containers: []
	W0912 23:05:46.812533   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:46.812541   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:46.812602   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:46.855093   62386 cri.go:89] found id: ""
	I0912 23:05:46.855125   62386 logs.go:276] 0 containers: []
	W0912 23:05:46.855134   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:46.855141   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:46.855214   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:46.899067   62386 cri.go:89] found id: ""
	I0912 23:05:46.899101   62386 logs.go:276] 0 containers: []
	W0912 23:05:46.899113   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:46.899121   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:46.899184   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:46.939775   62386 cri.go:89] found id: ""
	I0912 23:05:46.939802   62386 logs.go:276] 0 containers: []
	W0912 23:05:46.939810   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:46.939816   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:46.939863   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:46.975288   62386 cri.go:89] found id: ""
	I0912 23:05:46.975319   62386 logs.go:276] 0 containers: []
	W0912 23:05:46.975329   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:46.975343   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:46.975426   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:47.012985   62386 cri.go:89] found id: ""
	I0912 23:05:47.013018   62386 logs.go:276] 0 containers: []
	W0912 23:05:47.013030   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:47.013038   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:47.013104   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:47.052124   62386 cri.go:89] found id: ""
	I0912 23:05:47.052154   62386 logs.go:276] 0 containers: []
	W0912 23:05:47.052164   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:47.052175   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:47.052189   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:47.108769   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:47.108811   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:47.124503   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:47.124530   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:47.195340   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:47.195362   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:47.195380   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:47.297155   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:47.297204   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:46.473252   61904 pod_ready.go:82] duration metric: took 4m0.006064954s for pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace to be "Ready" ...
	E0912 23:05:46.473275   61904 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0912 23:05:46.473282   61904 pod_ready.go:39] duration metric: took 4m4.576962836s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 23:05:46.473309   61904 api_server.go:52] waiting for apiserver process to appear ...
	I0912 23:05:46.473336   61904 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:46.473378   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:46.513731   61904 cri.go:89] found id: "115e1e7911747aa5930ece5298af89f0209bf07e0336cd598b8901e2a2af2e09"
	I0912 23:05:46.513759   61904 cri.go:89] found id: ""
	I0912 23:05:46.513768   61904 logs.go:276] 1 containers: [115e1e7911747aa5930ece5298af89f0209bf07e0336cd598b8901e2a2af2e09]
	I0912 23:05:46.513827   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:46.519031   61904 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:46.519099   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:46.560521   61904 cri.go:89] found id: "e099ac110cb9ec18f36a0fbbdb769c4bb0c2642bf081442d3054c280c663730f"
	I0912 23:05:46.560548   61904 cri.go:89] found id: ""
	I0912 23:05:46.560560   61904 logs.go:276] 1 containers: [e099ac110cb9ec18f36a0fbbdb769c4bb0c2642bf081442d3054c280c663730f]
	I0912 23:05:46.560623   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:46.564340   61904 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:46.564399   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:46.598825   61904 cri.go:89] found id: "7841230606dafeab68b501c35a871c84f440d0d9f4cde95a302770671152f168"
	I0912 23:05:46.598848   61904 cri.go:89] found id: ""
	I0912 23:05:46.598857   61904 logs.go:276] 1 containers: [7841230606dafeab68b501c35a871c84f440d0d9f4cde95a302770671152f168]
	I0912 23:05:46.598909   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:46.602944   61904 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:46.603005   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:46.640315   61904 cri.go:89] found id: "dc8c605cca940c298fda4fb0d3a0e55234ab799e5f4d684817c1c33aa6752880"
	I0912 23:05:46.640335   61904 cri.go:89] found id: ""
	I0912 23:05:46.640343   61904 logs.go:276] 1 containers: [dc8c605cca940c298fda4fb0d3a0e55234ab799e5f4d684817c1c33aa6752880]
	I0912 23:05:46.640395   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:46.644061   61904 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:46.644119   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:46.681114   61904 cri.go:89] found id: "0b058233860f229334437280ee5ccf5d203eaf352bae41a6af7d4602b55a3d64"
	I0912 23:05:46.681143   61904 cri.go:89] found id: ""
	I0912 23:05:46.681153   61904 logs.go:276] 1 containers: [0b058233860f229334437280ee5ccf5d203eaf352bae41a6af7d4602b55a3d64]
	I0912 23:05:46.681214   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:46.685151   61904 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:46.685223   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:46.723129   61904 cri.go:89] found id: "54dd60703518d12cf3cb62102bf8bec31d1e6b0f218f4c26391b234d58111e31"
	I0912 23:05:46.723150   61904 cri.go:89] found id: ""
	I0912 23:05:46.723160   61904 logs.go:276] 1 containers: [54dd60703518d12cf3cb62102bf8bec31d1e6b0f218f4c26391b234d58111e31]
	I0912 23:05:46.723208   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:46.727959   61904 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:46.728021   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:46.770194   61904 cri.go:89] found id: ""
	I0912 23:05:46.770219   61904 logs.go:276] 0 containers: []
	W0912 23:05:46.770229   61904 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:46.770236   61904 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0912 23:05:46.770296   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0912 23:05:46.819004   61904 cri.go:89] found id: "0e48efc9ba5a488b4984057a9785fbda6fcefcea047948ac3c83f09afd28efdb"
	I0912 23:05:46.819031   61904 cri.go:89] found id: "fdb0e5ac691d286cca5fe46e3435ece952c4b9cdc7df3481fec68adf1403689f"
	I0912 23:05:46.819037   61904 cri.go:89] found id: ""
	I0912 23:05:46.819045   61904 logs.go:276] 2 containers: [0e48efc9ba5a488b4984057a9785fbda6fcefcea047948ac3c83f09afd28efdb fdb0e5ac691d286cca5fe46e3435ece952c4b9cdc7df3481fec68adf1403689f]
	I0912 23:05:46.819105   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:46.824442   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:46.829336   61904 logs.go:123] Gathering logs for coredns [7841230606dafeab68b501c35a871c84f440d0d9f4cde95a302770671152f168] ...
	I0912 23:05:46.829367   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7841230606dafeab68b501c35a871c84f440d0d9f4cde95a302770671152f168"
	I0912 23:05:46.876170   61904 logs.go:123] Gathering logs for kube-controller-manager [54dd60703518d12cf3cb62102bf8bec31d1e6b0f218f4c26391b234d58111e31] ...
	I0912 23:05:46.876205   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54dd60703518d12cf3cb62102bf8bec31d1e6b0f218f4c26391b234d58111e31"
	I0912 23:05:46.944290   61904 logs.go:123] Gathering logs for storage-provisioner [0e48efc9ba5a488b4984057a9785fbda6fcefcea047948ac3c83f09afd28efdb] ...
	I0912 23:05:46.944336   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e48efc9ba5a488b4984057a9785fbda6fcefcea047948ac3c83f09afd28efdb"
	I0912 23:05:46.991117   61904 logs.go:123] Gathering logs for container status ...
	I0912 23:05:46.991154   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:47.041776   61904 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:47.041805   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:47.125682   61904 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:47.125720   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:47.141463   61904 logs.go:123] Gathering logs for etcd [e099ac110cb9ec18f36a0fbbdb769c4bb0c2642bf081442d3054c280c663730f] ...
	I0912 23:05:47.141505   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e099ac110cb9ec18f36a0fbbdb769c4bb0c2642bf081442d3054c280c663730f"
	I0912 23:05:47.193432   61904 logs.go:123] Gathering logs for kube-scheduler [dc8c605cca940c298fda4fb0d3a0e55234ab799e5f4d684817c1c33aa6752880] ...
	I0912 23:05:47.193477   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dc8c605cca940c298fda4fb0d3a0e55234ab799e5f4d684817c1c33aa6752880"
	I0912 23:05:47.238975   61904 logs.go:123] Gathering logs for kube-proxy [0b058233860f229334437280ee5ccf5d203eaf352bae41a6af7d4602b55a3d64] ...
	I0912 23:05:47.239000   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b058233860f229334437280ee5ccf5d203eaf352bae41a6af7d4602b55a3d64"
	I0912 23:05:47.282299   61904 logs.go:123] Gathering logs for storage-provisioner [fdb0e5ac691d286cca5fe46e3435ece952c4b9cdc7df3481fec68adf1403689f] ...
	I0912 23:05:47.282340   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fdb0e5ac691d286cca5fe46e3435ece952c4b9cdc7df3481fec68adf1403689f"
	I0912 23:05:47.322575   61904 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:47.322605   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:47.783079   61904 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:47.783116   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 23:05:47.909961   61904 logs.go:123] Gathering logs for kube-apiserver [115e1e7911747aa5930ece5298af89f0209bf07e0336cd598b8901e2a2af2e09] ...
	I0912 23:05:47.909994   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 115e1e7911747aa5930ece5298af89f0209bf07e0336cd598b8901e2a2af2e09"
	I0912 23:05:50.466816   61904 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:50.483164   61904 api_server.go:72] duration metric: took 4m15.815867821s to wait for apiserver process to appear ...
	I0912 23:05:50.483189   61904 api_server.go:88] waiting for apiserver healthz status ...
	I0912 23:05:50.483219   61904 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:50.483265   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:50.521905   61904 cri.go:89] found id: "115e1e7911747aa5930ece5298af89f0209bf07e0336cd598b8901e2a2af2e09"
	I0912 23:05:50.521932   61904 cri.go:89] found id: ""
	I0912 23:05:50.521942   61904 logs.go:276] 1 containers: [115e1e7911747aa5930ece5298af89f0209bf07e0336cd598b8901e2a2af2e09]
	I0912 23:05:50.522001   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:50.526289   61904 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:50.526355   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:50.565340   61904 cri.go:89] found id: "e099ac110cb9ec18f36a0fbbdb769c4bb0c2642bf081442d3054c280c663730f"
	I0912 23:05:50.565367   61904 cri.go:89] found id: ""
	I0912 23:05:50.565376   61904 logs.go:276] 1 containers: [e099ac110cb9ec18f36a0fbbdb769c4bb0c2642bf081442d3054c280c663730f]
	I0912 23:05:50.565434   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:50.569231   61904 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:50.569310   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:50.607696   61904 cri.go:89] found id: "7841230606dafeab68b501c35a871c84f440d0d9f4cde95a302770671152f168"
	I0912 23:05:50.607721   61904 cri.go:89] found id: ""
	I0912 23:05:50.607729   61904 logs.go:276] 1 containers: [7841230606dafeab68b501c35a871c84f440d0d9f4cde95a302770671152f168]
	I0912 23:05:50.607771   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:50.611696   61904 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:50.611753   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:50.647554   61904 cri.go:89] found id: "dc8c605cca940c298fda4fb0d3a0e55234ab799e5f4d684817c1c33aa6752880"
	I0912 23:05:50.647580   61904 cri.go:89] found id: ""
	I0912 23:05:50.647590   61904 logs.go:276] 1 containers: [dc8c605cca940c298fda4fb0d3a0e55234ab799e5f4d684817c1c33aa6752880]
	I0912 23:05:50.647649   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:50.652065   61904 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:50.652128   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:50.691276   61904 cri.go:89] found id: "0b058233860f229334437280ee5ccf5d203eaf352bae41a6af7d4602b55a3d64"
	I0912 23:05:50.691300   61904 cri.go:89] found id: ""
	I0912 23:05:50.691307   61904 logs.go:276] 1 containers: [0b058233860f229334437280ee5ccf5d203eaf352bae41a6af7d4602b55a3d64]
	I0912 23:05:50.691348   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:50.696475   61904 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:50.696537   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:50.732677   61904 cri.go:89] found id: "54dd60703518d12cf3cb62102bf8bec31d1e6b0f218f4c26391b234d58111e31"
	I0912 23:05:50.732704   61904 cri.go:89] found id: ""
	I0912 23:05:50.732714   61904 logs.go:276] 1 containers: [54dd60703518d12cf3cb62102bf8bec31d1e6b0f218f4c26391b234d58111e31]
	I0912 23:05:50.732771   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:50.737450   61904 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:50.737503   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:50.770732   61904 cri.go:89] found id: ""
	I0912 23:05:50.770762   61904 logs.go:276] 0 containers: []
	W0912 23:05:50.770773   61904 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:50.770781   61904 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0912 23:05:50.770830   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0912 23:05:49.376457   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:51.378141   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:49.732832   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:51.734674   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:49.841253   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:49.854221   62386 kubeadm.go:597] duration metric: took 4m1.913192999s to restartPrimaryControlPlane
	W0912 23:05:49.854297   62386 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0912 23:05:49.854335   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0912 23:05:51.221029   62386 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.366663525s)
	I0912 23:05:51.221129   62386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 23:05:51.238493   62386 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0912 23:05:51.250943   62386 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0912 23:05:51.264325   62386 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0912 23:05:51.264348   62386 kubeadm.go:157] found existing configuration files:
	
	I0912 23:05:51.264393   62386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0912 23:05:51.273514   62386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0912 23:05:51.273570   62386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0912 23:05:51.282967   62386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0912 23:05:51.291978   62386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0912 23:05:51.292037   62386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0912 23:05:51.301520   62386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0912 23:05:51.310439   62386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0912 23:05:51.310530   62386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0912 23:05:51.319803   62386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0912 23:05:51.328881   62386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0912 23:05:51.328956   62386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0912 23:05:51.337946   62386 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0912 23:05:51.565945   62386 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0912 23:05:50.804311   61904 cri.go:89] found id: "0e48efc9ba5a488b4984057a9785fbda6fcefcea047948ac3c83f09afd28efdb"
	I0912 23:05:50.804337   61904 cri.go:89] found id: "fdb0e5ac691d286cca5fe46e3435ece952c4b9cdc7df3481fec68adf1403689f"
	I0912 23:05:50.804342   61904 cri.go:89] found id: ""
	I0912 23:05:50.804349   61904 logs.go:276] 2 containers: [0e48efc9ba5a488b4984057a9785fbda6fcefcea047948ac3c83f09afd28efdb fdb0e5ac691d286cca5fe46e3435ece952c4b9cdc7df3481fec68adf1403689f]
	I0912 23:05:50.804396   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:50.808236   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:50.812298   61904 logs.go:123] Gathering logs for storage-provisioner [fdb0e5ac691d286cca5fe46e3435ece952c4b9cdc7df3481fec68adf1403689f] ...
	I0912 23:05:50.812316   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fdb0e5ac691d286cca5fe46e3435ece952c4b9cdc7df3481fec68adf1403689f"
	I0912 23:05:50.846429   61904 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:50.846457   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:50.917118   61904 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:50.917152   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:50.931954   61904 logs.go:123] Gathering logs for kube-apiserver [115e1e7911747aa5930ece5298af89f0209bf07e0336cd598b8901e2a2af2e09] ...
	I0912 23:05:50.931992   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 115e1e7911747aa5930ece5298af89f0209bf07e0336cd598b8901e2a2af2e09"
	I0912 23:05:50.979688   61904 logs.go:123] Gathering logs for etcd [e099ac110cb9ec18f36a0fbbdb769c4bb0c2642bf081442d3054c280c663730f] ...
	I0912 23:05:50.979727   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e099ac110cb9ec18f36a0fbbdb769c4bb0c2642bf081442d3054c280c663730f"
	I0912 23:05:51.026392   61904 logs.go:123] Gathering logs for coredns [7841230606dafeab68b501c35a871c84f440d0d9f4cde95a302770671152f168] ...
	I0912 23:05:51.026421   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7841230606dafeab68b501c35a871c84f440d0d9f4cde95a302770671152f168"
	I0912 23:05:51.063302   61904 logs.go:123] Gathering logs for storage-provisioner [0e48efc9ba5a488b4984057a9785fbda6fcefcea047948ac3c83f09afd28efdb] ...
	I0912 23:05:51.063330   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e48efc9ba5a488b4984057a9785fbda6fcefcea047948ac3c83f09afd28efdb"
	I0912 23:05:51.096593   61904 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:51.096626   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 23:05:51.198824   61904 logs.go:123] Gathering logs for kube-scheduler [dc8c605cca940c298fda4fb0d3a0e55234ab799e5f4d684817c1c33aa6752880] ...
	I0912 23:05:51.198856   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dc8c605cca940c298fda4fb0d3a0e55234ab799e5f4d684817c1c33aa6752880"
	I0912 23:05:51.244247   61904 logs.go:123] Gathering logs for kube-proxy [0b058233860f229334437280ee5ccf5d203eaf352bae41a6af7d4602b55a3d64] ...
	I0912 23:05:51.244271   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b058233860f229334437280ee5ccf5d203eaf352bae41a6af7d4602b55a3d64"
	I0912 23:05:51.284694   61904 logs.go:123] Gathering logs for kube-controller-manager [54dd60703518d12cf3cb62102bf8bec31d1e6b0f218f4c26391b234d58111e31] ...
	I0912 23:05:51.284717   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54dd60703518d12cf3cb62102bf8bec31d1e6b0f218f4c26391b234d58111e31"
	I0912 23:05:51.340541   61904 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:51.340569   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:51.754823   61904 logs.go:123] Gathering logs for container status ...
	I0912 23:05:51.754864   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:54.294987   61904 api_server.go:253] Checking apiserver healthz at https://192.168.72.96:8443/healthz ...
	I0912 23:05:54.300314   61904 api_server.go:279] https://192.168.72.96:8443/healthz returned 200:
	ok
	I0912 23:05:54.301385   61904 api_server.go:141] control plane version: v1.31.1
	I0912 23:05:54.301413   61904 api_server.go:131] duration metric: took 3.818216539s to wait for apiserver health ...
	I0912 23:05:54.301421   61904 system_pods.go:43] waiting for kube-system pods to appear ...
	I0912 23:05:54.301441   61904 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:54.301491   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:54.342980   61904 cri.go:89] found id: "115e1e7911747aa5930ece5298af89f0209bf07e0336cd598b8901e2a2af2e09"
	I0912 23:05:54.343001   61904 cri.go:89] found id: ""
	I0912 23:05:54.343010   61904 logs.go:276] 1 containers: [115e1e7911747aa5930ece5298af89f0209bf07e0336cd598b8901e2a2af2e09]
	I0912 23:05:54.343063   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:54.347269   61904 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:54.347352   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:54.386656   61904 cri.go:89] found id: "e099ac110cb9ec18f36a0fbbdb769c4bb0c2642bf081442d3054c280c663730f"
	I0912 23:05:54.386674   61904 cri.go:89] found id: ""
	I0912 23:05:54.386681   61904 logs.go:276] 1 containers: [e099ac110cb9ec18f36a0fbbdb769c4bb0c2642bf081442d3054c280c663730f]
	I0912 23:05:54.386755   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:54.390707   61904 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:54.390769   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:54.433746   61904 cri.go:89] found id: "7841230606dafeab68b501c35a871c84f440d0d9f4cde95a302770671152f168"
	I0912 23:05:54.433773   61904 cri.go:89] found id: ""
	I0912 23:05:54.433782   61904 logs.go:276] 1 containers: [7841230606dafeab68b501c35a871c84f440d0d9f4cde95a302770671152f168]
	I0912 23:05:54.433844   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:54.438175   61904 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:54.438231   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:54.475067   61904 cri.go:89] found id: "dc8c605cca940c298fda4fb0d3a0e55234ab799e5f4d684817c1c33aa6752880"
	I0912 23:05:54.475095   61904 cri.go:89] found id: ""
	I0912 23:05:54.475105   61904 logs.go:276] 1 containers: [dc8c605cca940c298fda4fb0d3a0e55234ab799e5f4d684817c1c33aa6752880]
	I0912 23:05:54.475178   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:54.479308   61904 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:54.479367   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:54.524489   61904 cri.go:89] found id: "0b058233860f229334437280ee5ccf5d203eaf352bae41a6af7d4602b55a3d64"
	I0912 23:05:54.524513   61904 cri.go:89] found id: ""
	I0912 23:05:54.524521   61904 logs.go:276] 1 containers: [0b058233860f229334437280ee5ccf5d203eaf352bae41a6af7d4602b55a3d64]
	I0912 23:05:54.524583   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:54.528854   61904 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:54.528925   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:54.569776   61904 cri.go:89] found id: "54dd60703518d12cf3cb62102bf8bec31d1e6b0f218f4c26391b234d58111e31"
	I0912 23:05:54.569801   61904 cri.go:89] found id: ""
	I0912 23:05:54.569811   61904 logs.go:276] 1 containers: [54dd60703518d12cf3cb62102bf8bec31d1e6b0f218f4c26391b234d58111e31]
	I0912 23:05:54.569865   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:54.574000   61904 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:54.574070   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:54.613184   61904 cri.go:89] found id: ""
	I0912 23:05:54.613212   61904 logs.go:276] 0 containers: []
	W0912 23:05:54.613222   61904 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:54.613229   61904 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0912 23:05:54.613292   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0912 23:05:54.648971   61904 cri.go:89] found id: "0e48efc9ba5a488b4984057a9785fbda6fcefcea047948ac3c83f09afd28efdb"
	I0912 23:05:54.648992   61904 cri.go:89] found id: "fdb0e5ac691d286cca5fe46e3435ece952c4b9cdc7df3481fec68adf1403689f"
	I0912 23:05:54.648997   61904 cri.go:89] found id: ""
	I0912 23:05:54.649006   61904 logs.go:276] 2 containers: [0e48efc9ba5a488b4984057a9785fbda6fcefcea047948ac3c83f09afd28efdb fdb0e5ac691d286cca5fe46e3435ece952c4b9cdc7df3481fec68adf1403689f]
	I0912 23:05:54.649062   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:54.653671   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:54.657535   61904 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:54.657557   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 23:05:54.781055   61904 logs.go:123] Gathering logs for kube-controller-manager [54dd60703518d12cf3cb62102bf8bec31d1e6b0f218f4c26391b234d58111e31] ...
	I0912 23:05:54.781094   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54dd60703518d12cf3cb62102bf8bec31d1e6b0f218f4c26391b234d58111e31"
	I0912 23:05:54.832441   61904 logs.go:123] Gathering logs for container status ...
	I0912 23:05:54.832477   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:54.887662   61904 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:54.887695   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:54.958381   61904 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:54.958417   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:54.973583   61904 logs.go:123] Gathering logs for coredns [7841230606dafeab68b501c35a871c84f440d0d9f4cde95a302770671152f168] ...
	I0912 23:05:54.973609   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7841230606dafeab68b501c35a871c84f440d0d9f4cde95a302770671152f168"
	I0912 23:05:55.022192   61904 logs.go:123] Gathering logs for kube-scheduler [dc8c605cca940c298fda4fb0d3a0e55234ab799e5f4d684817c1c33aa6752880] ...
	I0912 23:05:55.022217   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dc8c605cca940c298fda4fb0d3a0e55234ab799e5f4d684817c1c33aa6752880"
	I0912 23:05:55.059878   61904 logs.go:123] Gathering logs for kube-proxy [0b058233860f229334437280ee5ccf5d203eaf352bae41a6af7d4602b55a3d64] ...
	I0912 23:05:55.059910   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b058233860f229334437280ee5ccf5d203eaf352bae41a6af7d4602b55a3d64"
	I0912 23:05:55.104371   61904 logs.go:123] Gathering logs for storage-provisioner [0e48efc9ba5a488b4984057a9785fbda6fcefcea047948ac3c83f09afd28efdb] ...
	I0912 23:05:55.104399   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e48efc9ba5a488b4984057a9785fbda6fcefcea047948ac3c83f09afd28efdb"
	I0912 23:05:55.139625   61904 logs.go:123] Gathering logs for storage-provisioner [fdb0e5ac691d286cca5fe46e3435ece952c4b9cdc7df3481fec68adf1403689f] ...
	I0912 23:05:55.139656   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fdb0e5ac691d286cca5fe46e3435ece952c4b9cdc7df3481fec68adf1403689f"
	I0912 23:05:55.172414   61904 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:55.172442   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:55.528482   61904 logs.go:123] Gathering logs for kube-apiserver [115e1e7911747aa5930ece5298af89f0209bf07e0336cd598b8901e2a2af2e09] ...
	I0912 23:05:55.528522   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 115e1e7911747aa5930ece5298af89f0209bf07e0336cd598b8901e2a2af2e09"
	I0912 23:05:55.572399   61904 logs.go:123] Gathering logs for etcd [e099ac110cb9ec18f36a0fbbdb769c4bb0c2642bf081442d3054c280c663730f] ...
	I0912 23:05:55.572433   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e099ac110cb9ec18f36a0fbbdb769c4bb0c2642bf081442d3054c280c663730f"
	I0912 23:05:53.876844   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:55.878108   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:54.235375   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:56.733525   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:58.125405   61904 system_pods.go:59] 8 kube-system pods found
	I0912 23:05:58.125436   61904 system_pods.go:61] "coredns-7c65d6cfc9-m8t6h" [93c63198-ebd2-4e88-9be8-912425b1eb84] Running
	I0912 23:05:58.125441   61904 system_pods.go:61] "etcd-embed-certs-378112" [cc716756-abda-447a-ad36-bfc89c129bdf] Running
	I0912 23:05:58.125445   61904 system_pods.go:61] "kube-apiserver-embed-certs-378112" [039a7348-41bf-481f-9218-3ea0c2ff1373] Running
	I0912 23:05:58.125449   61904 system_pods.go:61] "kube-controller-manager-embed-certs-378112" [9bcb8af0-6e4b-405a-94a1-5be70d737cfa] Running
	I0912 23:05:58.125452   61904 system_pods.go:61] "kube-proxy-fvbbq" [b172754e-bb5a-40ba-a9be-a7632081defc] Running
	I0912 23:05:58.125455   61904 system_pods.go:61] "kube-scheduler-embed-certs-378112" [f7cb022f-6c15-4c70-916f-39313199effe] Running
	I0912 23:05:58.125461   61904 system_pods.go:61] "metrics-server-6867b74b74-kvpqz" [04e47cfd-bada-4cbd-8792-db4edebfb282] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0912 23:05:58.125465   61904 system_pods.go:61] "storage-provisioner" [a1840d2a-8e08-4fa2-9ed5-ac96fb0baf4d] Running
	I0912 23:05:58.125472   61904 system_pods.go:74] duration metric: took 3.824046737s to wait for pod list to return data ...
	I0912 23:05:58.125478   61904 default_sa.go:34] waiting for default service account to be created ...
	I0912 23:05:58.128039   61904 default_sa.go:45] found service account: "default"
	I0912 23:05:58.128060   61904 default_sa.go:55] duration metric: took 2.576708ms for default service account to be created ...
	I0912 23:05:58.128067   61904 system_pods.go:116] waiting for k8s-apps to be running ...
	I0912 23:05:58.132607   61904 system_pods.go:86] 8 kube-system pods found
	I0912 23:05:58.132629   61904 system_pods.go:89] "coredns-7c65d6cfc9-m8t6h" [93c63198-ebd2-4e88-9be8-912425b1eb84] Running
	I0912 23:05:58.132634   61904 system_pods.go:89] "etcd-embed-certs-378112" [cc716756-abda-447a-ad36-bfc89c129bdf] Running
	I0912 23:05:58.132638   61904 system_pods.go:89] "kube-apiserver-embed-certs-378112" [039a7348-41bf-481f-9218-3ea0c2ff1373] Running
	I0912 23:05:58.132642   61904 system_pods.go:89] "kube-controller-manager-embed-certs-378112" [9bcb8af0-6e4b-405a-94a1-5be70d737cfa] Running
	I0912 23:05:58.132647   61904 system_pods.go:89] "kube-proxy-fvbbq" [b172754e-bb5a-40ba-a9be-a7632081defc] Running
	I0912 23:05:58.132652   61904 system_pods.go:89] "kube-scheduler-embed-certs-378112" [f7cb022f-6c15-4c70-916f-39313199effe] Running
	I0912 23:05:58.132661   61904 system_pods.go:89] "metrics-server-6867b74b74-kvpqz" [04e47cfd-bada-4cbd-8792-db4edebfb282] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0912 23:05:58.132671   61904 system_pods.go:89] "storage-provisioner" [a1840d2a-8e08-4fa2-9ed5-ac96fb0baf4d] Running
	I0912 23:05:58.132682   61904 system_pods.go:126] duration metric: took 4.609196ms to wait for k8s-apps to be running ...
	I0912 23:05:58.132694   61904 system_svc.go:44] waiting for kubelet service to be running ....
	I0912 23:05:58.132739   61904 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 23:05:58.149020   61904 system_svc.go:56] duration metric: took 16.317773ms WaitForService to wait for kubelet
	I0912 23:05:58.149048   61904 kubeadm.go:582] duration metric: took 4m23.481755577s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 23:05:58.149073   61904 node_conditions.go:102] verifying NodePressure condition ...
	I0912 23:05:58.152519   61904 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0912 23:05:58.152547   61904 node_conditions.go:123] node cpu capacity is 2
	I0912 23:05:58.152559   61904 node_conditions.go:105] duration metric: took 3.480407ms to run NodePressure ...
	I0912 23:05:58.152570   61904 start.go:241] waiting for startup goroutines ...
	I0912 23:05:58.152576   61904 start.go:246] waiting for cluster config update ...
	I0912 23:05:58.152587   61904 start.go:255] writing updated cluster config ...
	I0912 23:05:58.152833   61904 ssh_runner.go:195] Run: rm -f paused
	I0912 23:05:58.203069   61904 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0912 23:05:58.204904   61904 out.go:177] * Done! kubectl is now configured to use "embed-certs-378112" cluster and "default" namespace by default
	I0912 23:05:58.376646   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:00.377105   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:58.733992   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:01.233920   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:02.877229   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:04.877926   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:03.733400   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:05.733949   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:07.377308   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:09.877459   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:08.234361   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:10.732480   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:12.376661   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:14.877753   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:16.877980   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:12.733231   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:14.734774   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:17.233456   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:19.376959   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:21.878279   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:19.234570   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:21.733406   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:24.376731   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:26.377122   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:23.733543   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:25.734296   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:28.877696   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:31.376778   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:28.232623   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:30.233670   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:32.234123   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:33.377208   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:35.877039   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:34.234158   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:36.234309   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:37.877566   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:40.376636   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:38.733567   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:40.734256   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:42.377148   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:44.377925   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:46.877563   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:42.734926   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:45.233731   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:45.727482   61354 pod_ready.go:82] duration metric: took 4m0.000232225s for pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace to be "Ready" ...
	E0912 23:06:45.727510   61354 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace to be "Ready" (will not retry!)
	I0912 23:06:45.727526   61354 pod_ready.go:39] duration metric: took 4m13.050011701s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 23:06:45.727553   61354 kubeadm.go:597] duration metric: took 4m21.402206535s to restartPrimaryControlPlane
	W0912 23:06:45.727638   61354 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0912 23:06:45.727686   61354 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0912 23:06:49.376346   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:51.376720   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:53.877426   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:56.377076   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:58.876146   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:07:00.876887   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:07:02.877032   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:07:04.877344   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:07:07.376495   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:07:09.377212   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:07:11.878788   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:07:11.920816   61354 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.193093675s)
	I0912 23:07:11.920900   61354 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 23:07:11.939101   61354 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0912 23:07:11.950330   61354 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0912 23:07:11.960727   61354 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0912 23:07:11.960753   61354 kubeadm.go:157] found existing configuration files:
	
	I0912 23:07:11.960802   61354 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0912 23:07:11.970932   61354 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0912 23:07:11.970988   61354 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0912 23:07:11.981111   61354 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0912 23:07:11.990384   61354 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0912 23:07:11.990455   61354 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0912 23:07:12.000218   61354 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0912 23:07:12.009191   61354 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0912 23:07:12.009266   61354 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0912 23:07:12.019270   61354 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0912 23:07:12.028102   61354 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0912 23:07:12.028165   61354 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0912 23:07:12.037512   61354 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0912 23:07:12.083528   61354 kubeadm.go:310] W0912 23:07:12.055244    2491 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0912 23:07:12.084358   61354 kubeadm.go:310] W0912 23:07:12.056267    2491 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0912 23:07:12.190683   61354 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0912 23:07:12.377757   62943 pod_ready.go:82] duration metric: took 4m0.007392806s for pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace to be "Ready" ...
	E0912 23:07:12.377785   62943 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0912 23:07:12.377794   62943 pod_ready.go:39] duration metric: took 4m2.807476708s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 23:07:12.377812   62943 api_server.go:52] waiting for apiserver process to appear ...
	I0912 23:07:12.377843   62943 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:07:12.377898   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:07:12.431934   62943 cri.go:89] found id: "3c73944a510413add414948e1c8fd8d71ef732d91b7ddc96646b7ba376f81416"
	I0912 23:07:12.431964   62943 cri.go:89] found id: "00f124dff0f77cf23377830c9dc5e2a8676d1a40ebf70730705d537ba7a4b8d3"
	I0912 23:07:12.431969   62943 cri.go:89] found id: ""
	I0912 23:07:12.431977   62943 logs.go:276] 2 containers: [3c73944a510413add414948e1c8fd8d71ef732d91b7ddc96646b7ba376f81416 00f124dff0f77cf23377830c9dc5e2a8676d1a40ebf70730705d537ba7a4b8d3]
	I0912 23:07:12.432043   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:12.436742   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:12.440569   62943 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:07:12.440626   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:07:12.476994   62943 cri.go:89] found id: "35282e97473f2f870f5366d221b9a40e0a456b6fd44fffe321c9fe160448cc29"
	I0912 23:07:12.477016   62943 cri.go:89] found id: ""
	I0912 23:07:12.477024   62943 logs.go:276] 1 containers: [35282e97473f2f870f5366d221b9a40e0a456b6fd44fffe321c9fe160448cc29]
	I0912 23:07:12.477076   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:12.481585   62943 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:07:12.481661   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:07:12.524772   62943 cri.go:89] found id: "e59d289c9afefef6dd746277d09550e5c7d708d7c1ba9f74b2dd91300f807189"
	I0912 23:07:12.524797   62943 cri.go:89] found id: ""
	I0912 23:07:12.524808   62943 logs.go:276] 1 containers: [e59d289c9afefef6dd746277d09550e5c7d708d7c1ba9f74b2dd91300f807189]
	I0912 23:07:12.524860   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:12.529988   62943 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:07:12.530052   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:07:12.573298   62943 cri.go:89] found id: "3187fdef2bd318bb335be822a4e75cfd0ac02a49607d8e398ab18102a83437ec"
	I0912 23:07:12.573329   62943 cri.go:89] found id: ""
	I0912 23:07:12.573340   62943 logs.go:276] 1 containers: [3187fdef2bd318bb335be822a4e75cfd0ac02a49607d8e398ab18102a83437ec]
	I0912 23:07:12.573400   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:12.579767   62943 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:07:12.579844   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:07:12.624696   62943 cri.go:89] found id: "4c4807559910184befc0b6a9ccb40b1cb9d4997f6b1405b76ba67049ce730f37"
	I0912 23:07:12.624723   62943 cri.go:89] found id: ""
	I0912 23:07:12.624733   62943 logs.go:276] 1 containers: [4c4807559910184befc0b6a9ccb40b1cb9d4997f6b1405b76ba67049ce730f37]
	I0912 23:07:12.624790   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:12.632367   62943 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:07:12.632430   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:07:12.667385   62943 cri.go:89] found id: "eb473fa0b2d91e95a8716ca3d1bad44b431a3dfbfae11984657c16c7392b58f0"
	I0912 23:07:12.667411   62943 cri.go:89] found id: "635fd2c2a6dd2558d88447c857c1c59b9cf6883a458ab9f62fbcd48fc94048f7"
	I0912 23:07:12.667415   62943 cri.go:89] found id: ""
	I0912 23:07:12.667422   62943 logs.go:276] 2 containers: [eb473fa0b2d91e95a8716ca3d1bad44b431a3dfbfae11984657c16c7392b58f0 635fd2c2a6dd2558d88447c857c1c59b9cf6883a458ab9f62fbcd48fc94048f7]
	I0912 23:07:12.667474   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:12.671688   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:12.675901   62943 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:07:12.675964   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:07:12.712909   62943 cri.go:89] found id: ""
	I0912 23:07:12.712944   62943 logs.go:276] 0 containers: []
	W0912 23:07:12.712955   62943 logs.go:278] No container was found matching "kindnet"
	I0912 23:07:12.712962   62943 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0912 23:07:12.713023   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0912 23:07:12.755865   62943 cri.go:89] found id: "3d117ed77ba5f8ccc22fe496a5f719999fd9e893623ba84686634f7b92c09713"
	I0912 23:07:12.755888   62943 cri.go:89] found id: "d40483dfc659456939d7965c1ecf1113c1c38e26fe985bf19f26a0123206ea3a"
	I0912 23:07:12.755894   62943 cri.go:89] found id: ""
	I0912 23:07:12.755903   62943 logs.go:276] 2 containers: [3d117ed77ba5f8ccc22fe496a5f719999fd9e893623ba84686634f7b92c09713 d40483dfc659456939d7965c1ecf1113c1c38e26fe985bf19f26a0123206ea3a]
	I0912 23:07:12.755958   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:12.760095   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:12.763682   62943 logs.go:123] Gathering logs for kube-apiserver [00f124dff0f77cf23377830c9dc5e2a8676d1a40ebf70730705d537ba7a4b8d3] ...
	I0912 23:07:12.763706   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 00f124dff0f77cf23377830c9dc5e2a8676d1a40ebf70730705d537ba7a4b8d3"
	I0912 23:07:12.811915   62943 logs.go:123] Gathering logs for kube-proxy [4c4807559910184befc0b6a9ccb40b1cb9d4997f6b1405b76ba67049ce730f37] ...
	I0912 23:07:12.811949   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c4807559910184befc0b6a9ccb40b1cb9d4997f6b1405b76ba67049ce730f37"
	I0912 23:07:12.846546   62943 logs.go:123] Gathering logs for kube-controller-manager [eb473fa0b2d91e95a8716ca3d1bad44b431a3dfbfae11984657c16c7392b58f0] ...
	I0912 23:07:12.846582   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb473fa0b2d91e95a8716ca3d1bad44b431a3dfbfae11984657c16c7392b58f0"
	I0912 23:07:12.904475   62943 logs.go:123] Gathering logs for kubelet ...
	I0912 23:07:12.904518   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:07:12.984863   62943 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:07:12.984898   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 23:07:13.116848   62943 logs.go:123] Gathering logs for etcd [35282e97473f2f870f5366d221b9a40e0a456b6fd44fffe321c9fe160448cc29] ...
	I0912 23:07:13.116879   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35282e97473f2f870f5366d221b9a40e0a456b6fd44fffe321c9fe160448cc29"
	I0912 23:07:13.165949   62943 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:07:13.165978   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:07:13.704372   62943 logs.go:123] Gathering logs for container status ...
	I0912 23:07:13.704424   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:07:13.757082   62943 logs.go:123] Gathering logs for kube-apiserver [3c73944a510413add414948e1c8fd8d71ef732d91b7ddc96646b7ba376f81416] ...
	I0912 23:07:13.757123   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c73944a510413add414948e1c8fd8d71ef732d91b7ddc96646b7ba376f81416"
	I0912 23:07:13.802951   62943 logs.go:123] Gathering logs for storage-provisioner [3d117ed77ba5f8ccc22fe496a5f719999fd9e893623ba84686634f7b92c09713] ...
	I0912 23:07:13.802988   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d117ed77ba5f8ccc22fe496a5f719999fd9e893623ba84686634f7b92c09713"
	I0912 23:07:13.838952   62943 logs.go:123] Gathering logs for dmesg ...
	I0912 23:07:13.838989   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:07:13.852983   62943 logs.go:123] Gathering logs for coredns [e59d289c9afefef6dd746277d09550e5c7d708d7c1ba9f74b2dd91300f807189] ...
	I0912 23:07:13.853015   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e59d289c9afefef6dd746277d09550e5c7d708d7c1ba9f74b2dd91300f807189"
	I0912 23:07:13.898651   62943 logs.go:123] Gathering logs for kube-scheduler [3187fdef2bd318bb335be822a4e75cfd0ac02a49607d8e398ab18102a83437ec] ...
	I0912 23:07:13.898679   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3187fdef2bd318bb335be822a4e75cfd0ac02a49607d8e398ab18102a83437ec"
	I0912 23:07:13.943800   62943 logs.go:123] Gathering logs for kube-controller-manager [635fd2c2a6dd2558d88447c857c1c59b9cf6883a458ab9f62fbcd48fc94048f7] ...
	I0912 23:07:13.943838   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 635fd2c2a6dd2558d88447c857c1c59b9cf6883a458ab9f62fbcd48fc94048f7"
	I0912 23:07:13.984960   62943 logs.go:123] Gathering logs for storage-provisioner [d40483dfc659456939d7965c1ecf1113c1c38e26fe985bf19f26a0123206ea3a] ...
	I0912 23:07:13.984996   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d40483dfc659456939d7965c1ecf1113c1c38e26fe985bf19f26a0123206ea3a"
	I0912 23:07:16.526061   62943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:07:16.547018   62943 api_server.go:72] duration metric: took 4m14.74025779s to wait for apiserver process to appear ...
	I0912 23:07:16.547046   62943 api_server.go:88] waiting for apiserver healthz status ...
	I0912 23:07:16.547085   62943 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:07:16.547134   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:07:16.589088   62943 cri.go:89] found id: "3c73944a510413add414948e1c8fd8d71ef732d91b7ddc96646b7ba376f81416"
	I0912 23:07:16.589124   62943 cri.go:89] found id: "00f124dff0f77cf23377830c9dc5e2a8676d1a40ebf70730705d537ba7a4b8d3"
	I0912 23:07:16.589130   62943 cri.go:89] found id: ""
	I0912 23:07:16.589138   62943 logs.go:276] 2 containers: [3c73944a510413add414948e1c8fd8d71ef732d91b7ddc96646b7ba376f81416 00f124dff0f77cf23377830c9dc5e2a8676d1a40ebf70730705d537ba7a4b8d3]
	I0912 23:07:16.589199   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:16.593386   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:16.597107   62943 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:07:16.597166   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:07:16.644456   62943 cri.go:89] found id: "35282e97473f2f870f5366d221b9a40e0a456b6fd44fffe321c9fe160448cc29"
	I0912 23:07:16.644482   62943 cri.go:89] found id: ""
	I0912 23:07:16.644491   62943 logs.go:276] 1 containers: [35282e97473f2f870f5366d221b9a40e0a456b6fd44fffe321c9fe160448cc29]
	I0912 23:07:16.644544   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:16.648617   62943 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:07:16.648693   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:07:16.688003   62943 cri.go:89] found id: "e59d289c9afefef6dd746277d09550e5c7d708d7c1ba9f74b2dd91300f807189"
	I0912 23:07:16.688027   62943 cri.go:89] found id: ""
	I0912 23:07:16.688037   62943 logs.go:276] 1 containers: [e59d289c9afefef6dd746277d09550e5c7d708d7c1ba9f74b2dd91300f807189]
	I0912 23:07:16.688093   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:16.692761   62943 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:07:16.692832   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:07:16.733490   62943 cri.go:89] found id: "3187fdef2bd318bb335be822a4e75cfd0ac02a49607d8e398ab18102a83437ec"
	I0912 23:07:16.733522   62943 cri.go:89] found id: ""
	I0912 23:07:16.733533   62943 logs.go:276] 1 containers: [3187fdef2bd318bb335be822a4e75cfd0ac02a49607d8e398ab18102a83437ec]
	I0912 23:07:16.733596   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:16.738566   62943 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:07:16.738641   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:07:16.785654   62943 cri.go:89] found id: "4c4807559910184befc0b6a9ccb40b1cb9d4997f6b1405b76ba67049ce730f37"
	I0912 23:07:16.785683   62943 cri.go:89] found id: ""
	I0912 23:07:16.785693   62943 logs.go:276] 1 containers: [4c4807559910184befc0b6a9ccb40b1cb9d4997f6b1405b76ba67049ce730f37]
	I0912 23:07:16.785753   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:16.791205   62943 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:07:16.791290   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:07:16.830707   62943 cri.go:89] found id: "eb473fa0b2d91e95a8716ca3d1bad44b431a3dfbfae11984657c16c7392b58f0"
	I0912 23:07:16.830739   62943 cri.go:89] found id: "635fd2c2a6dd2558d88447c857c1c59b9cf6883a458ab9f62fbcd48fc94048f7"
	I0912 23:07:16.830746   62943 cri.go:89] found id: ""
	I0912 23:07:16.830756   62943 logs.go:276] 2 containers: [eb473fa0b2d91e95a8716ca3d1bad44b431a3dfbfae11984657c16c7392b58f0 635fd2c2a6dd2558d88447c857c1c59b9cf6883a458ab9f62fbcd48fc94048f7]
	I0912 23:07:16.830819   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:16.835378   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:16.840600   62943 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:07:16.840670   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:07:20.225940   61354 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0912 23:07:20.226007   61354 kubeadm.go:310] [preflight] Running pre-flight checks
	I0912 23:07:20.226107   61354 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0912 23:07:20.226261   61354 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0912 23:07:20.226412   61354 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0912 23:07:20.226506   61354 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0912 23:07:20.228109   61354 out.go:235]   - Generating certificates and keys ...
	I0912 23:07:20.228211   61354 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0912 23:07:20.228297   61354 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0912 23:07:20.228412   61354 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0912 23:07:20.228493   61354 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0912 23:07:20.228621   61354 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0912 23:07:20.228699   61354 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0912 23:07:20.228788   61354 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0912 23:07:20.228875   61354 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0912 23:07:20.228987   61354 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0912 23:07:20.229123   61354 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0912 23:07:20.229177   61354 kubeadm.go:310] [certs] Using the existing "sa" key
	I0912 23:07:20.229273   61354 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0912 23:07:20.229365   61354 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0912 23:07:20.229454   61354 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0912 23:07:20.229533   61354 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0912 23:07:20.229644   61354 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0912 23:07:20.229723   61354 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0912 23:07:20.229833   61354 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0912 23:07:20.229922   61354 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0912 23:07:20.231172   61354 out.go:235]   - Booting up control plane ...
	I0912 23:07:20.231276   61354 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0912 23:07:20.231371   61354 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0912 23:07:20.231457   61354 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0912 23:07:20.231596   61354 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0912 23:07:20.231706   61354 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0912 23:07:20.231772   61354 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0912 23:07:20.231943   61354 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0912 23:07:20.232041   61354 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0912 23:07:20.232091   61354 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.452461ms
	I0912 23:07:20.232151   61354 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0912 23:07:20.232202   61354 kubeadm.go:310] [api-check] The API server is healthy after 5.00140085s
	I0912 23:07:20.232302   61354 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0912 23:07:20.232437   61354 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0912 23:07:20.232508   61354 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0912 23:07:20.232685   61354 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-702201 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0912 23:07:20.232764   61354 kubeadm.go:310] [bootstrap-token] Using token: uufjzd.0ysmpgh1j6e2l8hs
	I0912 23:07:20.234000   61354 out.go:235]   - Configuring RBAC rules ...
	I0912 23:07:20.234123   61354 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0912 23:07:20.234230   61354 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0912 23:07:20.234438   61354 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0912 23:07:20.234584   61354 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0912 23:07:20.234714   61354 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0912 23:07:20.234818   61354 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0912 23:07:20.234946   61354 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0912 23:07:20.235008   61354 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0912 23:07:20.235081   61354 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0912 23:07:20.235089   61354 kubeadm.go:310] 
	I0912 23:07:20.235152   61354 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0912 23:07:20.235163   61354 kubeadm.go:310] 
	I0912 23:07:20.235231   61354 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0912 23:07:20.235237   61354 kubeadm.go:310] 
	I0912 23:07:20.235258   61354 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0912 23:07:20.235346   61354 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0912 23:07:20.235424   61354 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0912 23:07:20.235433   61354 kubeadm.go:310] 
	I0912 23:07:20.235512   61354 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0912 23:07:20.235523   61354 kubeadm.go:310] 
	I0912 23:07:20.235587   61354 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0912 23:07:20.235596   61354 kubeadm.go:310] 
	I0912 23:07:20.235683   61354 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0912 23:07:20.235781   61354 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0912 23:07:20.235848   61354 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0912 23:07:20.235855   61354 kubeadm.go:310] 
	I0912 23:07:20.235924   61354 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0912 23:07:20.235988   61354 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0912 23:07:20.235994   61354 kubeadm.go:310] 
	I0912 23:07:20.236075   61354 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token uufjzd.0ysmpgh1j6e2l8hs \
	I0912 23:07:20.236168   61354 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e9285e6e7599a58febe9d174fa57ffa69a9b4bf818d01b703e61fc8c784ff29f \
	I0912 23:07:20.236188   61354 kubeadm.go:310] 	--control-plane 
	I0912 23:07:20.236195   61354 kubeadm.go:310] 
	I0912 23:07:20.236267   61354 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0912 23:07:20.236274   61354 kubeadm.go:310] 
	I0912 23:07:20.236345   61354 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token uufjzd.0ysmpgh1j6e2l8hs \
	I0912 23:07:20.236447   61354 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e9285e6e7599a58febe9d174fa57ffa69a9b4bf818d01b703e61fc8c784ff29f 
	I0912 23:07:20.236458   61354 cni.go:84] Creating CNI manager for ""
	I0912 23:07:20.236465   61354 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0912 23:07:20.237667   61354 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0912 23:07:16.892881   62943 cri.go:89] found id: ""
	I0912 23:07:16.892908   62943 logs.go:276] 0 containers: []
	W0912 23:07:16.892918   62943 logs.go:278] No container was found matching "kindnet"
	I0912 23:07:16.892926   62943 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0912 23:07:16.892986   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0912 23:07:16.938816   62943 cri.go:89] found id: "3d117ed77ba5f8ccc22fe496a5f719999fd9e893623ba84686634f7b92c09713"
	I0912 23:07:16.938856   62943 cri.go:89] found id: "d40483dfc659456939d7965c1ecf1113c1c38e26fe985bf19f26a0123206ea3a"
	I0912 23:07:16.938861   62943 cri.go:89] found id: ""
	I0912 23:07:16.938868   62943 logs.go:276] 2 containers: [3d117ed77ba5f8ccc22fe496a5f719999fd9e893623ba84686634f7b92c09713 d40483dfc659456939d7965c1ecf1113c1c38e26fe985bf19f26a0123206ea3a]
	I0912 23:07:16.938924   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:16.944985   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:16.950257   62943 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:07:16.950290   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 23:07:17.071942   62943 logs.go:123] Gathering logs for kube-apiserver [00f124dff0f77cf23377830c9dc5e2a8676d1a40ebf70730705d537ba7a4b8d3] ...
	I0912 23:07:17.071999   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 00f124dff0f77cf23377830c9dc5e2a8676d1a40ebf70730705d537ba7a4b8d3"
	I0912 23:07:17.120765   62943 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:07:17.120797   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:07:17.636341   62943 logs.go:123] Gathering logs for kubelet ...
	I0912 23:07:17.636387   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:07:17.714095   62943 logs.go:123] Gathering logs for kube-apiserver [3c73944a510413add414948e1c8fd8d71ef732d91b7ddc96646b7ba376f81416] ...
	I0912 23:07:17.714133   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c73944a510413add414948e1c8fd8d71ef732d91b7ddc96646b7ba376f81416"
	I0912 23:07:17.765583   62943 logs.go:123] Gathering logs for etcd [35282e97473f2f870f5366d221b9a40e0a456b6fd44fffe321c9fe160448cc29] ...
	I0912 23:07:17.765637   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35282e97473f2f870f5366d221b9a40e0a456b6fd44fffe321c9fe160448cc29"
	I0912 23:07:17.809278   62943 logs.go:123] Gathering logs for kube-proxy [4c4807559910184befc0b6a9ccb40b1cb9d4997f6b1405b76ba67049ce730f37] ...
	I0912 23:07:17.809309   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c4807559910184befc0b6a9ccb40b1cb9d4997f6b1405b76ba67049ce730f37"
	I0912 23:07:17.845960   62943 logs.go:123] Gathering logs for dmesg ...
	I0912 23:07:17.845984   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:07:17.860171   62943 logs.go:123] Gathering logs for kube-controller-manager [eb473fa0b2d91e95a8716ca3d1bad44b431a3dfbfae11984657c16c7392b58f0] ...
	I0912 23:07:17.860201   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb473fa0b2d91e95a8716ca3d1bad44b431a3dfbfae11984657c16c7392b58f0"
	I0912 23:07:17.926666   62943 logs.go:123] Gathering logs for kube-controller-manager [635fd2c2a6dd2558d88447c857c1c59b9cf6883a458ab9f62fbcd48fc94048f7] ...
	I0912 23:07:17.926711   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 635fd2c2a6dd2558d88447c857c1c59b9cf6883a458ab9f62fbcd48fc94048f7"
	I0912 23:07:17.976830   62943 logs.go:123] Gathering logs for storage-provisioner [3d117ed77ba5f8ccc22fe496a5f719999fd9e893623ba84686634f7b92c09713] ...
	I0912 23:07:17.976862   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d117ed77ba5f8ccc22fe496a5f719999fd9e893623ba84686634f7b92c09713"
	I0912 23:07:18.029551   62943 logs.go:123] Gathering logs for storage-provisioner [d40483dfc659456939d7965c1ecf1113c1c38e26fe985bf19f26a0123206ea3a] ...
	I0912 23:07:18.029590   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d40483dfc659456939d7965c1ecf1113c1c38e26fe985bf19f26a0123206ea3a"
	I0912 23:07:18.089974   62943 logs.go:123] Gathering logs for container status ...
	I0912 23:07:18.090007   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:07:18.151149   62943 logs.go:123] Gathering logs for coredns [e59d289c9afefef6dd746277d09550e5c7d708d7c1ba9f74b2dd91300f807189] ...
	I0912 23:07:18.151175   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e59d289c9afefef6dd746277d09550e5c7d708d7c1ba9f74b2dd91300f807189"
	I0912 23:07:18.191616   62943 logs.go:123] Gathering logs for kube-scheduler [3187fdef2bd318bb335be822a4e75cfd0ac02a49607d8e398ab18102a83437ec] ...
	I0912 23:07:18.191645   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3187fdef2bd318bb335be822a4e75cfd0ac02a49607d8e398ab18102a83437ec"
	I0912 23:07:20.735505   62943 api_server.go:253] Checking apiserver healthz at https://192.168.50.253:8443/healthz ...
	I0912 23:07:20.740261   62943 api_server.go:279] https://192.168.50.253:8443/healthz returned 200:
	ok
	I0912 23:07:20.741163   62943 api_server.go:141] control plane version: v1.31.1
	I0912 23:07:20.741184   62943 api_server.go:131] duration metric: took 4.194131154s to wait for apiserver health ...
	I0912 23:07:20.741193   62943 system_pods.go:43] waiting for kube-system pods to appear ...
	I0912 23:07:20.741219   62943 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:07:20.741275   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:07:20.778572   62943 cri.go:89] found id: "3c73944a510413add414948e1c8fd8d71ef732d91b7ddc96646b7ba376f81416"
	I0912 23:07:20.778596   62943 cri.go:89] found id: "00f124dff0f77cf23377830c9dc5e2a8676d1a40ebf70730705d537ba7a4b8d3"
	I0912 23:07:20.778600   62943 cri.go:89] found id: ""
	I0912 23:07:20.778613   62943 logs.go:276] 2 containers: [3c73944a510413add414948e1c8fd8d71ef732d91b7ddc96646b7ba376f81416 00f124dff0f77cf23377830c9dc5e2a8676d1a40ebf70730705d537ba7a4b8d3]
	I0912 23:07:20.778656   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:20.782575   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:20.786177   62943 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:07:20.786235   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:07:20.822848   62943 cri.go:89] found id: "35282e97473f2f870f5366d221b9a40e0a456b6fd44fffe321c9fe160448cc29"
	I0912 23:07:20.822869   62943 cri.go:89] found id: ""
	I0912 23:07:20.822877   62943 logs.go:276] 1 containers: [35282e97473f2f870f5366d221b9a40e0a456b6fd44fffe321c9fe160448cc29]
	I0912 23:07:20.822930   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:20.827081   62943 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:07:20.827150   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:07:20.862327   62943 cri.go:89] found id: "e59d289c9afefef6dd746277d09550e5c7d708d7c1ba9f74b2dd91300f807189"
	I0912 23:07:20.862358   62943 cri.go:89] found id: ""
	I0912 23:07:20.862369   62943 logs.go:276] 1 containers: [e59d289c9afefef6dd746277d09550e5c7d708d7c1ba9f74b2dd91300f807189]
	I0912 23:07:20.862437   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:20.866899   62943 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:07:20.866974   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:07:20.903397   62943 cri.go:89] found id: "3187fdef2bd318bb335be822a4e75cfd0ac02a49607d8e398ab18102a83437ec"
	I0912 23:07:20.903423   62943 cri.go:89] found id: ""
	I0912 23:07:20.903433   62943 logs.go:276] 1 containers: [3187fdef2bd318bb335be822a4e75cfd0ac02a49607d8e398ab18102a83437ec]
	I0912 23:07:20.903497   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:20.908223   62943 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:07:20.908322   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:07:20.961886   62943 cri.go:89] found id: "4c4807559910184befc0b6a9ccb40b1cb9d4997f6b1405b76ba67049ce730f37"
	I0912 23:07:20.961912   62943 cri.go:89] found id: ""
	I0912 23:07:20.961923   62943 logs.go:276] 1 containers: [4c4807559910184befc0b6a9ccb40b1cb9d4997f6b1405b76ba67049ce730f37]
	I0912 23:07:20.961983   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:20.965943   62943 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:07:20.966005   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:07:21.003792   62943 cri.go:89] found id: "eb473fa0b2d91e95a8716ca3d1bad44b431a3dfbfae11984657c16c7392b58f0"
	I0912 23:07:21.003818   62943 cri.go:89] found id: "635fd2c2a6dd2558d88447c857c1c59b9cf6883a458ab9f62fbcd48fc94048f7"
	I0912 23:07:21.003825   62943 cri.go:89] found id: ""
	I0912 23:07:21.003835   62943 logs.go:276] 2 containers: [eb473fa0b2d91e95a8716ca3d1bad44b431a3dfbfae11984657c16c7392b58f0 635fd2c2a6dd2558d88447c857c1c59b9cf6883a458ab9f62fbcd48fc94048f7]
	I0912 23:07:21.003892   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:21.008651   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:21.012614   62943 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:07:21.012675   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:07:21.051013   62943 cri.go:89] found id: ""
	I0912 23:07:21.051044   62943 logs.go:276] 0 containers: []
	W0912 23:07:21.051055   62943 logs.go:278] No container was found matching "kindnet"
	I0912 23:07:21.051063   62943 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0912 23:07:21.051121   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0912 23:07:21.091038   62943 cri.go:89] found id: "3d117ed77ba5f8ccc22fe496a5f719999fd9e893623ba84686634f7b92c09713"
	I0912 23:07:21.091060   62943 cri.go:89] found id: "d40483dfc659456939d7965c1ecf1113c1c38e26fe985bf19f26a0123206ea3a"
	I0912 23:07:21.091065   62943 cri.go:89] found id: ""
	I0912 23:07:21.091072   62943 logs.go:276] 2 containers: [3d117ed77ba5f8ccc22fe496a5f719999fd9e893623ba84686634f7b92c09713 d40483dfc659456939d7965c1ecf1113c1c38e26fe985bf19f26a0123206ea3a]
	I0912 23:07:21.091126   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:21.095923   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:21.100100   62943 logs.go:123] Gathering logs for dmesg ...
	I0912 23:07:21.100125   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:07:21.113873   62943 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:07:21.113906   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 23:07:21.215199   62943 logs.go:123] Gathering logs for kube-apiserver [3c73944a510413add414948e1c8fd8d71ef732d91b7ddc96646b7ba376f81416] ...
	I0912 23:07:21.215228   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c73944a510413add414948e1c8fd8d71ef732d91b7ddc96646b7ba376f81416"
	I0912 23:07:21.266873   62943 logs.go:123] Gathering logs for kube-apiserver [00f124dff0f77cf23377830c9dc5e2a8676d1a40ebf70730705d537ba7a4b8d3] ...
	I0912 23:07:21.266903   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 00f124dff0f77cf23377830c9dc5e2a8676d1a40ebf70730705d537ba7a4b8d3"
	I0912 23:07:21.307509   62943 logs.go:123] Gathering logs for storage-provisioner [3d117ed77ba5f8ccc22fe496a5f719999fd9e893623ba84686634f7b92c09713] ...
	I0912 23:07:21.307537   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d117ed77ba5f8ccc22fe496a5f719999fd9e893623ba84686634f7b92c09713"
	I0912 23:07:21.349480   62943 logs.go:123] Gathering logs for kubelet ...
	I0912 23:07:21.349505   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:07:21.428721   62943 logs.go:123] Gathering logs for kube-scheduler [3187fdef2bd318bb335be822a4e75cfd0ac02a49607d8e398ab18102a83437ec] ...
	I0912 23:07:21.428754   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3187fdef2bd318bb335be822a4e75cfd0ac02a49607d8e398ab18102a83437ec"
	I0912 23:07:21.469645   62943 logs.go:123] Gathering logs for kube-proxy [4c4807559910184befc0b6a9ccb40b1cb9d4997f6b1405b76ba67049ce730f37] ...
	I0912 23:07:21.469677   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c4807559910184befc0b6a9ccb40b1cb9d4997f6b1405b76ba67049ce730f37"
	I0912 23:07:21.517502   62943 logs.go:123] Gathering logs for kube-controller-manager [eb473fa0b2d91e95a8716ca3d1bad44b431a3dfbfae11984657c16c7392b58f0] ...
	I0912 23:07:21.517529   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb473fa0b2d91e95a8716ca3d1bad44b431a3dfbfae11984657c16c7392b58f0"
	I0912 23:07:21.582523   62943 logs.go:123] Gathering logs for coredns [e59d289c9afefef6dd746277d09550e5c7d708d7c1ba9f74b2dd91300f807189] ...
	I0912 23:07:21.582556   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e59d289c9afefef6dd746277d09550e5c7d708d7c1ba9f74b2dd91300f807189"
	I0912 23:07:21.623846   62943 logs.go:123] Gathering logs for storage-provisioner [d40483dfc659456939d7965c1ecf1113c1c38e26fe985bf19f26a0123206ea3a] ...
	I0912 23:07:21.623885   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d40483dfc659456939d7965c1ecf1113c1c38e26fe985bf19f26a0123206ea3a"
	I0912 23:07:21.670643   62943 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:07:21.670675   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
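The block above shows the log-gathering step: for every container ID found via crictl, the last 400 lines are pulled with `crictl logs --tail 400`, plus journalctl for kubelet and CRI-O. A minimal, hypothetical sketch of that loop (not minikube's actual logs.go; paths and the tail length are taken from the log lines):

// Gather the last 400 log lines for each container ID, mirroring the
// "Gathering logs for ..." steps above.
package main

import (
	"fmt"
	"os/exec"
)

func gatherContainerLogs(ids []string) {
	for _, id := range ids {
		// Equivalent to: sudo /usr/bin/crictl logs --tail 400 <id>
		out, err := exec.Command("sudo", "/usr/bin/crictl", "logs", "--tail", "400", id).CombinedOutput()
		if err != nil {
			fmt.Printf("failed to gather logs for %s: %v\n", id, err)
			continue
		}
		fmt.Printf("==> container %s <==\n%s\n", id, out)
	}
}

func main() {
	gatherContainerLogs([]string{
		"eb473fa0b2d91e95a8716ca3d1bad44b431a3dfbfae11984657c16c7392b58f0",
	})
}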
	I0912 23:07:20.238639   61354 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0912 23:07:20.248752   61354 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0912 23:07:20.269785   61354 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0912 23:07:20.269853   61354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 23:07:20.269874   61354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-702201 minikube.k8s.io/updated_at=2024_09_12T23_07_20_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f6bc674a17941874d4e5b792b09c1791d30622b8 minikube.k8s.io/name=default-k8s-diff-port-702201 minikube.k8s.io/primary=true
	I0912 23:07:20.296361   61354 ops.go:34] apiserver oom_adj: -16
	I0912 23:07:20.492168   61354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 23:07:20.992549   61354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 23:07:21.492765   61354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 23:07:21.992850   61354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 23:07:22.492720   61354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 23:07:22.993154   61354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 23:07:23.493116   61354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 23:07:23.992629   61354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 23:07:24.077486   61354 kubeadm.go:1113] duration metric: took 3.807690368s to wait for elevateKubeSystemPrivileges
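The repeated "kubectl get sa default" runs above are the elevateKubeSystemPrivileges wait: the default service account is polled roughly every 500ms until it exists, after which the minikube-rbac clusterrolebinding can be created. A minimal sketch of that poll loop, assuming kubectl is on PATH and using the kubeconfig path from the log:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA polls `kubectl get sa default` until it succeeds or the
// timeout expires, mirroring the repeated "get sa default" runs above.
func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl", "get", "sa", "default", "--kubeconfig", kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil // service account exists; the RBAC binding can be created next
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account did not appear within %s", timeout)
}

func main() {
	if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}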
	I0912 23:07:24.077525   61354 kubeadm.go:394] duration metric: took 4m59.803121736s to StartCluster
	I0912 23:07:24.077547   61354 settings.go:142] acquiring lock: {Name:mk9c957feafb8d7ccd833ad0c106ef81ecfe5ba8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 23:07:24.077652   61354 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19616-5891/kubeconfig
	I0912 23:07:24.080127   61354 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/kubeconfig: {Name:mkffb46c3e9d2b8baebc7237b48bf41bccf1a52c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 23:07:24.080453   61354 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.214 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0912 23:07:24.080486   61354 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0912 23:07:24.080582   61354 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-702201"
	I0912 23:07:24.080556   61354 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-702201"
	I0912 23:07:24.080594   61354 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-702201"
	I0912 23:07:24.080627   61354 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-702201"
	I0912 23:07:24.080650   61354 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-702201"
	W0912 23:07:24.080659   61354 addons.go:243] addon metrics-server should already be in state true
	I0912 23:07:24.080664   61354 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-702201"
	I0912 23:07:24.080691   61354 host.go:66] Checking if "default-k8s-diff-port-702201" exists ...
	I0912 23:07:24.080668   61354 config.go:182] Loaded profile config "default-k8s-diff-port-702201": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	W0912 23:07:24.080691   61354 addons.go:243] addon storage-provisioner should already be in state true
	I0912 23:07:24.080830   61354 host.go:66] Checking if "default-k8s-diff-port-702201" exists ...
	I0912 23:07:24.081061   61354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:07:24.081060   61354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:07:24.081101   61354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:07:24.081144   61354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:07:24.081188   61354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:07:24.081214   61354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:07:24.081973   61354 out.go:177] * Verifying Kubernetes components...
	I0912 23:07:24.083133   61354 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 23:07:24.097005   61354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46703
	I0912 23:07:24.097025   61354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36033
	I0912 23:07:24.097096   61354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41949
	I0912 23:07:24.097438   61354 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:07:24.097464   61354 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:07:24.097525   61354 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:07:24.097994   61354 main.go:141] libmachine: Using API Version  1
	I0912 23:07:24.098015   61354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:07:24.098141   61354 main.go:141] libmachine: Using API Version  1
	I0912 23:07:24.098165   61354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:07:24.098290   61354 main.go:141] libmachine: Using API Version  1
	I0912 23:07:24.098309   61354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:07:24.098399   61354 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:07:24.098545   61354 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:07:24.098726   61354 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:07:24.098731   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetState
	I0912 23:07:24.098994   61354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:07:24.099040   61354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:07:24.099251   61354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:07:24.099283   61354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:07:24.102412   61354 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-702201"
	W0912 23:07:24.102432   61354 addons.go:243] addon default-storageclass should already be in state true
	I0912 23:07:24.102459   61354 host.go:66] Checking if "default-k8s-diff-port-702201" exists ...
	I0912 23:07:24.102797   61354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:07:24.102835   61354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:07:24.117429   61354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46351
	I0912 23:07:24.117980   61354 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:07:24.118513   61354 main.go:141] libmachine: Using API Version  1
	I0912 23:07:24.118533   61354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:07:24.119059   61354 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:07:24.119577   61354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35337
	I0912 23:07:24.119621   61354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:07:24.119656   61354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:07:24.119717   61354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41229
	I0912 23:07:24.120047   61354 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:07:24.120129   61354 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:07:24.120532   61354 main.go:141] libmachine: Using API Version  1
	I0912 23:07:24.120553   61354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:07:24.120810   61354 main.go:141] libmachine: Using API Version  1
	I0912 23:07:24.120834   61354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:07:24.121017   61354 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:07:24.121201   61354 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:07:24.121216   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetState
	I0912 23:07:24.121347   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetState
	I0912 23:07:24.123069   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .DriverName
	I0912 23:07:24.123254   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .DriverName
	I0912 23:07:24.125055   61354 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 23:07:24.125065   61354 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0912 23:07:22.059555   62943 logs.go:123] Gathering logs for container status ...
	I0912 23:07:22.059602   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:07:22.104001   62943 logs.go:123] Gathering logs for etcd [35282e97473f2f870f5366d221b9a40e0a456b6fd44fffe321c9fe160448cc29] ...
	I0912 23:07:22.104039   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35282e97473f2f870f5366d221b9a40e0a456b6fd44fffe321c9fe160448cc29"
	I0912 23:07:22.146304   62943 logs.go:123] Gathering logs for kube-controller-manager [635fd2c2a6dd2558d88447c857c1c59b9cf6883a458ab9f62fbcd48fc94048f7] ...
	I0912 23:07:22.146342   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 635fd2c2a6dd2558d88447c857c1c59b9cf6883a458ab9f62fbcd48fc94048f7"
	I0912 23:07:24.689925   62943 system_pods.go:59] 8 kube-system pods found
	I0912 23:07:24.689959   62943 system_pods.go:61] "coredns-7c65d6cfc9-twck7" [2fb00aff-8a30-4634-a804-1419eabfe727] Running
	I0912 23:07:24.689967   62943 system_pods.go:61] "etcd-no-preload-380092" [69b6be54-dd29-47c7-b990-a64335dd6d7b] Running
	I0912 23:07:24.689974   62943 system_pods.go:61] "kube-apiserver-no-preload-380092" [10ff70db-3c74-42ad-841d-d2241de4b98e] Running
	I0912 23:07:24.689980   62943 system_pods.go:61] "kube-controller-manager-no-preload-380092" [6e91c5b2-36fc-404e-9f09-c1bc9da46774] Running
	I0912 23:07:24.689987   62943 system_pods.go:61] "kube-proxy-z4rcx" [d17caa2e-d0fe-45e8-a96c-d1cc1b55e665] Running
	I0912 23:07:24.689992   62943 system_pods.go:61] "kube-scheduler-no-preload-380092" [5c634cac-6b28-4757-ba85-891c4c2fa34e] Running
	I0912 23:07:24.690002   62943 system_pods.go:61] "metrics-server-6867b74b74-4v7f5" [10c8c536-9ca6-4e75-96f2-7324f3d3d379] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0912 23:07:24.690009   62943 system_pods.go:61] "storage-provisioner" [f173a1f6-3772-4f08-8e40-2215cc9d2878] Running
	I0912 23:07:24.690020   62943 system_pods.go:74] duration metric: took 3.948819191s to wait for pod list to return data ...
	I0912 23:07:24.690031   62943 default_sa.go:34] waiting for default service account to be created ...
	I0912 23:07:24.692936   62943 default_sa.go:45] found service account: "default"
	I0912 23:07:24.692964   62943 default_sa.go:55] duration metric: took 2.925808ms for default service account to be created ...
	I0912 23:07:24.692975   62943 system_pods.go:116] waiting for k8s-apps to be running ...
	I0912 23:07:24.699123   62943 system_pods.go:86] 8 kube-system pods found
	I0912 23:07:24.699155   62943 system_pods.go:89] "coredns-7c65d6cfc9-twck7" [2fb00aff-8a30-4634-a804-1419eabfe727] Running
	I0912 23:07:24.699164   62943 system_pods.go:89] "etcd-no-preload-380092" [69b6be54-dd29-47c7-b990-a64335dd6d7b] Running
	I0912 23:07:24.699170   62943 system_pods.go:89] "kube-apiserver-no-preload-380092" [10ff70db-3c74-42ad-841d-d2241de4b98e] Running
	I0912 23:07:24.699176   62943 system_pods.go:89] "kube-controller-manager-no-preload-380092" [6e91c5b2-36fc-404e-9f09-c1bc9da46774] Running
	I0912 23:07:24.699182   62943 system_pods.go:89] "kube-proxy-z4rcx" [d17caa2e-d0fe-45e8-a96c-d1cc1b55e665] Running
	I0912 23:07:24.699187   62943 system_pods.go:89] "kube-scheduler-no-preload-380092" [5c634cac-6b28-4757-ba85-891c4c2fa34e] Running
	I0912 23:07:24.699197   62943 system_pods.go:89] "metrics-server-6867b74b74-4v7f5" [10c8c536-9ca6-4e75-96f2-7324f3d3d379] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0912 23:07:24.699206   62943 system_pods.go:89] "storage-provisioner" [f173a1f6-3772-4f08-8e40-2215cc9d2878] Running
	I0912 23:07:24.699220   62943 system_pods.go:126] duration metric: took 6.23727ms to wait for k8s-apps to be running ...
	I0912 23:07:24.699232   62943 system_svc.go:44] waiting for kubelet service to be running ....
	I0912 23:07:24.699281   62943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 23:07:24.716425   62943 system_svc.go:56] duration metric: took 17.184595ms WaitForService to wait for kubelet
	I0912 23:07:24.716456   62943 kubeadm.go:582] duration metric: took 4m22.909700986s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 23:07:24.716480   62943 node_conditions.go:102] verifying NodePressure condition ...
	I0912 23:07:24.719606   62943 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0912 23:07:24.719632   62943 node_conditions.go:123] node cpu capacity is 2
	I0912 23:07:24.719645   62943 node_conditions.go:105] duration metric: took 3.158655ms to run NodePressure ...
	I0912 23:07:24.719660   62943 start.go:241] waiting for startup goroutines ...
	I0912 23:07:24.719669   62943 start.go:246] waiting for cluster config update ...
	I0912 23:07:24.719683   62943 start.go:255] writing updated cluster config ...
	I0912 23:07:24.719959   62943 ssh_runner.go:195] Run: rm -f paused
	I0912 23:07:24.782144   62943 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0912 23:07:24.783614   62943 out.go:177] * Done! kubectl is now configured to use "no-preload-380092" cluster and "default" namespace by default
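Before the "Done!" line above, the no-preload profile is gated on the kubelet systemd unit being active (the "waiting for kubelet service to be running" step). The log shows the exact command minikube runs over SSH; the sketch below uses the common `systemctl is-active --quiet kubelet` form, which exits 0 only when the unit is active:

package main

import (
	"fmt"
	"os/exec"
)

// kubeletActive reports whether the kubelet systemd unit is active.
func kubeletActive() bool {
	// A zero exit status from `systemctl is-active --quiet` means the unit is active.
	return exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run() == nil
}

func main() {
	fmt.Println("kubelet active:", kubeletActive())
}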
	I0912 23:07:24.126360   61354 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0912 23:07:24.126378   61354 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0912 23:07:24.126401   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHHostname
	I0912 23:07:24.126445   61354 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0912 23:07:24.126458   61354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0912 23:07:24.126472   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHHostname
	I0912 23:07:24.130177   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:07:24.130678   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:07:24.130719   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:07:24.130730   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:07:24.130919   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:07:24.130949   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:07:24.131134   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHPort
	I0912 23:07:24.131203   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHPort
	I0912 23:07:24.131447   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHKeyPath
	I0912 23:07:24.131494   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHKeyPath
	I0912 23:07:24.131659   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHUsername
	I0912 23:07:24.131677   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHUsername
	I0912 23:07:24.131817   61354 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/default-k8s-diff-port-702201/id_rsa Username:docker}
	I0912 23:07:24.131857   61354 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/default-k8s-diff-port-702201/id_rsa Username:docker}
	I0912 23:07:24.139030   61354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35007
	I0912 23:07:24.139501   61354 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:07:24.139949   61354 main.go:141] libmachine: Using API Version  1
	I0912 23:07:24.139973   61354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:07:24.140287   61354 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:07:24.140441   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetState
	I0912 23:07:24.141751   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .DriverName
	I0912 23:07:24.141942   61354 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0912 23:07:24.141957   61354 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0912 23:07:24.141977   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHHostname
	I0912 23:07:24.144033   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:07:24.144415   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:07:24.144563   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHPort
	I0912 23:07:24.144623   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:07:24.144723   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHKeyPath
	I0912 23:07:24.145002   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHUsername
	I0912 23:07:24.145132   61354 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/default-k8s-diff-port-702201/id_rsa Username:docker}
	I0912 23:07:24.279582   61354 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0912 23:07:24.294072   61354 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-702201" to be "Ready" ...
	I0912 23:07:24.304565   61354 node_ready.go:49] node "default-k8s-diff-port-702201" has status "Ready":"True"
	I0912 23:07:24.304588   61354 node_ready.go:38] duration metric: took 10.479351ms for node "default-k8s-diff-port-702201" to be "Ready" ...
	I0912 23:07:24.304599   61354 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 23:07:24.310618   61354 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-702201" in "kube-system" namespace to be "Ready" ...
	I0912 23:07:24.359086   61354 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0912 23:07:24.390490   61354 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0912 23:07:24.409964   61354 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0912 23:07:24.409990   61354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0912 23:07:24.445852   61354 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0912 23:07:24.445880   61354 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0912 23:07:24.502567   61354 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0912 23:07:24.502591   61354 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0912 23:07:24.578857   61354 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0912 23:07:25.348387   61354 main.go:141] libmachine: Making call to close driver server
	I0912 23:07:25.348415   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .Close
	I0912 23:07:25.348715   61354 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:07:25.348732   61354 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:07:25.348740   61354 main.go:141] libmachine: Making call to close driver server
	I0912 23:07:25.348748   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .Close
	I0912 23:07:25.348766   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | Closing plugin on server side
	I0912 23:07:25.348869   61354 main.go:141] libmachine: Making call to close driver server
	I0912 23:07:25.348880   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .Close
	I0912 23:07:25.349007   61354 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:07:25.349022   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | Closing plugin on server side
	I0912 23:07:25.349026   61354 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:07:25.349181   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | Closing plugin on server side
	I0912 23:07:25.349209   61354 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:07:25.349216   61354 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:07:25.349224   61354 main.go:141] libmachine: Making call to close driver server
	I0912 23:07:25.349231   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .Close
	I0912 23:07:25.349497   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | Closing plugin on server side
	I0912 23:07:25.349513   61354 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:07:25.349520   61354 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:07:25.377320   61354 main.go:141] libmachine: Making call to close driver server
	I0912 23:07:25.377345   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .Close
	I0912 23:07:25.377662   61354 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:07:25.377683   61354 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:07:25.377685   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | Closing plugin on server side
	I0912 23:07:25.851960   61354 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.273059994s)
	I0912 23:07:25.852019   61354 main.go:141] libmachine: Making call to close driver server
	I0912 23:07:25.852037   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .Close
	I0912 23:07:25.852373   61354 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:07:25.852398   61354 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:07:25.852408   61354 main.go:141] libmachine: Making call to close driver server
	I0912 23:07:25.852417   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .Close
	I0912 23:07:25.852671   61354 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:07:25.852690   61354 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:07:25.852701   61354 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-702201"
	I0912 23:07:25.854523   61354 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0912 23:07:25.855764   61354 addons.go:510] duration metric: took 1.775274823s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
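The addon-enable step above copies the manifests into /etc/kubernetes/addons on the node and applies them in one `kubectl apply` call using the bundled binary. A sketch of that apply step, with the binary path, KUBECONFIG value, and manifest paths taken from the log lines (the `sudo VAR=value cmd` form matches the command shown above):

package main

import (
	"fmt"
	"os/exec"
)

// applyAddonManifests applies all addon manifest files in a single kubectl run,
// as in the metrics-server apply above.
func applyAddonManifests(files ...string) error {
	args := []string{
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.31.1/kubectl", "apply",
	}
	for _, f := range files {
		args = append(args, "-f", f)
	}
	out, err := exec.Command("sudo", args...).CombinedOutput()
	fmt.Print(string(out))
	return err
}

func main() {
	_ = applyAddonManifests(
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml",
	)
}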
	I0912 23:07:26.343219   61354 pod_ready.go:103] pod "etcd-default-k8s-diff-port-702201" in "kube-system" namespace has status "Ready":"False"
	I0912 23:07:26.817338   61354 pod_ready.go:93] pod "etcd-default-k8s-diff-port-702201" in "kube-system" namespace has status "Ready":"True"
	I0912 23:07:26.817361   61354 pod_ready.go:82] duration metric: took 2.506720235s for pod "etcd-default-k8s-diff-port-702201" in "kube-system" namespace to be "Ready" ...
	I0912 23:07:26.817371   61354 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-702201" in "kube-system" namespace to be "Ready" ...
	I0912 23:07:28.823968   61354 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-702201" in "kube-system" namespace has status "Ready":"False"
	I0912 23:07:31.324504   61354 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-702201" in "kube-system" namespace has status "Ready":"False"
	I0912 23:07:33.824198   61354 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-702201" in "kube-system" namespace has status "Ready":"True"
	I0912 23:07:33.824218   61354 pod_ready.go:82] duration metric: took 7.006841754s for pod "kube-apiserver-default-k8s-diff-port-702201" in "kube-system" namespace to be "Ready" ...
	I0912 23:07:33.824228   61354 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-702201" in "kube-system" namespace to be "Ready" ...
	I0912 23:07:33.829882   61354 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-702201" in "kube-system" namespace has status "Ready":"True"
	I0912 23:07:33.829903   61354 pod_ready.go:82] duration metric: took 5.668963ms for pod "kube-controller-manager-default-k8s-diff-port-702201" in "kube-system" namespace to be "Ready" ...
	I0912 23:07:33.829912   61354 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-702201" in "kube-system" namespace to be "Ready" ...
	I0912 23:07:33.834773   61354 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-702201" in "kube-system" namespace has status "Ready":"True"
	I0912 23:07:33.834796   61354 pod_ready.go:82] duration metric: took 4.8776ms for pod "kube-scheduler-default-k8s-diff-port-702201" in "kube-system" namespace to be "Ready" ...
	I0912 23:07:33.834805   61354 pod_ready.go:39] duration metric: took 9.530195098s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 23:07:33.834819   61354 api_server.go:52] waiting for apiserver process to appear ...
	I0912 23:07:33.834864   61354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:07:33.850650   61354 api_server.go:72] duration metric: took 9.770155376s to wait for apiserver process to appear ...
	I0912 23:07:33.850671   61354 api_server.go:88] waiting for apiserver healthz status ...
	I0912 23:07:33.850686   61354 api_server.go:253] Checking apiserver healthz at https://192.168.39.214:8444/healthz ...
	I0912 23:07:33.855112   61354 api_server.go:279] https://192.168.39.214:8444/healthz returned 200:
	ok
	I0912 23:07:33.856195   61354 api_server.go:141] control plane version: v1.31.1
	I0912 23:07:33.856213   61354 api_server.go:131] duration metric: took 5.535983ms to wait for apiserver health ...
	I0912 23:07:33.856220   61354 system_pods.go:43] waiting for kube-system pods to appear ...
	I0912 23:07:33.861385   61354 system_pods.go:59] 9 kube-system pods found
	I0912 23:07:33.861415   61354 system_pods.go:61] "coredns-7c65d6cfc9-f5spz" [6a0f69e9-66eb-4e59-a173-1d6f638e2211] Running
	I0912 23:07:33.861422   61354 system_pods.go:61] "coredns-7c65d6cfc9-qhbgf" [0af4199f-b09c-4ab8-8170-b8941d3ece7a] Running
	I0912 23:07:33.861429   61354 system_pods.go:61] "etcd-default-k8s-diff-port-702201" [d8d2e9bb-c8de-4aac-9373-ac9b6d3ec96a] Running
	I0912 23:07:33.861435   61354 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-702201" [7c26cd67-e192-4e8c-a3e1-e7e76a87fae4] Running
	I0912 23:07:33.861440   61354 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-702201" [53553f06-02d5-4603-8418-6bf2ff7b6a25] Running
	I0912 23:07:33.861451   61354 system_pods.go:61] "kube-proxy-mv8ws" [51cb20c3-8445-4ce9-8484-5138f3d0ed57] Running
	I0912 23:07:33.861457   61354 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-702201" [cc25c635-37f2-4186-b5ea-958e95fc4ab2] Running
	I0912 23:07:33.861466   61354 system_pods.go:61] "metrics-server-6867b74b74-w2dvn" [778a4742-5b80-4485-956e-8f169e6dcf8f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0912 23:07:33.861476   61354 system_pods.go:61] "storage-provisioner" [66bc6f77-b774-4478-80d0-a1027802e179] Running
	I0912 23:07:33.861486   61354 system_pods.go:74] duration metric: took 5.260046ms to wait for pod list to return data ...
	I0912 23:07:33.861497   61354 default_sa.go:34] waiting for default service account to be created ...
	I0912 23:07:33.864254   61354 default_sa.go:45] found service account: "default"
	I0912 23:07:33.864272   61354 default_sa.go:55] duration metric: took 2.766344ms for default service account to be created ...
	I0912 23:07:33.864280   61354 system_pods.go:116] waiting for k8s-apps to be running ...
	I0912 23:07:33.869281   61354 system_pods.go:86] 9 kube-system pods found
	I0912 23:07:33.869310   61354 system_pods.go:89] "coredns-7c65d6cfc9-f5spz" [6a0f69e9-66eb-4e59-a173-1d6f638e2211] Running
	I0912 23:07:33.869315   61354 system_pods.go:89] "coredns-7c65d6cfc9-qhbgf" [0af4199f-b09c-4ab8-8170-b8941d3ece7a] Running
	I0912 23:07:33.869320   61354 system_pods.go:89] "etcd-default-k8s-diff-port-702201" [d8d2e9bb-c8de-4aac-9373-ac9b6d3ec96a] Running
	I0912 23:07:33.869324   61354 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-702201" [7c26cd67-e192-4e8c-a3e1-e7e76a87fae4] Running
	I0912 23:07:33.869328   61354 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-702201" [53553f06-02d5-4603-8418-6bf2ff7b6a25] Running
	I0912 23:07:33.869332   61354 system_pods.go:89] "kube-proxy-mv8ws" [51cb20c3-8445-4ce9-8484-5138f3d0ed57] Running
	I0912 23:07:33.869335   61354 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-702201" [cc25c635-37f2-4186-b5ea-958e95fc4ab2] Running
	I0912 23:07:33.869341   61354 system_pods.go:89] "metrics-server-6867b74b74-w2dvn" [778a4742-5b80-4485-956e-8f169e6dcf8f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0912 23:07:33.869349   61354 system_pods.go:89] "storage-provisioner" [66bc6f77-b774-4478-80d0-a1027802e179] Running
	I0912 23:07:33.869362   61354 system_pods.go:126] duration metric: took 5.073128ms to wait for k8s-apps to be running ...
	I0912 23:07:33.869371   61354 system_svc.go:44] waiting for kubelet service to be running ....
	I0912 23:07:33.869410   61354 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 23:07:33.885244   61354 system_svc.go:56] duration metric: took 15.863852ms WaitForService to wait for kubelet
	I0912 23:07:33.885284   61354 kubeadm.go:582] duration metric: took 9.804792247s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 23:07:33.885302   61354 node_conditions.go:102] verifying NodePressure condition ...
	I0912 23:07:33.889009   61354 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0912 23:07:33.889041   61354 node_conditions.go:123] node cpu capacity is 2
	I0912 23:07:33.889054   61354 node_conditions.go:105] duration metric: took 3.746289ms to run NodePressure ...
	I0912 23:07:33.889069   61354 start.go:241] waiting for startup goroutines ...
	I0912 23:07:33.889079   61354 start.go:246] waiting for cluster config update ...
	I0912 23:07:33.889092   61354 start.go:255] writing updated cluster config ...
	I0912 23:07:33.889427   61354 ssh_runner.go:195] Run: rm -f paused
	I0912 23:07:33.940577   61354 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0912 23:07:33.942471   61354 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-702201" cluster and "default" namespace by default
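The apiserver healthz wait above is a plain HTTPS probe: GET https://192.168.39.214:8444/healthz and treat an HTTP 200 with body "ok" as healthy. A minimal sketch of that probe; minikube authenticates with the cluster CA, so skipping TLS verification here is a simplification for the sketch only:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// apiserverHealthy returns true when the healthz endpoint answers 200 "ok".
func apiserverHealthy(url string) bool {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return false
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return resp.StatusCode == http.StatusOK && string(body) == "ok"
}

func main() {
	fmt.Println(apiserverHealthy("https://192.168.39.214:8444/healthz"))
}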
	I0912 23:07:47.603025   62386 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0912 23:07:47.603235   62386 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0912 23:07:47.604779   62386 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0912 23:07:47.604883   62386 kubeadm.go:310] [preflight] Running pre-flight checks
	I0912 23:07:47.605084   62386 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0912 23:07:47.605337   62386 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0912 23:07:47.605566   62386 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0912 23:07:47.605831   62386 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0912 23:07:47.607788   62386 out.go:235]   - Generating certificates and keys ...
	I0912 23:07:47.607900   62386 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0912 23:07:47.608013   62386 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0912 23:07:47.608164   62386 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0912 23:07:47.608343   62386 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0912 23:07:47.608510   62386 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0912 23:07:47.608593   62386 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0912 23:07:47.608669   62386 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0912 23:07:47.608742   62386 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0912 23:07:47.608833   62386 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0912 23:07:47.608899   62386 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0912 23:07:47.608932   62386 kubeadm.go:310] [certs] Using the existing "sa" key
	I0912 23:07:47.608991   62386 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0912 23:07:47.609042   62386 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0912 23:07:47.609118   62386 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0912 23:07:47.609216   62386 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0912 23:07:47.609310   62386 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0912 23:07:47.609448   62386 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0912 23:07:47.609540   62386 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0912 23:07:47.609604   62386 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0912 23:07:47.609731   62386 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0912 23:07:47.611516   62386 out.go:235]   - Booting up control plane ...
	I0912 23:07:47.611622   62386 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0912 23:07:47.611724   62386 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0912 23:07:47.611811   62386 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0912 23:07:47.611912   62386 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0912 23:07:47.612092   62386 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0912 23:07:47.612156   62386 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0912 23:07:47.612234   62386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0912 23:07:47.612485   62386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0912 23:07:47.612557   62386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0912 23:07:47.612746   62386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0912 23:07:47.612836   62386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0912 23:07:47.613060   62386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0912 23:07:47.613145   62386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0912 23:07:47.613347   62386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0912 23:07:47.613406   62386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0912 23:07:47.613573   62386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0912 23:07:47.613583   62386 kubeadm.go:310] 
	I0912 23:07:47.613646   62386 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0912 23:07:47.613700   62386 kubeadm.go:310] 		timed out waiting for the condition
	I0912 23:07:47.613712   62386 kubeadm.go:310] 
	I0912 23:07:47.613756   62386 kubeadm.go:310] 	This error is likely caused by:
	I0912 23:07:47.613804   62386 kubeadm.go:310] 		- The kubelet is not running
	I0912 23:07:47.613912   62386 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0912 23:07:47.613924   62386 kubeadm.go:310] 
	I0912 23:07:47.614027   62386 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0912 23:07:47.614062   62386 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0912 23:07:47.614110   62386 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0912 23:07:47.614123   62386 kubeadm.go:310] 
	I0912 23:07:47.614256   62386 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0912 23:07:47.614381   62386 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0912 23:07:47.614393   62386 kubeadm.go:310] 
	I0912 23:07:47.614480   62386 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0912 23:07:47.614626   62386 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0912 23:07:47.614724   62386 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0912 23:07:47.614825   62386 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0912 23:07:47.614854   62386 kubeadm.go:310] 
	W0912 23:07:47.614957   62386 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
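The repeated "[kubelet-check]" lines in the failure above are kubeadm polling the kubelet's local healthz endpoint on 127.0.0.1:10248; "connection refused" means the kubelet never started listening. A sketch reproducing that probe (the endpoint and timing come from the kubeadm output above; the retry count is illustrative):

package main

import (
	"fmt"
	"net/http"
	"time"
)

// kubeletHealthy performs the same check as kubeadm's kubelet-check phase.
func kubeletHealthy() error {
	resp, err := http.Get("http://localhost:10248/healthz")
	if err != nil {
		return err // e.g. dial tcp 127.0.0.1:10248: connect: connection refused
	}
	resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("kubelet healthz returned %d", resp.StatusCode)
	}
	return nil
}

func main() {
	for i := 0; i < 5; i++ {
		if err := kubeletHealthy(); err != nil {
			fmt.Println("kubelet not healthy yet:", err)
			time.Sleep(5 * time.Second)
			continue
		}
		fmt.Println("kubelet is healthy")
		return
	}
}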
	
	I0912 23:07:47.615000   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0912 23:07:48.085695   62386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 23:07:48.100416   62386 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0912 23:07:48.109607   62386 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0912 23:07:48.109635   62386 kubeadm.go:157] found existing configuration files:
	
	I0912 23:07:48.109686   62386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0912 23:07:48.118174   62386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0912 23:07:48.118235   62386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0912 23:07:48.127100   62386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0912 23:07:48.135945   62386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0912 23:07:48.136006   62386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0912 23:07:48.145057   62386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0912 23:07:48.153832   62386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0912 23:07:48.153899   62386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0912 23:07:48.163261   62386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0912 23:07:48.172155   62386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0912 23:07:48.172208   62386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
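The cleanup above checks each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes any file that does not reference it before `kubeadm init` is retried. A sketch of that logic, with the file paths and endpoint string taken from the log lines (the real code runs grep/rm over SSH rather than reading files locally):

package main

import (
	"fmt"
	"os"
	"strings"
)

// cleanupStaleConfigs removes kubeconfig files that do not point at the
// expected control-plane endpoint, mirroring the grep/rm sequence above.
func cleanupStaleConfigs(endpoint string, files ...string) {
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil {
			continue // file absent, nothing to clean up (as in the log above)
		}
		if !strings.Contains(string(data), endpoint) {
			fmt.Printf("%s does not reference %s, removing\n", f, endpoint)
			_ = os.Remove(f)
		}
	}
}

func main() {
	cleanupStaleConfigs("https://control-plane.minikube.internal:8443",
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	)
}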
	I0912 23:07:48.181592   62386 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0912 23:07:48.253671   62386 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0912 23:07:48.253728   62386 kubeadm.go:310] [preflight] Running pre-flight checks
	I0912 23:07:48.394463   62386 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0912 23:07:48.394622   62386 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0912 23:07:48.394773   62386 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0912 23:07:48.581336   62386 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0912 23:07:48.583286   62386 out.go:235]   - Generating certificates and keys ...
	I0912 23:07:48.583391   62386 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0912 23:07:48.583461   62386 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0912 23:07:48.583576   62386 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0912 23:07:48.583668   62386 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0912 23:07:48.583751   62386 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0912 23:07:48.583830   62386 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0912 23:07:48.583935   62386 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0912 23:07:48.584060   62386 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0912 23:07:48.584176   62386 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0912 23:07:48.584291   62386 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0912 23:07:48.584349   62386 kubeadm.go:310] [certs] Using the existing "sa" key
	I0912 23:07:48.584433   62386 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0912 23:07:48.823726   62386 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0912 23:07:49.148359   62386 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0912 23:07:49.679842   62386 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0912 23:07:50.116403   62386 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0912 23:07:50.137409   62386 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0912 23:07:50.137512   62386 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0912 23:07:50.137586   62386 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0912 23:07:50.279387   62386 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0912 23:07:50.281202   62386 out.go:235]   - Booting up control plane ...
	I0912 23:07:50.281311   62386 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0912 23:07:50.284914   62386 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0912 23:07:50.285938   62386 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0912 23:07:50.286646   62386 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0912 23:07:50.288744   62386 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0912 23:08:30.291301   62386 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0912 23:08:30.291387   62386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0912 23:08:30.291586   62386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0912 23:08:35.292084   62386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0912 23:08:35.292299   62386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0912 23:08:45.293141   62386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0912 23:08:45.293363   62386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0912 23:09:05.293977   62386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0912 23:09:05.294218   62386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0912 23:09:45.292498   62386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0912 23:09:45.292713   62386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0912 23:09:45.292752   62386 kubeadm.go:310] 
	I0912 23:09:45.292839   62386 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0912 23:09:45.292884   62386 kubeadm.go:310] 		timed out waiting for the condition
	I0912 23:09:45.292892   62386 kubeadm.go:310] 
	I0912 23:09:45.292944   62386 kubeadm.go:310] 	This error is likely caused by:
	I0912 23:09:45.292998   62386 kubeadm.go:310] 		- The kubelet is not running
	I0912 23:09:45.293153   62386 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0912 23:09:45.293165   62386 kubeadm.go:310] 
	I0912 23:09:45.293277   62386 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0912 23:09:45.293333   62386 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0912 23:09:45.293361   62386 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0912 23:09:45.293378   62386 kubeadm.go:310] 
	I0912 23:09:45.293528   62386 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0912 23:09:45.293668   62386 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0912 23:09:45.293679   62386 kubeadm.go:310] 
	I0912 23:09:45.293840   62386 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0912 23:09:45.293962   62386 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0912 23:09:45.294033   62386 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0912 23:09:45.294142   62386 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0912 23:09:45.294155   62386 kubeadm.go:310] 
	I0912 23:09:45.294801   62386 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0912 23:09:45.294914   62386 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0912 23:09:45.295004   62386 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0912 23:09:45.295097   62386 kubeadm.go:394] duration metric: took 7m57.408601522s to StartCluster
	I0912 23:09:45.295168   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:09:45.295233   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:09:45.336726   62386 cri.go:89] found id: ""
	I0912 23:09:45.336767   62386 logs.go:276] 0 containers: []
	W0912 23:09:45.336777   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:09:45.336785   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:09:45.336847   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:09:45.374528   62386 cri.go:89] found id: ""
	I0912 23:09:45.374555   62386 logs.go:276] 0 containers: []
	W0912 23:09:45.374576   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:09:45.374584   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:09:45.374649   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:09:45.409321   62386 cri.go:89] found id: ""
	I0912 23:09:45.409462   62386 logs.go:276] 0 containers: []
	W0912 23:09:45.409497   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:09:45.409508   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:09:45.409582   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:09:45.442204   62386 cri.go:89] found id: ""
	I0912 23:09:45.442228   62386 logs.go:276] 0 containers: []
	W0912 23:09:45.442238   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:09:45.442279   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:09:45.442339   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:09:45.478874   62386 cri.go:89] found id: ""
	I0912 23:09:45.478897   62386 logs.go:276] 0 containers: []
	W0912 23:09:45.478904   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:09:45.478909   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:09:45.478961   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:09:45.520162   62386 cri.go:89] found id: ""
	I0912 23:09:45.520191   62386 logs.go:276] 0 containers: []
	W0912 23:09:45.520199   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:09:45.520205   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:09:45.520251   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:09:45.551580   62386 cri.go:89] found id: ""
	I0912 23:09:45.551611   62386 logs.go:276] 0 containers: []
	W0912 23:09:45.551622   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:09:45.551629   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:09:45.551693   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:09:45.585468   62386 cri.go:89] found id: ""
	I0912 23:09:45.585498   62386 logs.go:276] 0 containers: []
	W0912 23:09:45.585505   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:09:45.585514   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:09:45.585525   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:09:45.640731   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:09:45.640782   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:09:45.656797   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:09:45.656833   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:09:45.735064   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:09:45.735083   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:09:45.735100   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:09:45.848695   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:09:45.848739   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0912 23:09:45.907495   62386 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0912 23:09:45.907561   62386 out.go:270] * 
	W0912 23:09:45.907628   62386 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0912 23:09:45.907646   62386 out.go:270] * 
	W0912 23:09:45.908494   62386 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0912 23:09:45.911502   62386 out.go:201] 
	W0912 23:09:45.912387   62386 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0912 23:09:45.912424   62386 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0912 23:09:45.912442   62386 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0912 23:09:45.913632   62386 out.go:201] 
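	(Editor's note, not part of the captured log: a minimal sketch of acting on the suggestion printed above, assuming the kubelet cgroup-driver mismatch is in fact the cause of the K8S_KUBELET_NOT_RUNNING failure. `<profile>` is a placeholder for the affected minikube profile name, not a value taken from this run.)
	
	  # Retry the start with the kubelet cgroup driver pinned to systemd,
	  # exactly as the "Suggestion:" line above recommends:
	  minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd
	
	  # Then confirm the kubelet actually came up, using the commands the
	  # kubeadm output itself points to:
	  sudo systemctl status kubelet
	  sudo journalctl -xeu kubelet | tail -n 50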
	
	
	==> CRI-O <==
	Sep 12 23:16:36 default-k8s-diff-port-702201 crio[675]: time="2024-09-12 23:16:36.027962292Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726182996027894943,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=61d22dd9-0363-4a64-8c2a-d5e3f04f9a25 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 23:16:36 default-k8s-diff-port-702201 crio[675]: time="2024-09-12 23:16:36.028487538Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b94ad381-1657-4c82-95c4-7b5fc8e1c83e name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 23:16:36 default-k8s-diff-port-702201 crio[675]: time="2024-09-12 23:16:36.028537527Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b94ad381-1657-4c82-95c4-7b5fc8e1c83e name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 23:16:36 default-k8s-diff-port-702201 crio[675]: time="2024-09-12 23:16:36.028803667Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9417a075a215d15881535a74e5318ea52a2b3531b44aff69d0ebe207c55d4919,PodSandboxId:cd1f45061a9f43ac4a43b719885af71ec2cbde1be4f7bc6bbfd6782319a32242,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726182446067649087,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66bc6f77-b774-4478-80d0-a1027802e179,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20706af79dcbaa7d5887f8ef9d050c28cab70a7fe3ebeecf461b8bfd322783ab,PodSandboxId:723f2e0c6feebc367313a6e95d3f3def14527e2f5cc8e278357499d68f091c6e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726182446081370755,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-f5spz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a0f69e9-66eb-4e59-a173-1d6f638e2211,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c97784cdf55f3986d87c6e305563900f3a96c2bba5062a0483f100c926085e93,PodSandboxId:ff0416be2d8f6ea4cfdb4c4f58c9fc79a8e8636ea75cb96cc486a18fea87a2de,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726182445908633772,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qhbgf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 0af4199f-b09c-4ab8-8170-b8941d3ece7a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48f2900449cb249d3be1b5ed896fcc919865fb5352c4c2c3c2900fd81676042c,PodSandboxId:bd0f2307e697fa09018da3eb0a93c51f92d164a3259bcc557fb83103bb3c018f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING
,CreatedAt:1726182445280631876,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mv8ws,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51cb20c3-8445-4ce9-8484-5138f3d0ed57,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e600ca01711fc20a87d3df1c72dbd42d43e8be7591cc12568a99eaa737899e3,PodSandboxId:485522c01c095e00180f0d0841b5c584e28fee37565988b2ad60c2702ecfc43b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:172618243420523124
3,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-702201,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43028e788886f74e0519634e413ab4c9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b71cf03e9cdba6bc875bb84ece81fbe6c0e9b459c6374709445b4c9bb7bb0ebd,PodSandboxId:d7cbf207c6b9c78938a79fce04721431590f37869b08eb550ef72b7ea78da905,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:17261824342
01938076,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-702201,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dda44320478814b6fd88ddd2d5df796e,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:427d0b9d288b2b76c528c890623d31727060834c9aa26564bbe690b6b1f82670,PodSandboxId:ebc86e8a7cafa9197f18c2f43d8ba55b0ff3fd39db7f32cb083b7001c14ffc26,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,Creat
edAt:1726182434174712127,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-702201,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbd4a0e7905e7c213ee5ee3845aa51fb,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c658afc2ee1fb91c89cafc962fb5892d95d31210a1eca7b2568040858991263,PodSandboxId:c968c24fe11a8a3dce3414cd1f543e14d1a8e725b63667f003ccfd588a8c8c3a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726182
434144004604,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-702201,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88145041c3602cf15db12b393eabc4cc,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9585d55eb79b377ad2e35b4ff9f7f963cdf06188855e938f8db345f378246c5d,PodSandboxId:e6ff569f1a42dabfb64a22e4f7e6fa83aa461619c6af1645de7802a4b31daf7b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726182147468721483,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-702201,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbd4a0e7905e7c213ee5ee3845aa51fb,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b94ad381-1657-4c82-95c4-7b5fc8e1c83e name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 23:16:36 default-k8s-diff-port-702201 crio[675]: time="2024-09-12 23:16:36.068117779Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b9a649a2-4bae-4ef1-8982-12e046f53a60 name=/runtime.v1.RuntimeService/Version
	Sep 12 23:16:36 default-k8s-diff-port-702201 crio[675]: time="2024-09-12 23:16:36.068203769Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b9a649a2-4bae-4ef1-8982-12e046f53a60 name=/runtime.v1.RuntimeService/Version
	Sep 12 23:16:36 default-k8s-diff-port-702201 crio[675]: time="2024-09-12 23:16:36.069158684Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=12d3416d-fc05-4ed9-8ba5-d9f78bedac96 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 23:16:36 default-k8s-diff-port-702201 crio[675]: time="2024-09-12 23:16:36.069623218Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726182996069523659,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=12d3416d-fc05-4ed9-8ba5-d9f78bedac96 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 23:16:36 default-k8s-diff-port-702201 crio[675]: time="2024-09-12 23:16:36.070133955Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b41b59aa-955e-47fe-9e76-069874aebadd name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 23:16:36 default-k8s-diff-port-702201 crio[675]: time="2024-09-12 23:16:36.070185113Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b41b59aa-955e-47fe-9e76-069874aebadd name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 23:16:36 default-k8s-diff-port-702201 crio[675]: time="2024-09-12 23:16:36.070429587Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9417a075a215d15881535a74e5318ea52a2b3531b44aff69d0ebe207c55d4919,PodSandboxId:cd1f45061a9f43ac4a43b719885af71ec2cbde1be4f7bc6bbfd6782319a32242,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726182446067649087,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66bc6f77-b774-4478-80d0-a1027802e179,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20706af79dcbaa7d5887f8ef9d050c28cab70a7fe3ebeecf461b8bfd322783ab,PodSandboxId:723f2e0c6feebc367313a6e95d3f3def14527e2f5cc8e278357499d68f091c6e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726182446081370755,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-f5spz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a0f69e9-66eb-4e59-a173-1d6f638e2211,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c97784cdf55f3986d87c6e305563900f3a96c2bba5062a0483f100c926085e93,PodSandboxId:ff0416be2d8f6ea4cfdb4c4f58c9fc79a8e8636ea75cb96cc486a18fea87a2de,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726182445908633772,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qhbgf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 0af4199f-b09c-4ab8-8170-b8941d3ece7a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48f2900449cb249d3be1b5ed896fcc919865fb5352c4c2c3c2900fd81676042c,PodSandboxId:bd0f2307e697fa09018da3eb0a93c51f92d164a3259bcc557fb83103bb3c018f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING
,CreatedAt:1726182445280631876,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mv8ws,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51cb20c3-8445-4ce9-8484-5138f3d0ed57,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e600ca01711fc20a87d3df1c72dbd42d43e8be7591cc12568a99eaa737899e3,PodSandboxId:485522c01c095e00180f0d0841b5c584e28fee37565988b2ad60c2702ecfc43b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:172618243420523124
3,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-702201,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43028e788886f74e0519634e413ab4c9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b71cf03e9cdba6bc875bb84ece81fbe6c0e9b459c6374709445b4c9bb7bb0ebd,PodSandboxId:d7cbf207c6b9c78938a79fce04721431590f37869b08eb550ef72b7ea78da905,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:17261824342
01938076,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-702201,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dda44320478814b6fd88ddd2d5df796e,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:427d0b9d288b2b76c528c890623d31727060834c9aa26564bbe690b6b1f82670,PodSandboxId:ebc86e8a7cafa9197f18c2f43d8ba55b0ff3fd39db7f32cb083b7001c14ffc26,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,Creat
edAt:1726182434174712127,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-702201,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbd4a0e7905e7c213ee5ee3845aa51fb,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c658afc2ee1fb91c89cafc962fb5892d95d31210a1eca7b2568040858991263,PodSandboxId:c968c24fe11a8a3dce3414cd1f543e14d1a8e725b63667f003ccfd588a8c8c3a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726182
434144004604,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-702201,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88145041c3602cf15db12b393eabc4cc,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9585d55eb79b377ad2e35b4ff9f7f963cdf06188855e938f8db345f378246c5d,PodSandboxId:e6ff569f1a42dabfb64a22e4f7e6fa83aa461619c6af1645de7802a4b31daf7b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726182147468721483,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-702201,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbd4a0e7905e7c213ee5ee3845aa51fb,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b41b59aa-955e-47fe-9e76-069874aebadd name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 23:16:36 default-k8s-diff-port-702201 crio[675]: time="2024-09-12 23:16:36.105334851Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3759ed41-a183-47e7-9fd6-6ad3db19bb5a name=/runtime.v1.RuntimeService/Version
	Sep 12 23:16:36 default-k8s-diff-port-702201 crio[675]: time="2024-09-12 23:16:36.105405810Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3759ed41-a183-47e7-9fd6-6ad3db19bb5a name=/runtime.v1.RuntimeService/Version
	Sep 12 23:16:36 default-k8s-diff-port-702201 crio[675]: time="2024-09-12 23:16:36.106472504Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=589f39cf-ed35-41d7-b848-f8218d7bc173 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 23:16:36 default-k8s-diff-port-702201 crio[675]: time="2024-09-12 23:16:36.107006224Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726182996106977606,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=589f39cf-ed35-41d7-b848-f8218d7bc173 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 23:16:36 default-k8s-diff-port-702201 crio[675]: time="2024-09-12 23:16:36.107521731Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1c7992c9-ec37-4076-a244-8eb9744c3f1c name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 23:16:36 default-k8s-diff-port-702201 crio[675]: time="2024-09-12 23:16:36.107645091Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1c7992c9-ec37-4076-a244-8eb9744c3f1c name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 23:16:36 default-k8s-diff-port-702201 crio[675]: time="2024-09-12 23:16:36.107857277Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9417a075a215d15881535a74e5318ea52a2b3531b44aff69d0ebe207c55d4919,PodSandboxId:cd1f45061a9f43ac4a43b719885af71ec2cbde1be4f7bc6bbfd6782319a32242,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726182446067649087,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66bc6f77-b774-4478-80d0-a1027802e179,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20706af79dcbaa7d5887f8ef9d050c28cab70a7fe3ebeecf461b8bfd322783ab,PodSandboxId:723f2e0c6feebc367313a6e95d3f3def14527e2f5cc8e278357499d68f091c6e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726182446081370755,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-f5spz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a0f69e9-66eb-4e59-a173-1d6f638e2211,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c97784cdf55f3986d87c6e305563900f3a96c2bba5062a0483f100c926085e93,PodSandboxId:ff0416be2d8f6ea4cfdb4c4f58c9fc79a8e8636ea75cb96cc486a18fea87a2de,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726182445908633772,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qhbgf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 0af4199f-b09c-4ab8-8170-b8941d3ece7a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48f2900449cb249d3be1b5ed896fcc919865fb5352c4c2c3c2900fd81676042c,PodSandboxId:bd0f2307e697fa09018da3eb0a93c51f92d164a3259bcc557fb83103bb3c018f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING
,CreatedAt:1726182445280631876,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mv8ws,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51cb20c3-8445-4ce9-8484-5138f3d0ed57,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e600ca01711fc20a87d3df1c72dbd42d43e8be7591cc12568a99eaa737899e3,PodSandboxId:485522c01c095e00180f0d0841b5c584e28fee37565988b2ad60c2702ecfc43b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:172618243420523124
3,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-702201,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43028e788886f74e0519634e413ab4c9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b71cf03e9cdba6bc875bb84ece81fbe6c0e9b459c6374709445b4c9bb7bb0ebd,PodSandboxId:d7cbf207c6b9c78938a79fce04721431590f37869b08eb550ef72b7ea78da905,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:17261824342
01938076,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-702201,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dda44320478814b6fd88ddd2d5df796e,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:427d0b9d288b2b76c528c890623d31727060834c9aa26564bbe690b6b1f82670,PodSandboxId:ebc86e8a7cafa9197f18c2f43d8ba55b0ff3fd39db7f32cb083b7001c14ffc26,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,Creat
edAt:1726182434174712127,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-702201,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbd4a0e7905e7c213ee5ee3845aa51fb,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c658afc2ee1fb91c89cafc962fb5892d95d31210a1eca7b2568040858991263,PodSandboxId:c968c24fe11a8a3dce3414cd1f543e14d1a8e725b63667f003ccfd588a8c8c3a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726182
434144004604,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-702201,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88145041c3602cf15db12b393eabc4cc,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9585d55eb79b377ad2e35b4ff9f7f963cdf06188855e938f8db345f378246c5d,PodSandboxId:e6ff569f1a42dabfb64a22e4f7e6fa83aa461619c6af1645de7802a4b31daf7b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726182147468721483,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-702201,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbd4a0e7905e7c213ee5ee3845aa51fb,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1c7992c9-ec37-4076-a244-8eb9744c3f1c name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 23:16:36 default-k8s-diff-port-702201 crio[675]: time="2024-09-12 23:16:36.140018150Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2dab1214-a3e3-4e99-835e-7f91dffd89bf name=/runtime.v1.RuntimeService/Version
	Sep 12 23:16:36 default-k8s-diff-port-702201 crio[675]: time="2024-09-12 23:16:36.140091749Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2dab1214-a3e3-4e99-835e-7f91dffd89bf name=/runtime.v1.RuntimeService/Version
	Sep 12 23:16:36 default-k8s-diff-port-702201 crio[675]: time="2024-09-12 23:16:36.141201542Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9c28ac41-9449-4a07-8c55-fc01dfea6581 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 23:16:36 default-k8s-diff-port-702201 crio[675]: time="2024-09-12 23:16:36.141670056Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726182996141644778,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9c28ac41-9449-4a07-8c55-fc01dfea6581 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 23:16:36 default-k8s-diff-port-702201 crio[675]: time="2024-09-12 23:16:36.142227146Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=753fc4c8-42b4-48f8-912f-21129788e3ff name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 23:16:36 default-k8s-diff-port-702201 crio[675]: time="2024-09-12 23:16:36.142283174Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=753fc4c8-42b4-48f8-912f-21129788e3ff name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 23:16:36 default-k8s-diff-port-702201 crio[675]: time="2024-09-12 23:16:36.142540101Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9417a075a215d15881535a74e5318ea52a2b3531b44aff69d0ebe207c55d4919,PodSandboxId:cd1f45061a9f43ac4a43b719885af71ec2cbde1be4f7bc6bbfd6782319a32242,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726182446067649087,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66bc6f77-b774-4478-80d0-a1027802e179,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20706af79dcbaa7d5887f8ef9d050c28cab70a7fe3ebeecf461b8bfd322783ab,PodSandboxId:723f2e0c6feebc367313a6e95d3f3def14527e2f5cc8e278357499d68f091c6e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726182446081370755,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-f5spz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a0f69e9-66eb-4e59-a173-1d6f638e2211,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c97784cdf55f3986d87c6e305563900f3a96c2bba5062a0483f100c926085e93,PodSandboxId:ff0416be2d8f6ea4cfdb4c4f58c9fc79a8e8636ea75cb96cc486a18fea87a2de,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726182445908633772,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qhbgf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 0af4199f-b09c-4ab8-8170-b8941d3ece7a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48f2900449cb249d3be1b5ed896fcc919865fb5352c4c2c3c2900fd81676042c,PodSandboxId:bd0f2307e697fa09018da3eb0a93c51f92d164a3259bcc557fb83103bb3c018f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING
,CreatedAt:1726182445280631876,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mv8ws,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51cb20c3-8445-4ce9-8484-5138f3d0ed57,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e600ca01711fc20a87d3df1c72dbd42d43e8be7591cc12568a99eaa737899e3,PodSandboxId:485522c01c095e00180f0d0841b5c584e28fee37565988b2ad60c2702ecfc43b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:172618243420523124
3,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-702201,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43028e788886f74e0519634e413ab4c9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b71cf03e9cdba6bc875bb84ece81fbe6c0e9b459c6374709445b4c9bb7bb0ebd,PodSandboxId:d7cbf207c6b9c78938a79fce04721431590f37869b08eb550ef72b7ea78da905,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:17261824342
01938076,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-702201,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dda44320478814b6fd88ddd2d5df796e,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:427d0b9d288b2b76c528c890623d31727060834c9aa26564bbe690b6b1f82670,PodSandboxId:ebc86e8a7cafa9197f18c2f43d8ba55b0ff3fd39db7f32cb083b7001c14ffc26,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,Creat
edAt:1726182434174712127,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-702201,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbd4a0e7905e7c213ee5ee3845aa51fb,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c658afc2ee1fb91c89cafc962fb5892d95d31210a1eca7b2568040858991263,PodSandboxId:c968c24fe11a8a3dce3414cd1f543e14d1a8e725b63667f003ccfd588a8c8c3a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726182
434144004604,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-702201,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88145041c3602cf15db12b393eabc4cc,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9585d55eb79b377ad2e35b4ff9f7f963cdf06188855e938f8db345f378246c5d,PodSandboxId:e6ff569f1a42dabfb64a22e4f7e6fa83aa461619c6af1645de7802a4b31daf7b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726182147468721483,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-702201,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbd4a0e7905e7c213ee5ee3845aa51fb,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=753fc4c8-42b4-48f8-912f-21129788e3ff name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	20706af79dcba       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   723f2e0c6feeb       coredns-7c65d6cfc9-f5spz
	9417a075a215d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   cd1f45061a9f4       storage-provisioner
	c97784cdf55f3       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   ff0416be2d8f6       coredns-7c65d6cfc9-qhbgf
	48f2900449cb2       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   9 minutes ago       Running             kube-proxy                0                   bd0f2307e697f       kube-proxy-mv8ws
	8e600ca01711f       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   9 minutes ago       Running             kube-scheduler            2                   485522c01c095       kube-scheduler-default-k8s-diff-port-702201
	b71cf03e9cdba       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   9 minutes ago       Running             kube-controller-manager   2                   d7cbf207c6b9c       kube-controller-manager-default-k8s-diff-port-702201
	427d0b9d288b2       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   9 minutes ago       Running             kube-apiserver            2                   ebc86e8a7cafa       kube-apiserver-default-k8s-diff-port-702201
	9c658afc2ee1f       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 minutes ago       Running             etcd                      2                   c968c24fe11a8       etcd-default-k8s-diff-port-702201
	9585d55eb79b3       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   14 minutes ago      Exited              kube-apiserver            1                   e6ff569f1a42d       kube-apiserver-default-k8s-diff-port-702201
	
	
	==> coredns [20706af79dcbaa7d5887f8ef9d050c28cab70a7fe3ebeecf461b8bfd322783ab] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [c97784cdf55f3986d87c6e305563900f3a96c2bba5062a0483f100c926085e93] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-702201
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-702201
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f6bc674a17941874d4e5b792b09c1791d30622b8
	                    minikube.k8s.io/name=default-k8s-diff-port-702201
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_12T23_07_20_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 12 Sep 2024 23:07:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-702201
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 12 Sep 2024 23:16:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 12 Sep 2024 23:12:36 +0000   Thu, 12 Sep 2024 23:07:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 12 Sep 2024 23:12:36 +0000   Thu, 12 Sep 2024 23:07:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 12 Sep 2024 23:12:36 +0000   Thu, 12 Sep 2024 23:07:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 12 Sep 2024 23:12:36 +0000   Thu, 12 Sep 2024 23:07:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.214
	  Hostname:    default-k8s-diff-port-702201
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d1296c84ac184068bb634b575db84e62
	  System UUID:                d1296c84-ac18-4068-bb63-4b575db84e62
	  Boot ID:                    c844185b-24b6-480f-b865-8643f988a7a3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-f5spz                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m12s
	  kube-system                 coredns-7c65d6cfc9-qhbgf                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m12s
	  kube-system                 etcd-default-k8s-diff-port-702201                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m18s
	  kube-system                 kube-apiserver-default-k8s-diff-port-702201             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-702201    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 kube-proxy-mv8ws                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m12s
	  kube-system                 kube-scheduler-default-k8s-diff-port-702201             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 metrics-server-6867b74b74-w2dvn                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m11s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m10s  kube-proxy       
	  Normal  Starting                 9m17s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m17s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m17s  kubelet          Node default-k8s-diff-port-702201 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m17s  kubelet          Node default-k8s-diff-port-702201 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m17s  kubelet          Node default-k8s-diff-port-702201 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m13s  node-controller  Node default-k8s-diff-port-702201 event: Registered Node default-k8s-diff-port-702201 in Controller
	
	
	==> dmesg <==
	[  +0.051308] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038190] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.963271] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.998418] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.573381] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.083325] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.060016] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057841] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.207341] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.151332] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.312200] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +3.942546] systemd-fstab-generator[758]: Ignoring "noauto" option for root device
	[  +1.790725] systemd-fstab-generator[879]: Ignoring "noauto" option for root device
	[  +0.067278] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.541396] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.572445] kauditd_printk_skb: 85 callbacks suppressed
	[Sep12 23:07] kauditd_printk_skb: 2 callbacks suppressed
	[  +1.582149] systemd-fstab-generator[2517]: Ignoring "noauto" option for root device
	[  +4.383574] kauditd_printk_skb: 58 callbacks suppressed
	[  +1.663713] systemd-fstab-generator[2841]: Ignoring "noauto" option for root device
	[  +4.901127] systemd-fstab-generator[2951]: Ignoring "noauto" option for root device
	[  +0.097232] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.959392] kauditd_printk_skb: 84 callbacks suppressed
	
	
	==> etcd [9c658afc2ee1fb91c89cafc962fb5892d95d31210a1eca7b2568040858991263] <==
	{"level":"info","ts":"2024-09-12T23:07:14.426741Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-12T23:07:14.427313Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.214:2380"}
	{"level":"info","ts":"2024-09-12T23:07:14.427395Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.214:2380"}
	{"level":"info","ts":"2024-09-12T23:07:14.429011Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"9910392473c15cf3","initial-advertise-peer-urls":["https://192.168.39.214:2380"],"listen-peer-urls":["https://192.168.39.214:2380"],"advertise-client-urls":["https://192.168.39.214:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.214:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-12T23:07:14.429035Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-12T23:07:14.583858Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9910392473c15cf3 is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-12T23:07:14.583964Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9910392473c15cf3 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-12T23:07:14.584050Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9910392473c15cf3 received MsgPreVoteResp from 9910392473c15cf3 at term 1"}
	{"level":"info","ts":"2024-09-12T23:07:14.584087Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9910392473c15cf3 became candidate at term 2"}
	{"level":"info","ts":"2024-09-12T23:07:14.584112Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9910392473c15cf3 received MsgVoteResp from 9910392473c15cf3 at term 2"}
	{"level":"info","ts":"2024-09-12T23:07:14.584139Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9910392473c15cf3 became leader at term 2"}
	{"level":"info","ts":"2024-09-12T23:07:14.584165Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9910392473c15cf3 elected leader 9910392473c15cf3 at term 2"}
	{"level":"info","ts":"2024-09-12T23:07:14.586226Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"9910392473c15cf3","local-member-attributes":"{Name:default-k8s-diff-port-702201 ClientURLs:[https://192.168.39.214:2379]}","request-path":"/0/members/9910392473c15cf3/attributes","cluster-id":"437e955a662fe33","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-12T23:07:14.586300Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-12T23:07:14.586707Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-12T23:07:14.588367Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-12T23:07:14.590645Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-12T23:07:14.590678Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-12T23:07:14.591264Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-12T23:07:14.592051Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.214:2379"}
	{"level":"info","ts":"2024-09-12T23:07:14.597749Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-12T23:07:14.607688Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"437e955a662fe33","local-member-id":"9910392473c15cf3","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-12T23:07:14.607820Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-12T23:07:14.607870Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-12T23:07:14.616597Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 23:16:36 up 14 min,  0 users,  load average: 0.27, 0.27, 0.19
	Linux default-k8s-diff-port-702201 5.10.207 #1 SMP Thu Sep 12 19:03:33 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [427d0b9d288b2b76c528c890623d31727060834c9aa26564bbe690b6b1f82670] <==
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0912 23:12:17.968298       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0912 23:12:17.968758       1 handler_proxy.go:99] no RequestInfo found in the context
	E0912 23:12:17.968812       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0912 23:12:17.970171       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0912 23:13:17.969011       1 handler_proxy.go:99] no RequestInfo found in the context
	E0912 23:13:17.969383       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0912 23:13:17.970443       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0912 23:13:17.970529       1 handler_proxy.go:99] no RequestInfo found in the context
	E0912 23:13:17.970614       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0912 23:13:17.971795       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0912 23:15:17.971247       1 handler_proxy.go:99] no RequestInfo found in the context
	E0912 23:15:17.971753       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0912 23:15:17.972325       1 handler_proxy.go:99] no RequestInfo found in the context
	E0912 23:15:17.972377       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0912 23:15:17.973475       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0912 23:15:17.973599       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [9585d55eb79b377ad2e35b4ff9f7f963cdf06188855e938f8db345f378246c5d] <==
	W0912 23:07:07.475746       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 23:07:07.558922       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 23:07:07.559005       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 23:07:07.561349       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 23:07:07.573085       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 23:07:07.575808       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 23:07:07.580299       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 23:07:07.584854       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 23:07:07.589354       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 23:07:07.615853       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 23:07:07.619384       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 23:07:07.620787       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 23:07:07.652393       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 23:07:07.707968       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 23:07:07.773660       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 23:07:07.837176       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 23:07:07.847118       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 23:07:07.877923       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 23:07:07.906027       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 23:07:07.935989       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 23:07:08.117958       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 23:07:08.225496       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 23:07:08.268944       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 23:07:08.282999       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 23:07:11.537940       1 logging.go:55] [core] [Channel #208 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [b71cf03e9cdba6bc875bb84ece81fbe6c0e9b459c6374709445b4c9bb7bb0ebd] <==
	E0912 23:11:23.973899       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0912 23:11:24.403339       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0912 23:11:53.980267       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0912 23:11:54.411313       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0912 23:12:23.987850       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0912 23:12:24.418977       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0912 23:12:36.282232       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-702201"
	E0912 23:12:53.993791       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0912 23:12:54.427141       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0912 23:13:04.531725       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="319.424µs"
	I0912 23:13:16.531372       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="57.099µs"
	E0912 23:13:24.000311       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0912 23:13:24.434613       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0912 23:13:54.006615       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0912 23:13:54.445291       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0912 23:14:24.012712       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0912 23:14:24.453217       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0912 23:14:54.020269       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0912 23:14:54.460137       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0912 23:15:24.027578       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0912 23:15:24.468813       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0912 23:15:54.033270       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0912 23:15:54.477924       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0912 23:16:24.041254       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0912 23:16:24.486060       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [48f2900449cb249d3be1b5ed896fcc919865fb5352c4c2c3c2900fd81676042c] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0912 23:07:26.079769       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0912 23:07:26.103404       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.214"]
	E0912 23:07:26.103493       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0912 23:07:26.407160       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0912 23:07:26.407202       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0912 23:07:26.407229       1 server_linux.go:169] "Using iptables Proxier"
	I0912 23:07:26.409868       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0912 23:07:26.410257       1 server.go:483] "Version info" version="v1.31.1"
	I0912 23:07:26.410334       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0912 23:07:26.412278       1 config.go:199] "Starting service config controller"
	I0912 23:07:26.412380       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0912 23:07:26.412432       1 config.go:105] "Starting endpoint slice config controller"
	I0912 23:07:26.412449       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0912 23:07:26.413099       1 config.go:328] "Starting node config controller"
	I0912 23:07:26.413145       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0912 23:07:26.512604       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0912 23:07:26.512633       1 shared_informer.go:320] Caches are synced for service config
	I0912 23:07:26.513184       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [8e600ca01711fc20a87d3df1c72dbd42d43e8be7591cc12568a99eaa737899e3] <==
	W0912 23:07:16.978485       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0912 23:07:16.978595       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0912 23:07:16.979812       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0912 23:07:16.979901       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0912 23:07:17.858080       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0912 23:07:17.859051       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0912 23:07:17.919149       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0912 23:07:17.919397       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0912 23:07:17.919712       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0912 23:07:17.920278       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0912 23:07:17.924180       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0912 23:07:17.924227       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0912 23:07:17.934853       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0912 23:07:17.934901       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0912 23:07:17.976104       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0912 23:07:17.976175       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0912 23:07:18.209923       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0912 23:07:18.209973       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0912 23:07:18.294398       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0912 23:07:18.294456       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0912 23:07:18.309240       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0912 23:07:18.309291       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0912 23:07:18.395119       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0912 23:07:18.395217       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0912 23:07:20.469428       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 12 23:15:29 default-k8s-diff-port-702201 kubelet[2848]: E0912 23:15:29.722380    2848 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726182929722145972,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 23:15:29 default-k8s-diff-port-702201 kubelet[2848]: E0912 23:15:29.722405    2848 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726182929722145972,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 23:15:33 default-k8s-diff-port-702201 kubelet[2848]: E0912 23:15:33.515891    2848 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-w2dvn" podUID="778a4742-5b80-4485-956e-8f169e6dcf8f"
	Sep 12 23:15:39 default-k8s-diff-port-702201 kubelet[2848]: E0912 23:15:39.724426    2848 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726182939724092268,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 23:15:39 default-k8s-diff-port-702201 kubelet[2848]: E0912 23:15:39.724486    2848 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726182939724092268,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 23:15:45 default-k8s-diff-port-702201 kubelet[2848]: E0912 23:15:45.515444    2848 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-w2dvn" podUID="778a4742-5b80-4485-956e-8f169e6dcf8f"
	Sep 12 23:15:49 default-k8s-diff-port-702201 kubelet[2848]: E0912 23:15:49.726289    2848 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726182949725842443,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 23:15:49 default-k8s-diff-port-702201 kubelet[2848]: E0912 23:15:49.726698    2848 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726182949725842443,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 23:15:58 default-k8s-diff-port-702201 kubelet[2848]: E0912 23:15:58.515430    2848 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-w2dvn" podUID="778a4742-5b80-4485-956e-8f169e6dcf8f"
	Sep 12 23:15:59 default-k8s-diff-port-702201 kubelet[2848]: E0912 23:15:59.727728    2848 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726182959727415670,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 23:15:59 default-k8s-diff-port-702201 kubelet[2848]: E0912 23:15:59.727793    2848 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726182959727415670,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 23:16:09 default-k8s-diff-port-702201 kubelet[2848]: E0912 23:16:09.729233    2848 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726182969728976587,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 23:16:09 default-k8s-diff-port-702201 kubelet[2848]: E0912 23:16:09.729312    2848 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726182969728976587,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 23:16:11 default-k8s-diff-port-702201 kubelet[2848]: E0912 23:16:11.515070    2848 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-w2dvn" podUID="778a4742-5b80-4485-956e-8f169e6dcf8f"
	Sep 12 23:16:19 default-k8s-diff-port-702201 kubelet[2848]: E0912 23:16:19.531036    2848 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 12 23:16:19 default-k8s-diff-port-702201 kubelet[2848]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 12 23:16:19 default-k8s-diff-port-702201 kubelet[2848]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 12 23:16:19 default-k8s-diff-port-702201 kubelet[2848]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 12 23:16:19 default-k8s-diff-port-702201 kubelet[2848]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 12 23:16:19 default-k8s-diff-port-702201 kubelet[2848]: E0912 23:16:19.730859    2848 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726182979730380068,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 23:16:19 default-k8s-diff-port-702201 kubelet[2848]: E0912 23:16:19.731018    2848 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726182979730380068,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 23:16:22 default-k8s-diff-port-702201 kubelet[2848]: E0912 23:16:22.516117    2848 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-w2dvn" podUID="778a4742-5b80-4485-956e-8f169e6dcf8f"
	Sep 12 23:16:29 default-k8s-diff-port-702201 kubelet[2848]: E0912 23:16:29.732979    2848 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726182989732388865,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 23:16:29 default-k8s-diff-port-702201 kubelet[2848]: E0912 23:16:29.733858    2848 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726182989732388865,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 23:16:34 default-k8s-diff-port-702201 kubelet[2848]: E0912 23:16:34.516200    2848 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-w2dvn" podUID="778a4742-5b80-4485-956e-8f169e6dcf8f"
	
	
	==> storage-provisioner [9417a075a215d15881535a74e5318ea52a2b3531b44aff69d0ebe207c55d4919] <==
	I0912 23:07:26.355481       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0912 23:07:26.410634       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0912 23:07:26.410704       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0912 23:07:26.427510       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0912 23:07:26.427926       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-702201_4ac8217f-4748-4046-bd95-d8a4314d0af6!
	I0912 23:07:26.429603       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d7d48e53-c995-4c9e-a3c1-270a7c2c2207", APIVersion:"v1", ResourceVersion:"394", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-702201_4ac8217f-4748-4046-bd95-d8a4314d0af6 became leader
	I0912 23:07:26.528993       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-702201_4ac8217f-4748-4046-bd95-d8a4314d0af6!
	

                                                
                                                
-- /stdout --
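A note on the captured logs above: the kube-scheduler's "forbidden" reflector warnings appear to be startup noise that stops once its caches sync (the last warning is at 23:07:18, "Caches are synced" at 23:07:20), while the kubelet keeps reporting an ImagePullBackOff for metrics-server because its image reference points at the unresolvable fake.domain registry, which looks deliberate in this test profile. As a rough illustration only (not code from the minikube test suite; the kubeconfig path is an assumed placeholder), a client-go snippet like the following would surface that waiting reason directly:

	// waitingreasons.go: minimal sketch that prints the waiting reason of every
	// container in kube-system; for metrics-server-6867b74b74-w2dvn this would
	// report ImagePullBackOff with the fake.domain image in its message.
	package main

	import (
		"context"
		"fmt"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumed placeholder: point this at the kubeconfig written for the profile.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			log.Fatal(err)
		}
		for _, p := range pods.Items {
			for _, st := range p.Status.ContainerStatuses {
				if st.State.Waiting != nil {
					fmt.Printf("%s/%s: %s (%s)\n", p.Name, st.Name, st.State.Waiting.Reason, st.State.Waiting.Message)
				}
			}
		}
	}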
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-702201 -n default-k8s-diff-port-702201
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-702201 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-w2dvn
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-702201 describe pod metrics-server-6867b74b74-w2dvn
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-702201 describe pod metrics-server-6867b74b74-w2dvn: exit status 1 (65.583239ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-w2dvn" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-702201 describe pod metrics-server-6867b74b74-w2dvn: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.26s)
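For context on the post-mortem above: helpers_test.go first listed pods with --field-selector=status.phase!=Running and found metrics-server-6867b74b74-w2dvn, but by the time it ran kubectl describe the pod had apparently already been deleted or replaced, so the describe exited with status 1 (NotFound). A minimal client-go sketch of those same two steps, with that race tolerated explicitly, might look like this (not code from the suite; the kubeconfig path is an assumed placeholder):

	// postmortem.go: sketch mirroring the helpers' post-mortem flow: list
	// non-running pods across all namespaces, then re-fetch each one while
	// tolerating the NotFound race seen above.
	package main

	import (
		"context"
		"fmt"
		"log"

		apierrors "k8s.io/apimachinery/pkg/api/errors"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // assumed placeholder
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		// Equivalent of: kubectl get po -A --field-selector=status.phase!=Running
		pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(),
			metav1.ListOptions{FieldSelector: "status.phase!=Running"})
		if err != nil {
			log.Fatal(err)
		}
		for _, p := range pods.Items {
			// Re-fetch each pod; it may have disappeared since the list call,
			// which is why the kubectl describe above returned NotFound.
			fresh, err := cs.CoreV1().Pods(p.Namespace).Get(context.TODO(), p.Name, metav1.GetOptions{})
			if apierrors.IsNotFound(err) {
				fmt.Printf("%s/%s disappeared between list and get\n", p.Namespace, p.Name)
				continue
			}
			if err != nil {
				log.Fatal(err)
			}
			fmt.Printf("%s/%s is %s\n", fresh.Namespace, fresh.Name, fresh.Status.Phase)
		}
	}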

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.51s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
E0912 23:10:05.703519   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/functional-657409/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
E0912 23:10:10.274697   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
(the warning above appeared 45 consecutive times while the apiserver at 192.168.61.69:8443 remained unreachable)
E0912 23:12:07.199815   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
(the warning above appeared 138 consecutive times while the apiserver at 192.168.61.69:8443 continued to refuse connections)
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
E0912 23:15:05.703940   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/functional-657409/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
[The WARNING above was emitted 120 more times after the cert_rotation error: the poll for kubernetes-dashboard pods kept failing with the same "connection refused" error against 192.168.61.69:8443 for the remainder of the wait.]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
E0912 23:17:07.200078   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
[the warning above repeated identically 101 more times]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-642238 -n old-k8s-version-642238
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-642238 -n old-k8s-version-642238: exit status 2 (228.499344ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-642238" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
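[editor's note] The warnings above come from the test helper repeatedly listing dashboard pods by label while the apiserver refused connections on 192.168.61.69:8443. As a rough illustration only (not the harness's actual code; the kubeconfig path and retry count below are assumptions), an equivalent client-go poll looks roughly like this:

// dashboard_poll.go: hedged sketch of the kind of label-selector poll logged above.
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Illustrative placeholder path; the real harness derives this from the minikube profile.
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Poll the dashboard pods by label, the same query shown in the warnings above.
	for i := 0; i < 5; i++ { // assumed retry count for illustration
		pods, err := clientset.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
		if err != nil {
			// With the apiserver stopped, this surfaces as "connection refused", as in the log.
			fmt.Println("pod list failed:", err)
		} else {
			fmt.Printf("found %d dashboard pod(s)\n", len(pods.Items))
		}
		time.Sleep(2 * time.Second)
	}
}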
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-642238 -n old-k8s-version-642238
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-642238 -n old-k8s-version-642238: exit status 2 (230.324465ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-642238 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-642238 logs -n 25: (1.630823095s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p embed-certs-378112            | embed-certs-378112           | jenkins | v1.34.0 | 12 Sep 24 22:54 UTC | 12 Sep 24 22:54 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-378112                                  | embed-certs-378112           | jenkins | v1.34.0 | 12 Sep 24 22:54 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-837491             | newest-cni-837491            | jenkins | v1.34.0 | 12 Sep 24 22:55 UTC | 12 Sep 24 22:55 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-837491                                   | newest-cni-837491            | jenkins | v1.34.0 | 12 Sep 24 22:55 UTC | 12 Sep 24 22:55 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-837491                  | newest-cni-837491            | jenkins | v1.34.0 | 12 Sep 24 22:55 UTC | 12 Sep 24 22:55 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-837491 --memory=2200 --alsologtostderr   | newest-cni-837491            | jenkins | v1.34.0 | 12 Sep 24 22:55 UTC | 12 Sep 24 22:55 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| image   | newest-cni-837491 image list                           | newest-cni-837491            | jenkins | v1.34.0 | 12 Sep 24 22:55 UTC | 12 Sep 24 22:55 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-837491                                   | newest-cni-837491            | jenkins | v1.34.0 | 12 Sep 24 22:55 UTC | 12 Sep 24 22:55 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-837491                                   | newest-cni-837491            | jenkins | v1.34.0 | 12 Sep 24 22:55 UTC | 12 Sep 24 22:55 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-837491                                   | newest-cni-837491            | jenkins | v1.34.0 | 12 Sep 24 22:55 UTC | 12 Sep 24 22:55 UTC |
	| delete  | -p newest-cni-837491                                   | newest-cni-837491            | jenkins | v1.34.0 | 12 Sep 24 22:55 UTC | 12 Sep 24 22:55 UTC |
	| delete  | -p                                                     | disable-driver-mounts-457722 | jenkins | v1.34.0 | 12 Sep 24 22:55 UTC | 12 Sep 24 22:55 UTC |
	|         | disable-driver-mounts-457722                           |                              |         |         |                     |                     |
	| start   | -p no-preload-380092                                   | no-preload-380092            | jenkins | v1.34.0 | 12 Sep 24 22:55 UTC | 12 Sep 24 22:57 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-702201       | default-k8s-diff-port-702201 | jenkins | v1.34.0 | 12 Sep 24 22:56 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-702201 | jenkins | v1.34.0 | 12 Sep 24 22:56 UTC | 12 Sep 24 23:07 UTC |
	|         | default-k8s-diff-port-702201                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-642238        | old-k8s-version-642238       | jenkins | v1.34.0 | 12 Sep 24 22:56 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-378112                 | embed-certs-378112           | jenkins | v1.34.0 | 12 Sep 24 22:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-378112                                  | embed-certs-378112           | jenkins | v1.34.0 | 12 Sep 24 22:57 UTC | 12 Sep 24 23:05 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-380092             | no-preload-380092            | jenkins | v1.34.0 | 12 Sep 24 22:57 UTC | 12 Sep 24 22:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-380092                                   | no-preload-380092            | jenkins | v1.34.0 | 12 Sep 24 22:57 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-642238                              | old-k8s-version-642238       | jenkins | v1.34.0 | 12 Sep 24 22:58 UTC | 12 Sep 24 22:58 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-642238             | old-k8s-version-642238       | jenkins | v1.34.0 | 12 Sep 24 22:58 UTC | 12 Sep 24 22:58 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-642238                              | old-k8s-version-642238       | jenkins | v1.34.0 | 12 Sep 24 22:58 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-380092                  | no-preload-380092            | jenkins | v1.34.0 | 12 Sep 24 23:00 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-380092                                   | no-preload-380092            | jenkins | v1.34.0 | 12 Sep 24 23:00 UTC | 12 Sep 24 23:07 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/12 23:00:21
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0912 23:00:21.889769   62943 out.go:345] Setting OutFile to fd 1 ...
	I0912 23:00:21.889990   62943 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 23:00:21.889999   62943 out.go:358] Setting ErrFile to fd 2...
	I0912 23:00:21.890003   62943 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 23:00:21.890181   62943 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-5891/.minikube/bin
	I0912 23:00:21.890675   62943 out.go:352] Setting JSON to false
	I0912 23:00:21.891538   62943 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6164,"bootTime":1726175858,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0912 23:00:21.891596   62943 start.go:139] virtualization: kvm guest
	I0912 23:00:21.894002   62943 out.go:177] * [no-preload-380092] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0912 23:00:21.895257   62943 notify.go:220] Checking for updates...
	I0912 23:00:21.895266   62943 out.go:177]   - MINIKUBE_LOCATION=19616
	I0912 23:00:21.896598   62943 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 23:00:21.898297   62943 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19616-5891/kubeconfig
	I0912 23:00:21.899605   62943 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19616-5891/.minikube
	I0912 23:00:21.900705   62943 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0912 23:00:21.901754   62943 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 23:00:21.903264   62943 config.go:182] Loaded profile config "no-preload-380092": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 23:00:21.903642   62943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:00:21.903699   62943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:00:21.918497   62943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39967
	I0912 23:00:21.918953   62943 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:00:21.919516   62943 main.go:141] libmachine: Using API Version  1
	I0912 23:00:21.919536   62943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:00:21.919831   62943 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:00:21.920002   62943 main.go:141] libmachine: (no-preload-380092) Calling .DriverName
	I0912 23:00:21.920213   62943 driver.go:394] Setting default libvirt URI to qemu:///system
	I0912 23:00:21.920527   62943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:00:21.920570   62943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:00:21.935755   62943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39641
	I0912 23:00:21.936135   62943 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:00:21.936625   62943 main.go:141] libmachine: Using API Version  1
	I0912 23:00:21.936643   62943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:00:21.936958   62943 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:00:21.937168   62943 main.go:141] libmachine: (no-preload-380092) Calling .DriverName
	I0912 23:00:21.971089   62943 out.go:177] * Using the kvm2 driver based on existing profile
	I0912 23:00:21.972555   62943 start.go:297] selected driver: kvm2
	I0912 23:00:21.972578   62943 start.go:901] validating driver "kvm2" against &{Name:no-preload-380092 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-380092 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.253 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 23:00:21.972702   62943 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 23:00:21.973408   62943 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 23:00:21.973490   62943 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19616-5891/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0912 23:00:21.988802   62943 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0912 23:00:21.989203   62943 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 23:00:21.989290   62943 cni.go:84] Creating CNI manager for ""
	I0912 23:00:21.989305   62943 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0912 23:00:21.989357   62943 start.go:340] cluster config:
	{Name:no-preload-380092 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-380092 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.253 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 23:00:21.989504   62943 iso.go:125] acquiring lock: {Name:mk3ec3c4afd4210b7425f6425f55e7f581d9a5a9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 23:00:21.991829   62943 out.go:177] * Starting "no-preload-380092" primary control-plane node in "no-preload-380092" cluster
	I0912 23:00:20.185851   61354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.214:22: connect: no route to host
	I0912 23:00:21.993075   62943 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0912 23:00:21.993194   62943 profile.go:143] Saving config to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/no-preload-380092/config.json ...
	I0912 23:00:21.993282   62943 cache.go:107] acquiring lock: {Name:mk132f7515993883658c6f8f8c277c05a18c2bcb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 23:00:21.993282   62943 cache.go:107] acquiring lock: {Name:mkbf0dc68d9098b66db2e6425e6a1c64daedf32d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 23:00:21.993308   62943 cache.go:107] acquiring lock: {Name:mkb2372a7853b8fee762991ee2019645e77be1f6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 23:00:21.993360   62943 cache.go:115] /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 exists
	I0912 23:00:21.993376   62943 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.1" -> "/home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1" took 102.242µs
	I0912 23:00:21.993387   62943 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.1 -> /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 succeeded
	I0912 23:00:21.993346   62943 cache.go:107] acquiring lock: {Name:mkd3ef79aab2589c236ea8b2933d7ed6f90a65ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 23:00:21.993393   62943 cache.go:115] /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 exists
	I0912 23:00:21.993376   62943 cache.go:107] acquiring lock: {Name:mk1d88a2deb95bcad015d500fc00ce4b81f27038 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 23:00:21.993405   62943 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.1" -> "/home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1" took 112.903µs
	I0912 23:00:21.993415   62943 cache.go:115] /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 exists
	I0912 23:00:21.993421   62943 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.1 -> /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 succeeded
	I0912 23:00:21.993424   62943 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.1" -> "/home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1" took 90.812µs
	I0912 23:00:21.993432   62943 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.1 -> /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 succeeded
	I0912 23:00:21.993403   62943 cache.go:107] acquiring lock: {Name:mk9c879437d533fd75b73d75524fea14942316d9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 23:00:21.993435   62943 start.go:360] acquireMachinesLock for no-preload-380092: {Name:mkbb0a9e58b1349e86a63b6069c42d4248d92c3b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 23:00:21.993452   62943 cache.go:115] /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 exists
	I0912 23:00:21.993472   62943 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10" took 97.778µs
	I0912 23:00:21.993486   62943 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 succeeded
	I0912 23:00:21.993474   62943 cache.go:107] acquiring lock: {Name:mkd1cb269a32e304848dd20e7b275430f4a6b15a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 23:00:21.993496   62943 cache.go:115] /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0912 23:00:21.993526   62943 cache.go:115] /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 exists
	I0912 23:00:21.993545   62943 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0" took 179.269µs
	I0912 23:00:21.993568   62943 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0912 23:00:21.993520   62943 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 236.598µs
	I0912 23:00:21.993587   62943 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0912 23:00:21.993522   62943 cache.go:107] acquiring lock: {Name:mka5c76f3028cb928e97cce42a012066ced2727d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 23:00:21.993569   62943 cache.go:115] /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 exists
	I0912 23:00:21.993642   62943 cache.go:115] /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0912 23:00:21.993651   62943 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3" took 162.198µs
	I0912 23:00:21.993648   62943 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.1" -> "/home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1" took 220.493µs
	I0912 23:00:21.993662   62943 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0912 23:00:21.993668   62943 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.1 -> /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 succeeded
	I0912 23:00:21.993687   62943 cache.go:87] Successfully saved all images to host disk.
	I0912 23:00:26.265938   61354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.214:22: connect: no route to host
	I0912 23:00:29.337872   61354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.214:22: connect: no route to host
	I0912 23:00:35.417928   61354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.214:22: connect: no route to host
	I0912 23:00:38.489932   61354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.214:22: connect: no route to host
	I0912 23:00:44.569877   61354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.214:22: connect: no route to host
	I0912 23:00:47.641914   61354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.214:22: connect: no route to host
	I0912 23:00:53.721910   61354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.214:22: connect: no route to host
	I0912 23:00:56.793972   61354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.214:22: connect: no route to host
	I0912 23:00:59.798765   61904 start.go:364] duration metric: took 3m43.915954079s to acquireMachinesLock for "embed-certs-378112"
	I0912 23:00:59.798812   61904 start.go:96] Skipping create...Using existing machine configuration
	I0912 23:00:59.798822   61904 fix.go:54] fixHost starting: 
	I0912 23:00:59.799124   61904 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:00:59.799159   61904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:00:59.814494   61904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41585
	I0912 23:00:59.815035   61904 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:00:59.815500   61904 main.go:141] libmachine: Using API Version  1
	I0912 23:00:59.815519   61904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:00:59.815820   61904 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:00:59.815997   61904 main.go:141] libmachine: (embed-certs-378112) Calling .DriverName
	I0912 23:00:59.816114   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetState
	I0912 23:00:59.817884   61904 fix.go:112] recreateIfNeeded on embed-certs-378112: state=Stopped err=<nil>
	I0912 23:00:59.817912   61904 main.go:141] libmachine: (embed-certs-378112) Calling .DriverName
	W0912 23:00:59.818088   61904 fix.go:138] unexpected machine state, will restart: <nil>
	I0912 23:00:59.820071   61904 out.go:177] * Restarting existing kvm2 VM for "embed-certs-378112" ...
	I0912 23:00:59.821271   61904 main.go:141] libmachine: (embed-certs-378112) Calling .Start
	I0912 23:00:59.821455   61904 main.go:141] libmachine: (embed-certs-378112) Ensuring networks are active...
	I0912 23:00:59.822528   61904 main.go:141] libmachine: (embed-certs-378112) Ensuring network default is active
	I0912 23:00:59.822941   61904 main.go:141] libmachine: (embed-certs-378112) Ensuring network mk-embed-certs-378112 is active
	I0912 23:00:59.823348   61904 main.go:141] libmachine: (embed-certs-378112) Getting domain xml...
	I0912 23:00:59.824031   61904 main.go:141] libmachine: (embed-certs-378112) Creating domain...
	I0912 23:00:59.796296   61354 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0912 23:00:59.796341   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetMachineName
	I0912 23:00:59.796635   61354 buildroot.go:166] provisioning hostname "default-k8s-diff-port-702201"
	I0912 23:00:59.796660   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetMachineName
	I0912 23:00:59.796845   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHHostname
	I0912 23:00:59.798593   61354 machine.go:96] duration metric: took 4m34.624878077s to provisionDockerMachine
	I0912 23:00:59.798633   61354 fix.go:56] duration metric: took 4m34.652510972s for fixHost
	I0912 23:00:59.798640   61354 start.go:83] releasing machines lock for "default-k8s-diff-port-702201", held for 4m34.652554084s
	W0912 23:00:59.798663   61354 start.go:714] error starting host: provision: host is not running
	W0912 23:00:59.798748   61354 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0912 23:00:59.798762   61354 start.go:729] Will try again in 5 seconds ...
	I0912 23:01:01.051149   61904 main.go:141] libmachine: (embed-certs-378112) Waiting to get IP...
	I0912 23:01:01.051945   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:01.052463   61904 main.go:141] libmachine: (embed-certs-378112) DBG | unable to find current IP address of domain embed-certs-378112 in network mk-embed-certs-378112
	I0912 23:01:01.052494   61904 main.go:141] libmachine: (embed-certs-378112) DBG | I0912 23:01:01.052421   63128 retry.go:31] will retry after 247.962572ms: waiting for machine to come up
	I0912 23:01:01.302159   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:01.302677   61904 main.go:141] libmachine: (embed-certs-378112) DBG | unable to find current IP address of domain embed-certs-378112 in network mk-embed-certs-378112
	I0912 23:01:01.302706   61904 main.go:141] libmachine: (embed-certs-378112) DBG | I0912 23:01:01.302624   63128 retry.go:31] will retry after 354.212029ms: waiting for machine to come up
	I0912 23:01:01.658402   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:01.658880   61904 main.go:141] libmachine: (embed-certs-378112) DBG | unable to find current IP address of domain embed-certs-378112 in network mk-embed-certs-378112
	I0912 23:01:01.658923   61904 main.go:141] libmachine: (embed-certs-378112) DBG | I0912 23:01:01.658848   63128 retry.go:31] will retry after 461.984481ms: waiting for machine to come up
	I0912 23:01:02.122592   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:02.122981   61904 main.go:141] libmachine: (embed-certs-378112) DBG | unable to find current IP address of domain embed-certs-378112 in network mk-embed-certs-378112
	I0912 23:01:02.123015   61904 main.go:141] libmachine: (embed-certs-378112) DBG | I0912 23:01:02.122930   63128 retry.go:31] will retry after 404.928951ms: waiting for machine to come up
	I0912 23:01:02.529423   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:02.529906   61904 main.go:141] libmachine: (embed-certs-378112) DBG | unable to find current IP address of domain embed-certs-378112 in network mk-embed-certs-378112
	I0912 23:01:02.529932   61904 main.go:141] libmachine: (embed-certs-378112) DBG | I0912 23:01:02.529856   63128 retry.go:31] will retry after 684.912015ms: waiting for machine to come up
	I0912 23:01:03.216924   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:03.217408   61904 main.go:141] libmachine: (embed-certs-378112) DBG | unable to find current IP address of domain embed-certs-378112 in network mk-embed-certs-378112
	I0912 23:01:03.217433   61904 main.go:141] libmachine: (embed-certs-378112) DBG | I0912 23:01:03.217357   63128 retry.go:31] will retry after 765.507778ms: waiting for machine to come up
	I0912 23:01:03.984272   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:03.984787   61904 main.go:141] libmachine: (embed-certs-378112) DBG | unable to find current IP address of domain embed-certs-378112 in network mk-embed-certs-378112
	I0912 23:01:03.984820   61904 main.go:141] libmachine: (embed-certs-378112) DBG | I0912 23:01:03.984726   63128 retry.go:31] will retry after 1.048709598s: waiting for machine to come up
	I0912 23:01:05.035381   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:05.035885   61904 main.go:141] libmachine: (embed-certs-378112) DBG | unable to find current IP address of domain embed-certs-378112 in network mk-embed-certs-378112
	I0912 23:01:05.035925   61904 main.go:141] libmachine: (embed-certs-378112) DBG | I0912 23:01:05.035809   63128 retry.go:31] will retry after 1.488143245s: waiting for machine to come up
	I0912 23:01:04.800694   61354 start.go:360] acquireMachinesLock for default-k8s-diff-port-702201: {Name:mkbb0a9e58b1349e86a63b6069c42d4248d92c3b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 23:01:06.526483   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:06.526858   61904 main.go:141] libmachine: (embed-certs-378112) DBG | unable to find current IP address of domain embed-certs-378112 in network mk-embed-certs-378112
	I0912 23:01:06.526896   61904 main.go:141] libmachine: (embed-certs-378112) DBG | I0912 23:01:06.526800   63128 retry.go:31] will retry after 1.272485972s: waiting for machine to come up
	I0912 23:01:07.801588   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:07.802071   61904 main.go:141] libmachine: (embed-certs-378112) DBG | unable to find current IP address of domain embed-certs-378112 in network mk-embed-certs-378112
	I0912 23:01:07.802103   61904 main.go:141] libmachine: (embed-certs-378112) DBG | I0912 23:01:07.802022   63128 retry.go:31] will retry after 1.559805672s: waiting for machine to come up
	I0912 23:01:09.363156   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:09.363662   61904 main.go:141] libmachine: (embed-certs-378112) DBG | unable to find current IP address of domain embed-certs-378112 in network mk-embed-certs-378112
	I0912 23:01:09.363683   61904 main.go:141] libmachine: (embed-certs-378112) DBG | I0912 23:01:09.363611   63128 retry.go:31] will retry after 1.893092295s: waiting for machine to come up
	I0912 23:01:11.258694   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:11.259346   61904 main.go:141] libmachine: (embed-certs-378112) DBG | unable to find current IP address of domain embed-certs-378112 in network mk-embed-certs-378112
	I0912 23:01:11.259376   61904 main.go:141] libmachine: (embed-certs-378112) DBG | I0912 23:01:11.259304   63128 retry.go:31] will retry after 3.533141843s: waiting for machine to come up
	I0912 23:01:14.796948   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:14.797444   61904 main.go:141] libmachine: (embed-certs-378112) DBG | unable to find current IP address of domain embed-certs-378112 in network mk-embed-certs-378112
	I0912 23:01:14.797468   61904 main.go:141] libmachine: (embed-certs-378112) DBG | I0912 23:01:14.797389   63128 retry.go:31] will retry after 3.889332888s: waiting for machine to come up
	I0912 23:01:19.958932   62386 start.go:364] duration metric: took 3m0.532494588s to acquireMachinesLock for "old-k8s-version-642238"
	I0912 23:01:19.958994   62386 start.go:96] Skipping create...Using existing machine configuration
	I0912 23:01:19.959005   62386 fix.go:54] fixHost starting: 
	I0912 23:01:19.959383   62386 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:01:19.959418   62386 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:01:19.976721   62386 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46263
	I0912 23:01:19.977134   62386 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:01:19.977648   62386 main.go:141] libmachine: Using API Version  1
	I0912 23:01:19.977673   62386 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:01:19.977988   62386 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:01:19.978166   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .DriverName
	I0912 23:01:19.978325   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetState
	I0912 23:01:19.979909   62386 fix.go:112] recreateIfNeeded on old-k8s-version-642238: state=Stopped err=<nil>
	I0912 23:01:19.979934   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .DriverName
	W0912 23:01:19.980079   62386 fix.go:138] unexpected machine state, will restart: <nil>
	I0912 23:01:19.982289   62386 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-642238" ...
	I0912 23:01:18.690761   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:18.691185   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has current primary IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:18.691206   61904 main.go:141] libmachine: (embed-certs-378112) Found IP for machine: 192.168.72.96
	I0912 23:01:18.691218   61904 main.go:141] libmachine: (embed-certs-378112) Reserving static IP address...
	I0912 23:01:18.691614   61904 main.go:141] libmachine: (embed-certs-378112) Reserved static IP address: 192.168.72.96
	I0912 23:01:18.691642   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "embed-certs-378112", mac: "52:54:00:71:b2:49", ip: "192.168.72.96"} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:18.691654   61904 main.go:141] libmachine: (embed-certs-378112) Waiting for SSH to be available...
	I0912 23:01:18.691678   61904 main.go:141] libmachine: (embed-certs-378112) DBG | skip adding static IP to network mk-embed-certs-378112 - found existing host DHCP lease matching {name: "embed-certs-378112", mac: "52:54:00:71:b2:49", ip: "192.168.72.96"}
	I0912 23:01:18.691690   61904 main.go:141] libmachine: (embed-certs-378112) DBG | Getting to WaitForSSH function...
	I0912 23:01:18.693747   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:18.694054   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:18.694077   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:18.694273   61904 main.go:141] libmachine: (embed-certs-378112) DBG | Using SSH client type: external
	I0912 23:01:18.694300   61904 main.go:141] libmachine: (embed-certs-378112) DBG | Using SSH private key: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/embed-certs-378112/id_rsa (-rw-------)
	I0912 23:01:18.694330   61904 main.go:141] libmachine: (embed-certs-378112) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.96 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19616-5891/.minikube/machines/embed-certs-378112/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0912 23:01:18.694345   61904 main.go:141] libmachine: (embed-certs-378112) DBG | About to run SSH command:
	I0912 23:01:18.694358   61904 main.go:141] libmachine: (embed-certs-378112) DBG | exit 0
	I0912 23:01:18.821647   61904 main.go:141] libmachine: (embed-certs-378112) DBG | SSH cmd err, output: <nil>: 
	I0912 23:01:18.822074   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetConfigRaw
	I0912 23:01:18.822765   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetIP
	I0912 23:01:18.825154   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:18.825481   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:18.825510   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:18.825842   61904 profile.go:143] Saving config to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/embed-certs-378112/config.json ...
	I0912 23:01:18.826026   61904 machine.go:93] provisionDockerMachine start ...
	I0912 23:01:18.826043   61904 main.go:141] libmachine: (embed-certs-378112) Calling .DriverName
	I0912 23:01:18.826248   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHHostname
	I0912 23:01:18.828540   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:18.828878   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:18.828906   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:18.829009   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHPort
	I0912 23:01:18.829224   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHKeyPath
	I0912 23:01:18.829429   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHKeyPath
	I0912 23:01:18.829555   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHUsername
	I0912 23:01:18.829750   61904 main.go:141] libmachine: Using SSH client type: native
	I0912 23:01:18.829926   61904 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.72.96 22 <nil> <nil>}
	I0912 23:01:18.829937   61904 main.go:141] libmachine: About to run SSH command:
	hostname
	I0912 23:01:18.941789   61904 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0912 23:01:18.941824   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetMachineName
	I0912 23:01:18.942076   61904 buildroot.go:166] provisioning hostname "embed-certs-378112"
	I0912 23:01:18.942099   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetMachineName
	I0912 23:01:18.942278   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHHostname
	I0912 23:01:18.944880   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:18.945173   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:18.945221   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:18.945347   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHPort
	I0912 23:01:18.945525   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHKeyPath
	I0912 23:01:18.945733   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHKeyPath
	I0912 23:01:18.945913   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHUsername
	I0912 23:01:18.946125   61904 main.go:141] libmachine: Using SSH client type: native
	I0912 23:01:18.946330   61904 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.72.96 22 <nil> <nil>}
	I0912 23:01:18.946350   61904 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-378112 && echo "embed-certs-378112" | sudo tee /etc/hostname
	I0912 23:01:19.071180   61904 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-378112
	
	I0912 23:01:19.071207   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHHostname
	I0912 23:01:19.074121   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.074553   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:19.074583   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.074803   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHPort
	I0912 23:01:19.075004   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHKeyPath
	I0912 23:01:19.075175   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHKeyPath
	I0912 23:01:19.075319   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHUsername
	I0912 23:01:19.075472   61904 main.go:141] libmachine: Using SSH client type: native
	I0912 23:01:19.075691   61904 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.72.96 22 <nil> <nil>}
	I0912 23:01:19.075710   61904 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-378112' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-378112/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-378112' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0912 23:01:19.198049   61904 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0912 23:01:19.198081   61904 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19616-5891/.minikube CaCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19616-5891/.minikube}
	I0912 23:01:19.198131   61904 buildroot.go:174] setting up certificates
	I0912 23:01:19.198140   61904 provision.go:84] configureAuth start
	I0912 23:01:19.198153   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetMachineName
	I0912 23:01:19.198461   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetIP
	I0912 23:01:19.201194   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.201504   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:19.201532   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.201729   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHHostname
	I0912 23:01:19.204100   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.204538   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:19.204562   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.204706   61904 provision.go:143] copyHostCerts
	I0912 23:01:19.204767   61904 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem, removing ...
	I0912 23:01:19.204782   61904 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem
	I0912 23:01:19.204851   61904 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem (1082 bytes)
	I0912 23:01:19.204951   61904 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem, removing ...
	I0912 23:01:19.204960   61904 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem
	I0912 23:01:19.204985   61904 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem (1123 bytes)
	I0912 23:01:19.205045   61904 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem, removing ...
	I0912 23:01:19.205053   61904 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem
	I0912 23:01:19.205076   61904 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem (1679 bytes)
	I0912 23:01:19.205132   61904 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem org=jenkins.embed-certs-378112 san=[127.0.0.1 192.168.72.96 embed-certs-378112 localhost minikube]
	I0912 23:01:19.311879   61904 provision.go:177] copyRemoteCerts
	I0912 23:01:19.311937   61904 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0912 23:01:19.311962   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHHostname
	I0912 23:01:19.314423   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.314821   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:19.314858   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.315029   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHPort
	I0912 23:01:19.315191   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHKeyPath
	I0912 23:01:19.315357   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHUsername
	I0912 23:01:19.315485   61904 sshutil.go:53] new ssh client: &{IP:192.168.72.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/embed-certs-378112/id_rsa Username:docker}
	I0912 23:01:19.399171   61904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0912 23:01:19.423218   61904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0912 23:01:19.446073   61904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0912 23:01:19.468351   61904 provision.go:87] duration metric: took 270.179029ms to configureAuth
	I0912 23:01:19.468380   61904 buildroot.go:189] setting minikube options for container-runtime
	I0912 23:01:19.468543   61904 config.go:182] Loaded profile config "embed-certs-378112": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 23:01:19.468609   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHHostname
	I0912 23:01:19.471457   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.471829   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:19.471857   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.472057   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHPort
	I0912 23:01:19.472257   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHKeyPath
	I0912 23:01:19.472438   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHKeyPath
	I0912 23:01:19.472614   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHUsername
	I0912 23:01:19.472756   61904 main.go:141] libmachine: Using SSH client type: native
	I0912 23:01:19.472915   61904 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.72.96 22 <nil> <nil>}
	I0912 23:01:19.472928   61904 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0912 23:01:19.710250   61904 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0912 23:01:19.710278   61904 machine.go:96] duration metric: took 884.238347ms to provisionDockerMachine
	I0912 23:01:19.710298   61904 start.go:293] postStartSetup for "embed-certs-378112" (driver="kvm2")
	I0912 23:01:19.710310   61904 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0912 23:01:19.710324   61904 main.go:141] libmachine: (embed-certs-378112) Calling .DriverName
	I0912 23:01:19.710640   61904 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0912 23:01:19.710668   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHHostname
	I0912 23:01:19.713442   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.713731   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:19.713759   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.713948   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHPort
	I0912 23:01:19.714180   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHKeyPath
	I0912 23:01:19.714347   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHUsername
	I0912 23:01:19.714491   61904 sshutil.go:53] new ssh client: &{IP:192.168.72.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/embed-certs-378112/id_rsa Username:docker}
	I0912 23:01:19.800949   61904 ssh_runner.go:195] Run: cat /etc/os-release
	I0912 23:01:19.805072   61904 info.go:137] Remote host: Buildroot 2023.02.9
	I0912 23:01:19.805103   61904 filesync.go:126] Scanning /home/jenkins/minikube-integration/19616-5891/.minikube/addons for local assets ...
	I0912 23:01:19.805212   61904 filesync.go:126] Scanning /home/jenkins/minikube-integration/19616-5891/.minikube/files for local assets ...
	I0912 23:01:19.805309   61904 filesync.go:149] local asset: /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem -> 130832.pem in /etc/ssl/certs
	I0912 23:01:19.805449   61904 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0912 23:01:19.815070   61904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem --> /etc/ssl/certs/130832.pem (1708 bytes)
	I0912 23:01:19.839585   61904 start.go:296] duration metric: took 129.271232ms for postStartSetup
	I0912 23:01:19.839634   61904 fix.go:56] duration metric: took 20.040811123s for fixHost
	I0912 23:01:19.839656   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHHostname
	I0912 23:01:19.843048   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.843354   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:19.843385   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.843547   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHPort
	I0912 23:01:19.843755   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHKeyPath
	I0912 23:01:19.843933   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHKeyPath
	I0912 23:01:19.844078   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHUsername
	I0912 23:01:19.844257   61904 main.go:141] libmachine: Using SSH client type: native
	I0912 23:01:19.844432   61904 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.72.96 22 <nil> <nil>}
	I0912 23:01:19.844443   61904 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0912 23:01:19.958747   61904 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726182079.929826480
	
	I0912 23:01:19.958771   61904 fix.go:216] guest clock: 1726182079.929826480
	I0912 23:01:19.958779   61904 fix.go:229] Guest: 2024-09-12 23:01:19.92982648 +0000 UTC Remote: 2024-09-12 23:01:19.839638734 +0000 UTC m=+244.095238395 (delta=90.187746ms)
	I0912 23:01:19.958826   61904 fix.go:200] guest clock delta is within tolerance: 90.187746ms
	I0912 23:01:19.958832   61904 start.go:83] releasing machines lock for "embed-certs-378112", held for 20.160038696s
	I0912 23:01:19.958866   61904 main.go:141] libmachine: (embed-certs-378112) Calling .DriverName
	I0912 23:01:19.959202   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetIP
	I0912 23:01:19.962158   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.962528   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:19.962562   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.962743   61904 main.go:141] libmachine: (embed-certs-378112) Calling .DriverName
	I0912 23:01:19.963246   61904 main.go:141] libmachine: (embed-certs-378112) Calling .DriverName
	I0912 23:01:19.963421   61904 main.go:141] libmachine: (embed-certs-378112) Calling .DriverName
	I0912 23:01:19.963518   61904 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0912 23:01:19.963564   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHHostname
	I0912 23:01:19.963703   61904 ssh_runner.go:195] Run: cat /version.json
	I0912 23:01:19.963766   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHHostname
	I0912 23:01:19.966317   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.966517   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.966692   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:19.966723   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.966921   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHPort
	I0912 23:01:19.966977   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:19.967023   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.967100   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHKeyPath
	I0912 23:01:19.967191   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHPort
	I0912 23:01:19.967268   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHUsername
	I0912 23:01:19.967332   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHKeyPath
	I0912 23:01:19.967395   61904 sshutil.go:53] new ssh client: &{IP:192.168.72.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/embed-certs-378112/id_rsa Username:docker}
	I0912 23:01:19.967439   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHUsername
	I0912 23:01:19.967594   61904 sshutil.go:53] new ssh client: &{IP:192.168.72.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/embed-certs-378112/id_rsa Username:docker}
	I0912 23:01:20.054413   61904 ssh_runner.go:195] Run: systemctl --version
	I0912 23:01:20.087300   61904 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0912 23:01:20.235085   61904 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0912 23:01:20.240843   61904 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0912 23:01:20.240922   61904 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0912 23:01:20.256317   61904 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0912 23:01:20.256341   61904 start.go:495] detecting cgroup driver to use...
	I0912 23:01:20.256411   61904 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0912 23:01:20.271684   61904 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0912 23:01:20.285491   61904 docker.go:217] disabling cri-docker service (if available) ...
	I0912 23:01:20.285562   61904 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0912 23:01:20.298889   61904 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0912 23:01:20.314455   61904 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0912 23:01:20.438483   61904 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0912 23:01:20.594684   61904 docker.go:233] disabling docker service ...
	I0912 23:01:20.594761   61904 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0912 23:01:20.609090   61904 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0912 23:01:20.624440   61904 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0912 23:01:20.747699   61904 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0912 23:01:20.899726   61904 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0912 23:01:20.914107   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0912 23:01:20.933523   61904 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0912 23:01:20.933599   61904 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:20.946067   61904 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0912 23:01:20.946129   61904 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:20.957575   61904 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:20.968759   61904 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:20.980280   61904 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0912 23:01:20.991281   61904 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:21.002926   61904 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:21.021743   61904 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:21.032256   61904 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0912 23:01:21.041783   61904 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0912 23:01:21.041853   61904 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0912 23:01:21.054605   61904 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0912 23:01:21.064411   61904 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 23:01:21.198195   61904 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0912 23:01:21.289923   61904 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0912 23:01:21.290018   61904 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0912 23:01:21.294505   61904 start.go:563] Will wait 60s for crictl version
	I0912 23:01:21.294572   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:01:21.297928   61904 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0912 23:01:21.335650   61904 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0912 23:01:21.335734   61904 ssh_runner.go:195] Run: crio --version
	I0912 23:01:21.364876   61904 ssh_runner.go:195] Run: crio --version
	I0912 23:01:21.395463   61904 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
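[editor's note] The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf with a handful of sed one-liners: pin the pause image, switch the cgroup manager to cgroupfs, and re-add conmon_cgroup = "pod". A rough Go sketch of the same line-oriented substitution, applied to an in-memory fragment rather than over SSH (the starting fragment is an assumption, not the real file):

    package main

    import (
    	"fmt"
    	"regexp"
    )

    func main() {
    	conf := `pause_image = "registry.k8s.io/pause:3.9"
    cgroup_manager = "systemd"
    conmon_cgroup = "system.slice"
    `
    	// Mirror the sed edits from the log: pin the pause image and switch
    	// cri-o from the systemd cgroup manager to cgroupfs.
    	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
    	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
    	// Drop any existing conmon_cgroup line, then re-add it after cgroup_manager.
    	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
    	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
    		ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")
    	fmt.Print(conf)
    }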
	I0912 23:01:19.983746   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .Start
	I0912 23:01:19.983971   62386 main.go:141] libmachine: (old-k8s-version-642238) Ensuring networks are active...
	I0912 23:01:19.984890   62386 main.go:141] libmachine: (old-k8s-version-642238) Ensuring network default is active
	I0912 23:01:19.985345   62386 main.go:141] libmachine: (old-k8s-version-642238) Ensuring network mk-old-k8s-version-642238 is active
	I0912 23:01:19.985788   62386 main.go:141] libmachine: (old-k8s-version-642238) Getting domain xml...
	I0912 23:01:19.986827   62386 main.go:141] libmachine: (old-k8s-version-642238) Creating domain...
	I0912 23:01:21.258792   62386 main.go:141] libmachine: (old-k8s-version-642238) Waiting to get IP...
	I0912 23:01:21.259838   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:21.260300   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 23:01:21.260434   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 23:01:21.260300   63267 retry.go:31] will retry after 272.429869ms: waiting for machine to come up
	I0912 23:01:21.534713   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:21.535102   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 23:01:21.535131   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 23:01:21.535060   63267 retry.go:31] will retry after 352.031053ms: waiting for machine to come up
	I0912 23:01:21.888724   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:21.889235   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 23:01:21.889260   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 23:01:21.889212   63267 retry.go:31] will retry after 405.51409ms: waiting for machine to come up
	I0912 23:01:22.296746   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:22.297242   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 23:01:22.297286   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 23:01:22.297190   63267 retry.go:31] will retry after 607.76308ms: waiting for machine to come up
	I0912 23:01:22.907030   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:22.907784   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 23:01:22.907824   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 23:01:22.907659   63267 retry.go:31] will retry after 692.773261ms: waiting for machine to come up
	I0912 23:01:23.602242   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:23.602679   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 23:01:23.602701   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 23:01:23.602642   63267 retry.go:31] will retry after 591.018151ms: waiting for machine to come up
	I0912 23:01:24.195571   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:24.196100   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 23:01:24.196130   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 23:01:24.196046   63267 retry.go:31] will retry after 1.185264475s: waiting for machine to come up
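[editor's note] The retry.go lines above poll the libvirt DHCP leases for the domain's IP and sleep a growing, jittered interval between attempts. A bare-bones sketch of that wait loop; lookupIP, the jitter formula, and the two-minute budget are stand-ins for illustration, not minikube's actual API:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // lookupIP is a stand-in for querying the DHCP lease table; it fails until
    // the (hypothetical) machine has obtained an address.
    func lookupIP(attempt int) (string, error) {
    	if attempt < 5 {
    		return "", errors.New("unable to find current IP address")
    	}
    	return "192.168.72.96", nil
    }

    func main() {
    	deadline := time.Now().Add(2 * time.Minute)
    	for attempt := 0; time.Now().Before(deadline); attempt++ {
    		ip, err := lookupIP(attempt)
    		if err == nil {
    			fmt.Println("machine is up at", ip)
    			return
    		}
    		// Grow the base delay and add jitter, roughly like the intervals in the log.
    		delay := time.Duration(200+rand.Intn(300))*time.Millisecond +
    			time.Duration(attempt)*150*time.Millisecond
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
    		time.Sleep(delay)
    	}
    	fmt.Println("timed out waiting for machine IP")
    }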
	I0912 23:01:21.396852   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetIP
	I0912 23:01:21.400018   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:21.400456   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:21.400488   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:21.400730   61904 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0912 23:01:21.404606   61904 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
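[editor's note] The one-liner above strips any existing host.minikube.internal mapping from /etc/hosts and appends a fresh one, so repeated starts do not accumulate duplicate entries. A small Go equivalent of that grep -v / echo / cp pattern (the local file path in main is an example; the log edits /etc/hosts on the guest):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // ensureHostsEntry rewrites a hosts file so it contains exactly one line
    // mapping name to ip, dropping any previous mapping first.
    func ensureHostsEntry(path, ip, name string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		// Skip lines that already end in "<tab>name", like the grep -v above.
    		if !strings.HasSuffix(line, "\t"+name) {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, ip+"\t"+name)
    	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
    	if err := ensureHostsEntry("hosts.txt", "192.168.72.1", "host.minikube.internal"); err != nil {
    		fmt.Println(err)
    	}
    }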
	I0912 23:01:21.416408   61904 kubeadm.go:883] updating cluster {Name:embed-certs-378112 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.1 ClusterName:embed-certs-378112 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.96 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0912 23:01:21.416529   61904 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0912 23:01:21.416571   61904 ssh_runner.go:195] Run: sudo crictl images --output json
	I0912 23:01:21.449799   61904 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0912 23:01:21.449860   61904 ssh_runner.go:195] Run: which lz4
	I0912 23:01:21.453658   61904 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0912 23:01:21.457641   61904 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0912 23:01:21.457676   61904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0912 23:01:22.735022   61904 crio.go:462] duration metric: took 1.281408113s to copy over tarball
	I0912 23:01:22.735128   61904 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0912 23:01:24.783893   61904 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.048732092s)
	I0912 23:01:24.783935   61904 crio.go:469] duration metric: took 2.048876223s to extract the tarball
	I0912 23:01:24.783945   61904 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0912 23:01:24.820170   61904 ssh_runner.go:195] Run: sudo crictl images --output json
	I0912 23:01:24.866833   61904 crio.go:514] all images are preloaded for cri-o runtime.
	I0912 23:01:24.866861   61904 cache_images.go:84] Images are preloaded, skipping loading
	I0912 23:01:24.866870   61904 kubeadm.go:934] updating node { 192.168.72.96 8443 v1.31.1 crio true true} ...
	I0912 23:01:24.866990   61904 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-378112 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.96
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-378112 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0912 23:01:24.867073   61904 ssh_runner.go:195] Run: crio config
	I0912 23:01:24.912893   61904 cni.go:84] Creating CNI manager for ""
	I0912 23:01:24.912924   61904 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0912 23:01:24.912940   61904 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0912 23:01:24.912967   61904 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.96 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-378112 NodeName:embed-certs-378112 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.96"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.96 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0912 23:01:24.913155   61904 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.96
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-378112"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.96
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.96"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0912 23:01:24.913230   61904 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0912 23:01:24.922946   61904 binaries.go:44] Found k8s binaries, skipping transfer
	I0912 23:01:24.923013   61904 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0912 23:01:24.932931   61904 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0912 23:01:24.949482   61904 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0912 23:01:24.965877   61904 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0912 23:01:24.983125   61904 ssh_runner.go:195] Run: grep 192.168.72.96	control-plane.minikube.internal$ /etc/hosts
	I0912 23:01:24.987056   61904 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.96	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0912 23:01:24.998939   61904 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 23:01:25.113496   61904 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0912 23:01:25.129703   61904 certs.go:68] Setting up /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/embed-certs-378112 for IP: 192.168.72.96
	I0912 23:01:25.129726   61904 certs.go:194] generating shared ca certs ...
	I0912 23:01:25.129741   61904 certs.go:226] acquiring lock for ca certs: {Name:mk5e19f2c2757edc3fcb6b43a39efee0885b349c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 23:01:25.129971   61904 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.key
	I0912 23:01:25.130086   61904 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.key
	I0912 23:01:25.130110   61904 certs.go:256] generating profile certs ...
	I0912 23:01:25.130237   61904 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/embed-certs-378112/client.key
	I0912 23:01:25.130340   61904 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/embed-certs-378112/apiserver.key.dbbe0c1f
	I0912 23:01:25.130407   61904 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/embed-certs-378112/proxy-client.key
	I0912 23:01:25.130579   61904 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083.pem (1338 bytes)
	W0912 23:01:25.130626   61904 certs.go:480] ignoring /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083_empty.pem, impossibly tiny 0 bytes
	I0912 23:01:25.130651   61904 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem (1679 bytes)
	I0912 23:01:25.130703   61904 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem (1082 bytes)
	I0912 23:01:25.130745   61904 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem (1123 bytes)
	I0912 23:01:25.130792   61904 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem (1679 bytes)
	I0912 23:01:25.130860   61904 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem (1708 bytes)
	I0912 23:01:25.131603   61904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0912 23:01:25.176163   61904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0912 23:01:25.220174   61904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0912 23:01:25.265831   61904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0912 23:01:25.296965   61904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/embed-certs-378112/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0912 23:01:25.321038   61904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/embed-certs-378112/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0912 23:01:25.345231   61904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/embed-certs-378112/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0912 23:01:25.369171   61904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/embed-certs-378112/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0912 23:01:25.394204   61904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083.pem --> /usr/share/ca-certificates/13083.pem (1338 bytes)
	I0912 23:01:25.417915   61904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem --> /usr/share/ca-certificates/130832.pem (1708 bytes)
	I0912 23:01:25.442303   61904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0912 23:01:25.465565   61904 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0912 23:01:25.482722   61904 ssh_runner.go:195] Run: openssl version
	I0912 23:01:25.488448   61904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130832.pem && ln -fs /usr/share/ca-certificates/130832.pem /etc/ssl/certs/130832.pem"
	I0912 23:01:25.499394   61904 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130832.pem
	I0912 23:01:25.503818   61904 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 12 21:47 /usr/share/ca-certificates/130832.pem
	I0912 23:01:25.503891   61904 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130832.pem
	I0912 23:01:25.509382   61904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130832.pem /etc/ssl/certs/3ec20f2e.0"
	I0912 23:01:25.519646   61904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0912 23:01:25.530205   61904 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0912 23:01:25.534926   61904 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 12 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0912 23:01:25.534995   61904 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0912 23:01:25.540498   61904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0912 23:01:25.551236   61904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13083.pem && ln -fs /usr/share/ca-certificates/13083.pem /etc/ssl/certs/13083.pem"
	I0912 23:01:25.561851   61904 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13083.pem
	I0912 23:01:25.566492   61904 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 12 21:47 /usr/share/ca-certificates/13083.pem
	I0912 23:01:25.566560   61904 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13083.pem
	I0912 23:01:25.572221   61904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13083.pem /etc/ssl/certs/51391683.0"
	I0912 23:01:25.582775   61904 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0912 23:01:25.587274   61904 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0912 23:01:25.593126   61904 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0912 23:01:25.598929   61904 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0912 23:01:25.604590   61904 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0912 23:01:25.610344   61904 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0912 23:01:25.615931   61904 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
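[editor's note] Each `openssl x509 -noout -in <cert> -checkend 86400` call above asks whether a certificate expires within the next 24 hours. A small Go equivalent using crypto/x509; the certificate path in main is just an example taken from the commands above:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the PEM certificate at path expires within d.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	// Same question as `openssl x509 -checkend 86400`: does it expire in 24h?
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Println("check failed:", err)
    		return
    	}
    	fmt.Println("expires within 24h:", soon)
    }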
	I0912 23:01:25.621575   61904 kubeadm.go:392] StartCluster: {Name:embed-certs-378112 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.1 ClusterName:embed-certs-378112 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.96 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 23:01:25.621708   61904 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0912 23:01:25.621771   61904 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0912 23:01:25.659165   61904 cri.go:89] found id: ""
	I0912 23:01:25.659225   61904 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0912 23:01:25.670718   61904 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0912 23:01:25.670740   61904 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0912 23:01:25.670812   61904 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0912 23:01:25.680672   61904 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0912 23:01:25.681705   61904 kubeconfig.go:125] found "embed-certs-378112" server: "https://192.168.72.96:8443"
	I0912 23:01:25.683693   61904 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0912 23:01:25.693765   61904 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.96
	I0912 23:01:25.693795   61904 kubeadm.go:1160] stopping kube-system containers ...
	I0912 23:01:25.693805   61904 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0912 23:01:25.693874   61904 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0912 23:01:25.728800   61904 cri.go:89] found id: ""
	I0912 23:01:25.728879   61904 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0912 23:01:25.744949   61904 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0912 23:01:25.754735   61904 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0912 23:01:25.754756   61904 kubeadm.go:157] found existing configuration files:
	
	I0912 23:01:25.754820   61904 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0912 23:01:25.763678   61904 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0912 23:01:25.763740   61904 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0912 23:01:25.772744   61904 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0912 23:01:25.383446   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:25.383892   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 23:01:25.383912   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 23:01:25.383847   63267 retry.go:31] will retry after 1.399744787s: waiting for machine to come up
	I0912 23:01:26.785939   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:26.786489   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 23:01:26.786520   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 23:01:26.786425   63267 retry.go:31] will retry after 1.336566382s: waiting for machine to come up
	I0912 23:01:28.124647   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:28.125141   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 23:01:28.125172   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 23:01:28.125087   63267 retry.go:31] will retry after 1.527292388s: waiting for machine to come up
	I0912 23:01:25.782080   61904 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0912 23:01:25.782143   61904 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0912 23:01:25.791585   61904 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0912 23:01:25.801238   61904 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0912 23:01:25.801315   61904 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0912 23:01:25.810819   61904 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0912 23:01:25.819786   61904 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0912 23:01:25.819888   61904 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0912 23:01:25.829135   61904 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0912 23:01:25.838572   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:01:25.944339   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:01:26.566348   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:01:26.771125   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:01:26.859227   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:01:26.946762   61904 api_server.go:52] waiting for apiserver process to appear ...
	I0912 23:01:26.946884   61904 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:27.447964   61904 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:27.947775   61904 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:28.447415   61904 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:28.947184   61904 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:28.963513   61904 api_server.go:72] duration metric: took 2.016750981s to wait for apiserver process to appear ...
	I0912 23:01:28.963554   61904 api_server.go:88] waiting for apiserver healthz status ...
	I0912 23:01:28.963577   61904 api_server.go:253] Checking apiserver healthz at https://192.168.72.96:8443/healthz ...
	I0912 23:01:28.964155   61904 api_server.go:269] stopped: https://192.168.72.96:8443/healthz: Get "https://192.168.72.96:8443/healthz": dial tcp 192.168.72.96:8443: connect: connection refused
	I0912 23:01:29.463718   61904 api_server.go:253] Checking apiserver healthz at https://192.168.72.96:8443/healthz ...
	I0912 23:01:31.369513   61904 api_server.go:279] https://192.168.72.96:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0912 23:01:31.369555   61904 api_server.go:103] status: https://192.168.72.96:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0912 23:01:31.369571   61904 api_server.go:253] Checking apiserver healthz at https://192.168.72.96:8443/healthz ...
	I0912 23:01:31.423901   61904 api_server.go:279] https://192.168.72.96:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0912 23:01:31.423936   61904 api_server.go:103] status: https://192.168.72.96:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0912 23:01:31.464148   61904 api_server.go:253] Checking apiserver healthz at https://192.168.72.96:8443/healthz ...
	I0912 23:01:31.469495   61904 api_server.go:279] https://192.168.72.96:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0912 23:01:31.469522   61904 api_server.go:103] status: https://192.168.72.96:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0912 23:01:31.963894   61904 api_server.go:253] Checking apiserver healthz at https://192.168.72.96:8443/healthz ...
	I0912 23:01:31.972640   61904 api_server.go:279] https://192.168.72.96:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0912 23:01:31.972671   61904 api_server.go:103] status: https://192.168.72.96:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0912 23:01:32.463809   61904 api_server.go:253] Checking apiserver healthz at https://192.168.72.96:8443/healthz ...
	I0912 23:01:32.475603   61904 api_server.go:279] https://192.168.72.96:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0912 23:01:32.475640   61904 api_server.go:103] status: https://192.168.72.96:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0912 23:01:32.964250   61904 api_server.go:253] Checking apiserver healthz at https://192.168.72.96:8443/healthz ...
	I0912 23:01:32.968710   61904 api_server.go:279] https://192.168.72.96:8443/healthz returned 200:
	ok
	I0912 23:01:32.975414   61904 api_server.go:141] control plane version: v1.31.1
	I0912 23:01:32.975442   61904 api_server.go:131] duration metric: took 4.011879751s to wait for apiserver health ...
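The readiness loop above polls the apiserver's /healthz endpoint until it returns 200; the [+]/[-] check list earlier in the log is the verbose body the endpoint returns while a post-start hook (here rbac/bootstrap-roles) is still failing. A minimal way to reproduce the same probe by hand against the address shown in the log (sketch only; -k skips TLS verification, and anonymous access to /healthz is assumed to be allowed, as it is by default):

	# Verbose health probe against the embed-certs apiserver from the log
	curl -k "https://192.168.72.96:8443/healthz?verbose"
	# Equivalent via kubectl once the kubeconfig for the profile is in place
	kubectl get --raw='/healthz?verbose'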
	I0912 23:01:32.975451   61904 cni.go:84] Creating CNI manager for ""
	I0912 23:01:32.975456   61904 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0912 23:01:32.977249   61904 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0912 23:01:29.654841   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:29.655236   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 23:01:29.655264   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 23:01:29.655183   63267 retry.go:31] will retry after 2.34568858s: waiting for machine to come up
	I0912 23:01:32.002617   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:32.003211   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 23:01:32.003242   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 23:01:32.003150   63267 retry.go:31] will retry after 2.273120763s: waiting for machine to come up
	I0912 23:01:34.279665   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:34.280098   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 23:01:34.280122   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 23:01:34.280064   63267 retry.go:31] will retry after 3.937702941s: waiting for machine to come up
	I0912 23:01:32.978610   61904 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0912 23:01:32.994079   61904 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
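The two commands above create /etc/cni/net.d and copy a 496-byte bridge CNI config named 1-k8s.conflist into it. The log does not show the file's contents; the sketch below only illustrates the general shape a bridge conflist takes (placeholder subnet and names, not the exact file minikube writes):

	# Illustrative bridge CNI conflist; values are placeholders, not minikube's actual config
	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	{
	  "cniVersion": "0.4.0",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isGateway": true,
	      "ipMasq": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF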
	I0912 23:01:33.042253   61904 system_pods.go:43] waiting for kube-system pods to appear ...
	I0912 23:01:33.052323   61904 system_pods.go:59] 8 kube-system pods found
	I0912 23:01:33.052361   61904 system_pods.go:61] "coredns-7c65d6cfc9-m8t6h" [93c63198-ebd2-4e88-9be8-912425b1eb84] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0912 23:01:33.052369   61904 system_pods.go:61] "etcd-embed-certs-378112" [cc716756-abda-447a-ad36-bfc89c129bdf] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0912 23:01:33.052376   61904 system_pods.go:61] "kube-apiserver-embed-certs-378112" [039a7348-41bf-481f-9218-3ea0c2ff1373] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0912 23:01:33.052387   61904 system_pods.go:61] "kube-controller-manager-embed-certs-378112" [9bcb8af0-6e4b-405a-94a1-5be70d737cfa] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0912 23:01:33.052396   61904 system_pods.go:61] "kube-proxy-fvbbq" [b172754e-bb5a-40ba-a9be-a7632081defc] Running
	I0912 23:01:33.052406   61904 system_pods.go:61] "kube-scheduler-embed-certs-378112" [f7cb022f-6c15-4c70-916f-39313199effe] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0912 23:01:33.052418   61904 system_pods.go:61] "metrics-server-6867b74b74-kvpqz" [04e47cfd-bada-4cbd-8792-db4edebfb282] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0912 23:01:33.052426   61904 system_pods.go:61] "storage-provisioner" [a1840d2a-8e08-4fa2-9ed5-ac96fb0baf4d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0912 23:01:33.052438   61904 system_pods.go:74] duration metric: took 10.162234ms to wait for pod list to return data ...
	I0912 23:01:33.052448   61904 node_conditions.go:102] verifying NodePressure condition ...
	I0912 23:01:33.060217   61904 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0912 23:01:33.060263   61904 node_conditions.go:123] node cpu capacity is 2
	I0912 23:01:33.060284   61904 node_conditions.go:105] duration metric: took 7.831444ms to run NodePressure ...
	I0912 23:01:33.060338   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:01:33.331554   61904 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0912 23:01:33.337181   61904 kubeadm.go:739] kubelet initialised
	I0912 23:01:33.337202   61904 kubeadm.go:740] duration metric: took 5.622367ms waiting for restarted kubelet to initialise ...
	I0912 23:01:33.337209   61904 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 23:01:33.342427   61904 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-m8t6h" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:33.346602   61904 pod_ready.go:98] node "embed-certs-378112" hosting pod "coredns-7c65d6cfc9-m8t6h" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-378112" has status "Ready":"False"
	I0912 23:01:33.346624   61904 pod_ready.go:82] duration metric: took 4.167981ms for pod "coredns-7c65d6cfc9-m8t6h" in "kube-system" namespace to be "Ready" ...
	E0912 23:01:33.346635   61904 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-378112" hosting pod "coredns-7c65d6cfc9-m8t6h" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-378112" has status "Ready":"False"
	I0912 23:01:33.346643   61904 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-378112" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:33.350240   61904 pod_ready.go:98] node "embed-certs-378112" hosting pod "etcd-embed-certs-378112" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-378112" has status "Ready":"False"
	I0912 23:01:33.350258   61904 pod_ready.go:82] duration metric: took 3.605305ms for pod "etcd-embed-certs-378112" in "kube-system" namespace to be "Ready" ...
	E0912 23:01:33.350267   61904 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-378112" hosting pod "etcd-embed-certs-378112" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-378112" has status "Ready":"False"
	I0912 23:01:33.350274   61904 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-378112" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:33.353756   61904 pod_ready.go:98] node "embed-certs-378112" hosting pod "kube-apiserver-embed-certs-378112" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-378112" has status "Ready":"False"
	I0912 23:01:33.353775   61904 pod_ready.go:82] duration metric: took 3.492388ms for pod "kube-apiserver-embed-certs-378112" in "kube-system" namespace to be "Ready" ...
	E0912 23:01:33.353785   61904 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-378112" hosting pod "kube-apiserver-embed-certs-378112" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-378112" has status "Ready":"False"
	I0912 23:01:33.353792   61904 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-378112" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:33.445529   61904 pod_ready.go:98] node "embed-certs-378112" hosting pod "kube-controller-manager-embed-certs-378112" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-378112" has status "Ready":"False"
	I0912 23:01:33.445574   61904 pod_ready.go:82] duration metric: took 91.770466ms for pod "kube-controller-manager-embed-certs-378112" in "kube-system" namespace to be "Ready" ...
	E0912 23:01:33.445588   61904 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-378112" hosting pod "kube-controller-manager-embed-certs-378112" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-378112" has status "Ready":"False"
	I0912 23:01:33.445597   61904 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-fvbbq" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:33.845443   61904 pod_ready.go:98] node "embed-certs-378112" hosting pod "kube-proxy-fvbbq" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-378112" has status "Ready":"False"
	I0912 23:01:33.845470   61904 pod_ready.go:82] duration metric: took 399.864816ms for pod "kube-proxy-fvbbq" in "kube-system" namespace to be "Ready" ...
	E0912 23:01:33.845479   61904 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-378112" hosting pod "kube-proxy-fvbbq" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-378112" has status "Ready":"False"
	I0912 23:01:33.845484   61904 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-378112" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:34.245943   61904 pod_ready.go:98] node "embed-certs-378112" hosting pod "kube-scheduler-embed-certs-378112" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-378112" has status "Ready":"False"
	I0912 23:01:34.245969   61904 pod_ready.go:82] duration metric: took 400.478543ms for pod "kube-scheduler-embed-certs-378112" in "kube-system" namespace to be "Ready" ...
	E0912 23:01:34.245979   61904 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-378112" hosting pod "kube-scheduler-embed-certs-378112" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-378112" has status "Ready":"False"
	I0912 23:01:34.245985   61904 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:34.651801   61904 pod_ready.go:98] node "embed-certs-378112" hosting pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-378112" has status "Ready":"False"
	I0912 23:01:34.651826   61904 pod_ready.go:82] duration metric: took 405.832705ms for pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace to be "Ready" ...
	E0912 23:01:34.651836   61904 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-378112" hosting pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-378112" has status "Ready":"False"
	I0912 23:01:34.651843   61904 pod_ready.go:39] duration metric: took 1.314625851s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
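Every per-pod wait above short-circuits with "skipping!" because the node itself still reports Ready=False right after the kubelet restart, so the pod conditions are never actually evaluated. An equivalent manual check against the same cluster (assuming kubectl is pointed at the embed-certs-378112 profile) would be:

	# The pod checks above are skipped while the node is NotReady
	kubectl get node embed-certs-378112
	# Block until the DNS pods are genuinely Ready, mirroring the 4m budget in the log
	kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m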
	I0912 23:01:34.651859   61904 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0912 23:01:34.665332   61904 ops.go:34] apiserver oom_adj: -16
	I0912 23:01:34.665357   61904 kubeadm.go:597] duration metric: took 8.994610882s to restartPrimaryControlPlane
	I0912 23:01:34.665366   61904 kubeadm.go:394] duration metric: took 9.043796768s to StartCluster
	I0912 23:01:34.665381   61904 settings.go:142] acquiring lock: {Name:mk9c957feafb8d7ccd833ad0c106ef81ecfe5ba8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 23:01:34.665454   61904 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19616-5891/kubeconfig
	I0912 23:01:34.667036   61904 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/kubeconfig: {Name:mkffb46c3e9d2b8baebc7237b48bf41bccf1a52c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 23:01:34.667262   61904 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.96 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0912 23:01:34.667363   61904 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0912 23:01:34.667450   61904 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-378112"
	I0912 23:01:34.667468   61904 config.go:182] Loaded profile config "embed-certs-378112": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 23:01:34.667476   61904 addons.go:69] Setting default-storageclass=true in profile "embed-certs-378112"
	I0912 23:01:34.667543   61904 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-378112"
	I0912 23:01:34.667520   61904 addons.go:69] Setting metrics-server=true in profile "embed-certs-378112"
	I0912 23:01:34.667609   61904 addons.go:234] Setting addon metrics-server=true in "embed-certs-378112"
	W0912 23:01:34.667624   61904 addons.go:243] addon metrics-server should already be in state true
	I0912 23:01:34.667661   61904 host.go:66] Checking if "embed-certs-378112" exists ...
	I0912 23:01:34.667490   61904 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-378112"
	W0912 23:01:34.667710   61904 addons.go:243] addon storage-provisioner should already be in state true
	I0912 23:01:34.667778   61904 host.go:66] Checking if "embed-certs-378112" exists ...
	I0912 23:01:34.667994   61904 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:01:34.668049   61904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:01:34.668138   61904 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:01:34.668155   61904 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:01:34.668171   61904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:01:34.668180   61904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:01:34.670091   61904 out.go:177] * Verifying Kubernetes components...
	I0912 23:01:34.671777   61904 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 23:01:34.683876   61904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37413
	I0912 23:01:34.684025   61904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37371
	I0912 23:01:34.684434   61904 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:01:34.684541   61904 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:01:34.684995   61904 main.go:141] libmachine: Using API Version  1
	I0912 23:01:34.685014   61904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:01:34.685118   61904 main.go:141] libmachine: Using API Version  1
	I0912 23:01:34.685140   61904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:01:34.685468   61904 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:01:34.685468   61904 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:01:34.685668   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetState
	I0912 23:01:34.686104   61904 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:01:34.686156   61904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:01:34.688211   61904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39067
	I0912 23:01:34.688607   61904 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:01:34.689047   61904 addons.go:234] Setting addon default-storageclass=true in "embed-certs-378112"
	W0912 23:01:34.689066   61904 addons.go:243] addon default-storageclass should already be in state true
	I0912 23:01:34.689091   61904 host.go:66] Checking if "embed-certs-378112" exists ...
	I0912 23:01:34.689116   61904 main.go:141] libmachine: Using API Version  1
	I0912 23:01:34.689146   61904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:01:34.689478   61904 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:01:34.689501   61904 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:01:34.689511   61904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:01:34.690057   61904 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:01:34.690083   61904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:01:34.702965   61904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40825
	I0912 23:01:34.703535   61904 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:01:34.704131   61904 main.go:141] libmachine: Using API Version  1
	I0912 23:01:34.704151   61904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:01:34.704178   61904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39229
	I0912 23:01:34.704481   61904 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:01:34.704684   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetState
	I0912 23:01:34.704684   61904 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:01:34.705101   61904 main.go:141] libmachine: Using API Version  1
	I0912 23:01:34.705122   61904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:01:34.705413   61904 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:01:34.705561   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetState
	I0912 23:01:34.706872   61904 main.go:141] libmachine: (embed-certs-378112) Calling .DriverName
	I0912 23:01:34.707279   61904 main.go:141] libmachine: (embed-certs-378112) Calling .DriverName
	I0912 23:01:34.708583   61904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36665
	I0912 23:01:34.708752   61904 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 23:01:34.708828   61904 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0912 23:01:34.708966   61904 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:01:34.709420   61904 main.go:141] libmachine: Using API Version  1
	I0912 23:01:34.709442   61904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:01:34.709901   61904 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:01:34.710348   61904 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:01:34.710352   61904 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0912 23:01:34.710368   61904 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0912 23:01:34.710382   61904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:01:34.710397   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHHostname
	I0912 23:01:34.710705   61904 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0912 23:01:34.713777   61904 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0912 23:01:34.713809   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHHostname
	I0912 23:01:34.717857   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:34.718160   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:34.718335   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:34.718358   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:34.718442   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:34.718473   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:34.718651   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHPort
	I0912 23:01:34.718727   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHPort
	I0912 23:01:34.718812   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHKeyPath
	I0912 23:01:34.718866   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHKeyPath
	I0912 23:01:34.718988   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHUsername
	I0912 23:01:34.719039   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHUsername
	I0912 23:01:34.719144   61904 sshutil.go:53] new ssh client: &{IP:192.168.72.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/embed-certs-378112/id_rsa Username:docker}
	I0912 23:01:34.719169   61904 sshutil.go:53] new ssh client: &{IP:192.168.72.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/embed-certs-378112/id_rsa Username:docker}
	I0912 23:01:34.730675   61904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39163
	I0912 23:01:34.731210   61904 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:01:34.731901   61904 main.go:141] libmachine: Using API Version  1
	I0912 23:01:34.731934   61904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:01:34.732317   61904 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:01:34.732493   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetState
	I0912 23:01:34.734338   61904 main.go:141] libmachine: (embed-certs-378112) Calling .DriverName
	I0912 23:01:34.734601   61904 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0912 23:01:34.734615   61904 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0912 23:01:34.734637   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHHostname
	I0912 23:01:34.737958   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:34.738401   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:34.738429   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:34.738637   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHPort
	I0912 23:01:34.738823   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHKeyPath
	I0912 23:01:34.739015   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHUsername
	I0912 23:01:34.739166   61904 sshutil.go:53] new ssh client: &{IP:192.168.72.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/embed-certs-378112/id_rsa Username:docker}
	I0912 23:01:34.873510   61904 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0912 23:01:34.891329   61904 node_ready.go:35] waiting up to 6m0s for node "embed-certs-378112" to be "Ready" ...
	I0912 23:01:34.991135   61904 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0912 23:01:34.991169   61904 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0912 23:01:35.007241   61904 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0912 23:01:35.018684   61904 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0912 23:01:35.018712   61904 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0912 23:01:35.028842   61904 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0912 23:01:35.047693   61904 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0912 23:01:35.047720   61904 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0912 23:01:35.101399   61904 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0912 23:01:36.046822   61904 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.03953394s)
	I0912 23:01:36.046851   61904 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.017977641s)
	I0912 23:01:36.046882   61904 main.go:141] libmachine: Making call to close driver server
	I0912 23:01:36.046889   61904 main.go:141] libmachine: Making call to close driver server
	I0912 23:01:36.046900   61904 main.go:141] libmachine: (embed-certs-378112) Calling .Close
	I0912 23:01:36.046901   61904 main.go:141] libmachine: (embed-certs-378112) Calling .Close
	I0912 23:01:36.047207   61904 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:01:36.047221   61904 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:01:36.047230   61904 main.go:141] libmachine: Making call to close driver server
	I0912 23:01:36.047237   61904 main.go:141] libmachine: (embed-certs-378112) Calling .Close
	I0912 23:01:36.047269   61904 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:01:36.047280   61904 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:01:36.047312   61904 main.go:141] libmachine: Making call to close driver server
	I0912 23:01:36.047378   61904 main.go:141] libmachine: (embed-certs-378112) Calling .Close
	I0912 23:01:36.047577   61904 main.go:141] libmachine: (embed-certs-378112) DBG | Closing plugin on server side
	I0912 23:01:36.047624   61904 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:01:36.047637   61904 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:01:36.047639   61904 main.go:141] libmachine: (embed-certs-378112) DBG | Closing plugin on server side
	I0912 23:01:36.047691   61904 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:01:36.047705   61904 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:01:36.055732   61904 main.go:141] libmachine: Making call to close driver server
	I0912 23:01:36.055751   61904 main.go:141] libmachine: (embed-certs-378112) Calling .Close
	I0912 23:01:36.056018   61904 main.go:141] libmachine: (embed-certs-378112) DBG | Closing plugin on server side
	I0912 23:01:36.056072   61904 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:01:36.056085   61904 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:01:36.062586   61904 main.go:141] libmachine: Making call to close driver server
	I0912 23:01:36.062612   61904 main.go:141] libmachine: (embed-certs-378112) Calling .Close
	I0912 23:01:36.062906   61904 main.go:141] libmachine: (embed-certs-378112) DBG | Closing plugin on server side
	I0912 23:01:36.062920   61904 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:01:36.062936   61904 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:01:36.062955   61904 main.go:141] libmachine: Making call to close driver server
	I0912 23:01:36.062979   61904 main.go:141] libmachine: (embed-certs-378112) Calling .Close
	I0912 23:01:36.063225   61904 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:01:36.063243   61904 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:01:36.063254   61904 addons.go:475] Verifying addon metrics-server=true in "embed-certs-378112"
	I0912 23:01:36.065321   61904 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
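With metrics-server among the enabled addons, a quick follow-up check that it actually became serviceable (assuming kubectl access to the embed-certs-378112 profile) is to wait for the rollout and then query the aggregated metrics API directly:

	# Rollout of the addon deployment, then a live query through the metrics API
	kubectl -n kube-system rollout status deploy/metrics-server --timeout=2m
	kubectl top nodes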
	I0912 23:01:38.221947   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.222408   62386 main.go:141] libmachine: (old-k8s-version-642238) Found IP for machine: 192.168.61.69
	I0912 23:01:38.222437   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has current primary IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.222447   62386 main.go:141] libmachine: (old-k8s-version-642238) Reserving static IP address...
	I0912 23:01:38.222943   62386 main.go:141] libmachine: (old-k8s-version-642238) Reserved static IP address: 192.168.61.69
	I0912 23:01:38.222983   62386 main.go:141] libmachine: (old-k8s-version-642238) Waiting for SSH to be available...
	I0912 23:01:38.223007   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "old-k8s-version-642238", mac: "52:54:00:75:cb:57", ip: "192.168.61.69"} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:38.223057   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | skip adding static IP to network mk-old-k8s-version-642238 - found existing host DHCP lease matching {name: "old-k8s-version-642238", mac: "52:54:00:75:cb:57", ip: "192.168.61.69"}
	I0912 23:01:38.223079   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | Getting to WaitForSSH function...
	I0912 23:01:38.225720   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.226121   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:38.226155   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.226286   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | Using SSH client type: external
	I0912 23:01:38.226308   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | Using SSH private key: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/old-k8s-version-642238/id_rsa (-rw-------)
	I0912 23:01:38.226341   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.69 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19616-5891/.minikube/machines/old-k8s-version-642238/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0912 23:01:38.226357   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | About to run SSH command:
	I0912 23:01:38.226368   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | exit 0
	I0912 23:01:38.357945   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | SSH cmd err, output: <nil>: 
	I0912 23:01:38.358320   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetConfigRaw
	I0912 23:01:38.358887   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetIP
	I0912 23:01:38.361728   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.362098   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:38.362133   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.362372   62386 profile.go:143] Saving config to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238/config.json ...
	I0912 23:01:38.362640   62386 machine.go:93] provisionDockerMachine start ...
	I0912 23:01:38.362663   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .DriverName
	I0912 23:01:38.362897   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHHostname
	I0912 23:01:38.365251   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.365627   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:38.365656   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.365798   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHPort
	I0912 23:01:38.365969   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 23:01:38.366123   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 23:01:38.366251   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHUsername
	I0912 23:01:38.366468   62386 main.go:141] libmachine: Using SSH client type: native
	I0912 23:01:38.366691   62386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.61.69 22 <nil> <nil>}
	I0912 23:01:38.366707   62386 main.go:141] libmachine: About to run SSH command:
	hostname
	I0912 23:01:38.477548   62386 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0912 23:01:38.477575   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetMachineName
	I0912 23:01:38.477818   62386 buildroot.go:166] provisioning hostname "old-k8s-version-642238"
	I0912 23:01:38.477843   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetMachineName
	I0912 23:01:38.478029   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHHostname
	I0912 23:01:38.480368   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.480660   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:38.480683   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.480802   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHPort
	I0912 23:01:38.480981   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 23:01:38.481142   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 23:01:38.481287   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHUsername
	I0912 23:01:38.481630   62386 main.go:141] libmachine: Using SSH client type: native
	I0912 23:01:38.481846   62386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.61.69 22 <nil> <nil>}
	I0912 23:01:38.481864   62386 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-642238 && echo "old-k8s-version-642238" | sudo tee /etc/hostname
	I0912 23:01:38.606686   62386 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-642238
	
	I0912 23:01:38.606721   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHHostname
	I0912 23:01:38.609331   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.609682   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:38.609705   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.609867   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHPort
	I0912 23:01:38.610071   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 23:01:38.610297   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 23:01:38.610463   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHUsername
	I0912 23:01:38.610792   62386 main.go:141] libmachine: Using SSH client type: native
	I0912 23:01:38.610974   62386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.61.69 22 <nil> <nil>}
	I0912 23:01:38.610991   62386 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-642238' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-642238/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-642238' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0912 23:01:38.729561   62386 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0912 23:01:38.729588   62386 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19616-5891/.minikube CaCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19616-5891/.minikube}
	I0912 23:01:38.729664   62386 buildroot.go:174] setting up certificates
	I0912 23:01:38.729674   62386 provision.go:84] configureAuth start
	I0912 23:01:38.729686   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetMachineName
	I0912 23:01:38.729945   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetIP
	I0912 23:01:38.732718   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.733269   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:38.733302   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.733481   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHHostname
	I0912 23:01:38.735610   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.735925   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:38.735950   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.736074   62386 provision.go:143] copyHostCerts
	I0912 23:01:38.736129   62386 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem, removing ...
	I0912 23:01:38.736142   62386 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem
	I0912 23:01:38.736197   62386 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem (1082 bytes)
	I0912 23:01:38.736293   62386 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem, removing ...
	I0912 23:01:38.736306   62386 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem
	I0912 23:01:38.736330   62386 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem (1123 bytes)
	I0912 23:01:38.736390   62386 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem, removing ...
	I0912 23:01:38.736397   62386 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem
	I0912 23:01:38.736413   62386 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem (1679 bytes)
	I0912 23:01:38.736460   62386 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-642238 san=[127.0.0.1 192.168.61.69 localhost minikube old-k8s-version-642238]
	I0912 23:01:38.940760   62386 provision.go:177] copyRemoteCerts
	I0912 23:01:38.940819   62386 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0912 23:01:38.940846   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHHostname
	I0912 23:01:38.943954   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.944274   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:38.944304   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.944479   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHPort
	I0912 23:01:38.944688   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 23:01:38.944884   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHUsername
	I0912 23:01:38.945023   62386 sshutil.go:53] new ssh client: &{IP:192.168.61.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/old-k8s-version-642238/id_rsa Username:docker}
	I0912 23:01:39.032396   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0912 23:01:39.055559   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0912 23:01:39.081979   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
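The three transfers above install the CA certificate plus the machine server certificate and key generated a few lines earlier with SANs [127.0.0.1 192.168.61.69 localhost minikube old-k8s-version-642238]. Those SANs can be read back from the copied certificate with a standard openssl invocation (sketch, run on the guest):

	# Inspect the SANs baked into the provisioned server certificate
	sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 "Subject Alternative Name"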
	I0912 23:01:39.108245   62386 provision.go:87] duration metric: took 378.558125ms to configureAuth
	I0912 23:01:39.108276   62386 buildroot.go:189] setting minikube options for container-runtime
	I0912 23:01:39.108456   62386 config.go:182] Loaded profile config "old-k8s-version-642238": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0912 23:01:39.108515   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHHostname
	I0912 23:01:39.111321   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:39.111737   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:39.111759   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:39.111956   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHPort
	I0912 23:01:39.112175   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 23:01:39.112399   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 23:01:39.112552   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHUsername
	I0912 23:01:39.112721   62386 main.go:141] libmachine: Using SSH client type: native
	I0912 23:01:39.112939   62386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.61.69 22 <nil> <nil>}
	I0912 23:01:39.112955   62386 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0912 23:01:39.582214   62943 start.go:364] duration metric: took 1m17.588760987s to acquireMachinesLock for "no-preload-380092"
	I0912 23:01:39.582282   62943 start.go:96] Skipping create...Using existing machine configuration
	I0912 23:01:39.582290   62943 fix.go:54] fixHost starting: 
	I0912 23:01:39.582684   62943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:01:39.582733   62943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:01:39.598752   62943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39263
	I0912 23:01:39.599113   62943 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:01:39.599558   62943 main.go:141] libmachine: Using API Version  1
	I0912 23:01:39.599578   62943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:01:39.599939   62943 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:01:39.600128   62943 main.go:141] libmachine: (no-preload-380092) Calling .DriverName
	I0912 23:01:39.600299   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetState
	I0912 23:01:39.601919   62943 fix.go:112] recreateIfNeeded on no-preload-380092: state=Stopped err=<nil>
	I0912 23:01:39.601948   62943 main.go:141] libmachine: (no-preload-380092) Calling .DriverName
	W0912 23:01:39.602105   62943 fix.go:138] unexpected machine state, will restart: <nil>
	I0912 23:01:39.604113   62943 out.go:177] * Restarting existing kvm2 VM for "no-preload-380092" ...
	I0912 23:01:36.066914   61904 addons.go:510] duration metric: took 1.399549943s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0912 23:01:36.894531   61904 node_ready.go:53] node "embed-certs-378112" has status "Ready":"False"
	I0912 23:01:38.895084   61904 node_ready.go:53] node "embed-certs-378112" has status "Ready":"False"
	I0912 23:01:39.333662   62386 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0912 23:01:39.333695   62386 machine.go:96] duration metric: took 971.039233ms to provisionDockerMachine
	I0912 23:01:39.333712   62386 start.go:293] postStartSetup for "old-k8s-version-642238" (driver="kvm2")
	I0912 23:01:39.333728   62386 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0912 23:01:39.333755   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .DriverName
	I0912 23:01:39.334078   62386 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0912 23:01:39.334110   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHHostname
	I0912 23:01:39.336759   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:39.337144   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:39.337185   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:39.337326   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHPort
	I0912 23:01:39.337492   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 23:01:39.337649   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHUsername
	I0912 23:01:39.337757   62386 sshutil.go:53] new ssh client: &{IP:192.168.61.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/old-k8s-version-642238/id_rsa Username:docker}
	I0912 23:01:39.424344   62386 ssh_runner.go:195] Run: cat /etc/os-release
	I0912 23:01:39.428560   62386 info.go:137] Remote host: Buildroot 2023.02.9
	I0912 23:01:39.428586   62386 filesync.go:126] Scanning /home/jenkins/minikube-integration/19616-5891/.minikube/addons for local assets ...
	I0912 23:01:39.428651   62386 filesync.go:126] Scanning /home/jenkins/minikube-integration/19616-5891/.minikube/files for local assets ...
	I0912 23:01:39.428720   62386 filesync.go:149] local asset: /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem -> 130832.pem in /etc/ssl/certs
	I0912 23:01:39.428822   62386 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0912 23:01:39.438578   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem --> /etc/ssl/certs/130832.pem (1708 bytes)
	I0912 23:01:39.466955   62386 start.go:296] duration metric: took 133.228748ms for postStartSetup
	I0912 23:01:39.466993   62386 fix.go:56] duration metric: took 19.507989112s for fixHost
	I0912 23:01:39.467011   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHHostname
	I0912 23:01:39.469732   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:39.470141   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:39.470177   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:39.470446   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHPort
	I0912 23:01:39.470662   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 23:01:39.470820   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 23:01:39.470952   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHUsername
	I0912 23:01:39.471079   62386 main.go:141] libmachine: Using SSH client type: native
	I0912 23:01:39.471234   62386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.61.69 22 <nil> <nil>}
	I0912 23:01:39.471243   62386 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0912 23:01:39.582078   62386 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726182099.559242358
	
	I0912 23:01:39.582101   62386 fix.go:216] guest clock: 1726182099.559242358
	I0912 23:01:39.582108   62386 fix.go:229] Guest: 2024-09-12 23:01:39.559242358 +0000 UTC Remote: 2024-09-12 23:01:39.466996536 +0000 UTC m=+200.180679357 (delta=92.245822ms)
	I0912 23:01:39.582148   62386 fix.go:200] guest clock delta is within tolerance: 92.245822ms
	I0912 23:01:39.582153   62386 start.go:83] releasing machines lock for "old-k8s-version-642238", held for 19.623187273s
	I0912 23:01:39.582177   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .DriverName
	I0912 23:01:39.582449   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetIP
	I0912 23:01:39.585170   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:39.585556   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:39.585595   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:39.585770   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .DriverName
	I0912 23:01:39.586282   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .DriverName
	I0912 23:01:39.586471   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .DriverName
	I0912 23:01:39.586548   62386 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0912 23:01:39.586590   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHHostname
	I0912 23:01:39.586706   62386 ssh_runner.go:195] Run: cat /version.json
	I0912 23:01:39.586734   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHHostname
	I0912 23:01:39.589355   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:39.589769   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:39.589802   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:39.589824   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:39.589990   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHPort
	I0912 23:01:39.590163   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 23:01:39.590229   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:39.590258   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:39.590331   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHUsername
	I0912 23:01:39.590413   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHPort
	I0912 23:01:39.590491   62386 sshutil.go:53] new ssh client: &{IP:192.168.61.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/old-k8s-version-642238/id_rsa Username:docker}
	I0912 23:01:39.590525   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 23:01:39.590621   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHUsername
	I0912 23:01:39.590717   62386 sshutil.go:53] new ssh client: &{IP:192.168.61.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/old-k8s-version-642238/id_rsa Username:docker}
	I0912 23:01:39.709188   62386 ssh_runner.go:195] Run: systemctl --version
	I0912 23:01:39.714703   62386 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0912 23:01:39.867112   62386 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0912 23:01:39.874818   62386 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0912 23:01:39.874897   62386 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0912 23:01:39.894532   62386 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0912 23:01:39.894558   62386 start.go:495] detecting cgroup driver to use...
	I0912 23:01:39.894611   62386 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0912 23:01:39.911715   62386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0912 23:01:39.927113   62386 docker.go:217] disabling cri-docker service (if available) ...
	I0912 23:01:39.927181   62386 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0912 23:01:39.946720   62386 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0912 23:01:39.966602   62386 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0912 23:01:40.132813   62386 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0912 23:01:40.318613   62386 docker.go:233] disabling docker service ...
	I0912 23:01:40.318764   62386 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0912 23:01:40.337557   62386 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0912 23:01:40.355312   62386 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0912 23:01:40.507081   62386 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0912 23:01:40.623129   62386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0912 23:01:40.637980   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0912 23:01:40.658137   62386 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0912 23:01:40.658197   62386 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:40.672985   62386 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0912 23:01:40.673041   62386 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:40.687684   62386 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:40.699586   62386 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:40.711468   62386 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0912 23:01:40.722380   62386 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0912 23:01:40.733057   62386 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0912 23:01:40.733126   62386 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0912 23:01:40.748577   62386 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0912 23:01:40.758735   62386 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 23:01:40.883686   62386 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0912 23:01:40.977996   62386 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0912 23:01:40.978065   62386 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0912 23:01:40.984192   62386 start.go:563] Will wait 60s for crictl version
	I0912 23:01:40.984257   62386 ssh_runner.go:195] Run: which crictl
	I0912 23:01:40.988379   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0912 23:01:41.027758   62386 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0912 23:01:41.027855   62386 ssh_runner.go:195] Run: crio --version
	I0912 23:01:41.057198   62386 ssh_runner.go:195] Run: crio --version
	I0912 23:01:41.091414   62386 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0912 23:01:39.605199   62943 main.go:141] libmachine: (no-preload-380092) Calling .Start
	I0912 23:01:39.605356   62943 main.go:141] libmachine: (no-preload-380092) Ensuring networks are active...
	I0912 23:01:39.606295   62943 main.go:141] libmachine: (no-preload-380092) Ensuring network default is active
	I0912 23:01:39.606540   62943 main.go:141] libmachine: (no-preload-380092) Ensuring network mk-no-preload-380092 is active
	I0912 23:01:39.606902   62943 main.go:141] libmachine: (no-preload-380092) Getting domain xml...
	I0912 23:01:39.607582   62943 main.go:141] libmachine: (no-preload-380092) Creating domain...
	I0912 23:01:40.958156   62943 main.go:141] libmachine: (no-preload-380092) Waiting to get IP...
	I0912 23:01:40.959304   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:40.959775   62943 main.go:141] libmachine: (no-preload-380092) DBG | unable to find current IP address of domain no-preload-380092 in network mk-no-preload-380092
	I0912 23:01:40.959848   62943 main.go:141] libmachine: (no-preload-380092) DBG | I0912 23:01:40.959761   63470 retry.go:31] will retry after 260.507819ms: waiting for machine to come up
	I0912 23:01:41.222360   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:41.222860   62943 main.go:141] libmachine: (no-preload-380092) DBG | unable to find current IP address of domain no-preload-380092 in network mk-no-preload-380092
	I0912 23:01:41.222897   62943 main.go:141] libmachine: (no-preload-380092) DBG | I0912 23:01:41.222793   63470 retry.go:31] will retry after 325.875384ms: waiting for machine to come up
	I0912 23:01:41.550174   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:41.550617   62943 main.go:141] libmachine: (no-preload-380092) DBG | unable to find current IP address of domain no-preload-380092 in network mk-no-preload-380092
	I0912 23:01:41.550642   62943 main.go:141] libmachine: (no-preload-380092) DBG | I0912 23:01:41.550563   63470 retry.go:31] will retry after 466.239328ms: waiting for machine to come up
	I0912 23:01:41.092686   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetIP
	I0912 23:01:41.096196   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:41.096806   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:41.096843   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:41.097167   62386 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0912 23:01:41.101509   62386 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0912 23:01:41.115914   62386 kubeadm.go:883] updating cluster {Name:old-k8s-version-642238 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-642238 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.69 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0912 23:01:41.116230   62386 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0912 23:01:41.116327   62386 ssh_runner.go:195] Run: sudo crictl images --output json
	I0912 23:01:41.164309   62386 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0912 23:01:41.164389   62386 ssh_runner.go:195] Run: which lz4
	I0912 23:01:41.168669   62386 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0912 23:01:41.172973   62386 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0912 23:01:41.173008   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0912 23:01:42.662843   62386 crio.go:462] duration metric: took 1.494204864s to copy over tarball
	I0912 23:01:42.662921   62386 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0912 23:01:40.895957   61904 node_ready.go:53] node "embed-certs-378112" has status "Ready":"False"
	I0912 23:01:41.896265   61904 node_ready.go:49] node "embed-certs-378112" has status "Ready":"True"
	I0912 23:01:41.896293   61904 node_ready.go:38] duration metric: took 7.004932553s for node "embed-certs-378112" to be "Ready" ...
	I0912 23:01:41.896304   61904 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 23:01:41.903665   61904 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-m8t6h" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:41.911837   61904 pod_ready.go:93] pod "coredns-7c65d6cfc9-m8t6h" in "kube-system" namespace has status "Ready":"True"
	I0912 23:01:41.911862   61904 pod_ready.go:82] duration metric: took 8.168974ms for pod "coredns-7c65d6cfc9-m8t6h" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:41.911875   61904 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-378112" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:41.920007   61904 pod_ready.go:93] pod "etcd-embed-certs-378112" in "kube-system" namespace has status "Ready":"True"
	I0912 23:01:41.920032   61904 pod_ready.go:82] duration metric: took 8.150491ms for pod "etcd-embed-certs-378112" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:41.920044   61904 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-378112" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:43.928585   61904 pod_ready.go:103] pod "kube-apiserver-embed-certs-378112" in "kube-system" namespace has status "Ready":"False"
	I0912 23:01:42.018082   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:42.018505   62943 main.go:141] libmachine: (no-preload-380092) DBG | unable to find current IP address of domain no-preload-380092 in network mk-no-preload-380092
	I0912 23:01:42.018534   62943 main.go:141] libmachine: (no-preload-380092) DBG | I0912 23:01:42.018465   63470 retry.go:31] will retry after 538.2428ms: waiting for machine to come up
	I0912 23:01:42.558175   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:42.558612   62943 main.go:141] libmachine: (no-preload-380092) DBG | unable to find current IP address of domain no-preload-380092 in network mk-no-preload-380092
	I0912 23:01:42.558649   62943 main.go:141] libmachine: (no-preload-380092) DBG | I0912 23:01:42.558579   63470 retry.go:31] will retry after 653.024741ms: waiting for machine to come up
	I0912 23:01:43.213349   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:43.213963   62943 main.go:141] libmachine: (no-preload-380092) DBG | unable to find current IP address of domain no-preload-380092 in network mk-no-preload-380092
	I0912 23:01:43.213991   62943 main.go:141] libmachine: (no-preload-380092) DBG | I0912 23:01:43.213926   63470 retry.go:31] will retry after 936.091256ms: waiting for machine to come up
	I0912 23:01:44.152459   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:44.152892   62943 main.go:141] libmachine: (no-preload-380092) DBG | unable to find current IP address of domain no-preload-380092 in network mk-no-preload-380092
	I0912 23:01:44.152931   62943 main.go:141] libmachine: (no-preload-380092) DBG | I0912 23:01:44.152841   63470 retry.go:31] will retry after 947.677491ms: waiting for machine to come up
	I0912 23:01:45.102330   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:45.102777   62943 main.go:141] libmachine: (no-preload-380092) DBG | unable to find current IP address of domain no-preload-380092 in network mk-no-preload-380092
	I0912 23:01:45.102803   62943 main.go:141] libmachine: (no-preload-380092) DBG | I0912 23:01:45.102730   63470 retry.go:31] will retry after 1.076341568s: waiting for machine to come up
	I0912 23:01:46.181138   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:46.181600   62943 main.go:141] libmachine: (no-preload-380092) DBG | unable to find current IP address of domain no-preload-380092 in network mk-no-preload-380092
	I0912 23:01:46.181659   62943 main.go:141] libmachine: (no-preload-380092) DBG | I0912 23:01:46.181529   63470 retry.go:31] will retry after 1.256599307s: waiting for machine to come up
	I0912 23:01:45.728604   62386 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.065648968s)
	I0912 23:01:45.728636   62386 crio.go:469] duration metric: took 3.065759694s to extract the tarball
	I0912 23:01:45.728646   62386 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0912 23:01:45.770020   62386 ssh_runner.go:195] Run: sudo crictl images --output json
	I0912 23:01:45.803238   62386 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0912 23:01:45.803263   62386 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0912 23:01:45.803356   62386 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0912 23:01:45.803393   62386 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0912 23:01:45.803411   62386 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0912 23:01:45.803433   62386 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0912 23:01:45.803482   62386 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0912 23:01:45.803487   62386 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0912 23:01:45.803358   62386 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 23:01:45.803456   62386 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0912 23:01:45.805495   62386 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0912 23:01:45.805522   62386 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0912 23:01:45.805549   62386 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0912 23:01:45.805538   62386 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0912 23:01:45.805583   62386 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0912 23:01:45.805500   62386 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0912 23:01:45.805498   62386 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 23:01:45.805503   62386 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0912 23:01:46.036001   62386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0912 23:01:46.053248   62386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0912 23:01:46.053339   62386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0912 23:01:46.055973   62386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0912 23:01:46.070206   62386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0912 23:01:46.079999   62386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0912 23:01:46.109937   62386 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0912 23:01:46.109989   62386 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0912 23:01:46.110039   62386 ssh_runner.go:195] Run: which crictl
	I0912 23:01:46.162798   62386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0912 23:01:46.224302   62386 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0912 23:01:46.224345   62386 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0912 23:01:46.224375   62386 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0912 23:01:46.224392   62386 ssh_runner.go:195] Run: which crictl
	I0912 23:01:46.224413   62386 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0912 23:01:46.224418   62386 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0912 23:01:46.224452   62386 ssh_runner.go:195] Run: which crictl
	I0912 23:01:46.224451   62386 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0912 23:01:46.224495   62386 ssh_runner.go:195] Run: which crictl
	I0912 23:01:46.224510   62386 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0912 23:01:46.224529   62386 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0912 23:01:46.224551   62386 ssh_runner.go:195] Run: which crictl
	I0912 23:01:46.243459   62386 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0912 23:01:46.243561   62386 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0912 23:01:46.243584   62386 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0912 23:01:46.243596   62386 ssh_runner.go:195] Run: which crictl
	I0912 23:01:46.243619   62386 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0912 23:01:46.243648   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0912 23:01:46.243658   62386 ssh_runner.go:195] Run: which crictl
	I0912 23:01:46.243619   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0912 23:01:46.243504   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0912 23:01:46.243737   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0912 23:01:46.243786   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0912 23:01:46.347085   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0912 23:01:46.347138   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0912 23:01:46.347184   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0912 23:01:46.354548   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0912 23:01:46.354548   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0912 23:01:46.354623   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0912 23:01:46.354658   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0912 23:01:46.490548   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0912 23:01:46.490655   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0912 23:01:46.490664   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0912 23:01:46.519541   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0912 23:01:46.519572   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0912 23:01:46.519583   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0912 23:01:46.519631   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0912 23:01:46.650941   62386 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0912 23:01:46.651102   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0912 23:01:46.651115   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0912 23:01:46.665864   62386 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0912 23:01:46.669346   62386 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0912 23:01:46.669393   62386 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0912 23:01:46.669433   62386 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0912 23:01:46.713909   62386 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0912 23:01:46.713928   62386 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0912 23:01:46.947952   62386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 23:01:47.093308   62386 cache_images.go:92] duration metric: took 1.29002863s to LoadCachedImages
	W0912 23:01:47.093414   62386 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I0912 23:01:47.093432   62386 kubeadm.go:934] updating node { 192.168.61.69 8443 v1.20.0 crio true true} ...
	I0912 23:01:47.093567   62386 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-642238 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.69
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-642238 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0912 23:01:47.093677   62386 ssh_runner.go:195] Run: crio config
	I0912 23:01:47.140625   62386 cni.go:84] Creating CNI manager for ""
	I0912 23:01:47.140651   62386 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0912 23:01:47.140665   62386 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0912 23:01:47.140683   62386 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.69 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-642238 NodeName:old-k8s-version-642238 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.69"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.69 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0912 23:01:47.140848   62386 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.69
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-642238"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.69
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.69"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0912 23:01:47.140918   62386 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0912 23:01:47.151096   62386 binaries.go:44] Found k8s binaries, skipping transfer
	I0912 23:01:47.151174   62386 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0912 23:01:47.161100   62386 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0912 23:01:47.178267   62386 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0912 23:01:47.196468   62386 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0912 23:01:47.215215   62386 ssh_runner.go:195] Run: grep 192.168.61.69	control-plane.minikube.internal$ /etc/hosts
	I0912 23:01:47.219835   62386 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.69	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0912 23:01:47.234386   62386 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 23:01:47.374152   62386 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0912 23:01:47.394130   62386 certs.go:68] Setting up /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238 for IP: 192.168.61.69
	I0912 23:01:47.394155   62386 certs.go:194] generating shared ca certs ...
	I0912 23:01:47.394174   62386 certs.go:226] acquiring lock for ca certs: {Name:mk5e19f2c2757edc3fcb6b43a39efee0885b349c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 23:01:47.394399   62386 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.key
	I0912 23:01:47.394459   62386 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.key
	I0912 23:01:47.394474   62386 certs.go:256] generating profile certs ...
	I0912 23:01:47.394591   62386 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238/client.key
	I0912 23:01:47.394663   62386 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238/apiserver.key.fcb0a37b
	I0912 23:01:47.394713   62386 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238/proxy-client.key
	I0912 23:01:47.394881   62386 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083.pem (1338 bytes)
	W0912 23:01:47.394922   62386 certs.go:480] ignoring /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083_empty.pem, impossibly tiny 0 bytes
	I0912 23:01:47.394936   62386 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem (1679 bytes)
	I0912 23:01:47.394980   62386 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem (1082 bytes)
	I0912 23:01:47.395016   62386 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem (1123 bytes)
	I0912 23:01:47.395050   62386 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem (1679 bytes)
	I0912 23:01:47.395103   62386 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem (1708 bytes)
	I0912 23:01:47.396058   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0912 23:01:47.436356   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0912 23:01:47.470442   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0912 23:01:47.496440   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0912 23:01:47.522541   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0912 23:01:47.547406   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0912 23:01:47.575687   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0912 23:01:47.602110   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0912 23:01:47.628233   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0912 23:01:47.659161   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083.pem --> /usr/share/ca-certificates/13083.pem (1338 bytes)
	I0912 23:01:47.698813   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem --> /usr/share/ca-certificates/130832.pem (1708 bytes)
	I0912 23:01:47.722494   62386 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0912 23:01:47.739479   62386 ssh_runner.go:195] Run: openssl version
	I0912 23:01:47.745476   62386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130832.pem && ln -fs /usr/share/ca-certificates/130832.pem /etc/ssl/certs/130832.pem"
	I0912 23:01:47.756396   62386 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130832.pem
	I0912 23:01:47.760904   62386 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 12 21:47 /usr/share/ca-certificates/130832.pem
	I0912 23:01:47.760983   62386 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130832.pem
	I0912 23:01:47.767122   62386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130832.pem /etc/ssl/certs/3ec20f2e.0"
	I0912 23:01:47.778372   62386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0912 23:01:47.789359   62386 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0912 23:01:47.794138   62386 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 12 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0912 23:01:47.794205   62386 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0912 23:01:47.799780   62386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0912 23:01:47.810735   62386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13083.pem && ln -fs /usr/share/ca-certificates/13083.pem /etc/ssl/certs/13083.pem"
	I0912 23:01:47.821361   62386 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13083.pem
	I0912 23:01:47.825785   62386 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 12 21:47 /usr/share/ca-certificates/13083.pem
	I0912 23:01:47.825848   62386 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13083.pem
	I0912 23:01:47.832591   62386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13083.pem /etc/ssl/certs/51391683.0"
	I0912 23:01:47.844637   62386 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0912 23:01:47.849313   62386 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0912 23:01:47.855337   62386 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0912 23:01:47.861492   62386 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0912 23:01:47.868028   62386 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0912 23:01:47.874215   62386 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0912 23:01:47.880279   62386 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0912 23:01:47.886478   62386 kubeadm.go:392] StartCluster: {Name:old-k8s-version-642238 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-642238 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.69 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 23:01:47.886579   62386 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0912 23:01:47.886665   62386 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0912 23:01:47.929887   62386 cri.go:89] found id: ""
	I0912 23:01:47.929965   62386 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0912 23:01:47.940988   62386 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0912 23:01:47.941014   62386 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0912 23:01:47.941071   62386 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0912 23:01:47.951357   62386 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0912 23:01:47.952314   62386 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-642238" does not appear in /home/jenkins/minikube-integration/19616-5891/kubeconfig
	I0912 23:01:47.952929   62386 kubeconfig.go:62] /home/jenkins/minikube-integration/19616-5891/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-642238" cluster setting kubeconfig missing "old-k8s-version-642238" context setting]
	I0912 23:01:47.953869   62386 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/kubeconfig: {Name:mkffb46c3e9d2b8baebc7237b48bf41bccf1a52c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 23:01:47.961244   62386 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0912 23:01:47.973427   62386 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.69
	I0912 23:01:47.973462   62386 kubeadm.go:1160] stopping kube-system containers ...
	I0912 23:01:47.973476   62386 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0912 23:01:47.973530   62386 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0912 23:01:48.008401   62386 cri.go:89] found id: ""
	I0912 23:01:48.008479   62386 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0912 23:01:48.024605   62386 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0912 23:01:48.034256   62386 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0912 23:01:48.034282   62386 kubeadm.go:157] found existing configuration files:
	
	I0912 23:01:48.034341   62386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0912 23:01:48.043468   62386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0912 23:01:48.043533   62386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0912 23:01:48.053241   62386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0912 23:01:48.062653   62386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0912 23:01:48.062728   62386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0912 23:01:48.073213   62386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0912 23:01:48.085060   62386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0912 23:01:48.085136   62386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0912 23:01:48.095722   62386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0912 23:01:48.105099   62386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0912 23:01:48.105169   62386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0912 23:01:48.114362   62386 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0912 23:01:48.123856   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:01:48.250258   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:01:48.824441   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:01:49.045340   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:01:49.151009   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
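For reference, the five kubeadm init phases above run in a fixed order once the stale-config check comes back empty. A minimal Go sketch of that sequence follows; the binary path, config path and PATH override are copied from the log lines, while the wrapper code and error handling are illustrative and not taken from the minikube source.

package main

import (
	"fmt"
	"os/exec"
)

// runPhases replays the kubeadm init phases in the same order as the log:
// certs -> kubeconfig -> kubelet-start -> control-plane -> etcd.
func runPhases() error {
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, phase := range phases {
		cmd := fmt.Sprintf(`sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, phase)
		if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
			return fmt.Errorf("phase %q failed: %v\n%s", phase, err, out)
		}
	}
	return nil
}

func main() {
	if err := runPhases(); err != nil {
		fmt.Println(err)
	}
}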
	I0912 23:01:49.245161   62386 api_server.go:52] waiting for apiserver process to appear ...
	I0912 23:01:49.245239   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:45.927266   61904 pod_ready.go:93] pod "kube-apiserver-embed-certs-378112" in "kube-system" namespace has status "Ready":"True"
	I0912 23:01:45.927293   61904 pod_ready.go:82] duration metric: took 4.007240345s for pod "kube-apiserver-embed-certs-378112" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:45.927307   61904 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-378112" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:46.456083   61904 pod_ready.go:93] pod "kube-controller-manager-embed-certs-378112" in "kube-system" namespace has status "Ready":"True"
	I0912 23:01:46.456111   61904 pod_ready.go:82] duration metric: took 528.7947ms for pod "kube-controller-manager-embed-certs-378112" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:46.456125   61904 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fvbbq" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:46.461632   61904 pod_ready.go:93] pod "kube-proxy-fvbbq" in "kube-system" namespace has status "Ready":"True"
	I0912 23:01:46.461659   61904 pod_ready.go:82] duration metric: took 5.526604ms for pod "kube-proxy-fvbbq" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:46.461673   61904 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-378112" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:46.467128   61904 pod_ready.go:93] pod "kube-scheduler-embed-certs-378112" in "kube-system" namespace has status "Ready":"True"
	I0912 23:01:46.467160   61904 pod_ready.go:82] duration metric: took 5.477201ms for pod "kube-scheduler-embed-certs-378112" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:46.467174   61904 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:48.474736   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:01:50.474846   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
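The pod_ready checks above poll the pod's Ready condition until it flips to True or the 6m0s budget runs out. Below is a rough way to reproduce that check by hand, wrapped in Go for consistency with the other sketches; the pod and namespace names come from the log, but the kubectl invocation is an assumption and not what pod_ready.go actually executes.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Keep asking for the Ready condition until it reports "True"
	// or the 6-minute budget (same as the log) is exhausted.
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "-n", "kube-system",
			"get", "pod", "metrics-server-6867b74b74-kvpqz",
			"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
		if err == nil && string(out) == "True" {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for Ready condition")
}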
	I0912 23:01:47.439687   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:47.440281   62943 main.go:141] libmachine: (no-preload-380092) DBG | unable to find current IP address of domain no-preload-380092 in network mk-no-preload-380092
	I0912 23:01:47.440312   62943 main.go:141] libmachine: (no-preload-380092) DBG | I0912 23:01:47.440140   63470 retry.go:31] will retry after 1.600662248s: waiting for machine to come up
	I0912 23:01:49.042962   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:49.043536   62943 main.go:141] libmachine: (no-preload-380092) DBG | unable to find current IP address of domain no-preload-380092 in network mk-no-preload-380092
	I0912 23:01:49.043569   62943 main.go:141] libmachine: (no-preload-380092) DBG | I0912 23:01:49.043481   63470 retry.go:31] will retry after 2.53148931s: waiting for machine to come up
	I0912 23:01:51.577526   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:51.578022   62943 main.go:141] libmachine: (no-preload-380092) DBG | unable to find current IP address of domain no-preload-380092 in network mk-no-preload-380092
	I0912 23:01:51.578139   62943 main.go:141] libmachine: (no-preload-380092) DBG | I0912 23:01:51.577965   63470 retry.go:31] will retry after 2.603355474s: waiting for machine to come up
	I0912 23:01:49.745632   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:50.245841   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:50.746368   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:51.245741   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:51.745708   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:52.246143   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:52.745402   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:53.245790   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:53.745965   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:54.246368   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:52.973232   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:01:54.974788   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:01:54.183119   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:54.183702   62943 main.go:141] libmachine: (no-preload-380092) DBG | unable to find current IP address of domain no-preload-380092 in network mk-no-preload-380092
	I0912 23:01:54.183745   62943 main.go:141] libmachine: (no-preload-380092) DBG | I0912 23:01:54.183655   63470 retry.go:31] will retry after 2.867321114s: waiting for machine to come up
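The retry.go lines above show libmachine polling for the VM's DHCP lease with a wait that grows on every attempt (roughly 1.6s, 2.5s, 2.6s, 2.9s here). A small sketch of that pattern follows; checkIP is a placeholder and the jitter schedule only approximates the logged cadence, since the real backoff lives in minikube's retry package and is not reproduced exactly.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// checkIP stands in for the DHCP-lease lookup in the log; it is a placeholder.
func checkIP() (string, error) { return "", errors.New("unable to find current IP address") }

func main() {
	wait := time.Second
	for attempt := 1; attempt <= 10; attempt++ {
		if ip, err := checkIP(); err == nil {
			fmt.Println("machine is up at", ip)
			return
		}
		// Grow the wait and add jitter, roughly matching the increasing waits above.
		sleep := wait + time.Duration(rand.Int63n(int64(wait)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		wait += 500 * time.Millisecond
	}
}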
	I0912 23:01:58.698415   61354 start.go:364] duration metric: took 53.897667909s to acquireMachinesLock for "default-k8s-diff-port-702201"
	I0912 23:01:58.698489   61354 start.go:96] Skipping create...Using existing machine configuration
	I0912 23:01:58.698499   61354 fix.go:54] fixHost starting: 
	I0912 23:01:58.698908   61354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:01:58.698938   61354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:01:58.716203   61354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42739
	I0912 23:01:58.716658   61354 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:01:58.717117   61354 main.go:141] libmachine: Using API Version  1
	I0912 23:01:58.717141   61354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:01:58.717489   61354 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:01:58.717717   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .DriverName
	I0912 23:01:58.717873   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetState
	I0912 23:01:58.719787   61354 fix.go:112] recreateIfNeeded on default-k8s-diff-port-702201: state=Stopped err=<nil>
	I0912 23:01:58.719810   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .DriverName
	W0912 23:01:58.719957   61354 fix.go:138] unexpected machine state, will restart: <nil>
	I0912 23:01:58.723531   61354 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-702201" ...
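Restarting an existing kvm2 VM goes through libvirt: activate the networks, then start the stopped domain. The sketch below shells out to roughly equivalent virsh commands; it is an approximation of what the driver does through the libvirt API, not literal minikube behaviour, and only the network and domain names are taken from the log.

package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) {
	out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
	fmt.Printf("%v -> err=%v\n%s", args, err, out)
}

func main() {
	// Roughly what "Ensuring networks are active" and "Creating domain" amount to:
	run("sudo", "virsh", "net-start", "default")                         // may already be active
	run("sudo", "virsh", "net-start", "mk-default-k8s-diff-port-702201") // profile-specific network
	run("sudo", "virsh", "start", "default-k8s-diff-port-702201")        // boot the stopped domain
}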
	I0912 23:01:54.745915   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:55.245740   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:55.745435   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:56.245679   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:56.745309   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:57.246032   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:57.745362   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:58.245409   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:58.745470   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:59.245307   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
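The repeated pgrep calls above are a fixed-interval poll (about every 500ms) for the kube-apiserver process to appear after the init phases. A minimal sketch of that wait loop follows; the pgrep pattern is copied from the log, while the overall timeout is an assumption since the log does not show it.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(4 * time.Minute) // assumed budget
	for time.Now().Before(deadline) {
		// Same probe the log runs over SSH: is a kube-apiserver started for minikube running?
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			fmt.Println("kube-apiserver process is up")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for kube-apiserver process")
}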
	I0912 23:01:57.052229   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:57.052788   62943 main.go:141] libmachine: (no-preload-380092) Found IP for machine: 192.168.50.253
	I0912 23:01:57.052816   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has current primary IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:57.052822   62943 main.go:141] libmachine: (no-preload-380092) Reserving static IP address...
	I0912 23:01:57.053251   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "no-preload-380092", mac: "52:54:00:d6:80:d3", ip: "192.168.50.253"} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:01:57.053275   62943 main.go:141] libmachine: (no-preload-380092) Reserved static IP address: 192.168.50.253
	I0912 23:01:57.053285   62943 main.go:141] libmachine: (no-preload-380092) DBG | skip adding static IP to network mk-no-preload-380092 - found existing host DHCP lease matching {name: "no-preload-380092", mac: "52:54:00:d6:80:d3", ip: "192.168.50.253"}
	I0912 23:01:57.053299   62943 main.go:141] libmachine: (no-preload-380092) DBG | Getting to WaitForSSH function...
	I0912 23:01:57.053330   62943 main.go:141] libmachine: (no-preload-380092) Waiting for SSH to be available...
	I0912 23:01:57.055927   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:57.056326   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:01:57.056407   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:57.056569   62943 main.go:141] libmachine: (no-preload-380092) DBG | Using SSH client type: external
	I0912 23:01:57.056583   62943 main.go:141] libmachine: (no-preload-380092) DBG | Using SSH private key: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/no-preload-380092/id_rsa (-rw-------)
	I0912 23:01:57.056610   62943 main.go:141] libmachine: (no-preload-380092) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.253 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19616-5891/.minikube/machines/no-preload-380092/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0912 23:01:57.056622   62943 main.go:141] libmachine: (no-preload-380092) DBG | About to run SSH command:
	I0912 23:01:57.056631   62943 main.go:141] libmachine: (no-preload-380092) DBG | exit 0
	I0912 23:01:57.181479   62943 main.go:141] libmachine: (no-preload-380092) DBG | SSH cmd err, output: <nil>: 
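The WaitForSSH step above simply keeps running "exit 0" over SSH with the options logged at 23:01:57.056610 until the command succeeds. A condensed sketch follows, keeping only a few of those options; the key path and address are from the log, and the retry interval is an assumption.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func sshAlive() bool {
	// Probe the guest the same way the log does: run "exit 0" non-interactively.
	cmd := exec.Command("ssh",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-i", "/home/jenkins/minikube-integration/19616-5891/.minikube/machines/no-preload-380092/id_rsa",
		"docker@192.168.50.253", "exit 0")
	return cmd.Run() == nil
}

func main() {
	for !sshAlive() {
		fmt.Println("waiting for SSH to be available...")
		time.Sleep(3 * time.Second)
	}
	fmt.Println("SSH is up")
}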
	I0912 23:01:57.181842   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetConfigRaw
	I0912 23:01:57.182453   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetIP
	I0912 23:01:57.185257   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:57.185670   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:01:57.185709   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:57.185982   62943 profile.go:143] Saving config to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/no-preload-380092/config.json ...
	I0912 23:01:57.186232   62943 machine.go:93] provisionDockerMachine start ...
	I0912 23:01:57.186254   62943 main.go:141] libmachine: (no-preload-380092) Calling .DriverName
	I0912 23:01:57.186468   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHHostname
	I0912 23:01:57.188948   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:57.189336   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:01:57.189385   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:57.189533   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHPort
	I0912 23:01:57.189705   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHKeyPath
	I0912 23:01:57.189834   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHKeyPath
	I0912 23:01:57.189954   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHUsername
	I0912 23:01:57.190111   62943 main.go:141] libmachine: Using SSH client type: native
	I0912 23:01:57.190349   62943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.50.253 22 <nil> <nil>}
	I0912 23:01:57.190367   62943 main.go:141] libmachine: About to run SSH command:
	hostname
	I0912 23:01:57.293765   62943 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0912 23:01:57.293791   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetMachineName
	I0912 23:01:57.294045   62943 buildroot.go:166] provisioning hostname "no-preload-380092"
	I0912 23:01:57.294078   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetMachineName
	I0912 23:01:57.294327   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHHostname
	I0912 23:01:57.297031   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:57.297414   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:01:57.297437   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:57.297661   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHPort
	I0912 23:01:57.297840   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHKeyPath
	I0912 23:01:57.298018   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHKeyPath
	I0912 23:01:57.298210   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHUsername
	I0912 23:01:57.298412   62943 main.go:141] libmachine: Using SSH client type: native
	I0912 23:01:57.298635   62943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.50.253 22 <nil> <nil>}
	I0912 23:01:57.298655   62943 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-380092 && echo "no-preload-380092" | sudo tee /etc/hostname
	I0912 23:01:57.421188   62943 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-380092
	
	I0912 23:01:57.421215   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHHostname
	I0912 23:01:57.424496   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:57.424928   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:01:57.424965   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:57.425156   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHPort
	I0912 23:01:57.425396   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHKeyPath
	I0912 23:01:57.425591   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHKeyPath
	I0912 23:01:57.425761   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHUsername
	I0912 23:01:57.425948   62943 main.go:141] libmachine: Using SSH client type: native
	I0912 23:01:57.426157   62943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.50.253 22 <nil> <nil>}
	I0912 23:01:57.426183   62943 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-380092' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-380092/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-380092' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0912 23:01:57.537580   62943 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0912 23:01:57.537607   62943 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19616-5891/.minikube CaCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19616-5891/.minikube}
	I0912 23:01:57.537674   62943 buildroot.go:174] setting up certificates
	I0912 23:01:57.537683   62943 provision.go:84] configureAuth start
	I0912 23:01:57.537694   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetMachineName
	I0912 23:01:57.537951   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetIP
	I0912 23:01:57.540791   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:57.541288   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:01:57.541315   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:57.541519   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHHostname
	I0912 23:01:57.544027   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:57.544410   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:01:57.544430   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:57.544605   62943 provision.go:143] copyHostCerts
	I0912 23:01:57.544677   62943 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem, removing ...
	I0912 23:01:57.544694   62943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem
	I0912 23:01:57.544757   62943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem (1679 bytes)
	I0912 23:01:57.544880   62943 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem, removing ...
	I0912 23:01:57.544892   62943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem
	I0912 23:01:57.544919   62943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem (1082 bytes)
	I0912 23:01:57.545011   62943 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem, removing ...
	I0912 23:01:57.545020   62943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem
	I0912 23:01:57.545048   62943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem (1123 bytes)
	I0912 23:01:57.545127   62943 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem org=jenkins.no-preload-380092 san=[127.0.0.1 192.168.50.253 localhost minikube no-preload-380092]
	I0912 23:01:58.077226   62943 provision.go:177] copyRemoteCerts
	I0912 23:01:58.077299   62943 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0912 23:01:58.077350   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHHostname
	I0912 23:01:58.080045   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:58.080404   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:01:58.080433   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:58.080691   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHPort
	I0912 23:01:58.080930   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHKeyPath
	I0912 23:01:58.081101   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHUsername
	I0912 23:01:58.081281   62943 sshutil.go:53] new ssh client: &{IP:192.168.50.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/no-preload-380092/id_rsa Username:docker}
	I0912 23:01:58.164075   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0912 23:01:58.188273   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0912 23:01:58.211076   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0912 23:01:58.233745   62943 provision.go:87] duration metric: took 695.915392ms to configureAuth
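The server certificate generated at 23:01:57.545127 carries the SAN list shown in the log (127.0.0.1, 192.168.50.253, localhost, minikube, no-preload-380092) and is then copied to /etc/docker on the guest. The sketch below only illustrates how those SANs end up in an x509 template; it produces a self-signed certificate for brevity, whereas minikube actually signs the server cert with its own CA key.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-380092"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile config
		// SANs as listed in the provision.go line above.
		DNSNames:    []string{"localhost", "minikube", "no-preload-380092"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.253")},
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}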
	I0912 23:01:58.233788   62943 buildroot.go:189] setting minikube options for container-runtime
	I0912 23:01:58.233964   62943 config.go:182] Loaded profile config "no-preload-380092": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 23:01:58.234061   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHHostname
	I0912 23:01:58.236576   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:58.236915   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:01:58.236948   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:58.237165   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHPort
	I0912 23:01:58.237453   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHKeyPath
	I0912 23:01:58.237666   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHKeyPath
	I0912 23:01:58.237848   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHUsername
	I0912 23:01:58.238014   62943 main.go:141] libmachine: Using SSH client type: native
	I0912 23:01:58.238172   62943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.50.253 22 <nil> <nil>}
	I0912 23:01:58.238187   62943 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0912 23:01:58.461160   62943 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0912 23:01:58.461185   62943 machine.go:96] duration metric: took 1.274940476s to provisionDockerMachine
	I0912 23:01:58.461196   62943 start.go:293] postStartSetup for "no-preload-380092" (driver="kvm2")
	I0912 23:01:58.461206   62943 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0912 23:01:58.461220   62943 main.go:141] libmachine: (no-preload-380092) Calling .DriverName
	I0912 23:01:58.461531   62943 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0912 23:01:58.461560   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHHostname
	I0912 23:01:58.464374   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:58.464862   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:01:58.464892   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:58.465044   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHPort
	I0912 23:01:58.465280   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHKeyPath
	I0912 23:01:58.465462   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHUsername
	I0912 23:01:58.465639   62943 sshutil.go:53] new ssh client: &{IP:192.168.50.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/no-preload-380092/id_rsa Username:docker}
	I0912 23:01:58.553080   62943 ssh_runner.go:195] Run: cat /etc/os-release
	I0912 23:01:58.557294   62943 info.go:137] Remote host: Buildroot 2023.02.9
	I0912 23:01:58.557319   62943 filesync.go:126] Scanning /home/jenkins/minikube-integration/19616-5891/.minikube/addons for local assets ...
	I0912 23:01:58.557395   62943 filesync.go:126] Scanning /home/jenkins/minikube-integration/19616-5891/.minikube/files for local assets ...
	I0912 23:01:58.557494   62943 filesync.go:149] local asset: /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem -> 130832.pem in /etc/ssl/certs
	I0912 23:01:58.557647   62943 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0912 23:01:58.566823   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem --> /etc/ssl/certs/130832.pem (1708 bytes)
	I0912 23:01:58.590357   62943 start.go:296] duration metric: took 129.147272ms for postStartSetup
	I0912 23:01:58.590401   62943 fix.go:56] duration metric: took 19.008109979s for fixHost
	I0912 23:01:58.590425   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHHostname
	I0912 23:01:58.593131   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:58.593490   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:01:58.593519   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:58.593693   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHPort
	I0912 23:01:58.593894   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHKeyPath
	I0912 23:01:58.594075   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHKeyPath
	I0912 23:01:58.594242   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHUsername
	I0912 23:01:58.594415   62943 main.go:141] libmachine: Using SSH client type: native
	I0912 23:01:58.594612   62943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.50.253 22 <nil> <nil>}
	I0912 23:01:58.594625   62943 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0912 23:01:58.698233   62943 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726182118.655051061
	
	I0912 23:01:58.698261   62943 fix.go:216] guest clock: 1726182118.655051061
	I0912 23:01:58.698271   62943 fix.go:229] Guest: 2024-09-12 23:01:58.655051061 +0000 UTC Remote: 2024-09-12 23:01:58.590406505 +0000 UTC m=+96.733899188 (delta=64.644556ms)
	I0912 23:01:58.698327   62943 fix.go:200] guest clock delta is within tolerance: 64.644556ms
	I0912 23:01:58.698333   62943 start.go:83] releasing machines lock for "no-preload-380092", held for 19.116080043s
	I0912 23:01:58.698358   62943 main.go:141] libmachine: (no-preload-380092) Calling .DriverName
	I0912 23:01:58.698635   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetIP
	I0912 23:01:58.701676   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:58.702052   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:01:58.702088   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:58.702329   62943 main.go:141] libmachine: (no-preload-380092) Calling .DriverName
	I0912 23:01:58.702865   62943 main.go:141] libmachine: (no-preload-380092) Calling .DriverName
	I0912 23:01:58.703120   62943 main.go:141] libmachine: (no-preload-380092) Calling .DriverName
	I0912 23:01:58.703279   62943 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0912 23:01:58.703337   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHHostname
	I0912 23:01:58.703392   62943 ssh_runner.go:195] Run: cat /version.json
	I0912 23:01:58.703419   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHHostname
	I0912 23:01:58.706149   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:58.706381   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:58.706704   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:01:58.706773   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:58.706785   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHPort
	I0912 23:01:58.706804   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:01:58.706831   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:58.706976   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHKeyPath
	I0912 23:01:58.707009   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHPort
	I0912 23:01:58.707142   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHUsername
	I0912 23:01:58.707308   62943 sshutil.go:53] new ssh client: &{IP:192.168.50.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/no-preload-380092/id_rsa Username:docker}
	I0912 23:01:58.707323   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHKeyPath
	I0912 23:01:58.707505   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHUsername
	I0912 23:01:58.707644   62943 sshutil.go:53] new ssh client: &{IP:192.168.50.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/no-preload-380092/id_rsa Username:docker}
	I0912 23:01:58.822704   62943 ssh_runner.go:195] Run: systemctl --version
	I0912 23:01:58.828592   62943 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0912 23:01:58.970413   62943 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0912 23:01:58.976303   62943 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0912 23:01:58.976384   62943 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0912 23:01:58.991593   62943 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0912 23:01:58.991628   62943 start.go:495] detecting cgroup driver to use...
	I0912 23:01:58.991695   62943 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0912 23:01:59.007839   62943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0912 23:01:59.021107   62943 docker.go:217] disabling cri-docker service (if available) ...
	I0912 23:01:59.021176   62943 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0912 23:01:59.038570   62943 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0912 23:01:59.055392   62943 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0912 23:01:59.183649   62943 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0912 23:01:59.364825   62943 docker.go:233] disabling docker service ...
	I0912 23:01:59.364889   62943 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0912 23:01:59.382320   62943 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0912 23:01:59.397405   62943 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0912 23:01:59.528989   62943 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0912 23:01:59.653994   62943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0912 23:01:59.671437   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0912 23:01:59.693024   62943 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0912 23:01:59.693088   62943 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:59.704385   62943 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0912 23:01:59.704451   62943 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:59.715304   62943 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:59.726058   62943 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:59.736746   62943 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0912 23:01:59.749178   62943 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:59.761776   62943 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:59.779863   62943 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:59.790713   62943 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0912 23:01:59.801023   62943 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0912 23:01:59.801093   62943 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0912 23:01:59.815237   62943 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0912 23:01:59.825967   62943 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 23:01:59.952175   62943 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0912 23:02:00.050201   62943 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0912 23:02:00.050334   62943 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0912 23:02:00.055275   62943 start.go:563] Will wait 60s for crictl version
	I0912 23:02:00.055338   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:02:00.060075   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0912 23:02:00.100842   62943 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
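After the CRI-O restart the log waits up to 60s for the socket to appear and then for crictl to answer. A minimal sketch of that readiness check follows; the socket path and crictl binary path are from the log, and the poll interval is an assumption.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func waitFor(desc string, ok func() bool) bool {
	deadline := time.Now().Add(60 * time.Second)
	for time.Now().Before(deadline) {
		if ok() {
			return true
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for", desc)
	return false
}

func main() {
	// 1) the socket file exists, 2) crictl can talk to the runtime.
	sockUp := waitFor("/var/run/crio/crio.sock", func() bool {
		_, err := os.Stat("/var/run/crio/crio.sock")
		return err == nil
	})
	if sockUp && waitFor("crictl version", func() bool {
		return exec.Command("sudo", "/usr/bin/crictl", "version").Run() == nil
	}) {
		fmt.Println("CRI-O is ready")
	}
}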
	I0912 23:02:00.100932   62943 ssh_runner.go:195] Run: crio --version
	I0912 23:02:00.127399   62943 ssh_runner.go:195] Run: crio --version
	I0912 23:02:00.161143   62943 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0912 23:01:57.474156   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:01:59.474331   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:00.162519   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetIP
	I0912 23:02:00.165323   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:02:00.165776   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:02:00.165806   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:02:00.166046   62943 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0912 23:02:00.170494   62943 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0912 23:02:00.186142   62943 kubeadm.go:883] updating cluster {Name:no-preload-380092 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-380092 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.253 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0912 23:02:00.186296   62943 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0912 23:02:00.186348   62943 ssh_runner.go:195] Run: sudo crictl images --output json
	I0912 23:02:00.221527   62943 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0912 23:02:00.221550   62943 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0912 23:02:00.221607   62943 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 23:02:00.221619   62943 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0912 23:02:00.221679   62943 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0912 23:02:00.221679   62943 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0912 23:02:00.221699   62943 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0912 23:02:00.221661   62943 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0912 23:02:00.221763   62943 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0912 23:02:00.221763   62943 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0912 23:02:00.223203   62943 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0912 23:02:00.223215   62943 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 23:02:00.223269   62943 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0912 23:02:00.223278   62943 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0912 23:02:00.223286   62943 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0912 23:02:00.223208   62943 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0912 23:02:00.223363   62943 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0912 23:02:00.223381   62943 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0912 23:02:00.451698   62943 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I0912 23:02:00.459278   62943 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I0912 23:02:00.459739   62943 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0912 23:02:00.463935   62943 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I0912 23:02:00.464136   62943 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I0912 23:02:00.468507   62943 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I0912 23:02:00.503388   62943 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0912 23:02:00.536792   62943 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I0912 23:02:00.536840   62943 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I0912 23:02:00.536897   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:02:00.599938   62943 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I0912 23:02:00.599985   62943 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I0912 23:02:00.600030   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:02:00.683783   62943 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0912 23:02:00.683826   62943 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0912 23:02:00.683852   62943 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I0912 23:02:00.683872   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:02:00.683883   62943 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I0912 23:02:00.683908   62943 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I0912 23:02:00.683939   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:02:00.683950   62943 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I0912 23:02:00.683886   62943 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I0912 23:02:00.683984   62943 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0912 23:02:00.684075   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:02:00.684008   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:02:00.736368   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0912 23:02:00.736438   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0912 23:02:00.736522   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0912 23:02:00.736549   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0912 23:02:00.736597   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0912 23:02:00.736620   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0912 23:02:00.864642   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0912 23:02:00.864677   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0912 23:02:00.864802   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0912 23:02:00.864856   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0912 23:02:00.869964   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0912 23:02:00.869998   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0912 23:02:00.996762   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0912 23:02:00.999239   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0912 23:02:00.999239   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0912 23:02:01.000760   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0912 23:02:01.000846   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0912 23:02:01.000895   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0912 23:02:01.101860   62943 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I0912 23:02:01.102057   62943 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0912 23:02:01.132743   62943 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0912 23:02:01.132926   62943 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0912 23:02:01.134809   62943 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I0912 23:02:01.134911   62943 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0912 23:02:01.135089   62943 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I0912 23:02:01.135167   62943 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I0912 23:02:01.143459   62943 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0912 23:02:01.143487   62943 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I0912 23:02:01.143503   62943 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0912 23:02:01.143510   62943 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0912 23:02:01.143549   62943 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0912 23:02:01.143584   62943 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0912 23:02:01.143584   62943 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I0912 23:02:01.147907   62943 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I0912 23:02:01.147935   62943 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I0912 23:02:01.148079   62943 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I0912 23:02:01.312549   62943 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 23:01:58.724795   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .Start
	I0912 23:01:58.724966   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Ensuring networks are active...
	I0912 23:01:58.725864   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Ensuring network default is active
	I0912 23:01:58.726231   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Ensuring network mk-default-k8s-diff-port-702201 is active
	I0912 23:01:58.726766   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Getting domain xml...
	I0912 23:01:58.727695   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Creating domain...
	I0912 23:02:00.060410   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting to get IP...
	I0912 23:02:00.061559   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:00.062006   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | unable to find current IP address of domain default-k8s-diff-port-702201 in network mk-default-k8s-diff-port-702201
	I0912 23:02:00.062101   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | I0912 23:02:00.061997   63646 retry.go:31] will retry after 232.302394ms: waiting for machine to come up
	I0912 23:02:00.295568   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:00.296234   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | unable to find current IP address of domain default-k8s-diff-port-702201 in network mk-default-k8s-diff-port-702201
	I0912 23:02:00.296288   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | I0912 23:02:00.296094   63646 retry.go:31] will retry after 304.721087ms: waiting for machine to come up
	I0912 23:02:00.602956   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:00.603436   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | unable to find current IP address of domain default-k8s-diff-port-702201 in network mk-default-k8s-diff-port-702201
	I0912 23:02:00.603464   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | I0912 23:02:00.603396   63646 retry.go:31] will retry after 370.621505ms: waiting for machine to come up
	I0912 23:02:00.975924   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:00.976418   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | unable to find current IP address of domain default-k8s-diff-port-702201 in network mk-default-k8s-diff-port-702201
	I0912 23:02:00.976452   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | I0912 23:02:00.976376   63646 retry.go:31] will retry after 454.623859ms: waiting for machine to come up
	I0912 23:02:01.433257   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:01.434024   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | unable to find current IP address of domain default-k8s-diff-port-702201 in network mk-default-k8s-diff-port-702201
	I0912 23:02:01.434056   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | I0912 23:02:01.433971   63646 retry.go:31] will retry after 726.658127ms: waiting for machine to come up
	I0912 23:02:02.162016   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:02.162562   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | unable to find current IP address of domain default-k8s-diff-port-702201 in network mk-default-k8s-diff-port-702201
	I0912 23:02:02.162592   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | I0912 23:02:02.162501   63646 retry.go:31] will retry after 756.903624ms: waiting for machine to come up
	I0912 23:01:59.746112   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:00.246227   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:00.745742   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:01.245741   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:01.746355   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:02.245345   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:02.745752   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:03.246089   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:03.745811   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:04.245382   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:01.474545   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:03.975249   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:03.307790   62943 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (2.164213632s)
	I0912 23:02:03.307822   62943 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I0912 23:02:03.307845   62943 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I0912 23:02:03.307869   62943 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3: (2.164220532s)
	I0912 23:02:03.307903   62943 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I0912 23:02:03.307906   62943 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I0912 23:02:03.307944   62943 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0: (2.164339277s)
	I0912 23:02:03.307963   62943 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0912 23:02:03.307999   62943 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.995423487s)
	I0912 23:02:03.308043   62943 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0912 23:02:03.308076   62943 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 23:02:03.308128   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:02:03.312883   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 23:02:05.481118   62943 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (2.173175236s)
	I0912 23:02:05.481159   62943 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I0912 23:02:05.481192   62943 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0912 23:02:05.481239   62943 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.168321222s)
	I0912 23:02:05.481245   62943 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0912 23:02:05.481303   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 23:02:05.516667   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 23:02:02.921557   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:02.922010   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | unable to find current IP address of domain default-k8s-diff-port-702201 in network mk-default-k8s-diff-port-702201
	I0912 23:02:02.922036   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | I0912 23:02:02.921968   63646 retry.go:31] will retry after 850.274218ms: waiting for machine to come up
	I0912 23:02:03.774125   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:03.774603   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | unable to find current IP address of domain default-k8s-diff-port-702201 in network mk-default-k8s-diff-port-702201
	I0912 23:02:03.774637   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | I0912 23:02:03.774549   63646 retry.go:31] will retry after 1.117484339s: waiting for machine to come up
	I0912 23:02:04.893960   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:04.894645   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | unable to find current IP address of domain default-k8s-diff-port-702201 in network mk-default-k8s-diff-port-702201
	I0912 23:02:04.894671   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | I0912 23:02:04.894572   63646 retry.go:31] will retry after 1.705444912s: waiting for machine to come up
	I0912 23:02:06.602765   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:06.603347   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | unable to find current IP address of domain default-k8s-diff-port-702201 in network mk-default-k8s-diff-port-702201
	I0912 23:02:06.603371   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | I0912 23:02:06.603270   63646 retry.go:31] will retry after 2.06008552s: waiting for machine to come up
	I0912 23:02:04.745649   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:05.245909   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:05.745777   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:06.245432   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:06.745472   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:07.245763   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:07.745416   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:08.245886   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:08.745493   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:09.246056   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:06.474009   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:08.474804   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:07.476441   62943 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (1.995147485s)
	I0912 23:02:07.476474   62943 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I0912 23:02:07.476497   62943 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0912 23:02:07.476545   62943 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0912 23:02:07.476556   62943 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.959857575s)
	I0912 23:02:07.476602   62943 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0912 23:02:07.476685   62943 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0912 23:02:09.332759   62943 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.856180957s)
	I0912 23:02:09.332804   62943 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I0912 23:02:09.332853   62943 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I0912 23:02:09.332762   62943 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.856053866s)
	I0912 23:02:09.332909   62943 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I0912 23:02:09.332947   62943 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0912 23:02:11.397888   62943 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.064939833s)
	I0912 23:02:11.397926   62943 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I0912 23:02:11.397954   62943 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0912 23:02:11.397992   62943 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0912 23:02:08.665520   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:08.666071   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | unable to find current IP address of domain default-k8s-diff-port-702201 in network mk-default-k8s-diff-port-702201
	I0912 23:02:08.666102   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | I0912 23:02:08.666014   63646 retry.go:31] will retry after 2.158544571s: waiting for machine to come up
	I0912 23:02:10.826850   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:10.827354   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | unable to find current IP address of domain default-k8s-diff-port-702201 in network mk-default-k8s-diff-port-702201
	I0912 23:02:10.827382   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | I0912 23:02:10.827290   63646 retry.go:31] will retry after 3.518596305s: waiting for machine to come up
	I0912 23:02:09.746171   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:10.246283   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:10.745675   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:11.245560   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:11.745384   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:12.245631   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:12.745749   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:13.245487   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:13.745849   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:14.245391   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:10.975044   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:13.473831   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:15.474321   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:14.664970   62943 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.266950326s)
	I0912 23:02:14.665018   62943 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0912 23:02:14.665063   62943 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0912 23:02:14.665138   62943 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0912 23:02:15.516503   62943 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0912 23:02:15.516549   62943 cache_images.go:123] Successfully loaded all cached images
	I0912 23:02:15.516556   62943 cache_images.go:92] duration metric: took 15.294994067s to LoadCachedImages
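	Note: the 62943 lines above trace minikube's cached-image load path for the crio runtime: stat the tarball under /var/lib/minikube/images, remove any stale copy from the container runtime with crictl, then import the tarball with podman. A minimal sketch of the same steps for one image, reusing only paths, tags and commands that appear verbatim in this log (run on the node over SSH):

	  # check whether the cached tarball is already present on the node
	  stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	  # drop any copy of the image whose digest no longer matches
	  sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	  # import the cached tarball into crio's image store via podman
	  sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1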
	I0912 23:02:15.516574   62943 kubeadm.go:934] updating node { 192.168.50.253 8443 v1.31.1 crio true true} ...
	I0912 23:02:15.516716   62943 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-380092 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.253
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-380092 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0912 23:02:15.516811   62943 ssh_runner.go:195] Run: crio config
	I0912 23:02:15.570588   62943 cni.go:84] Creating CNI manager for ""
	I0912 23:02:15.570610   62943 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0912 23:02:15.570621   62943 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0912 23:02:15.570649   62943 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.253 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-380092 NodeName:no-preload-380092 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.253"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.253 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0912 23:02:15.570809   62943 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.253
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-380092"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.253
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.253"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
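	The block above is the kubeadm, kubelet and kube-proxy configuration that minikube renders for the no-preload-380092 profile; it is written to /var/tmp/minikube/kubeadm.yaml.new, compared against the file currently in place, and then promoted before the kubeadm phases run. A minimal sketch of that compare-and-promote step, using the same commands that appear further down in this log:

	  # compare the freshly rendered config with the one currently on the node
	  sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	  # promote the new config and regenerate the control-plane certificates from it
	  sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	  sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml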
	
	I0912 23:02:15.570887   62943 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0912 23:02:15.581208   62943 binaries.go:44] Found k8s binaries, skipping transfer
	I0912 23:02:15.581272   62943 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0912 23:02:15.590463   62943 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0912 23:02:15.606240   62943 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0912 23:02:15.621579   62943 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0912 23:02:15.639566   62943 ssh_runner.go:195] Run: grep 192.168.50.253	control-plane.minikube.internal$ /etc/hosts
	I0912 23:02:15.643207   62943 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.253	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0912 23:02:15.654813   62943 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 23:02:15.767367   62943 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0912 23:02:15.784468   62943 certs.go:68] Setting up /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/no-preload-380092 for IP: 192.168.50.253
	I0912 23:02:15.784500   62943 certs.go:194] generating shared ca certs ...
	I0912 23:02:15.784523   62943 certs.go:226] acquiring lock for ca certs: {Name:mk5e19f2c2757edc3fcb6b43a39efee0885b349c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 23:02:15.784717   62943 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.key
	I0912 23:02:15.784811   62943 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.key
	I0912 23:02:15.784828   62943 certs.go:256] generating profile certs ...
	I0912 23:02:15.784946   62943 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/no-preload-380092/client.key
	I0912 23:02:15.785034   62943 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/no-preload-380092/apiserver.key.718f72e7
	I0912 23:02:15.785092   62943 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/no-preload-380092/proxy-client.key
	I0912 23:02:15.785295   62943 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083.pem (1338 bytes)
	W0912 23:02:15.785345   62943 certs.go:480] ignoring /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083_empty.pem, impossibly tiny 0 bytes
	I0912 23:02:15.785362   62943 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem (1679 bytes)
	I0912 23:02:15.785407   62943 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem (1082 bytes)
	I0912 23:02:15.785446   62943 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem (1123 bytes)
	I0912 23:02:15.785485   62943 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem (1679 bytes)
	I0912 23:02:15.785553   62943 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem (1708 bytes)
	I0912 23:02:15.786473   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0912 23:02:15.832614   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0912 23:02:15.867891   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0912 23:02:15.899262   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0912 23:02:15.930427   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/no-preload-380092/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0912 23:02:15.970193   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/no-preload-380092/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0912 23:02:15.995317   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/no-preload-380092/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0912 23:02:16.019282   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/no-preload-380092/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0912 23:02:16.042121   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem --> /usr/share/ca-certificates/130832.pem (1708 bytes)
	I0912 23:02:16.065744   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0912 23:02:16.088894   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083.pem --> /usr/share/ca-certificates/13083.pem (1338 bytes)
	I0912 23:02:16.111041   62943 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0912 23:02:16.127119   62943 ssh_runner.go:195] Run: openssl version
	I0912 23:02:16.132754   62943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130832.pem && ln -fs /usr/share/ca-certificates/130832.pem /etc/ssl/certs/130832.pem"
	I0912 23:02:16.142933   62943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130832.pem
	I0912 23:02:16.147311   62943 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 12 21:47 /usr/share/ca-certificates/130832.pem
	I0912 23:02:16.147367   62943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130832.pem
	I0912 23:02:16.152734   62943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130832.pem /etc/ssl/certs/3ec20f2e.0"
	I0912 23:02:16.163131   62943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0912 23:02:16.173390   62943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0912 23:02:16.177785   62943 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 12 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0912 23:02:16.177842   62943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0912 23:02:16.183047   62943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0912 23:02:16.192890   62943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13083.pem && ln -fs /usr/share/ca-certificates/13083.pem /etc/ssl/certs/13083.pem"
	I0912 23:02:16.202818   62943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13083.pem
	I0912 23:02:16.206815   62943 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 12 21:47 /usr/share/ca-certificates/13083.pem
	I0912 23:02:16.206871   62943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13083.pem
	I0912 23:02:16.212049   62943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13083.pem /etc/ssl/certs/51391683.0"
	I0912 23:02:16.222224   62943 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0912 23:02:16.226504   62943 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0912 23:02:16.232090   62943 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0912 23:02:16.237380   62943 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0912 23:02:16.243024   62943 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0912 23:02:16.248333   62943 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0912 23:02:16.258745   62943 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0912 23:02:16.274068   62943 kubeadm.go:392] StartCluster: {Name:no-preload-380092 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-380092 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.253 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 23:02:16.274168   62943 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0912 23:02:16.274216   62943 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0912 23:02:16.323688   62943 cri.go:89] found id: ""
	I0912 23:02:16.323751   62943 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0912 23:02:16.335130   62943 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0912 23:02:16.335152   62943 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0912 23:02:16.335192   62943 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0912 23:02:16.346285   62943 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0912 23:02:16.347271   62943 kubeconfig.go:125] found "no-preload-380092" server: "https://192.168.50.253:8443"
	I0912 23:02:16.349217   62943 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0912 23:02:16.360266   62943 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.253
	I0912 23:02:16.360308   62943 kubeadm.go:1160] stopping kube-system containers ...
	I0912 23:02:16.360319   62943 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0912 23:02:16.360361   62943 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0912 23:02:16.398876   62943 cri.go:89] found id: ""
	I0912 23:02:16.398942   62943 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0912 23:02:16.418893   62943 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0912 23:02:16.430531   62943 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0912 23:02:16.430558   62943 kubeadm.go:157] found existing configuration files:
	
	I0912 23:02:16.430602   62943 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0912 23:02:16.441036   62943 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0912 23:02:16.441093   62943 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0912 23:02:16.452768   62943 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0912 23:02:16.463317   62943 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0912 23:02:16.463394   62943 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0912 23:02:16.473412   62943 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0912 23:02:16.482470   62943 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0912 23:02:16.482530   62943 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0912 23:02:16.494488   62943 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0912 23:02:16.503873   62943 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0912 23:02:16.503955   62943 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0912 23:02:16.513052   62943 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0912 23:02:16.522738   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:02:16.630286   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
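	At this point the restart path has found none of admin.conf, kubelet.conf, controller-manager.conf or scheduler.conf on the node, removed any stale copies (a no-op here, since the files do not exist), and is regenerating the certificates and kubeconfigs with kubeadm phases. The per-file staleness check shown in the log is just a grep for the expected control-plane endpoint; a condensed sketch for admin.conf, combining the same grep and rm commands the log runs separately:

	  # a conf file that does not reference the expected endpoint is treated as stale and removed
	  sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf || sudo rm -f /etc/kubernetes/admin.conf
	  # then all kubeconfigs are regenerated from the kubeadm config
	  sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml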
	I0912 23:02:14.347758   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:14.348342   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | unable to find current IP address of domain default-k8s-diff-port-702201 in network mk-default-k8s-diff-port-702201
	I0912 23:02:14.348365   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | I0912 23:02:14.348276   63646 retry.go:31] will retry after 2.993143621s: waiting for machine to come up
	I0912 23:02:14.745599   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:15.245719   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:15.745787   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:16.245959   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:16.746271   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:17.245414   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:17.745343   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:18.246080   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:18.746025   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:19.245751   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:17.343758   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.344408   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Found IP for machine: 192.168.39.214
	I0912 23:02:17.344443   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has current primary IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.344453   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Reserving static IP address...
	I0912 23:02:17.344817   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Reserved static IP address: 192.168.39.214
	I0912 23:02:17.344848   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-702201", mac: "52:54:00:b4:fd:fb", ip: "192.168.39.214"} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:02:17.344857   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for SSH to be available...
	I0912 23:02:17.344886   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | skip adding static IP to network mk-default-k8s-diff-port-702201 - found existing host DHCP lease matching {name: "default-k8s-diff-port-702201", mac: "52:54:00:b4:fd:fb", ip: "192.168.39.214"}
	I0912 23:02:17.344903   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | Getting to WaitForSSH function...
	I0912 23:02:17.347627   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.348094   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:02:17.348128   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.348236   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | Using SSH client type: external
	I0912 23:02:17.348296   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | Using SSH private key: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/default-k8s-diff-port-702201/id_rsa (-rw-------)
	I0912 23:02:17.348330   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.214 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19616-5891/.minikube/machines/default-k8s-diff-port-702201/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0912 23:02:17.348353   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | About to run SSH command:
	I0912 23:02:17.348363   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | exit 0
	I0912 23:02:17.474375   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | SSH cmd err, output: <nil>: 
	I0912 23:02:17.474757   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetConfigRaw
	I0912 23:02:17.475391   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetIP
	I0912 23:02:17.478041   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.478557   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:02:17.478590   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.478791   61354 profile.go:143] Saving config to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/default-k8s-diff-port-702201/config.json ...
	I0912 23:02:17.479064   61354 machine.go:93] provisionDockerMachine start ...
	I0912 23:02:17.479087   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .DriverName
	I0912 23:02:17.479317   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHHostname
	I0912 23:02:17.482167   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.482584   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:02:17.482616   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.482805   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHPort
	I0912 23:02:17.482996   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHKeyPath
	I0912 23:02:17.483163   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHKeyPath
	I0912 23:02:17.483287   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHUsername
	I0912 23:02:17.483443   61354 main.go:141] libmachine: Using SSH client type: native
	I0912 23:02:17.483653   61354 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0912 23:02:17.483669   61354 main.go:141] libmachine: About to run SSH command:
	hostname
	I0912 23:02:17.590238   61354 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0912 23:02:17.590267   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetMachineName
	I0912 23:02:17.590549   61354 buildroot.go:166] provisioning hostname "default-k8s-diff-port-702201"
	I0912 23:02:17.590588   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetMachineName
	I0912 23:02:17.590766   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHHostname
	I0912 23:02:17.593804   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.594267   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:02:17.594320   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.594542   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHPort
	I0912 23:02:17.594761   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHKeyPath
	I0912 23:02:17.594956   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHKeyPath
	I0912 23:02:17.595111   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHUsername
	I0912 23:02:17.595333   61354 main.go:141] libmachine: Using SSH client type: native
	I0912 23:02:17.595575   61354 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0912 23:02:17.595591   61354 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-702201 && echo "default-k8s-diff-port-702201" | sudo tee /etc/hostname
	I0912 23:02:17.720928   61354 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-702201
	
	I0912 23:02:17.720961   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHHostname
	I0912 23:02:17.724174   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.724499   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:02:17.724522   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.724682   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHPort
	I0912 23:02:17.724847   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHKeyPath
	I0912 23:02:17.725026   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHKeyPath
	I0912 23:02:17.725199   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHUsername
	I0912 23:02:17.725350   61354 main.go:141] libmachine: Using SSH client type: native
	I0912 23:02:17.725528   61354 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0912 23:02:17.725550   61354 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-702201' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-702201/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-702201' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0912 23:02:17.842216   61354 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0912 23:02:17.842250   61354 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19616-5891/.minikube CaCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19616-5891/.minikube}
	I0912 23:02:17.842274   61354 buildroot.go:174] setting up certificates
	I0912 23:02:17.842289   61354 provision.go:84] configureAuth start
	I0912 23:02:17.842306   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetMachineName
	I0912 23:02:17.842597   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetIP
	I0912 23:02:17.845935   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.846372   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:02:17.846401   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.846546   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHHostname
	I0912 23:02:17.849376   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.849937   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:02:17.849971   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.850152   61354 provision.go:143] copyHostCerts
	I0912 23:02:17.850232   61354 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem, removing ...
	I0912 23:02:17.850253   61354 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem
	I0912 23:02:17.850356   61354 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem (1679 bytes)
	I0912 23:02:17.850448   61354 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem, removing ...
	I0912 23:02:17.850457   61354 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem
	I0912 23:02:17.850477   61354 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem (1082 bytes)
	I0912 23:02:17.850529   61354 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem, removing ...
	I0912 23:02:17.850537   61354 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem
	I0912 23:02:17.850555   61354 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem (1123 bytes)
	I0912 23:02:17.850601   61354 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-702201 san=[127.0.0.1 192.168.39.214 default-k8s-diff-port-702201 localhost minikube]
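
The server certificate generated here is a CA-signed x509 cert whose SANs are exactly the addresses and names listed on the provision.go line above. A minimal standard-library sketch of issuing such a certificate from an existing CA key pair (placeholder file names, RSA/PKCS#1 key assumed; an illustration, not minikube's provisioner):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Placeholder CA files; assumes an RSA key in PKCS#1 ("RSA PRIVATE KEY") form.
	caCertPEM, err := os.ReadFile("ca.pem")
	must(err)
	caKeyPEM, err := os.ReadFile("ca-key.pem")
	must(err)
	caBlock, _ := pem.Decode(caCertPEM)
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	must(err)
	keyBlock, _ := pem.Decode(caKeyPEM)
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
	must(err)

	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	must(err)

	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-702201"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches the profile's CertExpiration
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs taken from the provision.go line above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.214")},
		DNSNames:    []string{"default-k8s-diff-port-702201", "localhost", "minikube"},
	}

	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	must(err)
	must(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}))
}
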
	I0912 23:02:17.911340   61354 provision.go:177] copyRemoteCerts
	I0912 23:02:17.911392   61354 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0912 23:02:17.911413   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHHostname
	I0912 23:02:17.914514   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.914937   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:02:17.914969   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.915250   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHPort
	I0912 23:02:17.915449   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHKeyPath
	I0912 23:02:17.915648   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHUsername
	I0912 23:02:17.915800   61354 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/default-k8s-diff-port-702201/id_rsa Username:docker}
	I0912 23:02:18.003351   61354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0912 23:02:18.032117   61354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0912 23:02:18.057665   61354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0912 23:02:18.084003   61354 provision.go:87] duration metric: took 241.697336ms to configureAuth
	I0912 23:02:18.084043   61354 buildroot.go:189] setting minikube options for container-runtime
	I0912 23:02:18.084256   61354 config.go:182] Loaded profile config "default-k8s-diff-port-702201": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 23:02:18.084379   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHHostname
	I0912 23:02:18.087408   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:18.087786   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:02:18.087813   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:18.088070   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHPort
	I0912 23:02:18.088263   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHKeyPath
	I0912 23:02:18.088441   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHKeyPath
	I0912 23:02:18.088576   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHUsername
	I0912 23:02:18.088706   61354 main.go:141] libmachine: Using SSH client type: native
	I0912 23:02:18.088874   61354 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0912 23:02:18.088893   61354 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0912 23:02:18.308716   61354 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0912 23:02:18.308743   61354 machine.go:96] duration metric: took 829.664034ms to provisionDockerMachine
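
The CRIO_MINIKUBE_OPTIONS drop-in written just above marks 10.96.0.0/12, which is the cluster's service CIDR per the profile config later in this log, as an insecure registry range, so pulls from in-cluster registry Services can go over plain HTTP. A quick standalone check (with a hypothetical ClusterIP) that an address falls inside that prefix:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	serviceCIDR := netip.MustParsePrefix("10.96.0.0/12")
	// Hypothetical ClusterIP of an in-cluster registry Service.
	registryIP := netip.MustParseAddr("10.102.34.7")
	fmt.Println(serviceCIDR.Contains(registryIP)) // true: traffic to it is covered by the insecure-registry range
}
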
	I0912 23:02:18.308753   61354 start.go:293] postStartSetup for "default-k8s-diff-port-702201" (driver="kvm2")
	I0912 23:02:18.308765   61354 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0912 23:02:18.308780   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .DriverName
	I0912 23:02:18.309119   61354 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0912 23:02:18.309156   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHHostname
	I0912 23:02:18.311782   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:18.312112   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:02:18.312138   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:18.312258   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHPort
	I0912 23:02:18.312429   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHKeyPath
	I0912 23:02:18.312562   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHUsername
	I0912 23:02:18.312686   61354 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/default-k8s-diff-port-702201/id_rsa Username:docker}
	I0912 23:02:18.400164   61354 ssh_runner.go:195] Run: cat /etc/os-release
	I0912 23:02:18.404437   61354 info.go:137] Remote host: Buildroot 2023.02.9
	I0912 23:02:18.404465   61354 filesync.go:126] Scanning /home/jenkins/minikube-integration/19616-5891/.minikube/addons for local assets ...
	I0912 23:02:18.404539   61354 filesync.go:126] Scanning /home/jenkins/minikube-integration/19616-5891/.minikube/files for local assets ...
	I0912 23:02:18.404634   61354 filesync.go:149] local asset: /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem -> 130832.pem in /etc/ssl/certs
	I0912 23:02:18.404748   61354 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0912 23:02:18.414148   61354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem --> /etc/ssl/certs/130832.pem (1708 bytes)
	I0912 23:02:18.438745   61354 start.go:296] duration metric: took 129.977307ms for postStartSetup
	I0912 23:02:18.438815   61354 fix.go:56] duration metric: took 19.740295621s for fixHost
	I0912 23:02:18.438839   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHHostname
	I0912 23:02:18.441655   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:18.442034   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:02:18.442063   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:18.442229   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHPort
	I0912 23:02:18.442424   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHKeyPath
	I0912 23:02:18.442637   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHKeyPath
	I0912 23:02:18.442782   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHUsername
	I0912 23:02:18.442983   61354 main.go:141] libmachine: Using SSH client type: native
	I0912 23:02:18.443140   61354 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0912 23:02:18.443150   61354 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0912 23:02:18.550399   61354 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726182138.510495585
	
	I0912 23:02:18.550429   61354 fix.go:216] guest clock: 1726182138.510495585
	I0912 23:02:18.550460   61354 fix.go:229] Guest: 2024-09-12 23:02:18.510495585 +0000 UTC Remote: 2024-09-12 23:02:18.438824041 +0000 UTC m=+356.198385709 (delta=71.671544ms)
	I0912 23:02:18.550493   61354 fix.go:200] guest clock delta is within tolerance: 71.671544ms
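
As a quick check of the figure above: guest 23:02:18.510495585 minus remote 23:02:18.438824041 is 0.071671544 s, i.e. the 71.671544ms delta the log reports, comfortably inside the drift tolerance. The same subtraction as a standalone snippet:

package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05 -0700 MST"
	guest, _ := time.Parse(layout, "2024-09-12 23:02:18.510495585 +0000 UTC")
	remote, _ := time.Parse(layout, "2024-09-12 23:02:18.438824041 +0000 UTC")
	fmt.Println(guest.Sub(remote)) // 71.671544ms
}
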
	I0912 23:02:18.550501   61354 start.go:83] releasing machines lock for "default-k8s-diff-port-702201", held for 19.852037366s
	I0912 23:02:18.550549   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .DriverName
	I0912 23:02:18.550842   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetIP
	I0912 23:02:18.553957   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:18.554416   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:02:18.554450   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:18.554624   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .DriverName
	I0912 23:02:18.555224   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .DriverName
	I0912 23:02:18.555446   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .DriverName
	I0912 23:02:18.555554   61354 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0912 23:02:18.555597   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHHostname
	I0912 23:02:18.555718   61354 ssh_runner.go:195] Run: cat /version.json
	I0912 23:02:18.555753   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHHostname
	I0912 23:02:18.558797   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:18.558822   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:18.559205   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:02:18.559236   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:18.559283   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:02:18.559300   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:18.559532   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHPort
	I0912 23:02:18.559538   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHPort
	I0912 23:02:18.559735   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHKeyPath
	I0912 23:02:18.559736   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHKeyPath
	I0912 23:02:18.559921   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHUsername
	I0912 23:02:18.560042   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHUsername
	I0912 23:02:18.560109   61354 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/default-k8s-diff-port-702201/id_rsa Username:docker}
	I0912 23:02:18.560199   61354 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/default-k8s-diff-port-702201/id_rsa Username:docker}
	I0912 23:02:18.672716   61354 ssh_runner.go:195] Run: systemctl --version
	I0912 23:02:18.681305   61354 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0912 23:02:18.833032   61354 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0912 23:02:18.838723   61354 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0912 23:02:18.838800   61354 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0912 23:02:18.854769   61354 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0912 23:02:18.854796   61354 start.go:495] detecting cgroup driver to use...
	I0912 23:02:18.854867   61354 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0912 23:02:18.872157   61354 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0912 23:02:18.887144   61354 docker.go:217] disabling cri-docker service (if available) ...
	I0912 23:02:18.887199   61354 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0912 23:02:18.901811   61354 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0912 23:02:18.920495   61354 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0912 23:02:19.060252   61354 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0912 23:02:19.211418   61354 docker.go:233] disabling docker service ...
	I0912 23:02:19.211492   61354 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0912 23:02:19.226829   61354 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0912 23:02:19.240390   61354 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0912 23:02:19.398676   61354 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0912 23:02:19.539078   61354 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0912 23:02:19.552847   61354 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0912 23:02:19.574121   61354 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0912 23:02:19.574198   61354 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:02:19.585231   61354 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0912 23:02:19.585298   61354 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:02:19.596560   61354 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:02:19.606732   61354 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:02:19.620125   61354 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0912 23:02:19.635153   61354 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:02:19.648779   61354 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:02:19.666387   61354 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:02:19.680339   61354 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0912 23:02:19.693115   61354 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0912 23:02:19.693193   61354 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0912 23:02:19.710075   61354 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0912 23:02:19.722305   61354 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 23:02:19.855658   61354 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0912 23:02:19.958871   61354 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0912 23:02:19.958934   61354 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0912 23:02:19.964103   61354 start.go:563] Will wait 60s for crictl version
	I0912 23:02:19.964174   61354 ssh_runner.go:195] Run: which crictl
	I0912 23:02:19.968265   61354 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0912 23:02:20.006530   61354 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0912 23:02:20.006608   61354 ssh_runner.go:195] Run: crio --version
	I0912 23:02:20.034570   61354 ssh_runner.go:195] Run: crio --version
	I0912 23:02:20.065312   61354 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0912 23:02:17.474542   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:19.975107   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:17.616860   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:02:17.845456   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:02:17.916359   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:02:18.000828   62943 api_server.go:52] waiting for apiserver process to appear ...
	I0912 23:02:18.000924   62943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:18.501381   62943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:19.001136   62943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:19.017346   62943 api_server.go:72] duration metric: took 1.016512434s to wait for apiserver process to appear ...
	I0912 23:02:19.017382   62943 api_server.go:88] waiting for apiserver healthz status ...
	I0912 23:02:19.017453   62943 api_server.go:253] Checking apiserver healthz at https://192.168.50.253:8443/healthz ...
	I0912 23:02:20.066529   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetIP
	I0912 23:02:20.069310   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:20.069719   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:02:20.069748   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:20.070001   61354 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0912 23:02:20.074059   61354 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0912 23:02:20.085892   61354 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-702201 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-702201 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.214 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0912 23:02:20.086016   61354 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0912 23:02:20.086054   61354 ssh_runner.go:195] Run: sudo crictl images --output json
	I0912 23:02:20.130495   61354 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0912 23:02:20.130570   61354 ssh_runner.go:195] Run: which lz4
	I0912 23:02:20.134677   61354 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0912 23:02:20.138918   61354 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0912 23:02:20.138956   61354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0912 23:02:21.380259   61354 crio.go:462] duration metric: took 1.245620408s to copy over tarball
	I0912 23:02:21.380357   61354 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0912 23:02:19.745707   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:20.246273   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:20.746109   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:21.246160   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:21.745863   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:22.245390   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:22.745716   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:23.245475   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:23.746069   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:24.245487   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:22.474250   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:24.974136   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:24.018305   62943 api_server.go:269] stopped: https://192.168.50.253:8443/healthz: Get "https://192.168.50.253:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 23:02:24.018354   62943 api_server.go:253] Checking apiserver healthz at https://192.168.50.253:8443/healthz ...
	I0912 23:02:23.453059   61354 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.072658804s)
	I0912 23:02:23.453094   61354 crio.go:469] duration metric: took 2.072807363s to extract the tarball
	I0912 23:02:23.453102   61354 ssh_runner.go:146] rm: /preloaded.tar.lz4
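
For a rough sense of scale (derived from the numbers above, not reported by minikube itself): the 388,599,353-byte preload tarball was copied to the VM in about 1.25 s, roughly 310 MB/s, and unpacked in about 2.07 s. The arithmetic:

package main

import "fmt"

func main() {
	const tarballBytes = 388599353.0 // size of preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 above
	const copySeconds = 1.245620408  // "took 1.245620408s to copy over tarball"
	const extractSeconds = 2.072807363
	fmt.Printf("copy: %.0f MB/s, extract: %.0f MB/s\n",
		tarballBytes/copySeconds/1e6, tarballBytes/extractSeconds/1e6)
}
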
	I0912 23:02:23.492566   61354 ssh_runner.go:195] Run: sudo crictl images --output json
	I0912 23:02:23.535129   61354 crio.go:514] all images are preloaded for cri-o runtime.
	I0912 23:02:23.535152   61354 cache_images.go:84] Images are preloaded, skipping loading
	I0912 23:02:23.535160   61354 kubeadm.go:934] updating node { 192.168.39.214 8444 v1.31.1 crio true true} ...
	I0912 23:02:23.535251   61354 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-702201 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.214
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-702201 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0912 23:02:23.535311   61354 ssh_runner.go:195] Run: crio config
	I0912 23:02:23.586110   61354 cni.go:84] Creating CNI manager for ""
	I0912 23:02:23.586128   61354 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0912 23:02:23.586137   61354 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0912 23:02:23.586156   61354 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.214 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-702201 NodeName:default-k8s-diff-port-702201 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.214"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.214 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0912 23:02:23.586280   61354 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.214
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-702201"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.214
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.214"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0912 23:02:23.586337   61354 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0912 23:02:23.595675   61354 binaries.go:44] Found k8s binaries, skipping transfer
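
The kubeadm config rendered above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration), which is what gets written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. A small sketch for walking those documents with gopkg.in/yaml.v3 (an illustration, not part of minikube):

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml") // path used in this log; adjust locally
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		// Each document names its own kind: InitConfiguration, ClusterConfiguration,
		// KubeletConfiguration, KubeProxyConfiguration.
		fmt.Println("kind:", doc["kind"])
	}
}
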
	I0912 23:02:23.595744   61354 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0912 23:02:23.605126   61354 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0912 23:02:23.621542   61354 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0912 23:02:23.637919   61354 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0912 23:02:23.654869   61354 ssh_runner.go:195] Run: grep 192.168.39.214	control-plane.minikube.internal$ /etc/hosts
	I0912 23:02:23.658860   61354 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.214	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0912 23:02:23.670648   61354 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 23:02:23.787949   61354 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0912 23:02:23.804668   61354 certs.go:68] Setting up /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/default-k8s-diff-port-702201 for IP: 192.168.39.214
	I0912 23:02:23.804697   61354 certs.go:194] generating shared ca certs ...
	I0912 23:02:23.804718   61354 certs.go:226] acquiring lock for ca certs: {Name:mk5e19f2c2757edc3fcb6b43a39efee0885b349c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 23:02:23.804937   61354 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.key
	I0912 23:02:23.804998   61354 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.key
	I0912 23:02:23.805012   61354 certs.go:256] generating profile certs ...
	I0912 23:02:23.805110   61354 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/default-k8s-diff-port-702201/client.key
	I0912 23:02:23.805184   61354 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/default-k8s-diff-port-702201/apiserver.key.9ca3177b
	I0912 23:02:23.805231   61354 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/default-k8s-diff-port-702201/proxy-client.key
	I0912 23:02:23.805379   61354 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083.pem (1338 bytes)
	W0912 23:02:23.805411   61354 certs.go:480] ignoring /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083_empty.pem, impossibly tiny 0 bytes
	I0912 23:02:23.805420   61354 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem (1679 bytes)
	I0912 23:02:23.805449   61354 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem (1082 bytes)
	I0912 23:02:23.805480   61354 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem (1123 bytes)
	I0912 23:02:23.805519   61354 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem (1679 bytes)
	I0912 23:02:23.805574   61354 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem (1708 bytes)
	I0912 23:02:23.806196   61354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0912 23:02:23.834789   61354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0912 23:02:23.863030   61354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0912 23:02:23.890538   61354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0912 23:02:23.923946   61354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/default-k8s-diff-port-702201/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0912 23:02:23.952990   61354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/default-k8s-diff-port-702201/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0912 23:02:23.984025   61354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/default-k8s-diff-port-702201/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0912 23:02:24.013727   61354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/default-k8s-diff-port-702201/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0912 23:02:24.038060   61354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem --> /usr/share/ca-certificates/130832.pem (1708 bytes)
	I0912 23:02:24.061285   61354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0912 23:02:24.085128   61354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083.pem --> /usr/share/ca-certificates/13083.pem (1338 bytes)
	I0912 23:02:24.110174   61354 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0912 23:02:24.127185   61354 ssh_runner.go:195] Run: openssl version
	I0912 23:02:24.133215   61354 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0912 23:02:24.144390   61354 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0912 23:02:24.149357   61354 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 12 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0912 23:02:24.149432   61354 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0912 23:02:24.155228   61354 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0912 23:02:24.167254   61354 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13083.pem && ln -fs /usr/share/ca-certificates/13083.pem /etc/ssl/certs/13083.pem"
	I0912 23:02:24.178264   61354 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13083.pem
	I0912 23:02:24.183163   61354 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 12 21:47 /usr/share/ca-certificates/13083.pem
	I0912 23:02:24.183216   61354 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13083.pem
	I0912 23:02:24.188891   61354 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13083.pem /etc/ssl/certs/51391683.0"
	I0912 23:02:24.199682   61354 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130832.pem && ln -fs /usr/share/ca-certificates/130832.pem /etc/ssl/certs/130832.pem"
	I0912 23:02:24.210810   61354 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130832.pem
	I0912 23:02:24.215244   61354 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 12 21:47 /usr/share/ca-certificates/130832.pem
	I0912 23:02:24.215321   61354 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130832.pem
	I0912 23:02:24.221160   61354 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130832.pem /etc/ssl/certs/3ec20f2e.0"
	I0912 23:02:24.232246   61354 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0912 23:02:24.236796   61354 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0912 23:02:24.243930   61354 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0912 23:02:24.250402   61354 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0912 23:02:24.256470   61354 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0912 23:02:24.262495   61354 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0912 23:02:24.268433   61354 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0912 23:02:24.274410   61354 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-702201 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-702201 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.214 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 23:02:24.274499   61354 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0912 23:02:24.274574   61354 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0912 23:02:24.315011   61354 cri.go:89] found id: ""
	I0912 23:02:24.315073   61354 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0912 23:02:24.325319   61354 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0912 23:02:24.325341   61354 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0912 23:02:24.325384   61354 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0912 23:02:24.335529   61354 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0912 23:02:24.336936   61354 kubeconfig.go:125] found "default-k8s-diff-port-702201" server: "https://192.168.39.214:8444"
	I0912 23:02:24.340116   61354 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0912 23:02:24.350831   61354 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.214
	I0912 23:02:24.350869   61354 kubeadm.go:1160] stopping kube-system containers ...
	I0912 23:02:24.350883   61354 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0912 23:02:24.350974   61354 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0912 23:02:24.393329   61354 cri.go:89] found id: ""
	I0912 23:02:24.393405   61354 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0912 23:02:24.410979   61354 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0912 23:02:24.423185   61354 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0912 23:02:24.423201   61354 kubeadm.go:157] found existing configuration files:
	
	I0912 23:02:24.423243   61354 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0912 23:02:24.434365   61354 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0912 23:02:24.434424   61354 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0912 23:02:24.444193   61354 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0912 23:02:24.453990   61354 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0912 23:02:24.454047   61354 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0912 23:02:24.464493   61354 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0912 23:02:24.475213   61354 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0912 23:02:24.475290   61354 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0912 23:02:24.484665   61354 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0912 23:02:24.493882   61354 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0912 23:02:24.493943   61354 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0912 23:02:24.503337   61354 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0912 23:02:24.513303   61354 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:02:24.620334   61354 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:02:25.379199   61354 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:02:25.605374   61354 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:02:25.689838   61354 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:02:25.787873   61354 api_server.go:52] waiting for apiserver process to appear ...
	I0912 23:02:25.787952   61354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:26.288869   61354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:26.788863   61354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:24.746085   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:25.245836   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:25.745805   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:26.246312   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:26.745772   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:27.245309   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:27.745530   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:28.245792   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:28.745917   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:29.245542   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:27.474741   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:29.974093   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:29.019453   62943 api_server.go:269] stopped: https://192.168.50.253:8443/healthz: Get "https://192.168.50.253:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 23:02:29.019501   62943 api_server.go:253] Checking apiserver healthz at https://192.168.50.253:8443/healthz ...
	I0912 23:02:27.288650   61354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:27.788577   61354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:27.803146   61354 api_server.go:72] duration metric: took 2.015269708s to wait for apiserver process to appear ...
	I0912 23:02:27.803175   61354 api_server.go:88] waiting for apiserver healthz status ...
	I0912 23:02:27.803196   61354 api_server.go:253] Checking apiserver healthz at https://192.168.39.214:8444/healthz ...
	I0912 23:02:27.803838   61354 api_server.go:269] stopped: https://192.168.39.214:8444/healthz: Get "https://192.168.39.214:8444/healthz": dial tcp 192.168.39.214:8444: connect: connection refused
	I0912 23:02:28.304001   61354 api_server.go:253] Checking apiserver healthz at https://192.168.39.214:8444/healthz ...
	I0912 23:02:30.918251   61354 api_server.go:279] https://192.168.39.214:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0912 23:02:30.918285   61354 api_server.go:103] status: https://192.168.39.214:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0912 23:02:30.918300   61354 api_server.go:253] Checking apiserver healthz at https://192.168.39.214:8444/healthz ...
	I0912 23:02:30.985245   61354 api_server.go:279] https://192.168.39.214:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0912 23:02:30.985276   61354 api_server.go:103] status: https://192.168.39.214:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0912 23:02:31.303790   61354 api_server.go:253] Checking apiserver healthz at https://192.168.39.214:8444/healthz ...
	I0912 23:02:31.309221   61354 api_server.go:279] https://192.168.39.214:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0912 23:02:31.309255   61354 api_server.go:103] status: https://192.168.39.214:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0912 23:02:31.803907   61354 api_server.go:253] Checking apiserver healthz at https://192.168.39.214:8444/healthz ...
	I0912 23:02:31.808683   61354 api_server.go:279] https://192.168.39.214:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0912 23:02:31.808708   61354 api_server.go:103] status: https://192.168.39.214:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0912 23:02:32.303720   61354 api_server.go:253] Checking apiserver healthz at https://192.168.39.214:8444/healthz ...
	I0912 23:02:32.309378   61354 api_server.go:279] https://192.168.39.214:8444/healthz returned 200:
	ok
	I0912 23:02:32.318177   61354 api_server.go:141] control plane version: v1.31.1
	I0912 23:02:32.318207   61354 api_server.go:131] duration metric: took 4.515025163s to wait for apiserver health ...
	I0912 23:02:32.318217   61354 cni.go:84] Creating CNI manager for ""
	I0912 23:02:32.318225   61354 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0912 23:02:32.319660   61354 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0912 23:02:29.746186   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:30.245501   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:30.745636   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:31.245440   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:31.745457   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:32.246318   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:32.745369   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:33.246152   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:33.746183   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:34.245452   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:31.974622   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:34.473549   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:34.019784   62943 api_server.go:269] stopped: https://192.168.50.253:8443/healthz: Get "https://192.168.50.253:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 23:02:34.019838   62943 api_server.go:253] Checking apiserver healthz at https://192.168.50.253:8443/healthz ...
	I0912 23:02:32.320695   61354 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0912 23:02:32.338749   61354 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0912 23:02:32.369921   61354 system_pods.go:43] waiting for kube-system pods to appear ...
	I0912 23:02:32.385934   61354 system_pods.go:59] 8 kube-system pods found
	I0912 23:02:32.385966   61354 system_pods.go:61] "coredns-7c65d6cfc9-ffms7" [d341bfb6-115b-4a9b-8ee5-ac0f6e0cf97a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0912 23:02:32.385986   61354 system_pods.go:61] "etcd-default-k8s-diff-port-702201" [c0c55fa9-3c65-4299-a1bb-59a55585a525] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0912 23:02:32.385996   61354 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-702201" [bf79734c-4cbc-4924-9358-f0196b357303] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0912 23:02:32.386007   61354 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-702201" [92a6ae59-ae75-4c08-a7dc-a77841be564b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0912 23:02:32.386019   61354 system_pods.go:61] "kube-proxy-x8hg2" [ef603b08-213d-4edb-85e6-e8b91f8fbbba] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0912 23:02:32.386027   61354 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-702201" [10021400-9446-46f6-aff0-e3eb3c0be96a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0912 23:02:32.386041   61354 system_pods.go:61] "metrics-server-6867b74b74-q5vlk" [d6719976-8c0c-444f-a1ea-dd3bdb0d5707] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0912 23:02:32.386051   61354 system_pods.go:61] "storage-provisioner" [6fdb298d-7e96-4cbb-b755-d866514e44b9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0912 23:02:32.386063   61354 system_pods.go:74] duration metric: took 16.120876ms to wait for pod list to return data ...
	I0912 23:02:32.386074   61354 node_conditions.go:102] verifying NodePressure condition ...
	I0912 23:02:32.391917   61354 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0912 23:02:32.391949   61354 node_conditions.go:123] node cpu capacity is 2
	I0912 23:02:32.391961   61354 node_conditions.go:105] duration metric: took 5.88075ms to run NodePressure ...
	I0912 23:02:32.391981   61354 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:02:32.671906   61354 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0912 23:02:32.677468   61354 kubeadm.go:739] kubelet initialised
	I0912 23:02:32.677494   61354 kubeadm.go:740] duration metric: took 5.561384ms waiting for restarted kubelet to initialise ...
	I0912 23:02:32.677503   61354 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 23:02:32.682823   61354 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-ffms7" in "kube-system" namespace to be "Ready" ...
	I0912 23:02:34.689536   61354 pod_ready.go:103] pod "coredns-7c65d6cfc9-ffms7" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:36.689748   61354 pod_ready.go:103] pod "coredns-7c65d6cfc9-ffms7" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:34.746241   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:35.246108   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:35.746087   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:36.245732   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:36.745659   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:37.245760   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:37.746137   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:38.245355   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:38.745905   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:39.246196   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:36.976523   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:39.473513   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:39.020907   62943 api_server.go:269] stopped: https://192.168.50.253:8443/healthz: Get "https://192.168.50.253:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 23:02:39.020949   62943 api_server.go:253] Checking apiserver healthz at https://192.168.50.253:8443/healthz ...
	I0912 23:02:39.398775   62943 api_server.go:269] stopped: https://192.168.50.253:8443/healthz: Get "https://192.168.50.253:8443/healthz": read tcp 192.168.50.1:34338->192.168.50.253:8443: read: connection reset by peer
	I0912 23:02:39.518000   62943 api_server.go:253] Checking apiserver healthz at https://192.168.50.253:8443/healthz ...
	I0912 23:02:39.518572   62943 api_server.go:269] stopped: https://192.168.50.253:8443/healthz: Get "https://192.168.50.253:8443/healthz": dial tcp 192.168.50.253:8443: connect: connection refused
	I0912 23:02:40.018526   62943 api_server.go:253] Checking apiserver healthz at https://192.168.50.253:8443/healthz ...
	I0912 23:02:40.019085   62943 api_server.go:269] stopped: https://192.168.50.253:8443/healthz: Get "https://192.168.50.253:8443/healthz": dial tcp 192.168.50.253:8443: connect: connection refused
	I0912 23:02:40.518456   62943 api_server.go:253] Checking apiserver healthz at https://192.168.50.253:8443/healthz ...
	I0912 23:02:37.692070   61354 pod_ready.go:93] pod "coredns-7c65d6cfc9-ffms7" in "kube-system" namespace has status "Ready":"True"
	I0912 23:02:37.692105   61354 pod_ready.go:82] duration metric: took 5.009256797s for pod "coredns-7c65d6cfc9-ffms7" in "kube-system" namespace to be "Ready" ...
	I0912 23:02:37.692119   61354 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-702201" in "kube-system" namespace to be "Ready" ...
	I0912 23:02:39.703004   61354 pod_ready.go:93] pod "etcd-default-k8s-diff-port-702201" in "kube-system" namespace has status "Ready":"True"
	I0912 23:02:39.703029   61354 pod_ready.go:82] duration metric: took 2.010902876s for pod "etcd-default-k8s-diff-port-702201" in "kube-system" namespace to be "Ready" ...
	I0912 23:02:39.703038   61354 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-702201" in "kube-system" namespace to be "Ready" ...
	I0912 23:02:41.709956   61354 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-702201" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:39.745643   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:40.245485   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:40.745582   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:41.245599   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:41.746339   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:42.246155   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:42.746334   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:43.245368   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:43.745371   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:44.246050   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:41.473779   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:43.475011   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:45.519472   62943 api_server.go:269] stopped: https://192.168.50.253:8443/healthz: Get "https://192.168.50.253:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 23:02:45.519513   62943 api_server.go:253] Checking apiserver healthz at https://192.168.50.253:8443/healthz ...
	I0912 23:02:44.210871   61354 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-702201" in "kube-system" namespace has status "Ready":"True"
	I0912 23:02:44.210896   61354 pod_ready.go:82] duration metric: took 4.507851295s for pod "kube-apiserver-default-k8s-diff-port-702201" in "kube-system" namespace to be "Ready" ...
	I0912 23:02:44.210905   61354 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-702201" in "kube-system" namespace to be "Ready" ...
	I0912 23:02:44.216677   61354 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-702201" in "kube-system" namespace has status "Ready":"True"
	I0912 23:02:44.216698   61354 pod_ready.go:82] duration metric: took 5.785493ms for pod "kube-controller-manager-default-k8s-diff-port-702201" in "kube-system" namespace to be "Ready" ...
	I0912 23:02:44.216708   61354 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-x8hg2" in "kube-system" namespace to be "Ready" ...
	I0912 23:02:44.220720   61354 pod_ready.go:93] pod "kube-proxy-x8hg2" in "kube-system" namespace has status "Ready":"True"
	I0912 23:02:44.220744   61354 pod_ready.go:82] duration metric: took 4.031371ms for pod "kube-proxy-x8hg2" in "kube-system" namespace to be "Ready" ...
	I0912 23:02:44.220753   61354 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-702201" in "kube-system" namespace to be "Ready" ...
	I0912 23:02:45.727199   61354 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-702201" in "kube-system" namespace has status "Ready":"True"
	I0912 23:02:45.727226   61354 pod_ready.go:82] duration metric: took 1.506465715s for pod "kube-scheduler-default-k8s-diff-port-702201" in "kube-system" namespace to be "Ready" ...
	I0912 23:02:45.727238   61354 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace to be "Ready" ...
	I0912 23:02:44.746354   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:45.245964   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:45.745631   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:46.246314   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:46.745483   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:47.245554   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:47.746311   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:48.246160   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:48.745999   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:49.246000   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:02:49.246093   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:02:49.286022   62386 cri.go:89] found id: ""
	I0912 23:02:49.286052   62386 logs.go:276] 0 containers: []
	W0912 23:02:49.286063   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:02:49.286070   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:02:49.286121   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:02:49.320469   62386 cri.go:89] found id: ""
	I0912 23:02:49.320508   62386 logs.go:276] 0 containers: []
	W0912 23:02:49.320527   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:02:49.320535   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:02:49.320635   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:02:45.973431   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:47.973882   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:49.974075   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:50.520522   62943 api_server.go:269] stopped: https://192.168.50.253:8443/healthz: Get "https://192.168.50.253:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 23:02:50.520570   62943 api_server.go:253] Checking apiserver healthz at https://192.168.50.253:8443/healthz ...
	I0912 23:02:47.732861   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:49.735642   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:52.232946   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:49.355651   62386 cri.go:89] found id: ""
	I0912 23:02:49.355682   62386 logs.go:276] 0 containers: []
	W0912 23:02:49.355694   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:02:49.355702   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:02:49.355757   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:02:49.387928   62386 cri.go:89] found id: ""
	I0912 23:02:49.387956   62386 logs.go:276] 0 containers: []
	W0912 23:02:49.387966   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:02:49.387980   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:02:49.388042   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:02:49.421154   62386 cri.go:89] found id: ""
	I0912 23:02:49.421184   62386 logs.go:276] 0 containers: []
	W0912 23:02:49.421192   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:02:49.421198   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:02:49.421258   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:02:49.460122   62386 cri.go:89] found id: ""
	I0912 23:02:49.460147   62386 logs.go:276] 0 containers: []
	W0912 23:02:49.460154   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:02:49.460159   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:02:49.460204   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:02:49.493113   62386 cri.go:89] found id: ""
	I0912 23:02:49.493136   62386 logs.go:276] 0 containers: []
	W0912 23:02:49.493144   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:02:49.493150   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:02:49.493196   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:02:49.525750   62386 cri.go:89] found id: ""
	I0912 23:02:49.525773   62386 logs.go:276] 0 containers: []
	W0912 23:02:49.525780   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:02:49.525790   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:02:49.525800   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:02:49.578720   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:02:49.578757   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:02:49.591483   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:02:49.591510   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:02:49.711769   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:02:49.711836   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:02:49.711854   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:02:49.792569   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:02:49.792620   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:02:52.333723   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:52.346359   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:02:52.346428   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:02:52.379990   62386 cri.go:89] found id: ""
	I0912 23:02:52.380017   62386 logs.go:276] 0 containers: []
	W0912 23:02:52.380025   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:02:52.380032   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:02:52.380089   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:02:52.413963   62386 cri.go:89] found id: ""
	I0912 23:02:52.413994   62386 logs.go:276] 0 containers: []
	W0912 23:02:52.414002   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:02:52.414007   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:02:52.414064   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:02:52.463982   62386 cri.go:89] found id: ""
	I0912 23:02:52.464012   62386 logs.go:276] 0 containers: []
	W0912 23:02:52.464024   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:02:52.464031   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:02:52.464119   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:02:52.497797   62386 cri.go:89] found id: ""
	I0912 23:02:52.497830   62386 logs.go:276] 0 containers: []
	W0912 23:02:52.497840   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:02:52.497848   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:02:52.497914   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:02:52.531946   62386 cri.go:89] found id: ""
	I0912 23:02:52.531974   62386 logs.go:276] 0 containers: []
	W0912 23:02:52.531982   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:02:52.531987   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:02:52.532036   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:02:52.563802   62386 cri.go:89] found id: ""
	I0912 23:02:52.563837   62386 logs.go:276] 0 containers: []
	W0912 23:02:52.563846   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:02:52.563859   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:02:52.563914   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:02:52.597408   62386 cri.go:89] found id: ""
	I0912 23:02:52.597437   62386 logs.go:276] 0 containers: []
	W0912 23:02:52.597447   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:02:52.597457   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:02:52.597529   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:02:52.634991   62386 cri.go:89] found id: ""
	I0912 23:02:52.635026   62386 logs.go:276] 0 containers: []
	W0912 23:02:52.635037   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:02:52.635049   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:02:52.635061   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:02:52.711072   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:02:52.711112   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:02:52.755335   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:02:52.755359   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:02:52.806660   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:02:52.806694   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:02:52.819718   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:02:52.819751   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:02:52.897247   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:02:52.474466   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:54.974351   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:55.520831   62943 api_server.go:269] stopped: https://192.168.50.253:8443/healthz: Get "https://192.168.50.253:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 23:02:55.520879   62943 api_server.go:253] Checking apiserver healthz at https://192.168.50.253:8443/healthz ...
	I0912 23:02:54.233244   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:56.234057   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:55.398028   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:55.411839   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:02:55.411920   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:02:55.446367   62386 cri.go:89] found id: ""
	I0912 23:02:55.446402   62386 logs.go:276] 0 containers: []
	W0912 23:02:55.446414   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:02:55.446421   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:02:55.446489   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:02:55.481672   62386 cri.go:89] found id: ""
	I0912 23:02:55.481696   62386 logs.go:276] 0 containers: []
	W0912 23:02:55.481704   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:02:55.481709   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:02:55.481766   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:02:55.517577   62386 cri.go:89] found id: ""
	I0912 23:02:55.517628   62386 logs.go:276] 0 containers: []
	W0912 23:02:55.517640   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:02:55.517651   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:02:55.517724   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:02:55.553526   62386 cri.go:89] found id: ""
	I0912 23:02:55.553554   62386 logs.go:276] 0 containers: []
	W0912 23:02:55.553565   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:02:55.553572   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:02:55.553659   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:02:55.585628   62386 cri.go:89] found id: ""
	I0912 23:02:55.585658   62386 logs.go:276] 0 containers: []
	W0912 23:02:55.585666   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:02:55.585673   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:02:55.585729   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:02:55.619504   62386 cri.go:89] found id: ""
	I0912 23:02:55.619529   62386 logs.go:276] 0 containers: []
	W0912 23:02:55.619537   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:02:55.619543   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:02:55.619612   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:02:55.652478   62386 cri.go:89] found id: ""
	I0912 23:02:55.652505   62386 logs.go:276] 0 containers: []
	W0912 23:02:55.652513   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:02:55.652519   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:02:55.652571   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:02:55.685336   62386 cri.go:89] found id: ""
	I0912 23:02:55.685367   62386 logs.go:276] 0 containers: []
	W0912 23:02:55.685378   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:02:55.685389   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:02:55.685405   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:02:55.766786   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:02:55.766820   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:02:55.805897   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:02:55.805921   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:02:55.858536   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:02:55.858578   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:02:55.872300   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:02:55.872330   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:02:55.940023   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:02:58.440335   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:58.454063   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:02:58.454146   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:02:58.495390   62386 cri.go:89] found id: ""
	I0912 23:02:58.495418   62386 logs.go:276] 0 containers: []
	W0912 23:02:58.495429   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:02:58.495436   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:02:58.495491   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:02:58.533323   62386 cri.go:89] found id: ""
	I0912 23:02:58.533361   62386 logs.go:276] 0 containers: []
	W0912 23:02:58.533369   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:02:58.533374   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:02:58.533426   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:02:58.570749   62386 cri.go:89] found id: ""
	I0912 23:02:58.570772   62386 logs.go:276] 0 containers: []
	W0912 23:02:58.570779   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:02:58.570785   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:02:58.570838   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:02:58.602812   62386 cri.go:89] found id: ""
	I0912 23:02:58.602841   62386 logs.go:276] 0 containers: []
	W0912 23:02:58.602852   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:02:58.602861   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:02:58.602920   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:02:58.641837   62386 cri.go:89] found id: ""
	I0912 23:02:58.641868   62386 logs.go:276] 0 containers: []
	W0912 23:02:58.641875   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:02:58.641881   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:02:58.641951   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:02:58.679411   62386 cri.go:89] found id: ""
	I0912 23:02:58.679437   62386 logs.go:276] 0 containers: []
	W0912 23:02:58.679444   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:02:58.679449   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:02:58.679495   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:02:58.715666   62386 cri.go:89] found id: ""
	I0912 23:02:58.715693   62386 logs.go:276] 0 containers: []
	W0912 23:02:58.715701   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:02:58.715707   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:02:58.715765   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:02:58.750345   62386 cri.go:89] found id: ""
	I0912 23:02:58.750367   62386 logs.go:276] 0 containers: []
	W0912 23:02:58.750375   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:02:58.750383   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:02:58.750395   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:02:58.803683   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:02:58.803722   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:02:58.819479   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:02:58.819512   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:02:58.939708   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:02:58.939733   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:02:58.939752   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:02:59.031209   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:02:59.031241   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:02:58.535050   62943 api_server.go:279] https://192.168.50.253:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0912 23:02:58.535080   62943 api_server.go:103] status: https://192.168.50.253:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0912 23:02:58.535094   62943 api_server.go:253] Checking apiserver healthz at https://192.168.50.253:8443/healthz ...
	I0912 23:02:58.552759   62943 api_server.go:279] https://192.168.50.253:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0912 23:02:58.552792   62943 api_server.go:103] status: https://192.168.50.253:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0912 23:02:59.018401   62943 api_server.go:253] Checking apiserver healthz at https://192.168.50.253:8443/healthz ...
	I0912 23:02:59.026830   62943 api_server.go:279] https://192.168.50.253:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0912 23:02:59.026861   62943 api_server.go:103] status: https://192.168.50.253:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0912 23:02:59.518413   62943 api_server.go:253] Checking apiserver healthz at https://192.168.50.253:8443/healthz ...
	I0912 23:02:59.523435   62943 api_server.go:279] https://192.168.50.253:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0912 23:02:59.523469   62943 api_server.go:103] status: https://192.168.50.253:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0912 23:03:00.018452   62943 api_server.go:253] Checking apiserver healthz at https://192.168.50.253:8443/healthz ...
	I0912 23:03:00.023786   62943 api_server.go:279] https://192.168.50.253:8443/healthz returned 200:
	ok
	I0912 23:03:00.033543   62943 api_server.go:141] control plane version: v1.31.1
	I0912 23:03:00.033575   62943 api_server.go:131] duration metric: took 41.016185943s to wait for apiserver health ...
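The api_server.go lines above poll the apiserver's /healthz endpoint until it stops answering 500 (with the per-hook [+]/[-] breakdown) and returns 200. Below is a minimal Go sketch of that kind of poller for reference only; it is not minikube's actual implementation, and the URL, timeout, and TLS handling are illustrative assumptions.

// healthzpoll.go: simplified sketch of an apiserver /healthz wait loop.
// NOT minikube's code; endpoint and timeouts are placeholders.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		// The apiserver presents a cluster-internal cert here, so this sketch
		// skips verification; a real client would trust the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz finally returned 200 "ok"
			}
			// A 500 response carries the poststarthook status lines seen in the log.
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.50.253:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}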
	I0912 23:03:00.033585   62943 cni.go:84] Creating CNI manager for ""
	I0912 23:03:00.033595   62943 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0912 23:03:00.035383   62943 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0912 23:02:56.975435   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:59.473968   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:00.036655   62943 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0912 23:03:00.051876   62943 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0912 23:03:00.082432   62943 system_pods.go:43] waiting for kube-system pods to appear ...
	I0912 23:03:00.101427   62943 system_pods.go:59] 8 kube-system pods found
	I0912 23:03:00.101465   62943 system_pods.go:61] "coredns-7c65d6cfc9-twck7" [2fb00aff-8a30-4634-a804-1419eabfe727] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0912 23:03:00.101477   62943 system_pods.go:61] "etcd-no-preload-380092" [69b6be54-dd29-47c7-b990-a64335dd6d7b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0912 23:03:00.101488   62943 system_pods.go:61] "kube-apiserver-no-preload-380092" [10ff70db-3c74-42ad-841d-d2241de4b98e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0912 23:03:00.101498   62943 system_pods.go:61] "kube-controller-manager-no-preload-380092" [6e91c5b2-36fc-404e-9f09-c1bc9da46774] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0912 23:03:00.101512   62943 system_pods.go:61] "kube-proxy-z4rcx" [d17caa2e-d0fe-45e8-a96c-d1cc1b55e665] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0912 23:03:00.101518   62943 system_pods.go:61] "kube-scheduler-no-preload-380092" [5c634cac-6b28-4757-ba85-891c4c2fa34e] Running
	I0912 23:03:00.101526   62943 system_pods.go:61] "metrics-server-6867b74b74-4v7f5" [10c8c536-9ca6-4e75-96f2-7324f3d3d379] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0912 23:03:00.101537   62943 system_pods.go:61] "storage-provisioner" [f173a1f6-3772-4f08-8e40-2215cc9d2878] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0912 23:03:00.101554   62943 system_pods.go:74] duration metric: took 19.092541ms to wait for pod list to return data ...
	I0912 23:03:00.101566   62943 node_conditions.go:102] verifying NodePressure condition ...
	I0912 23:03:00.105149   62943 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0912 23:03:00.105183   62943 node_conditions.go:123] node cpu capacity is 2
	I0912 23:03:00.105197   62943 node_conditions.go:105] duration metric: took 3.62458ms to run NodePressure ...
	I0912 23:03:00.105218   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:03:00.583613   62943 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0912 23:03:00.588976   62943 kubeadm.go:739] kubelet initialised
	I0912 23:03:00.589000   62943 kubeadm.go:740] duration metric: took 5.359605ms waiting for restarted kubelet to initialise ...
	I0912 23:03:00.589010   62943 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 23:03:00.598717   62943 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-twck7" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:00.619126   62943 pod_ready.go:98] node "no-preload-380092" hosting pod "coredns-7c65d6cfc9-twck7" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-380092" has status "Ready":"False"
	I0912 23:03:00.619153   62943 pod_ready.go:82] duration metric: took 20.405609ms for pod "coredns-7c65d6cfc9-twck7" in "kube-system" namespace to be "Ready" ...
	E0912 23:03:00.619162   62943 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-380092" hosting pod "coredns-7c65d6cfc9-twck7" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-380092" has status "Ready":"False"
	I0912 23:03:00.619169   62943 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-380092" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:00.628727   62943 pod_ready.go:98] node "no-preload-380092" hosting pod "etcd-no-preload-380092" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-380092" has status "Ready":"False"
	I0912 23:03:00.628766   62943 pod_ready.go:82] duration metric: took 9.588722ms for pod "etcd-no-preload-380092" in "kube-system" namespace to be "Ready" ...
	E0912 23:03:00.628778   62943 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-380092" hosting pod "etcd-no-preload-380092" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-380092" has status "Ready":"False"
	I0912 23:03:00.628786   62943 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-380092" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:00.638502   62943 pod_ready.go:98] node "no-preload-380092" hosting pod "kube-apiserver-no-preload-380092" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-380092" has status "Ready":"False"
	I0912 23:03:00.638531   62943 pod_ready.go:82] duration metric: took 9.737333ms for pod "kube-apiserver-no-preload-380092" in "kube-system" namespace to be "Ready" ...
	E0912 23:03:00.638545   62943 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-380092" hosting pod "kube-apiserver-no-preload-380092" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-380092" has status "Ready":"False"
	I0912 23:03:00.638554   62943 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-380092" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:00.644886   62943 pod_ready.go:98] node "no-preload-380092" hosting pod "kube-controller-manager-no-preload-380092" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-380092" has status "Ready":"False"
	I0912 23:03:00.644917   62943 pod_ready.go:82] duration metric: took 6.353295ms for pod "kube-controller-manager-no-preload-380092" in "kube-system" namespace to be "Ready" ...
	E0912 23:03:00.644928   62943 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-380092" hosting pod "kube-controller-manager-no-preload-380092" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-380092" has status "Ready":"False"
	I0912 23:03:00.644936   62943 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-z4rcx" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:00.987565   62943 pod_ready.go:98] node "no-preload-380092" hosting pod "kube-proxy-z4rcx" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-380092" has status "Ready":"False"
	I0912 23:03:00.987592   62943 pod_ready.go:82] duration metric: took 342.646574ms for pod "kube-proxy-z4rcx" in "kube-system" namespace to be "Ready" ...
	E0912 23:03:00.987605   62943 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-380092" hosting pod "kube-proxy-z4rcx" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-380092" has status "Ready":"False"
	I0912 23:03:00.987614   62943 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-380092" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:01.386942   62943 pod_ready.go:98] node "no-preload-380092" hosting pod "kube-scheduler-no-preload-380092" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-380092" has status "Ready":"False"
	I0912 23:03:01.386970   62943 pod_ready.go:82] duration metric: took 399.349066ms for pod "kube-scheduler-no-preload-380092" in "kube-system" namespace to be "Ready" ...
	E0912 23:03:01.386983   62943 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-380092" hosting pod "kube-scheduler-no-preload-380092" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-380092" has status "Ready":"False"
	I0912 23:03:01.386991   62943 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:01.787866   62943 pod_ready.go:98] node "no-preload-380092" hosting pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-380092" has status "Ready":"False"
	I0912 23:03:01.787897   62943 pod_ready.go:82] duration metric: took 400.896489ms for pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace to be "Ready" ...
	E0912 23:03:01.787906   62943 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-380092" hosting pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-380092" has status "Ready":"False"
	I0912 23:03:01.787913   62943 pod_ready.go:39] duration metric: took 1.198893167s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
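The pod_ready.go lines above repeatedly fetch each system-critical pod and test its Ready condition, skipping the wait while the hosting node itself is not Ready. A minimal client-go sketch of such a readiness wait follows; it is not minikube's code, and the kubeconfig path, pod name, and timeout are placeholder assumptions.

// podready.go: simplified sketch of a "wait for pod Ready" loop.
// NOT minikube's code; names and paths below are illustrative only.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady reports whether the pod's Ready condition is True.
func podIsReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // placeholder kubeconfig path
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute) // the log above uses a 4m0s budget
	for time.Now().Before(deadline) {
		pod, err := clientset.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7c65d6cfc9-twck7", metav1.GetOptions{})
		if err == nil && podIsReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}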
	I0912 23:03:01.787929   62943 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0912 23:03:01.803486   62943 ops.go:34] apiserver oom_adj: -16
	I0912 23:03:01.803507   62943 kubeadm.go:597] duration metric: took 45.468348317s to restartPrimaryControlPlane
	I0912 23:03:01.803518   62943 kubeadm.go:394] duration metric: took 45.529458545s to StartCluster
	I0912 23:03:01.803533   62943 settings.go:142] acquiring lock: {Name:mk9c957feafb8d7ccd833ad0c106ef81ecfe5ba8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 23:03:01.803615   62943 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19616-5891/kubeconfig
	I0912 23:03:01.806430   62943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/kubeconfig: {Name:mkffb46c3e9d2b8baebc7237b48bf41bccf1a52c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 23:03:01.806730   62943 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.253 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0912 23:03:01.806804   62943 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0912 23:03:01.806874   62943 addons.go:69] Setting storage-provisioner=true in profile "no-preload-380092"
	I0912 23:03:01.806898   62943 addons.go:69] Setting default-storageclass=true in profile "no-preload-380092"
	I0912 23:03:01.806914   62943 addons.go:69] Setting metrics-server=true in profile "no-preload-380092"
	I0912 23:03:01.806932   62943 addons.go:234] Setting addon metrics-server=true in "no-preload-380092"
	I0912 23:03:01.806937   62943 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-380092"
	W0912 23:03:01.806944   62943 addons.go:243] addon metrics-server should already be in state true
	I0912 23:03:01.806948   62943 config.go:182] Loaded profile config "no-preload-380092": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 23:03:01.806978   62943 host.go:66] Checking if "no-preload-380092" exists ...
	I0912 23:03:01.806909   62943 addons.go:234] Setting addon storage-provisioner=true in "no-preload-380092"
	W0912 23:03:01.806995   62943 addons.go:243] addon storage-provisioner should already be in state true
	I0912 23:03:01.807018   62943 host.go:66] Checking if "no-preload-380092" exists ...
	I0912 23:03:01.807284   62943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:03:01.807301   62943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:03:01.807309   62943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:03:01.807349   62943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:03:01.807363   62943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:03:01.807373   62943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:03:01.809540   62943 out.go:177] * Verifying Kubernetes components...
	I0912 23:03:01.810843   62943 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 23:03:01.824985   62943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32987
	I0912 23:03:01.825219   62943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45739
	I0912 23:03:01.825700   62943 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:03:01.826207   62943 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:03:01.826562   62943 main.go:141] libmachine: Using API Version  1
	I0912 23:03:01.826586   62943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:03:01.826737   62943 main.go:141] libmachine: Using API Version  1
	I0912 23:03:01.826759   62943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:03:01.826970   62943 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:03:01.827047   62943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35143
	I0912 23:03:01.827219   62943 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:03:01.827623   62943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:03:01.827668   62943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:03:01.827724   62943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:03:01.827752   62943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:03:01.827946   62943 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:03:01.828629   62943 main.go:141] libmachine: Using API Version  1
	I0912 23:03:01.828652   62943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:03:01.829143   62943 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:03:01.829336   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetState
	I0912 23:03:01.833298   62943 addons.go:234] Setting addon default-storageclass=true in "no-preload-380092"
	W0912 23:03:01.833320   62943 addons.go:243] addon default-storageclass should already be in state true
	I0912 23:03:01.833348   62943 host.go:66] Checking if "no-preload-380092" exists ...
	I0912 23:03:01.833737   62943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:03:01.833768   62943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:03:01.847465   62943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40485
	I0912 23:03:01.848132   62943 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:03:01.848218   62943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46487
	I0912 23:03:01.848635   62943 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:03:01.849006   62943 main.go:141] libmachine: Using API Version  1
	I0912 23:03:01.849024   62943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:03:01.849185   62943 main.go:141] libmachine: Using API Version  1
	I0912 23:03:01.849197   62943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:03:01.849589   62943 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:03:01.849756   62943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41723
	I0912 23:03:01.849909   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetState
	I0912 23:03:01.850287   62943 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:03:01.850375   62943 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:03:01.850446   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetState
	I0912 23:03:01.851043   62943 main.go:141] libmachine: Using API Version  1
	I0912 23:03:01.851061   62943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:03:01.851397   62943 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:03:01.851935   62943 main.go:141] libmachine: (no-preload-380092) Calling .DriverName
	I0912 23:03:01.852036   62943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:03:01.852082   62943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:03:01.852907   62943 main.go:141] libmachine: (no-preload-380092) Calling .DriverName
	I0912 23:03:01.854324   62943 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0912 23:03:01.855272   62943 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 23:03:01.856071   62943 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0912 23:03:01.856092   62943 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0912 23:03:01.856115   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHHostname
	I0912 23:03:01.857163   62943 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0912 23:03:01.857184   62943 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0912 23:03:01.857206   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHHostname
	I0912 23:03:01.861326   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:03:01.861344   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:03:01.861874   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:03:01.861894   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:03:01.862197   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHPort
	I0912 23:03:01.862292   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:03:01.862588   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:03:01.862627   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHKeyPath
	I0912 23:03:01.862668   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHPort
	I0912 23:03:01.862751   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHUsername
	I0912 23:03:01.862900   62943 sshutil.go:53] new ssh client: &{IP:192.168.50.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/no-preload-380092/id_rsa Username:docker}
	I0912 23:03:01.862917   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHKeyPath
	I0912 23:03:01.863057   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHUsername
	I0912 23:03:01.863161   62943 sshutil.go:53] new ssh client: &{IP:192.168.50.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/no-preload-380092/id_rsa Username:docker}
	I0912 23:03:01.872673   62943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42483
	I0912 23:03:01.873156   62943 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:03:01.873848   62943 main.go:141] libmachine: Using API Version  1
	I0912 23:03:01.873924   62943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:03:01.874438   62943 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:03:01.874719   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetState
	I0912 23:03:01.876928   62943 main.go:141] libmachine: (no-preload-380092) Calling .DriverName
	I0912 23:03:01.877226   62943 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0912 23:03:01.877252   62943 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0912 23:03:01.877268   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHHostname
	I0912 23:03:01.880966   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:03:01.881372   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:03:01.881399   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:03:01.881915   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHPort
	I0912 23:03:01.885353   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHKeyPath
	I0912 23:03:01.885585   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHUsername
	I0912 23:03:01.885765   62943 sshutil.go:53] new ssh client: &{IP:192.168.50.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/no-preload-380092/id_rsa Username:docker}
	I0912 23:02:58.234446   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:00.235816   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:02.035632   62943 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0912 23:03:02.065690   62943 node_ready.go:35] waiting up to 6m0s for node "no-preload-380092" to be "Ready" ...
	I0912 23:03:02.132250   62943 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0912 23:03:02.148150   62943 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0912 23:03:02.270629   62943 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0912 23:03:02.270652   62943 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0912 23:03:02.346093   62943 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0912 23:03:02.346119   62943 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0912 23:03:02.371110   62943 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0912 23:03:02.371133   62943 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0912 23:03:02.415856   62943 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0912 23:03:03.287692   62943 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.13950787s)
	I0912 23:03:03.287695   62943 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.155412179s)
	I0912 23:03:03.287752   62943 main.go:141] libmachine: Making call to close driver server
	I0912 23:03:03.287756   62943 main.go:141] libmachine: Making call to close driver server
	I0912 23:03:03.287764   62943 main.go:141] libmachine: (no-preload-380092) Calling .Close
	I0912 23:03:03.287769   62943 main.go:141] libmachine: (no-preload-380092) Calling .Close
	I0912 23:03:03.288100   62943 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:03:03.288115   62943 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:03:03.288124   62943 main.go:141] libmachine: Making call to close driver server
	I0912 23:03:03.288130   62943 main.go:141] libmachine: (no-preload-380092) Calling .Close
	I0912 23:03:03.288252   62943 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:03:03.288270   62943 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:03:03.288293   62943 main.go:141] libmachine: (no-preload-380092) DBG | Closing plugin on server side
	I0912 23:03:03.288297   62943 main.go:141] libmachine: Making call to close driver server
	I0912 23:03:03.288454   62943 main.go:141] libmachine: (no-preload-380092) Calling .Close
	I0912 23:03:03.288321   62943 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:03:03.288507   62943 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:03:03.288346   62943 main.go:141] libmachine: (no-preload-380092) DBG | Closing plugin on server side
	I0912 23:03:03.288671   62943 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:03:03.288682   62943 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:03:03.294958   62943 main.go:141] libmachine: Making call to close driver server
	I0912 23:03:03.294982   62943 main.go:141] libmachine: (no-preload-380092) Calling .Close
	I0912 23:03:03.295233   62943 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:03:03.295252   62943 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:03:03.295254   62943 main.go:141] libmachine: (no-preload-380092) DBG | Closing plugin on server side
	I0912 23:03:03.492450   62943 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.076542284s)
	I0912 23:03:03.492503   62943 main.go:141] libmachine: Making call to close driver server
	I0912 23:03:03.492516   62943 main.go:141] libmachine: (no-preload-380092) Calling .Close
	I0912 23:03:03.492830   62943 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:03:03.492855   62943 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:03:03.492866   62943 main.go:141] libmachine: Making call to close driver server
	I0912 23:03:03.492885   62943 main.go:141] libmachine: (no-preload-380092) Calling .Close
	I0912 23:03:03.493108   62943 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:03:03.493121   62943 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:03:03.493132   62943 addons.go:475] Verifying addon metrics-server=true in "no-preload-380092"
	I0912 23:03:03.495865   62943 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0912 23:03:01.578409   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:01.591929   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:01.592004   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:01.626295   62386 cri.go:89] found id: ""
	I0912 23:03:01.626327   62386 logs.go:276] 0 containers: []
	W0912 23:03:01.626339   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:01.626346   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:01.626406   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:01.660489   62386 cri.go:89] found id: ""
	I0912 23:03:01.660520   62386 logs.go:276] 0 containers: []
	W0912 23:03:01.660543   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:01.660563   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:01.660618   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:01.694378   62386 cri.go:89] found id: ""
	I0912 23:03:01.694401   62386 logs.go:276] 0 containers: []
	W0912 23:03:01.694408   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:01.694414   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:01.694467   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:01.733170   62386 cri.go:89] found id: ""
	I0912 23:03:01.733202   62386 logs.go:276] 0 containers: []
	W0912 23:03:01.733211   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:01.733237   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:01.733307   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:01.766419   62386 cri.go:89] found id: ""
	I0912 23:03:01.766449   62386 logs.go:276] 0 containers: []
	W0912 23:03:01.766457   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:01.766467   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:01.766530   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:01.802964   62386 cri.go:89] found id: ""
	I0912 23:03:01.802988   62386 logs.go:276] 0 containers: []
	W0912 23:03:01.802995   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:01.803001   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:01.803047   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:01.846231   62386 cri.go:89] found id: ""
	I0912 23:03:01.846257   62386 logs.go:276] 0 containers: []
	W0912 23:03:01.846268   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:01.846276   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:01.846340   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:01.889353   62386 cri.go:89] found id: ""
	I0912 23:03:01.889379   62386 logs.go:276] 0 containers: []
	W0912 23:03:01.889387   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:01.889396   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:01.889407   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:01.904850   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:01.904876   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:01.986288   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:01.986311   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:01.986328   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:02.070616   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:02.070646   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:02.111931   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:02.111959   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:01.474395   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:03.974266   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:03.497285   62943 addons.go:510] duration metric: took 1.690482366s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0912 23:03:04.069715   62943 node_ready.go:53] node "no-preload-380092" has status "Ready":"False"
	I0912 23:03:06.070086   62943 node_ready.go:53] node "no-preload-380092" has status "Ready":"False"
	I0912 23:03:02.734363   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:04.735355   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:07.235634   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:04.676429   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:04.689177   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:04.689240   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:04.721393   62386 cri.go:89] found id: ""
	I0912 23:03:04.721420   62386 logs.go:276] 0 containers: []
	W0912 23:03:04.721431   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:04.721437   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:04.721494   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:04.754239   62386 cri.go:89] found id: ""
	I0912 23:03:04.754270   62386 logs.go:276] 0 containers: []
	W0912 23:03:04.754281   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:04.754288   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:04.754340   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:04.787546   62386 cri.go:89] found id: ""
	I0912 23:03:04.787576   62386 logs.go:276] 0 containers: []
	W0912 23:03:04.787590   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:04.787597   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:04.787657   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:04.821051   62386 cri.go:89] found id: ""
	I0912 23:03:04.821141   62386 logs.go:276] 0 containers: []
	W0912 23:03:04.821151   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:04.821157   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:04.821210   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:04.853893   62386 cri.go:89] found id: ""
	I0912 23:03:04.853918   62386 logs.go:276] 0 containers: []
	W0912 23:03:04.853928   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:04.853935   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:04.854013   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:04.887798   62386 cri.go:89] found id: ""
	I0912 23:03:04.887832   62386 logs.go:276] 0 containers: []
	W0912 23:03:04.887843   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:04.887850   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:04.887911   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:04.921562   62386 cri.go:89] found id: ""
	I0912 23:03:04.921587   62386 logs.go:276] 0 containers: []
	W0912 23:03:04.921595   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:04.921600   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:04.921667   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:04.956794   62386 cri.go:89] found id: ""
	I0912 23:03:04.956828   62386 logs.go:276] 0 containers: []
	W0912 23:03:04.956836   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:04.956845   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:04.956856   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:04.993926   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:04.993956   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:05.045381   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:05.045425   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:05.058626   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:05.058665   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:05.128158   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:05.128187   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:05.128205   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:07.707336   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:07.720573   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:07.720646   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:07.756694   62386 cri.go:89] found id: ""
	I0912 23:03:07.756716   62386 logs.go:276] 0 containers: []
	W0912 23:03:07.756724   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:07.756730   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:07.756777   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:07.789255   62386 cri.go:89] found id: ""
	I0912 23:03:07.789286   62386 logs.go:276] 0 containers: []
	W0912 23:03:07.789295   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:07.789318   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:07.789405   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:07.822472   62386 cri.go:89] found id: ""
	I0912 23:03:07.822510   62386 logs.go:276] 0 containers: []
	W0912 23:03:07.822525   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:07.822534   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:07.822594   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:07.859070   62386 cri.go:89] found id: ""
	I0912 23:03:07.859102   62386 logs.go:276] 0 containers: []
	W0912 23:03:07.859114   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:07.859122   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:07.859190   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:07.895128   62386 cri.go:89] found id: ""
	I0912 23:03:07.895155   62386 logs.go:276] 0 containers: []
	W0912 23:03:07.895163   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:07.895169   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:07.895225   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:07.927397   62386 cri.go:89] found id: ""
	I0912 23:03:07.927425   62386 logs.go:276] 0 containers: []
	W0912 23:03:07.927435   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:07.927442   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:07.927506   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:07.965500   62386 cri.go:89] found id: ""
	I0912 23:03:07.965534   62386 logs.go:276] 0 containers: []
	W0912 23:03:07.965546   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:07.965555   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:07.965635   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:08.002921   62386 cri.go:89] found id: ""
	I0912 23:03:08.002952   62386 logs.go:276] 0 containers: []
	W0912 23:03:08.002964   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:08.002974   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:08.002989   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:08.054610   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:08.054646   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:08.071096   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:08.071127   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:08.145573   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:08.145603   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:08.145641   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:08.232606   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:08.232639   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:05.974395   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:08.473180   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:10.474725   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:08.076176   62943 node_ready.go:53] node "no-preload-380092" has status "Ready":"False"
	I0912 23:03:09.570274   62943 node_ready.go:49] node "no-preload-380092" has status "Ready":"True"
	I0912 23:03:09.570298   62943 node_ready.go:38] duration metric: took 7.504574956s for node "no-preload-380092" to be "Ready" ...
	I0912 23:03:09.570308   62943 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 23:03:09.576111   62943 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-twck7" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:09.581239   62943 pod_ready.go:93] pod "coredns-7c65d6cfc9-twck7" in "kube-system" namespace has status "Ready":"True"
	I0912 23:03:09.581261   62943 pod_ready.go:82] duration metric: took 5.122813ms for pod "coredns-7c65d6cfc9-twck7" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:09.581277   62943 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-380092" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:09.585918   62943 pod_ready.go:93] pod "etcd-no-preload-380092" in "kube-system" namespace has status "Ready":"True"
	I0912 23:03:09.585942   62943 pod_ready.go:82] duration metric: took 4.657444ms for pod "etcd-no-preload-380092" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:09.585951   62943 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-380092" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:09.591114   62943 pod_ready.go:93] pod "kube-apiserver-no-preload-380092" in "kube-system" namespace has status "Ready":"True"
	I0912 23:03:09.591136   62943 pod_ready.go:82] duration metric: took 5.179585ms for pod "kube-apiserver-no-preload-380092" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:09.591145   62943 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-380092" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:11.598000   62943 pod_ready.go:103] pod "kube-controller-manager-no-preload-380092" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:09.734628   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:12.233572   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:10.770737   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:10.783728   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:10.783803   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:10.818792   62386 cri.go:89] found id: ""
	I0912 23:03:10.818827   62386 logs.go:276] 0 containers: []
	W0912 23:03:10.818839   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:10.818847   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:10.818913   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:10.851711   62386 cri.go:89] found id: ""
	I0912 23:03:10.851738   62386 logs.go:276] 0 containers: []
	W0912 23:03:10.851750   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:10.851757   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:10.851817   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:10.886935   62386 cri.go:89] found id: ""
	I0912 23:03:10.886963   62386 logs.go:276] 0 containers: []
	W0912 23:03:10.886973   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:10.886979   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:10.887033   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:10.923175   62386 cri.go:89] found id: ""
	I0912 23:03:10.923201   62386 logs.go:276] 0 containers: []
	W0912 23:03:10.923208   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:10.923214   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:10.923261   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:10.959865   62386 cri.go:89] found id: ""
	I0912 23:03:10.959890   62386 logs.go:276] 0 containers: []
	W0912 23:03:10.959897   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:10.959902   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:10.959952   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:10.995049   62386 cri.go:89] found id: ""
	I0912 23:03:10.995079   62386 logs.go:276] 0 containers: []
	W0912 23:03:10.995090   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:10.995097   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:10.995156   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:11.030132   62386 cri.go:89] found id: ""
	I0912 23:03:11.030157   62386 logs.go:276] 0 containers: []
	W0912 23:03:11.030166   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:11.030173   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:11.030242   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:11.062899   62386 cri.go:89] found id: ""
	I0912 23:03:11.062928   62386 logs.go:276] 0 containers: []
	W0912 23:03:11.062936   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:11.062945   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:11.062956   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:11.116511   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:11.116546   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:11.131472   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:11.131504   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:11.202744   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:11.202765   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:11.202781   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:11.293973   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:11.294011   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:13.833125   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:13.846624   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:13.846737   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:13.881744   62386 cri.go:89] found id: ""
	I0912 23:03:13.881784   62386 logs.go:276] 0 containers: []
	W0912 23:03:13.881794   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:13.881802   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:13.881861   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:13.921678   62386 cri.go:89] found id: ""
	I0912 23:03:13.921703   62386 logs.go:276] 0 containers: []
	W0912 23:03:13.921713   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:13.921719   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:13.921778   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:13.960039   62386 cri.go:89] found id: ""
	I0912 23:03:13.960067   62386 logs.go:276] 0 containers: []
	W0912 23:03:13.960077   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:13.960084   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:13.960150   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:14.001255   62386 cri.go:89] found id: ""
	I0912 23:03:14.001281   62386 logs.go:276] 0 containers: []
	W0912 23:03:14.001293   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:14.001318   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:14.001374   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:14.037212   62386 cri.go:89] found id: ""
	I0912 23:03:14.037241   62386 logs.go:276] 0 containers: []
	W0912 23:03:14.037252   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:14.037259   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:14.037319   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:14.071538   62386 cri.go:89] found id: ""
	I0912 23:03:14.071574   62386 logs.go:276] 0 containers: []
	W0912 23:03:14.071582   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:14.071588   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:14.071639   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:14.105561   62386 cri.go:89] found id: ""
	I0912 23:03:14.105590   62386 logs.go:276] 0 containers: []
	W0912 23:03:14.105598   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:14.105604   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:14.105682   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:14.139407   62386 cri.go:89] found id: ""
	I0912 23:03:14.139432   62386 logs.go:276] 0 containers: []
	W0912 23:03:14.139440   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:14.139449   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:14.139463   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:14.195367   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:14.195402   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:14.208632   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:14.208656   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:14.283274   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:14.283292   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:14.283306   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:12.973716   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:15.473265   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:12.097813   62943 pod_ready.go:93] pod "kube-controller-manager-no-preload-380092" in "kube-system" namespace has status "Ready":"True"
	I0912 23:03:12.097844   62943 pod_ready.go:82] duration metric: took 2.506691651s for pod "kube-controller-manager-no-preload-380092" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:12.097858   62943 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-z4rcx" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:12.102303   62943 pod_ready.go:93] pod "kube-proxy-z4rcx" in "kube-system" namespace has status "Ready":"True"
	I0912 23:03:12.102332   62943 pod_ready.go:82] duration metric: took 4.465993ms for pod "kube-proxy-z4rcx" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:12.102344   62943 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-380092" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:12.370318   62943 pod_ready.go:93] pod "kube-scheduler-no-preload-380092" in "kube-system" namespace has status "Ready":"True"
	I0912 23:03:12.370342   62943 pod_ready.go:82] duration metric: took 267.990034ms for pod "kube-scheduler-no-preload-380092" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:12.370351   62943 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:14.377234   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:16.378403   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:14.234341   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:16.733799   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:14.361800   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:14.361839   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:16.900725   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:16.913987   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:16.914047   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:16.950481   62386 cri.go:89] found id: ""
	I0912 23:03:16.950505   62386 logs.go:276] 0 containers: []
	W0912 23:03:16.950513   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:16.950518   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:16.950574   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:16.985928   62386 cri.go:89] found id: ""
	I0912 23:03:16.985955   62386 logs.go:276] 0 containers: []
	W0912 23:03:16.985964   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:16.985969   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:16.986019   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:17.022383   62386 cri.go:89] found id: ""
	I0912 23:03:17.022408   62386 logs.go:276] 0 containers: []
	W0912 23:03:17.022419   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:17.022425   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:17.022483   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:17.060621   62386 cri.go:89] found id: ""
	I0912 23:03:17.060646   62386 logs.go:276] 0 containers: []
	W0912 23:03:17.060655   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:17.060661   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:17.060714   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:17.093465   62386 cri.go:89] found id: ""
	I0912 23:03:17.093496   62386 logs.go:276] 0 containers: []
	W0912 23:03:17.093507   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:17.093513   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:17.093562   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:17.127750   62386 cri.go:89] found id: ""
	I0912 23:03:17.127780   62386 logs.go:276] 0 containers: []
	W0912 23:03:17.127790   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:17.127796   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:17.127850   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:17.167000   62386 cri.go:89] found id: ""
	I0912 23:03:17.167033   62386 logs.go:276] 0 containers: []
	W0912 23:03:17.167042   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:17.167051   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:17.167114   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:17.201116   62386 cri.go:89] found id: ""
	I0912 23:03:17.201140   62386 logs.go:276] 0 containers: []
	W0912 23:03:17.201149   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:17.201160   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:17.201175   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:17.279890   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:17.279917   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:17.279930   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:17.362638   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:17.362682   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:17.402507   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:17.402538   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:17.456039   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:17.456072   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:17.473792   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:19.973369   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:18.877668   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:20.879319   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:19.233574   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:21.233847   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:19.970539   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:19.984338   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:19.984442   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:20.019006   62386 cri.go:89] found id: ""
	I0912 23:03:20.019036   62386 logs.go:276] 0 containers: []
	W0912 23:03:20.019047   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:20.019055   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:20.019115   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:20.051600   62386 cri.go:89] found id: ""
	I0912 23:03:20.051626   62386 logs.go:276] 0 containers: []
	W0912 23:03:20.051634   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:20.051640   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:20.051691   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:20.085770   62386 cri.go:89] found id: ""
	I0912 23:03:20.085792   62386 logs.go:276] 0 containers: []
	W0912 23:03:20.085799   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:20.085804   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:20.085852   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:20.118453   62386 cri.go:89] found id: ""
	I0912 23:03:20.118482   62386 logs.go:276] 0 containers: []
	W0912 23:03:20.118493   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:20.118501   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:20.118570   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:20.149794   62386 cri.go:89] found id: ""
	I0912 23:03:20.149824   62386 logs.go:276] 0 containers: []
	W0912 23:03:20.149835   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:20.149842   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:20.149889   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:20.187189   62386 cri.go:89] found id: ""
	I0912 23:03:20.187222   62386 logs.go:276] 0 containers: []
	W0912 23:03:20.187233   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:20.187239   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:20.187308   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:20.225488   62386 cri.go:89] found id: ""
	I0912 23:03:20.225517   62386 logs.go:276] 0 containers: []
	W0912 23:03:20.225525   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:20.225531   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:20.225593   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:20.263430   62386 cri.go:89] found id: ""
	I0912 23:03:20.263599   62386 logs.go:276] 0 containers: []
	W0912 23:03:20.263618   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:20.263633   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:20.263651   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:20.317633   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:20.317669   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:20.331121   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:20.331146   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:20.409078   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:20.409102   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:20.409114   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:20.485192   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:20.485226   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:23.024366   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:23.036837   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:23.036919   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:23.072034   62386 cri.go:89] found id: ""
	I0912 23:03:23.072068   62386 logs.go:276] 0 containers: []
	W0912 23:03:23.072080   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:23.072087   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:23.072151   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:23.105917   62386 cri.go:89] found id: ""
	I0912 23:03:23.105942   62386 logs.go:276] 0 containers: []
	W0912 23:03:23.105950   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:23.105956   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:23.106001   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:23.138601   62386 cri.go:89] found id: ""
	I0912 23:03:23.138631   62386 logs.go:276] 0 containers: []
	W0912 23:03:23.138643   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:23.138650   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:23.138700   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:23.173543   62386 cri.go:89] found id: ""
	I0912 23:03:23.173584   62386 logs.go:276] 0 containers: []
	W0912 23:03:23.173596   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:23.173606   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:23.173686   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:23.206143   62386 cri.go:89] found id: ""
	I0912 23:03:23.206171   62386 logs.go:276] 0 containers: []
	W0912 23:03:23.206182   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:23.206189   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:23.206258   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:23.241893   62386 cri.go:89] found id: ""
	I0912 23:03:23.241914   62386 logs.go:276] 0 containers: []
	W0912 23:03:23.241921   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:23.241927   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:23.241985   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:23.276885   62386 cri.go:89] found id: ""
	I0912 23:03:23.276937   62386 logs.go:276] 0 containers: []
	W0912 23:03:23.276946   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:23.276953   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:23.277004   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:23.311719   62386 cri.go:89] found id: ""
	I0912 23:03:23.311744   62386 logs.go:276] 0 containers: []
	W0912 23:03:23.311752   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:23.311759   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:23.311772   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:23.351581   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:23.351614   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:23.406831   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:23.406868   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:23.420716   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:23.420748   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:23.491298   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:23.491332   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:23.491347   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:22.474320   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:24.974016   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:23.377977   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:25.876937   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:23.235471   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:25.733684   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:26.075754   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:26.088671   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:26.088746   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:26.123263   62386 cri.go:89] found id: ""
	I0912 23:03:26.123289   62386 logs.go:276] 0 containers: []
	W0912 23:03:26.123298   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:26.123320   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:26.123380   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:26.156957   62386 cri.go:89] found id: ""
	I0912 23:03:26.156986   62386 logs.go:276] 0 containers: []
	W0912 23:03:26.156997   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:26.157004   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:26.157063   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:26.191697   62386 cri.go:89] found id: ""
	I0912 23:03:26.191749   62386 logs.go:276] 0 containers: []
	W0912 23:03:26.191774   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:26.191782   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:26.191841   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:26.223915   62386 cri.go:89] found id: ""
	I0912 23:03:26.223938   62386 logs.go:276] 0 containers: []
	W0912 23:03:26.223945   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:26.223951   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:26.224011   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:26.256467   62386 cri.go:89] found id: ""
	I0912 23:03:26.256494   62386 logs.go:276] 0 containers: []
	W0912 23:03:26.256505   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:26.256511   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:26.256587   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:26.288778   62386 cri.go:89] found id: ""
	I0912 23:03:26.288803   62386 logs.go:276] 0 containers: []
	W0912 23:03:26.288811   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:26.288816   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:26.288889   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:26.325717   62386 cri.go:89] found id: ""
	I0912 23:03:26.325745   62386 logs.go:276] 0 containers: []
	W0912 23:03:26.325755   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:26.325762   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:26.325829   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:26.359729   62386 cri.go:89] found id: ""
	I0912 23:03:26.359758   62386 logs.go:276] 0 containers: []
	W0912 23:03:26.359767   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:26.359780   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:26.359799   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:26.416414   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:26.416455   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:26.430440   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:26.430478   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:26.506980   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:26.507012   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:26.507043   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:26.583797   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:26.583846   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:29.122222   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:29.135287   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:29.135367   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:29.169020   62386 cri.go:89] found id: ""
	I0912 23:03:29.169043   62386 logs.go:276] 0 containers: []
	W0912 23:03:29.169051   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:29.169061   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:29.169114   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:29.201789   62386 cri.go:89] found id: ""
	I0912 23:03:29.201816   62386 logs.go:276] 0 containers: []
	W0912 23:03:29.201825   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:29.201831   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:29.201886   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:29.237011   62386 cri.go:89] found id: ""
	I0912 23:03:29.237031   62386 logs.go:276] 0 containers: []
	W0912 23:03:29.237038   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:29.237044   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:29.237100   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:29.275292   62386 cri.go:89] found id: ""
	I0912 23:03:29.275315   62386 logs.go:276] 0 containers: []
	W0912 23:03:29.275322   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:29.275328   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:29.275391   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:29.311927   62386 cri.go:89] found id: ""
	I0912 23:03:29.311954   62386 logs.go:276] 0 containers: []
	W0912 23:03:29.311961   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:29.311967   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:29.312020   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:26.974332   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:29.473816   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:27.877800   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:30.378675   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:27.735811   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:30.233647   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:32.233706   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:29.351411   62386 cri.go:89] found id: ""
	I0912 23:03:29.351441   62386 logs.go:276] 0 containers: []
	W0912 23:03:29.351452   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:29.351460   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:29.351520   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:29.386655   62386 cri.go:89] found id: ""
	I0912 23:03:29.386683   62386 logs.go:276] 0 containers: []
	W0912 23:03:29.386693   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:29.386700   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:29.386753   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:29.419722   62386 cri.go:89] found id: ""
	I0912 23:03:29.419752   62386 logs.go:276] 0 containers: []
	W0912 23:03:29.419762   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:29.419775   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:29.419789   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:29.474358   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:29.474396   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:29.488410   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:29.488437   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:29.554675   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:29.554701   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:29.554715   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:29.630647   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:29.630681   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:32.167614   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:32.180592   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:32.180669   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:32.213596   62386 cri.go:89] found id: ""
	I0912 23:03:32.213643   62386 logs.go:276] 0 containers: []
	W0912 23:03:32.213655   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:32.213663   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:32.213723   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:32.246790   62386 cri.go:89] found id: ""
	I0912 23:03:32.246824   62386 logs.go:276] 0 containers: []
	W0912 23:03:32.246836   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:32.246846   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:32.246910   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:32.289423   62386 cri.go:89] found id: ""
	I0912 23:03:32.289446   62386 logs.go:276] 0 containers: []
	W0912 23:03:32.289454   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:32.289459   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:32.289515   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:32.321515   62386 cri.go:89] found id: ""
	I0912 23:03:32.321542   62386 logs.go:276] 0 containers: []
	W0912 23:03:32.321555   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:32.321561   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:32.321637   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:32.354633   62386 cri.go:89] found id: ""
	I0912 23:03:32.354660   62386 logs.go:276] 0 containers: []
	W0912 23:03:32.354670   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:32.354675   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:32.354734   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:32.389692   62386 cri.go:89] found id: ""
	I0912 23:03:32.389717   62386 logs.go:276] 0 containers: []
	W0912 23:03:32.389725   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:32.389730   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:32.389782   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:32.423086   62386 cri.go:89] found id: ""
	I0912 23:03:32.423109   62386 logs.go:276] 0 containers: []
	W0912 23:03:32.423115   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:32.423121   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:32.423167   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:32.456145   62386 cri.go:89] found id: ""
	I0912 23:03:32.456173   62386 logs.go:276] 0 containers: []
	W0912 23:03:32.456184   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:32.456194   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:32.456213   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:32.468329   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:32.468354   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:32.535454   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:32.535480   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:32.535495   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:32.615219   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:32.615256   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:32.655380   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:32.655407   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:31.473904   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:33.474104   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:32.876734   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:34.876831   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:36.877698   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:34.732792   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:36.733997   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:35.209155   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:35.223993   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:35.224074   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:35.260226   62386 cri.go:89] found id: ""
	I0912 23:03:35.260257   62386 logs.go:276] 0 containers: []
	W0912 23:03:35.260268   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:35.260275   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:35.260346   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:35.295762   62386 cri.go:89] found id: ""
	I0912 23:03:35.295790   62386 logs.go:276] 0 containers: []
	W0912 23:03:35.295801   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:35.295808   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:35.295873   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:35.329749   62386 cri.go:89] found id: ""
	I0912 23:03:35.329778   62386 logs.go:276] 0 containers: []
	W0912 23:03:35.329789   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:35.329796   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:35.329855   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:35.363051   62386 cri.go:89] found id: ""
	I0912 23:03:35.363082   62386 logs.go:276] 0 containers: []
	W0912 23:03:35.363091   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:35.363098   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:35.363156   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:35.399777   62386 cri.go:89] found id: ""
	I0912 23:03:35.399805   62386 logs.go:276] 0 containers: []
	W0912 23:03:35.399816   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:35.399823   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:35.399882   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:35.436380   62386 cri.go:89] found id: ""
	I0912 23:03:35.436409   62386 logs.go:276] 0 containers: []
	W0912 23:03:35.436419   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:35.436427   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:35.436489   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:35.474014   62386 cri.go:89] found id: ""
	I0912 23:03:35.474040   62386 logs.go:276] 0 containers: []
	W0912 23:03:35.474050   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:35.474057   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:35.474115   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:35.514579   62386 cri.go:89] found id: ""
	I0912 23:03:35.514606   62386 logs.go:276] 0 containers: []
	W0912 23:03:35.514615   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:35.514625   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:35.514636   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:35.566626   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:35.566665   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:35.581394   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:35.581421   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:35.653434   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:35.653465   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:35.653477   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:35.732486   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:35.732525   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:38.268409   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:38.281766   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:38.281833   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:38.315951   62386 cri.go:89] found id: ""
	I0912 23:03:38.315977   62386 logs.go:276] 0 containers: []
	W0912 23:03:38.315987   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:38.315994   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:38.316053   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:38.355249   62386 cri.go:89] found id: ""
	I0912 23:03:38.355279   62386 logs.go:276] 0 containers: []
	W0912 23:03:38.355289   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:38.355296   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:38.355365   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:38.392754   62386 cri.go:89] found id: ""
	I0912 23:03:38.392777   62386 logs.go:276] 0 containers: []
	W0912 23:03:38.392784   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:38.392790   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:38.392836   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:38.427406   62386 cri.go:89] found id: ""
	I0912 23:03:38.427434   62386 logs.go:276] 0 containers: []
	W0912 23:03:38.427442   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:38.427447   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:38.427497   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:38.473523   62386 cri.go:89] found id: ""
	I0912 23:03:38.473551   62386 logs.go:276] 0 containers: []
	W0912 23:03:38.473567   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:38.473575   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:38.473660   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:38.507184   62386 cri.go:89] found id: ""
	I0912 23:03:38.507217   62386 logs.go:276] 0 containers: []
	W0912 23:03:38.507228   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:38.507235   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:38.507297   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:38.541325   62386 cri.go:89] found id: ""
	I0912 23:03:38.541357   62386 logs.go:276] 0 containers: []
	W0912 23:03:38.541367   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:38.541374   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:38.541435   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:38.576839   62386 cri.go:89] found id: ""
	I0912 23:03:38.576866   62386 logs.go:276] 0 containers: []
	W0912 23:03:38.576877   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:38.576889   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:38.576906   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:38.613107   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:38.613138   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:38.667256   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:38.667300   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:38.681179   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:38.681210   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:38.750560   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:38.750584   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:38.750600   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:35.974072   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:37.974920   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:40.473150   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:39.376361   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:41.378062   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:38.734402   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:41.233881   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:41.327862   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:41.340904   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:41.340967   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:41.379282   62386 cri.go:89] found id: ""
	I0912 23:03:41.379301   62386 logs.go:276] 0 containers: []
	W0912 23:03:41.379309   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:41.379316   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:41.379366   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:41.412915   62386 cri.go:89] found id: ""
	I0912 23:03:41.412940   62386 logs.go:276] 0 containers: []
	W0912 23:03:41.412947   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:41.412954   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:41.413003   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:41.446824   62386 cri.go:89] found id: ""
	I0912 23:03:41.446851   62386 logs.go:276] 0 containers: []
	W0912 23:03:41.446861   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:41.446868   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:41.446929   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:41.483157   62386 cri.go:89] found id: ""
	I0912 23:03:41.483186   62386 logs.go:276] 0 containers: []
	W0912 23:03:41.483194   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:41.483200   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:41.483258   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:41.517751   62386 cri.go:89] found id: ""
	I0912 23:03:41.517783   62386 logs.go:276] 0 containers: []
	W0912 23:03:41.517794   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:41.517801   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:41.517865   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:41.551665   62386 cri.go:89] found id: ""
	I0912 23:03:41.551692   62386 logs.go:276] 0 containers: []
	W0912 23:03:41.551700   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:41.551706   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:41.551756   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:41.586401   62386 cri.go:89] found id: ""
	I0912 23:03:41.586437   62386 logs.go:276] 0 containers: []
	W0912 23:03:41.586447   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:41.586455   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:41.586518   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:41.621764   62386 cri.go:89] found id: ""
	I0912 23:03:41.621788   62386 logs.go:276] 0 containers: []
	W0912 23:03:41.621796   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:41.621806   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:41.621821   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:41.703663   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:41.703708   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:41.741813   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:41.741838   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:41.794237   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:41.794276   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:41.807194   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:41.807219   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:41.874328   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
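	The cycle above is minikube scanning the CRI runtime for each control-plane component (every scan returns no containers) and then collecting node logs. For reference, a rough manual equivalent of that scan, run on the affected node over SSH, is sketched below; each command mirrors one the log shows ssh_runner executing, and the sketch assumes crictl and the bundled kubectl under /var/lib/minikube/binaries/v1.20.0 are present, as the log indicates.

	    # Per-component container scan; empty output from crictl means no container was found
	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	             kube-controller-manager kindnet kubernetes-dashboard; do
	      echo "== ${c} =="
	      sudo crictl ps -a --quiet --name="${c}"
	    done

	    # Same log sources minikube gathers afterwards
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u crio -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	      --kubeconfig=/var/lib/minikube/kubeconfig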
	I0912 23:03:42.973710   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:44.973792   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:43.877009   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:46.376468   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:43.234202   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:45.733192   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:44.374745   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:44.389334   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:44.389414   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:44.427163   62386 cri.go:89] found id: ""
	I0912 23:03:44.427193   62386 logs.go:276] 0 containers: []
	W0912 23:03:44.427204   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:44.427214   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:44.427261   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:44.461483   62386 cri.go:89] found id: ""
	I0912 23:03:44.461516   62386 logs.go:276] 0 containers: []
	W0912 23:03:44.461526   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:44.461539   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:44.461603   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:44.499529   62386 cri.go:89] found id: ""
	I0912 23:03:44.499557   62386 logs.go:276] 0 containers: []
	W0912 23:03:44.499569   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:44.499576   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:44.499640   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:44.536827   62386 cri.go:89] found id: ""
	I0912 23:03:44.536859   62386 logs.go:276] 0 containers: []
	W0912 23:03:44.536871   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:44.536878   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:44.536927   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:44.574764   62386 cri.go:89] found id: ""
	I0912 23:03:44.574794   62386 logs.go:276] 0 containers: []
	W0912 23:03:44.574802   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:44.574808   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:44.574866   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:44.612491   62386 cri.go:89] found id: ""
	I0912 23:03:44.612524   62386 logs.go:276] 0 containers: []
	W0912 23:03:44.612537   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:44.612545   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:44.612618   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:44.651419   62386 cri.go:89] found id: ""
	I0912 23:03:44.651449   62386 logs.go:276] 0 containers: []
	W0912 23:03:44.651459   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:44.651466   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:44.651516   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:44.686635   62386 cri.go:89] found id: ""
	I0912 23:03:44.686665   62386 logs.go:276] 0 containers: []
	W0912 23:03:44.686674   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:44.686681   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:44.686693   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:44.738906   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:44.738938   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:44.752485   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:44.752512   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:44.831175   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:44.831205   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:44.831222   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:44.917405   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:44.917442   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:47.466262   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:47.479701   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:47.479758   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:47.514737   62386 cri.go:89] found id: ""
	I0912 23:03:47.514763   62386 logs.go:276] 0 containers: []
	W0912 23:03:47.514770   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:47.514776   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:47.514828   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:47.551163   62386 cri.go:89] found id: ""
	I0912 23:03:47.551195   62386 logs.go:276] 0 containers: []
	W0912 23:03:47.551207   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:47.551215   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:47.551276   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:47.585189   62386 cri.go:89] found id: ""
	I0912 23:03:47.585213   62386 logs.go:276] 0 containers: []
	W0912 23:03:47.585221   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:47.585226   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:47.585284   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:47.619831   62386 cri.go:89] found id: ""
	I0912 23:03:47.619855   62386 logs.go:276] 0 containers: []
	W0912 23:03:47.619863   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:47.619869   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:47.619914   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:47.652364   62386 cri.go:89] found id: ""
	I0912 23:03:47.652398   62386 logs.go:276] 0 containers: []
	W0912 23:03:47.652409   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:47.652417   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:47.652478   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:47.686796   62386 cri.go:89] found id: ""
	I0912 23:03:47.686828   62386 logs.go:276] 0 containers: []
	W0912 23:03:47.686837   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:47.686844   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:47.686902   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:47.718735   62386 cri.go:89] found id: ""
	I0912 23:03:47.718758   62386 logs.go:276] 0 containers: []
	W0912 23:03:47.718768   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:47.718776   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:47.718838   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:47.751880   62386 cri.go:89] found id: ""
	I0912 23:03:47.751917   62386 logs.go:276] 0 containers: []
	W0912 23:03:47.751929   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:47.751940   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:47.751972   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:47.821972   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:47.821995   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:47.822011   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:47.914569   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:47.914606   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:47.952931   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:47.952959   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:48.006294   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:48.006336   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:47.472805   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:49.474941   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:48.377557   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:50.877244   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:47.734734   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:50.233681   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:50.521664   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:50.535244   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:50.535319   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:50.572459   62386 cri.go:89] found id: ""
	I0912 23:03:50.572489   62386 logs.go:276] 0 containers: []
	W0912 23:03:50.572497   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:50.572504   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:50.572560   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:50.613752   62386 cri.go:89] found id: ""
	I0912 23:03:50.613784   62386 logs.go:276] 0 containers: []
	W0912 23:03:50.613793   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:50.613800   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:50.613859   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:50.669798   62386 cri.go:89] found id: ""
	I0912 23:03:50.669829   62386 logs.go:276] 0 containers: []
	W0912 23:03:50.669840   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:50.669845   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:50.669970   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:50.703629   62386 cri.go:89] found id: ""
	I0912 23:03:50.703669   62386 logs.go:276] 0 containers: []
	W0912 23:03:50.703682   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:50.703691   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:50.703752   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:50.743683   62386 cri.go:89] found id: ""
	I0912 23:03:50.743710   62386 logs.go:276] 0 containers: []
	W0912 23:03:50.743720   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:50.743728   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:50.743784   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:50.776387   62386 cri.go:89] found id: ""
	I0912 23:03:50.776416   62386 logs.go:276] 0 containers: []
	W0912 23:03:50.776428   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:50.776437   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:50.776494   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:50.810778   62386 cri.go:89] found id: ""
	I0912 23:03:50.810805   62386 logs.go:276] 0 containers: []
	W0912 23:03:50.810817   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:50.810825   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:50.810892   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:50.842488   62386 cri.go:89] found id: ""
	I0912 23:03:50.842510   62386 logs.go:276] 0 containers: []
	W0912 23:03:50.842518   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:50.842526   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:50.842542   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:50.895086   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:50.895124   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:50.908540   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:50.908586   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:50.976108   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:50.976138   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:50.976153   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:51.052291   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:51.052327   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:53.594005   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:53.606622   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:53.606706   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:53.641109   62386 cri.go:89] found id: ""
	I0912 23:03:53.641140   62386 logs.go:276] 0 containers: []
	W0912 23:03:53.641151   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:53.641159   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:53.641214   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:53.673336   62386 cri.go:89] found id: ""
	I0912 23:03:53.673358   62386 logs.go:276] 0 containers: []
	W0912 23:03:53.673366   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:53.673371   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:53.673417   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:53.707931   62386 cri.go:89] found id: ""
	I0912 23:03:53.707965   62386 logs.go:276] 0 containers: []
	W0912 23:03:53.707975   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:53.707982   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:53.708032   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:53.741801   62386 cri.go:89] found id: ""
	I0912 23:03:53.741832   62386 logs.go:276] 0 containers: []
	W0912 23:03:53.741840   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:53.741847   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:53.741898   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:53.775491   62386 cri.go:89] found id: ""
	I0912 23:03:53.775517   62386 logs.go:276] 0 containers: []
	W0912 23:03:53.775526   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:53.775533   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:53.775596   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:53.811802   62386 cri.go:89] found id: ""
	I0912 23:03:53.811832   62386 logs.go:276] 0 containers: []
	W0912 23:03:53.811843   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:53.811851   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:53.811916   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:53.844901   62386 cri.go:89] found id: ""
	I0912 23:03:53.844926   62386 logs.go:276] 0 containers: []
	W0912 23:03:53.844934   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:53.844939   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:53.844989   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:53.878342   62386 cri.go:89] found id: ""
	I0912 23:03:53.878363   62386 logs.go:276] 0 containers: []
	W0912 23:03:53.878370   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:53.878377   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:53.878387   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:53.935010   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:53.935053   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:53.948443   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:53.948474   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:54.020155   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:54.020178   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:54.020192   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:54.097113   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:54.097154   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:51.974178   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:54.473802   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:53.376802   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:55.377267   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:52.733232   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:54.734448   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:56.734623   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:56.633694   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:56.651731   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:56.651791   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:56.698155   62386 cri.go:89] found id: ""
	I0912 23:03:56.698184   62386 logs.go:276] 0 containers: []
	W0912 23:03:56.698194   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:56.698202   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:56.698263   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:56.730291   62386 cri.go:89] found id: ""
	I0912 23:03:56.730322   62386 logs.go:276] 0 containers: []
	W0912 23:03:56.730332   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:56.730340   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:56.730434   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:56.763099   62386 cri.go:89] found id: ""
	I0912 23:03:56.763123   62386 logs.go:276] 0 containers: []
	W0912 23:03:56.763133   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:56.763140   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:56.763201   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:56.796744   62386 cri.go:89] found id: ""
	I0912 23:03:56.796770   62386 logs.go:276] 0 containers: []
	W0912 23:03:56.796780   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:56.796787   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:56.796846   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:56.831809   62386 cri.go:89] found id: ""
	I0912 23:03:56.831839   62386 logs.go:276] 0 containers: []
	W0912 23:03:56.831851   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:56.831858   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:56.831927   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:56.867213   62386 cri.go:89] found id: ""
	I0912 23:03:56.867239   62386 logs.go:276] 0 containers: []
	W0912 23:03:56.867246   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:56.867252   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:56.867332   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:56.907242   62386 cri.go:89] found id: ""
	I0912 23:03:56.907270   62386 logs.go:276] 0 containers: []
	W0912 23:03:56.907279   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:56.907286   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:56.907399   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:56.941841   62386 cri.go:89] found id: ""
	I0912 23:03:56.941871   62386 logs.go:276] 0 containers: []
	W0912 23:03:56.941879   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:56.941888   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:56.941899   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:56.955468   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:56.955498   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:57.025069   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:57.025089   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:57.025101   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:57.109543   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:57.109579   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:57.150908   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:57.150932   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:56.473964   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:58.974245   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:57.377540   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:59.878300   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:59.233419   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:01.733916   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:59.700564   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:59.713097   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:59.713175   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:59.746662   62386 cri.go:89] found id: ""
	I0912 23:03:59.746684   62386 logs.go:276] 0 containers: []
	W0912 23:03:59.746694   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:59.746702   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:59.746760   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:59.780100   62386 cri.go:89] found id: ""
	I0912 23:03:59.780127   62386 logs.go:276] 0 containers: []
	W0912 23:03:59.780137   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:59.780144   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:59.780205   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:59.814073   62386 cri.go:89] found id: ""
	I0912 23:03:59.814103   62386 logs.go:276] 0 containers: []
	W0912 23:03:59.814115   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:59.814122   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:59.814170   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:59.849832   62386 cri.go:89] found id: ""
	I0912 23:03:59.849860   62386 logs.go:276] 0 containers: []
	W0912 23:03:59.849873   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:59.849881   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:59.849937   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:59.884644   62386 cri.go:89] found id: ""
	I0912 23:03:59.884674   62386 logs.go:276] 0 containers: []
	W0912 23:03:59.884685   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:59.884692   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:59.884757   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:59.922575   62386 cri.go:89] found id: ""
	I0912 23:03:59.922601   62386 logs.go:276] 0 containers: []
	W0912 23:03:59.922609   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:59.922615   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:59.922671   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:59.959405   62386 cri.go:89] found id: ""
	I0912 23:03:59.959454   62386 logs.go:276] 0 containers: []
	W0912 23:03:59.959467   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:59.959503   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:59.959572   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:59.992850   62386 cri.go:89] found id: ""
	I0912 23:03:59.992882   62386 logs.go:276] 0 containers: []
	W0912 23:03:59.992891   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:59.992898   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:59.992910   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:00.007112   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:00.007147   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:00.077737   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:00.077762   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:00.077777   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:00.156823   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:00.156860   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:00.194294   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:00.194388   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:02.746340   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:02.759723   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:02.759780   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:02.795753   62386 cri.go:89] found id: ""
	I0912 23:04:02.795778   62386 logs.go:276] 0 containers: []
	W0912 23:04:02.795787   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:02.795794   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:02.795849   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:02.830757   62386 cri.go:89] found id: ""
	I0912 23:04:02.830781   62386 logs.go:276] 0 containers: []
	W0912 23:04:02.830790   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:02.830797   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:02.830859   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:02.866266   62386 cri.go:89] found id: ""
	I0912 23:04:02.866301   62386 logs.go:276] 0 containers: []
	W0912 23:04:02.866312   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:02.866319   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:02.866373   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:02.900332   62386 cri.go:89] found id: ""
	I0912 23:04:02.900359   62386 logs.go:276] 0 containers: []
	W0912 23:04:02.900370   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:02.900377   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:02.900436   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:02.937687   62386 cri.go:89] found id: ""
	I0912 23:04:02.937718   62386 logs.go:276] 0 containers: []
	W0912 23:04:02.937729   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:02.937736   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:02.937806   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:02.972960   62386 cri.go:89] found id: ""
	I0912 23:04:02.972988   62386 logs.go:276] 0 containers: []
	W0912 23:04:02.972998   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:02.973006   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:02.973067   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:03.006621   62386 cri.go:89] found id: ""
	I0912 23:04:03.006649   62386 logs.go:276] 0 containers: []
	W0912 23:04:03.006658   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:03.006663   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:03.006711   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:03.042450   62386 cri.go:89] found id: ""
	I0912 23:04:03.042475   62386 logs.go:276] 0 containers: []
	W0912 23:04:03.042484   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:03.042501   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:03.042514   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:03.082657   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:03.082688   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:03.136570   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:03.136605   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:03.150359   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:03.150388   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:03.217419   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:03.217440   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:03.217452   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:01.473231   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:03.474382   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:05.475943   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:02.376721   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:04.376797   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:06.377573   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:03.734198   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:06.234489   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
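	The interleaved pod_ready lines come from the other clusters under test (processes 61354, 61904 and 62943), each still waiting for its metrics-server pod to report Ready. A hedged sketch of how to inspect that condition by hand follows; the context name is a placeholder, and the k8s-app=metrics-server selector is an assumption taken from the upstream metrics-server manifest rather than from this log.

	    CTX=<cluster-context>   # placeholder: substitute the minikube profile/context under test
	    kubectl --context "$CTX" -n kube-system get pods -l k8s-app=metrics-server -o wide
	    kubectl --context "$CTX" -n kube-system describe pod -l k8s-app=metrics-server
	    kubectl --context "$CTX" -n kube-system logs deploy/metrics-server --tail=100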
	I0912 23:04:05.795553   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:05.808126   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:05.808197   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:05.841031   62386 cri.go:89] found id: ""
	I0912 23:04:05.841059   62386 logs.go:276] 0 containers: []
	W0912 23:04:05.841071   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:05.841078   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:05.841137   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:05.875865   62386 cri.go:89] found id: ""
	I0912 23:04:05.875891   62386 logs.go:276] 0 containers: []
	W0912 23:04:05.875903   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:05.875910   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:05.875971   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:05.911317   62386 cri.go:89] found id: ""
	I0912 23:04:05.911340   62386 logs.go:276] 0 containers: []
	W0912 23:04:05.911361   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:05.911372   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:05.911433   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:05.946603   62386 cri.go:89] found id: ""
	I0912 23:04:05.946634   62386 logs.go:276] 0 containers: []
	W0912 23:04:05.946645   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:05.946652   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:05.946707   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:05.982041   62386 cri.go:89] found id: ""
	I0912 23:04:05.982077   62386 logs.go:276] 0 containers: []
	W0912 23:04:05.982089   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:05.982099   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:05.982196   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:06.015777   62386 cri.go:89] found id: ""
	I0912 23:04:06.015808   62386 logs.go:276] 0 containers: []
	W0912 23:04:06.015816   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:06.015822   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:06.015870   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:06.047613   62386 cri.go:89] found id: ""
	I0912 23:04:06.047642   62386 logs.go:276] 0 containers: []
	W0912 23:04:06.047650   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:06.047656   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:06.047711   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:06.082817   62386 cri.go:89] found id: ""
	I0912 23:04:06.082855   62386 logs.go:276] 0 containers: []
	W0912 23:04:06.082863   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:06.082874   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:06.082889   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:06.148350   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:06.148370   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:06.148382   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:06.227819   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:06.227861   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:06.267783   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:06.267811   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:06.319531   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:06.319567   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:08.833715   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:08.846391   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:08.846457   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:08.882798   62386 cri.go:89] found id: ""
	I0912 23:04:08.882827   62386 logs.go:276] 0 containers: []
	W0912 23:04:08.882834   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:08.882839   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:08.882885   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:08.919637   62386 cri.go:89] found id: ""
	I0912 23:04:08.919660   62386 logs.go:276] 0 containers: []
	W0912 23:04:08.919669   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:08.919677   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:08.919737   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:08.957181   62386 cri.go:89] found id: ""
	I0912 23:04:08.957226   62386 logs.go:276] 0 containers: []
	W0912 23:04:08.957235   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:08.957241   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:08.957300   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:08.994391   62386 cri.go:89] found id: ""
	I0912 23:04:08.994425   62386 logs.go:276] 0 containers: []
	W0912 23:04:08.994435   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:08.994450   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:08.994517   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:09.026229   62386 cri.go:89] found id: ""
	I0912 23:04:09.026253   62386 logs.go:276] 0 containers: []
	W0912 23:04:09.026261   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:09.026270   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:09.026331   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:09.063522   62386 cri.go:89] found id: ""
	I0912 23:04:09.063552   62386 logs.go:276] 0 containers: []
	W0912 23:04:09.063562   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:09.063570   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:09.063633   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:09.095532   62386 cri.go:89] found id: ""
	I0912 23:04:09.095561   62386 logs.go:276] 0 containers: []
	W0912 23:04:09.095571   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:09.095578   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:09.095638   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:09.129364   62386 cri.go:89] found id: ""
	I0912 23:04:09.129396   62386 logs.go:276] 0 containers: []
	W0912 23:04:09.129405   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:09.129416   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:09.129430   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:09.210628   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:09.210663   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:09.249058   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:09.249086   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:09.301317   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:09.301346   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:09.314691   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:09.314720   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 23:04:07.974160   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:10.473970   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:08.877389   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:11.376421   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:08.733271   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:10.737700   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	W0912 23:04:09.379506   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
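	Every describe-nodes attempt in this run fails with "The connection to the server localhost:8443 was refused", which is consistent with the empty kube-apiserver results from crictl: no apiserver container is running, so nothing is listening on the apiserver port. The node-side checks below are a hypothetical diagnostic sketch (standard ss/crictl/journalctl invocations, not commands the harness itself runs).

	    # Is anything serving the apiserver port?
	    sudo ss -ltnp | grep 8443 || echo "nothing listening on 8443"

	    # Does the runtime know about an apiserver container at all (running or exited)?
	    sudo crictl ps -a --name kube-apiserver

	    # Check kubelet's handling of the static apiserver pod in its journal
	    sudo journalctl -u kubelet --since "10 min ago" | grep -iE "apiserver|static pod"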
	I0912 23:04:11.879682   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:11.892758   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:11.892816   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:11.929514   62386 cri.go:89] found id: ""
	I0912 23:04:11.929560   62386 logs.go:276] 0 containers: []
	W0912 23:04:11.929572   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:11.929580   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:11.929663   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:11.972066   62386 cri.go:89] found id: ""
	I0912 23:04:11.972091   62386 logs.go:276] 0 containers: []
	W0912 23:04:11.972099   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:11.972104   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:11.972153   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:12.005454   62386 cri.go:89] found id: ""
	I0912 23:04:12.005483   62386 logs.go:276] 0 containers: []
	W0912 23:04:12.005493   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:12.005500   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:12.005573   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:12.042189   62386 cri.go:89] found id: ""
	I0912 23:04:12.042221   62386 logs.go:276] 0 containers: []
	W0912 23:04:12.042232   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:12.042239   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:12.042292   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:12.077239   62386 cri.go:89] found id: ""
	I0912 23:04:12.077268   62386 logs.go:276] 0 containers: []
	W0912 23:04:12.077276   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:12.077282   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:12.077340   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:12.112573   62386 cri.go:89] found id: ""
	I0912 23:04:12.112602   62386 logs.go:276] 0 containers: []
	W0912 23:04:12.112610   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:12.112616   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:12.112661   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:12.147124   62386 cri.go:89] found id: ""
	I0912 23:04:12.147149   62386 logs.go:276] 0 containers: []
	W0912 23:04:12.147157   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:12.147163   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:12.147224   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:12.182051   62386 cri.go:89] found id: ""
	I0912 23:04:12.182074   62386 logs.go:276] 0 containers: []
	W0912 23:04:12.182082   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:12.182090   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:12.182103   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:12.238070   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:12.238103   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:12.250913   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:12.250937   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:12.315420   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:12.315448   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:12.315465   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:12.397338   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:12.397379   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:12.974531   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:15.479539   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:13.377855   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:15.379901   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:13.233099   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:15.234506   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:14.936982   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:14.949955   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:14.950019   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:14.993284   62386 cri.go:89] found id: ""
	I0912 23:04:14.993317   62386 logs.go:276] 0 containers: []
	W0912 23:04:14.993327   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:14.993356   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:14.993421   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:15.028310   62386 cri.go:89] found id: ""
	I0912 23:04:15.028338   62386 logs.go:276] 0 containers: []
	W0912 23:04:15.028347   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:15.028352   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:15.028424   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:15.064436   62386 cri.go:89] found id: ""
	I0912 23:04:15.064472   62386 logs.go:276] 0 containers: []
	W0912 23:04:15.064482   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:15.064490   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:15.064552   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:15.101547   62386 cri.go:89] found id: ""
	I0912 23:04:15.101578   62386 logs.go:276] 0 containers: []
	W0912 23:04:15.101587   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:15.101595   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:15.101672   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:15.137534   62386 cri.go:89] found id: ""
	I0912 23:04:15.137559   62386 logs.go:276] 0 containers: []
	W0912 23:04:15.137567   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:15.137575   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:15.137670   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:15.172549   62386 cri.go:89] found id: ""
	I0912 23:04:15.172581   62386 logs.go:276] 0 containers: []
	W0912 23:04:15.172593   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:15.172601   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:15.172661   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:15.207894   62386 cri.go:89] found id: ""
	I0912 23:04:15.207921   62386 logs.go:276] 0 containers: []
	W0912 23:04:15.207931   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:15.207939   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:15.207998   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:15.243684   62386 cri.go:89] found id: ""
	I0912 23:04:15.243713   62386 logs.go:276] 0 containers: []
	W0912 23:04:15.243724   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:15.243733   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:15.243744   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:15.297907   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:15.297948   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:15.312119   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:15.312151   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:15.375781   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:15.375815   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:15.375830   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:15.455792   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:15.455853   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:17.996749   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:18.009868   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:18.009927   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:18.048233   62386 cri.go:89] found id: ""
	I0912 23:04:18.048262   62386 logs.go:276] 0 containers: []
	W0912 23:04:18.048273   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:18.048280   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:18.048340   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:18.082525   62386 cri.go:89] found id: ""
	I0912 23:04:18.082554   62386 logs.go:276] 0 containers: []
	W0912 23:04:18.082565   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:18.082572   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:18.082634   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:18.117691   62386 cri.go:89] found id: ""
	I0912 23:04:18.117721   62386 logs.go:276] 0 containers: []
	W0912 23:04:18.117731   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:18.117738   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:18.117799   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:18.151975   62386 cri.go:89] found id: ""
	I0912 23:04:18.152004   62386 logs.go:276] 0 containers: []
	W0912 23:04:18.152013   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:18.152019   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:18.152073   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:18.187028   62386 cri.go:89] found id: ""
	I0912 23:04:18.187058   62386 logs.go:276] 0 containers: []
	W0912 23:04:18.187069   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:18.187075   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:18.187127   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:18.221292   62386 cri.go:89] found id: ""
	I0912 23:04:18.221324   62386 logs.go:276] 0 containers: []
	W0912 23:04:18.221331   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:18.221337   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:18.221383   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:18.255445   62386 cri.go:89] found id: ""
	I0912 23:04:18.255471   62386 logs.go:276] 0 containers: []
	W0912 23:04:18.255479   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:18.255484   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:18.255533   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:18.289977   62386 cri.go:89] found id: ""
	I0912 23:04:18.290008   62386 logs.go:276] 0 containers: []
	W0912 23:04:18.290019   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:18.290030   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:18.290045   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:18.303351   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:18.303380   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:18.371085   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:18.371114   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:18.371128   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:18.448748   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:18.448791   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:18.490580   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:18.490605   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:17.973604   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:20.473541   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:17.878221   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:20.377651   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:17.733784   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:19.734292   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:22.232832   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:21.043479   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:21.056774   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:21.056834   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:21.089410   62386 cri.go:89] found id: ""
	I0912 23:04:21.089435   62386 logs.go:276] 0 containers: []
	W0912 23:04:21.089449   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:21.089460   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:21.089534   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:21.122922   62386 cri.go:89] found id: ""
	I0912 23:04:21.122954   62386 logs.go:276] 0 containers: []
	W0912 23:04:21.122964   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:21.122971   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:21.123025   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:21.157877   62386 cri.go:89] found id: ""
	I0912 23:04:21.157900   62386 logs.go:276] 0 containers: []
	W0912 23:04:21.157908   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:21.157914   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:21.157959   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:21.190953   62386 cri.go:89] found id: ""
	I0912 23:04:21.190983   62386 logs.go:276] 0 containers: []
	W0912 23:04:21.190994   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:21.191001   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:21.191050   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:21.225211   62386 cri.go:89] found id: ""
	I0912 23:04:21.225241   62386 logs.go:276] 0 containers: []
	W0912 23:04:21.225253   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:21.225260   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:21.225325   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:21.262459   62386 cri.go:89] found id: ""
	I0912 23:04:21.262486   62386 logs.go:276] 0 containers: []
	W0912 23:04:21.262497   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:21.262504   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:21.262578   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:21.296646   62386 cri.go:89] found id: ""
	I0912 23:04:21.296672   62386 logs.go:276] 0 containers: []
	W0912 23:04:21.296682   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:21.296687   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:21.296734   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:21.329911   62386 cri.go:89] found id: ""
	I0912 23:04:21.329933   62386 logs.go:276] 0 containers: []
	W0912 23:04:21.329939   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:21.329947   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:21.329958   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:21.371014   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:21.371043   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:21.419638   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:21.419671   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:21.433502   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:21.433533   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:21.502764   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:21.502787   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:21.502800   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:24.079800   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:24.094021   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:24.094099   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:24.128807   62386 cri.go:89] found id: ""
	I0912 23:04:24.128832   62386 logs.go:276] 0 containers: []
	W0912 23:04:24.128844   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:24.128851   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:24.128915   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:24.166381   62386 cri.go:89] found id: ""
	I0912 23:04:24.166409   62386 logs.go:276] 0 containers: []
	W0912 23:04:24.166416   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:24.166425   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:24.166481   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:24.202656   62386 cri.go:89] found id: ""
	I0912 23:04:24.202684   62386 logs.go:276] 0 containers: []
	W0912 23:04:24.202692   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:24.202699   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:24.202755   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:24.241177   62386 cri.go:89] found id: ""
	I0912 23:04:24.241204   62386 logs.go:276] 0 containers: []
	W0912 23:04:24.241212   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:24.241218   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:24.241274   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:24.278768   62386 cri.go:89] found id: ""
	I0912 23:04:24.278796   62386 logs.go:276] 0 containers: []
	W0912 23:04:24.278806   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:24.278813   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:24.278881   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:24.314429   62386 cri.go:89] found id: ""
	I0912 23:04:24.314456   62386 logs.go:276] 0 containers: []
	W0912 23:04:24.314466   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:24.314474   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:24.314540   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:22.972334   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:24.974435   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:22.877248   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:25.376758   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:24.233814   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:26.733537   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:24.352300   62386 cri.go:89] found id: ""
	I0912 23:04:24.352344   62386 logs.go:276] 0 containers: []
	W0912 23:04:24.352352   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:24.352357   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:24.352415   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:24.387465   62386 cri.go:89] found id: ""
	I0912 23:04:24.387496   62386 logs.go:276] 0 containers: []
	W0912 23:04:24.387503   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:24.387513   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:24.387526   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:24.437029   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:24.437061   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:24.450519   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:24.450555   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:24.516538   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:24.516566   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:24.516583   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:24.594321   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:24.594358   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:27.129976   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:27.142237   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:27.142293   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:27.173687   62386 cri.go:89] found id: ""
	I0912 23:04:27.173709   62386 logs.go:276] 0 containers: []
	W0912 23:04:27.173716   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:27.173721   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:27.173778   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:27.206078   62386 cri.go:89] found id: ""
	I0912 23:04:27.206099   62386 logs.go:276] 0 containers: []
	W0912 23:04:27.206107   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:27.206112   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:27.206156   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:27.238770   62386 cri.go:89] found id: ""
	I0912 23:04:27.238795   62386 logs.go:276] 0 containers: []
	W0912 23:04:27.238803   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:27.238808   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:27.238855   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:27.271230   62386 cri.go:89] found id: ""
	I0912 23:04:27.271262   62386 logs.go:276] 0 containers: []
	W0912 23:04:27.271273   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:27.271281   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:27.271351   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:27.304232   62386 cri.go:89] found id: ""
	I0912 23:04:27.304261   62386 logs.go:276] 0 containers: []
	W0912 23:04:27.304271   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:27.304278   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:27.304345   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:27.337542   62386 cri.go:89] found id: ""
	I0912 23:04:27.337571   62386 logs.go:276] 0 containers: []
	W0912 23:04:27.337586   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:27.337595   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:27.337668   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:27.369971   62386 cri.go:89] found id: ""
	I0912 23:04:27.369997   62386 logs.go:276] 0 containers: []
	W0912 23:04:27.370005   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:27.370012   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:27.370072   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:27.406844   62386 cri.go:89] found id: ""
	I0912 23:04:27.406868   62386 logs.go:276] 0 containers: []
	W0912 23:04:27.406875   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:27.406883   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:27.406894   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:27.493489   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:27.493524   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:27.530448   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:27.530481   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:27.585706   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:27.585744   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:27.599144   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:27.599177   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:27.672585   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:27.473942   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:29.474058   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:27.376867   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:29.377474   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:31.877233   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:29.234068   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:31.733528   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:30.173309   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:30.187957   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:30.188037   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:30.226373   62386 cri.go:89] found id: ""
	I0912 23:04:30.226400   62386 logs.go:276] 0 containers: []
	W0912 23:04:30.226407   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:30.226412   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:30.226469   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:30.257956   62386 cri.go:89] found id: ""
	I0912 23:04:30.257988   62386 logs.go:276] 0 containers: []
	W0912 23:04:30.257997   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:30.258002   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:30.258053   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:30.291091   62386 cri.go:89] found id: ""
	I0912 23:04:30.291119   62386 logs.go:276] 0 containers: []
	W0912 23:04:30.291127   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:30.291132   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:30.291181   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:30.323564   62386 cri.go:89] found id: ""
	I0912 23:04:30.323589   62386 logs.go:276] 0 containers: []
	W0912 23:04:30.323597   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:30.323603   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:30.323652   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:30.361971   62386 cri.go:89] found id: ""
	I0912 23:04:30.361996   62386 logs.go:276] 0 containers: []
	W0912 23:04:30.362005   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:30.362014   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:30.362081   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:30.396952   62386 cri.go:89] found id: ""
	I0912 23:04:30.396986   62386 logs.go:276] 0 containers: []
	W0912 23:04:30.396996   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:30.397001   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:30.397052   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:30.453785   62386 cri.go:89] found id: ""
	I0912 23:04:30.453812   62386 logs.go:276] 0 containers: []
	W0912 23:04:30.453820   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:30.453825   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:30.453870   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:30.494072   62386 cri.go:89] found id: ""
	I0912 23:04:30.494099   62386 logs.go:276] 0 containers: []
	W0912 23:04:30.494108   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:30.494115   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:30.494133   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:30.543153   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:30.543187   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:30.556204   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:30.556242   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:30.630856   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:30.630885   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:30.630902   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:30.710205   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:30.710239   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:33.248218   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:33.261421   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:33.261504   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:33.295691   62386 cri.go:89] found id: ""
	I0912 23:04:33.295718   62386 logs.go:276] 0 containers: []
	W0912 23:04:33.295729   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:33.295736   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:33.295796   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:33.328578   62386 cri.go:89] found id: ""
	I0912 23:04:33.328607   62386 logs.go:276] 0 containers: []
	W0912 23:04:33.328618   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:33.328626   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:33.328743   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:33.367991   62386 cri.go:89] found id: ""
	I0912 23:04:33.368018   62386 logs.go:276] 0 containers: []
	W0912 23:04:33.368034   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:33.368041   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:33.368101   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:33.402537   62386 cri.go:89] found id: ""
	I0912 23:04:33.402566   62386 logs.go:276] 0 containers: []
	W0912 23:04:33.402578   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:33.402588   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:33.402649   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:33.437175   62386 cri.go:89] found id: ""
	I0912 23:04:33.437199   62386 logs.go:276] 0 containers: []
	W0912 23:04:33.437206   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:33.437216   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:33.437275   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:33.475108   62386 cri.go:89] found id: ""
	I0912 23:04:33.475134   62386 logs.go:276] 0 containers: []
	W0912 23:04:33.475144   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:33.475151   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:33.475202   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:33.508612   62386 cri.go:89] found id: ""
	I0912 23:04:33.508649   62386 logs.go:276] 0 containers: []
	W0912 23:04:33.508659   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:33.508664   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:33.508713   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:33.543351   62386 cri.go:89] found id: ""
	I0912 23:04:33.543380   62386 logs.go:276] 0 containers: []
	W0912 23:04:33.543387   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:33.543395   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:33.543406   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:33.595649   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:33.595688   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:33.609181   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:33.609210   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:33.686761   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:33.686782   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:33.686796   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:33.767443   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:33.767478   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:31.474444   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:33.474510   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:34.376900   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:36.377015   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:33.734282   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:36.233730   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:36.310374   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:36.324182   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:36.324260   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:36.359642   62386 cri.go:89] found id: ""
	I0912 23:04:36.359670   62386 logs.go:276] 0 containers: []
	W0912 23:04:36.359677   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:36.359684   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:36.359744   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:36.392841   62386 cri.go:89] found id: ""
	I0912 23:04:36.392865   62386 logs.go:276] 0 containers: []
	W0912 23:04:36.392874   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:36.392887   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:36.392951   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:36.430323   62386 cri.go:89] found id: ""
	I0912 23:04:36.430354   62386 logs.go:276] 0 containers: []
	W0912 23:04:36.430365   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:36.430373   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:36.430436   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:36.466712   62386 cri.go:89] found id: ""
	I0912 23:04:36.466737   62386 logs.go:276] 0 containers: []
	W0912 23:04:36.466745   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:36.466750   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:36.466808   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:36.502506   62386 cri.go:89] found id: ""
	I0912 23:04:36.502537   62386 logs.go:276] 0 containers: []
	W0912 23:04:36.502548   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:36.502555   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:36.502624   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:36.536530   62386 cri.go:89] found id: ""
	I0912 23:04:36.536559   62386 logs.go:276] 0 containers: []
	W0912 23:04:36.536569   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:36.536577   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:36.536648   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:36.570519   62386 cri.go:89] found id: ""
	I0912 23:04:36.570555   62386 logs.go:276] 0 containers: []
	W0912 23:04:36.570565   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:36.570573   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:36.570631   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:36.606107   62386 cri.go:89] found id: ""
	I0912 23:04:36.606136   62386 logs.go:276] 0 containers: []
	W0912 23:04:36.606146   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:36.606157   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:36.606171   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:36.643105   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:36.643138   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:36.690911   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:36.690944   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:36.703970   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:36.703998   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:36.776158   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:36.776183   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:36.776199   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:35.973095   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:37.974153   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:40.473010   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:38.377221   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:40.877439   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:38.732826   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:40.734523   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:39.362032   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:39.375991   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:39.376090   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:39.412497   62386 cri.go:89] found id: ""
	I0912 23:04:39.412521   62386 logs.go:276] 0 containers: []
	W0912 23:04:39.412528   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:39.412534   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:39.412595   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:39.447783   62386 cri.go:89] found id: ""
	I0912 23:04:39.447807   62386 logs.go:276] 0 containers: []
	W0912 23:04:39.447815   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:39.447820   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:39.447886   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:39.483099   62386 cri.go:89] found id: ""
	I0912 23:04:39.483128   62386 logs.go:276] 0 containers: []
	W0912 23:04:39.483135   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:39.483143   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:39.483193   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:39.514898   62386 cri.go:89] found id: ""
	I0912 23:04:39.514932   62386 logs.go:276] 0 containers: []
	W0912 23:04:39.514941   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:39.514952   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:39.515033   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:39.546882   62386 cri.go:89] found id: ""
	I0912 23:04:39.546910   62386 logs.go:276] 0 containers: []
	W0912 23:04:39.546920   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:39.546927   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:39.546990   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:39.577899   62386 cri.go:89] found id: ""
	I0912 23:04:39.577929   62386 logs.go:276] 0 containers: []
	W0912 23:04:39.577939   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:39.577947   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:39.578006   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:39.613419   62386 cri.go:89] found id: ""
	I0912 23:04:39.613446   62386 logs.go:276] 0 containers: []
	W0912 23:04:39.613455   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:39.613461   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:39.613510   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:39.647661   62386 cri.go:89] found id: ""
	I0912 23:04:39.647694   62386 logs.go:276] 0 containers: []
	W0912 23:04:39.647708   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:39.647719   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:39.647733   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:39.696155   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:39.696190   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:39.709312   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:39.709342   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:39.778941   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:39.778968   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:39.778985   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:39.855991   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:39.856028   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:42.395179   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:42.408317   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:42.408449   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:42.441443   62386 cri.go:89] found id: ""
	I0912 23:04:42.441472   62386 logs.go:276] 0 containers: []
	W0912 23:04:42.441482   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:42.441489   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:42.441550   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:42.480655   62386 cri.go:89] found id: ""
	I0912 23:04:42.480678   62386 logs.go:276] 0 containers: []
	W0912 23:04:42.480685   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:42.480690   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:42.480734   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:42.513323   62386 cri.go:89] found id: ""
	I0912 23:04:42.513346   62386 logs.go:276] 0 containers: []
	W0912 23:04:42.513353   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:42.513359   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:42.513405   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:42.545696   62386 cri.go:89] found id: ""
	I0912 23:04:42.545715   62386 logs.go:276] 0 containers: []
	W0912 23:04:42.545723   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:42.545728   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:42.545775   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:42.584950   62386 cri.go:89] found id: ""
	I0912 23:04:42.584981   62386 logs.go:276] 0 containers: []
	W0912 23:04:42.584992   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:42.584999   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:42.585057   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:42.618434   62386 cri.go:89] found id: ""
	I0912 23:04:42.618468   62386 logs.go:276] 0 containers: []
	W0912 23:04:42.618481   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:42.618489   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:42.618557   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:42.665017   62386 cri.go:89] found id: ""
	I0912 23:04:42.665045   62386 logs.go:276] 0 containers: []
	W0912 23:04:42.665056   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:42.665064   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:42.665125   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:42.724365   62386 cri.go:89] found id: ""
	I0912 23:04:42.724389   62386 logs.go:276] 0 containers: []
	W0912 23:04:42.724399   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:42.724409   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:42.724422   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:42.762643   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:42.762671   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:42.815374   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:42.815417   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:42.829340   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:42.829376   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:42.901659   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:42.901690   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:42.901706   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
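
	The 62386 lines above repeat one diagnostics cycle: probe for a running kube-apiserver with pgrep, enumerate each expected control-plane container through crictl, and, when every listing comes back empty, fall back to gathering kubelet, dmesg, describe-nodes and CRI-O logs. A minimal shell sketch of that probe follows, built only from the commands visible in the log; the retry interval and output file names are illustrative assumptions, not minikube's implementation.

	    #!/bin/bash
	    # Hedged sketch of the probe cycle seen in the log above; not minikube's actual source.
	    components="kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard"
	    while true; do
	      if sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; then
	        echo "kube-apiserver process found"; break
	      fi
	      for c in $components; do
	        # Empty output here corresponds to the repeated 'found id: ""' lines.
	        ids=$(sudo crictl ps -a --quiet --name="$c")
	        [ -z "$ids" ] && echo "No container was found matching \"$c\""
	      done
	      # Same fallback log sources the cycle collects when no containers exist.
	      sudo journalctl -u kubelet -n 400 > kubelet.log
	      sudo journalctl -u crio -n 400 > crio.log
	      sleep 3   # interval is an assumption; the log shows roughly three seconds between cycles
	    done
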
	I0912 23:04:42.475194   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:44.973902   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:43.376849   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:45.378144   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:42.734908   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:45.234296   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
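
	The interleaved 61904, 62943 and 61354 lines come from other test profiles polling whether their metrics-server pod has reached the Ready condition. A hand-run equivalent of that check with plain kubectl is sketched below; the pod name and namespace are taken from the log, while the context name is a placeholder and the jsonpath query is an assumption about how to reproduce the check manually, not minikube's pod_ready.go code.

	    # Prints "False" while the pod is not Ready, matching the "Ready":"False" lines above.
	    kubectl --context <profile> -n kube-system get pod metrics-server-6867b74b74-kvpqz \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
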
	I0912 23:04:45.490536   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:45.504127   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:45.504191   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:45.537415   62386 cri.go:89] found id: ""
	I0912 23:04:45.537447   62386 logs.go:276] 0 containers: []
	W0912 23:04:45.537457   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:45.537464   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:45.537527   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:45.571342   62386 cri.go:89] found id: ""
	I0912 23:04:45.571384   62386 logs.go:276] 0 containers: []
	W0912 23:04:45.571404   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:45.571412   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:45.571471   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:45.608965   62386 cri.go:89] found id: ""
	I0912 23:04:45.608989   62386 logs.go:276] 0 containers: []
	W0912 23:04:45.608997   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:45.609002   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:45.609052   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:45.644770   62386 cri.go:89] found id: ""
	I0912 23:04:45.644798   62386 logs.go:276] 0 containers: []
	W0912 23:04:45.644806   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:45.644812   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:45.644859   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:45.678422   62386 cri.go:89] found id: ""
	I0912 23:04:45.678448   62386 logs.go:276] 0 containers: []
	W0912 23:04:45.678456   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:45.678462   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:45.678508   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:45.713808   62386 cri.go:89] found id: ""
	I0912 23:04:45.713831   62386 logs.go:276] 0 containers: []
	W0912 23:04:45.713838   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:45.713844   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:45.713891   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:45.747056   62386 cri.go:89] found id: ""
	I0912 23:04:45.747084   62386 logs.go:276] 0 containers: []
	W0912 23:04:45.747092   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:45.747097   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:45.747149   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:45.779787   62386 cri.go:89] found id: ""
	I0912 23:04:45.779809   62386 logs.go:276] 0 containers: []
	W0912 23:04:45.779817   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:45.779824   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:45.779835   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:45.833204   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:45.833239   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:45.846131   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:45.846159   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:45.923415   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:45.923435   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:45.923446   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:46.003597   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:46.003637   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:48.545043   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:48.560025   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:48.560085   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:48.599916   62386 cri.go:89] found id: ""
	I0912 23:04:48.599950   62386 logs.go:276] 0 containers: []
	W0912 23:04:48.599961   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:48.599969   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:48.600027   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:48.648909   62386 cri.go:89] found id: ""
	I0912 23:04:48.648938   62386 logs.go:276] 0 containers: []
	W0912 23:04:48.648946   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:48.648952   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:48.649010   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:48.693019   62386 cri.go:89] found id: ""
	I0912 23:04:48.693046   62386 logs.go:276] 0 containers: []
	W0912 23:04:48.693062   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:48.693081   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:48.693141   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:48.725778   62386 cri.go:89] found id: ""
	I0912 23:04:48.725811   62386 logs.go:276] 0 containers: []
	W0912 23:04:48.725822   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:48.725830   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:48.725891   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:48.760270   62386 cri.go:89] found id: ""
	I0912 23:04:48.760299   62386 logs.go:276] 0 containers: []
	W0912 23:04:48.760311   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:48.760318   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:48.760379   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:48.797235   62386 cri.go:89] found id: ""
	I0912 23:04:48.797264   62386 logs.go:276] 0 containers: []
	W0912 23:04:48.797275   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:48.797282   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:48.797348   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:48.834039   62386 cri.go:89] found id: ""
	I0912 23:04:48.834081   62386 logs.go:276] 0 containers: []
	W0912 23:04:48.834093   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:48.834100   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:48.834162   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:48.866681   62386 cri.go:89] found id: ""
	I0912 23:04:48.866704   62386 logs.go:276] 0 containers: []
	W0912 23:04:48.866712   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:48.866720   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:48.866731   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:48.917954   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:48.917999   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:48.931554   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:48.931582   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:49.008086   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:49.008115   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:49.008132   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:49.088699   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:49.088736   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:46.974115   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:49.475562   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:47.876644   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:49.877976   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:47.733587   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:50.232852   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:51.628564   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:51.643343   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:51.643445   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:51.680788   62386 cri.go:89] found id: ""
	I0912 23:04:51.680811   62386 logs.go:276] 0 containers: []
	W0912 23:04:51.680818   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:51.680824   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:51.680873   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:51.719793   62386 cri.go:89] found id: ""
	I0912 23:04:51.719822   62386 logs.go:276] 0 containers: []
	W0912 23:04:51.719835   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:51.719843   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:51.719909   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:51.756766   62386 cri.go:89] found id: ""
	I0912 23:04:51.756795   62386 logs.go:276] 0 containers: []
	W0912 23:04:51.756802   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:51.756808   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:51.756857   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:51.797758   62386 cri.go:89] found id: ""
	I0912 23:04:51.797781   62386 logs.go:276] 0 containers: []
	W0912 23:04:51.797789   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:51.797794   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:51.797844   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:51.830790   62386 cri.go:89] found id: ""
	I0912 23:04:51.830820   62386 logs.go:276] 0 containers: []
	W0912 23:04:51.830830   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:51.830837   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:51.830899   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:51.866782   62386 cri.go:89] found id: ""
	I0912 23:04:51.866806   62386 logs.go:276] 0 containers: []
	W0912 23:04:51.866813   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:51.866819   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:51.866874   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:51.902223   62386 cri.go:89] found id: ""
	I0912 23:04:51.902248   62386 logs.go:276] 0 containers: []
	W0912 23:04:51.902276   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:51.902284   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:51.902345   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:51.937029   62386 cri.go:89] found id: ""
	I0912 23:04:51.937057   62386 logs.go:276] 0 containers: []
	W0912 23:04:51.937064   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:51.937073   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:51.937084   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:51.987691   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:51.987727   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:52.001042   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:52.001067   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:52.076285   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:52.076305   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:52.076316   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:52.156087   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:52.156127   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:51.973991   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:53.974657   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:52.377379   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:54.877566   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:56.878413   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:52.734348   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:55.233890   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:54.692355   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:54.705180   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:54.705258   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:54.736125   62386 cri.go:89] found id: ""
	I0912 23:04:54.736150   62386 logs.go:276] 0 containers: []
	W0912 23:04:54.736158   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:54.736164   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:54.736216   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:54.768743   62386 cri.go:89] found id: ""
	I0912 23:04:54.768769   62386 logs.go:276] 0 containers: []
	W0912 23:04:54.768776   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:54.768781   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:54.768827   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:54.802867   62386 cri.go:89] found id: ""
	I0912 23:04:54.802894   62386 logs.go:276] 0 containers: []
	W0912 23:04:54.802902   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:54.802908   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:54.802959   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:54.836774   62386 cri.go:89] found id: ""
	I0912 23:04:54.836800   62386 logs.go:276] 0 containers: []
	W0912 23:04:54.836808   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:54.836813   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:54.836870   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:54.870694   62386 cri.go:89] found id: ""
	I0912 23:04:54.870716   62386 logs.go:276] 0 containers: []
	W0912 23:04:54.870724   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:54.870730   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:54.870785   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:54.903969   62386 cri.go:89] found id: ""
	I0912 23:04:54.904002   62386 logs.go:276] 0 containers: []
	W0912 23:04:54.904012   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:54.904020   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:54.904070   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:54.937720   62386 cri.go:89] found id: ""
	I0912 23:04:54.937744   62386 logs.go:276] 0 containers: []
	W0912 23:04:54.937751   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:54.937756   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:54.937802   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:54.971370   62386 cri.go:89] found id: ""
	I0912 23:04:54.971397   62386 logs.go:276] 0 containers: []
	W0912 23:04:54.971413   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:54.971427   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:54.971441   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:55.021066   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:55.021101   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:55.034026   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:55.034056   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:55.116939   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:55.116966   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:55.116983   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:55.196410   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:55.196445   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:57.733985   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:57.747006   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:57.747068   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:57.784442   62386 cri.go:89] found id: ""
	I0912 23:04:57.784473   62386 logs.go:276] 0 containers: []
	W0912 23:04:57.784486   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:57.784500   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:57.784571   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:57.818314   62386 cri.go:89] found id: ""
	I0912 23:04:57.818341   62386 logs.go:276] 0 containers: []
	W0912 23:04:57.818352   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:57.818359   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:57.818420   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:57.852881   62386 cri.go:89] found id: ""
	I0912 23:04:57.852914   62386 logs.go:276] 0 containers: []
	W0912 23:04:57.852925   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:57.852932   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:57.852993   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:57.894454   62386 cri.go:89] found id: ""
	I0912 23:04:57.894479   62386 logs.go:276] 0 containers: []
	W0912 23:04:57.894487   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:57.894493   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:57.894540   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:57.930013   62386 cri.go:89] found id: ""
	I0912 23:04:57.930041   62386 logs.go:276] 0 containers: []
	W0912 23:04:57.930051   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:57.930059   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:57.930120   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:57.970535   62386 cri.go:89] found id: ""
	I0912 23:04:57.970697   62386 logs.go:276] 0 containers: []
	W0912 23:04:57.970751   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:57.970763   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:57.970829   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:58.008102   62386 cri.go:89] found id: ""
	I0912 23:04:58.008132   62386 logs.go:276] 0 containers: []
	W0912 23:04:58.008145   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:58.008151   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:58.008232   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:58.043507   62386 cri.go:89] found id: ""
	I0912 23:04:58.043541   62386 logs.go:276] 0 containers: []
	W0912 23:04:58.043552   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:58.043563   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:58.043577   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:58.127231   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:58.127291   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:58.164444   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:58.164476   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:58.212622   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:58.212658   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:58.227517   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:58.227546   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:58.291876   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
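
	Every "describe nodes" attempt in this stretch fails the same way: kubectl cannot reach the API server on localhost:8443, which is consistent with the empty kube-apiserver container listings above. The commands below are a hedged, standard-tooling way to confirm on the node that the port is simply not listening; they are not part of the minikube source.

	    # Is anything listening on the port the kubeconfig points at?
	    sudo ss -tlnp | grep ':8443' || echo "nothing listening on 8443"
	    # Roughly the probe kubectl makes; expect "Connection refused" while the apiserver is down.
	    curl -k --max-time 2 https://localhost:8443/healthz || true
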
	I0912 23:04:56.474801   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:58.973083   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:59.378702   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:01.876871   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:57.735810   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:00.234854   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:00.792084   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:00.804976   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:00.805046   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:00.837560   62386 cri.go:89] found id: ""
	I0912 23:05:00.837596   62386 logs.go:276] 0 containers: []
	W0912 23:05:00.837606   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:00.837629   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:00.837692   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:00.871503   62386 cri.go:89] found id: ""
	I0912 23:05:00.871526   62386 logs.go:276] 0 containers: []
	W0912 23:05:00.871534   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:00.871539   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:00.871594   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:00.909215   62386 cri.go:89] found id: ""
	I0912 23:05:00.909245   62386 logs.go:276] 0 containers: []
	W0912 23:05:00.909256   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:00.909263   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:00.909337   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:00.947935   62386 cri.go:89] found id: ""
	I0912 23:05:00.947961   62386 logs.go:276] 0 containers: []
	W0912 23:05:00.947972   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:00.947979   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:00.948043   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:00.989659   62386 cri.go:89] found id: ""
	I0912 23:05:00.989694   62386 logs.go:276] 0 containers: []
	W0912 23:05:00.989707   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:00.989717   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:00.989780   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:01.027073   62386 cri.go:89] found id: ""
	I0912 23:05:01.027103   62386 logs.go:276] 0 containers: []
	W0912 23:05:01.027114   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:01.027129   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:01.027187   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:01.063620   62386 cri.go:89] found id: ""
	I0912 23:05:01.063649   62386 logs.go:276] 0 containers: []
	W0912 23:05:01.063672   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:01.063681   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:01.063751   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:01.102398   62386 cri.go:89] found id: ""
	I0912 23:05:01.102428   62386 logs.go:276] 0 containers: []
	W0912 23:05:01.102438   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:01.102449   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:01.102463   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:01.115558   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:01.115585   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:01.190303   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:01.190324   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:01.190337   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:01.272564   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:01.272611   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:01.311954   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:01.311981   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:03.864507   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:03.878613   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:03.878713   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:03.911466   62386 cri.go:89] found id: ""
	I0912 23:05:03.911495   62386 logs.go:276] 0 containers: []
	W0912 23:05:03.911504   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:03.911513   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:03.911592   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:03.945150   62386 cri.go:89] found id: ""
	I0912 23:05:03.945175   62386 logs.go:276] 0 containers: []
	W0912 23:05:03.945188   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:03.945196   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:03.945256   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:03.984952   62386 cri.go:89] found id: ""
	I0912 23:05:03.984984   62386 logs.go:276] 0 containers: []
	W0912 23:05:03.984994   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:03.985001   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:03.985067   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:04.030708   62386 cri.go:89] found id: ""
	I0912 23:05:04.030732   62386 logs.go:276] 0 containers: []
	W0912 23:05:04.030740   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:04.030746   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:04.030798   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:04.072189   62386 cri.go:89] found id: ""
	I0912 23:05:04.072213   62386 logs.go:276] 0 containers: []
	W0912 23:05:04.072221   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:04.072227   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:04.072273   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:04.105068   62386 cri.go:89] found id: ""
	I0912 23:05:04.105100   62386 logs.go:276] 0 containers: []
	W0912 23:05:04.105108   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:04.105114   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:04.105175   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:04.139063   62386 cri.go:89] found id: ""
	I0912 23:05:04.139094   62386 logs.go:276] 0 containers: []
	W0912 23:05:04.139102   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:04.139109   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:04.139172   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:04.175559   62386 cri.go:89] found id: ""
	I0912 23:05:04.175589   62386 logs.go:276] 0 containers: []
	W0912 23:05:04.175599   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:04.175610   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:04.175626   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:04.252495   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:04.252541   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:04.292236   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:04.292263   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:00.974816   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:03.473566   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:05.474006   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:04.377506   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:06.378058   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:02.733379   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:04.734050   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:07.234892   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:04.347335   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:04.347377   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:04.360641   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:04.360678   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:04.431032   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:06.931904   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:06.946367   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:06.946445   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:06.985760   62386 cri.go:89] found id: ""
	I0912 23:05:06.985788   62386 logs.go:276] 0 containers: []
	W0912 23:05:06.985796   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:06.985802   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:06.985852   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:07.020076   62386 cri.go:89] found id: ""
	I0912 23:05:07.020106   62386 logs.go:276] 0 containers: []
	W0912 23:05:07.020115   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:07.020120   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:07.020165   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:07.056374   62386 cri.go:89] found id: ""
	I0912 23:05:07.056408   62386 logs.go:276] 0 containers: []
	W0912 23:05:07.056417   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:07.056423   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:07.056479   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:07.091022   62386 cri.go:89] found id: ""
	I0912 23:05:07.091049   62386 logs.go:276] 0 containers: []
	W0912 23:05:07.091059   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:07.091067   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:07.091133   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:07.131604   62386 cri.go:89] found id: ""
	I0912 23:05:07.131631   62386 logs.go:276] 0 containers: []
	W0912 23:05:07.131641   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:07.131648   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:07.131708   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:07.164548   62386 cri.go:89] found id: ""
	I0912 23:05:07.164575   62386 logs.go:276] 0 containers: []
	W0912 23:05:07.164586   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:07.164593   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:07.164655   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:07.199147   62386 cri.go:89] found id: ""
	I0912 23:05:07.199169   62386 logs.go:276] 0 containers: []
	W0912 23:05:07.199176   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:07.199182   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:07.199245   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:07.231727   62386 cri.go:89] found id: ""
	I0912 23:05:07.231762   62386 logs.go:276] 0 containers: []
	W0912 23:05:07.231773   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:07.231788   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:07.231802   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:07.285773   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:07.285809   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:07.299926   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:07.299958   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:07.378838   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:07.378862   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:07.378876   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:07.459903   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:07.459939   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:07.475025   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:09.973692   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:08.877117   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:11.377274   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:09.732632   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:11.734119   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:09.999598   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:10.012258   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:10.012328   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:10.047975   62386 cri.go:89] found id: ""
	I0912 23:05:10.048002   62386 logs.go:276] 0 containers: []
	W0912 23:05:10.048011   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:10.048018   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:10.048074   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:10.081827   62386 cri.go:89] found id: ""
	I0912 23:05:10.081856   62386 logs.go:276] 0 containers: []
	W0912 23:05:10.081866   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:10.081872   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:10.081942   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:10.115594   62386 cri.go:89] found id: ""
	I0912 23:05:10.115625   62386 logs.go:276] 0 containers: []
	W0912 23:05:10.115635   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:10.115642   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:10.115692   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:10.147412   62386 cri.go:89] found id: ""
	I0912 23:05:10.147442   62386 logs.go:276] 0 containers: []
	W0912 23:05:10.147452   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:10.147460   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:10.147516   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:10.181118   62386 cri.go:89] found id: ""
	I0912 23:05:10.181147   62386 logs.go:276] 0 containers: []
	W0912 23:05:10.181157   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:10.181164   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:10.181228   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:10.214240   62386 cri.go:89] found id: ""
	I0912 23:05:10.214267   62386 logs.go:276] 0 containers: []
	W0912 23:05:10.214277   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:10.214284   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:10.214352   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:10.248497   62386 cri.go:89] found id: ""
	I0912 23:05:10.248522   62386 logs.go:276] 0 containers: []
	W0912 23:05:10.248530   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:10.248543   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:10.248610   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:10.280864   62386 cri.go:89] found id: ""
	I0912 23:05:10.280892   62386 logs.go:276] 0 containers: []
	W0912 23:05:10.280902   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:10.280913   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:10.280927   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:10.318517   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:10.318542   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:10.370087   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:10.370123   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:10.385213   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:10.385247   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:10.448226   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:10.448246   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:10.448257   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:13.027828   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:13.040546   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:13.040620   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:13.073501   62386 cri.go:89] found id: ""
	I0912 23:05:13.073525   62386 logs.go:276] 0 containers: []
	W0912 23:05:13.073533   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:13.073538   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:13.073584   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:13.105790   62386 cri.go:89] found id: ""
	I0912 23:05:13.105819   62386 logs.go:276] 0 containers: []
	W0912 23:05:13.105830   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:13.105836   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:13.105898   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:13.139307   62386 cri.go:89] found id: ""
	I0912 23:05:13.139331   62386 logs.go:276] 0 containers: []
	W0912 23:05:13.139338   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:13.139344   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:13.139403   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:13.171019   62386 cri.go:89] found id: ""
	I0912 23:05:13.171044   62386 logs.go:276] 0 containers: []
	W0912 23:05:13.171053   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:13.171060   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:13.171119   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:13.202372   62386 cri.go:89] found id: ""
	I0912 23:05:13.202412   62386 logs.go:276] 0 containers: []
	W0912 23:05:13.202423   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:13.202431   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:13.202481   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:13.234046   62386 cri.go:89] found id: ""
	I0912 23:05:13.234069   62386 logs.go:276] 0 containers: []
	W0912 23:05:13.234076   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:13.234083   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:13.234138   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:13.265577   62386 cri.go:89] found id: ""
	I0912 23:05:13.265604   62386 logs.go:276] 0 containers: []
	W0912 23:05:13.265632   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:13.265641   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:13.265696   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:13.303462   62386 cri.go:89] found id: ""
	I0912 23:05:13.303489   62386 logs.go:276] 0 containers: []
	W0912 23:05:13.303499   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:13.303521   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:13.303536   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:13.378844   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:13.378867   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:13.378883   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:13.464768   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:13.464806   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:13.502736   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:13.502764   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:13.553473   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:13.553503   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:12.473027   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:14.973842   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:13.876334   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:15.877134   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:14.234722   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:16.734222   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:16.067463   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:16.081169   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:16.081269   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:16.115663   62386 cri.go:89] found id: ""
	I0912 23:05:16.115688   62386 logs.go:276] 0 containers: []
	W0912 23:05:16.115696   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:16.115705   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:16.115761   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:16.153429   62386 cri.go:89] found id: ""
	I0912 23:05:16.153460   62386 logs.go:276] 0 containers: []
	W0912 23:05:16.153469   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:16.153476   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:16.153535   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:16.187935   62386 cri.go:89] found id: ""
	I0912 23:05:16.187957   62386 logs.go:276] 0 containers: []
	W0912 23:05:16.187965   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:16.187971   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:16.188029   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:16.221249   62386 cri.go:89] found id: ""
	I0912 23:05:16.221273   62386 logs.go:276] 0 containers: []
	W0912 23:05:16.221281   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:16.221287   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:16.221336   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:16.256441   62386 cri.go:89] found id: ""
	I0912 23:05:16.256466   62386 logs.go:276] 0 containers: []
	W0912 23:05:16.256474   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:16.256479   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:16.256546   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:16.290930   62386 cri.go:89] found id: ""
	I0912 23:05:16.290963   62386 logs.go:276] 0 containers: []
	W0912 23:05:16.290976   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:16.290985   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:16.291039   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:16.326665   62386 cri.go:89] found id: ""
	I0912 23:05:16.326689   62386 logs.go:276] 0 containers: []
	W0912 23:05:16.326697   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:16.326702   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:16.326749   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:16.365418   62386 cri.go:89] found id: ""
	I0912 23:05:16.365441   62386 logs.go:276] 0 containers: []
	W0912 23:05:16.365448   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:16.365458   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:16.365469   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:16.420003   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:16.420039   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:16.434561   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:16.434595   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:16.505201   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:16.505224   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:16.505295   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:16.584877   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:16.584914   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:19.121479   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:19.134519   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:19.134586   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:19.170401   62386 cri.go:89] found id: ""
	I0912 23:05:19.170433   62386 logs.go:276] 0 containers: []
	W0912 23:05:19.170444   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:19.170455   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:19.170530   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:19.204750   62386 cri.go:89] found id: ""
	I0912 23:05:19.204779   62386 logs.go:276] 0 containers: []
	W0912 23:05:19.204790   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:19.204797   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:19.204862   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:19.243938   62386 cri.go:89] found id: ""
	I0912 23:05:19.243966   62386 logs.go:276] 0 containers: []
	W0912 23:05:19.243975   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:19.243983   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:19.244041   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:19.284424   62386 cri.go:89] found id: ""
	I0912 23:05:19.284453   62386 logs.go:276] 0 containers: []
	W0912 23:05:19.284463   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:19.284469   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:19.284535   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:19.318962   62386 cri.go:89] found id: ""
	I0912 23:05:19.318990   62386 logs.go:276] 0 containers: []
	W0912 23:05:19.319000   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:19.319011   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:19.319068   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:17.474175   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:19.474829   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:18.376670   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:20.876863   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:19.234144   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:21.734549   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:19.356456   62386 cri.go:89] found id: ""
	I0912 23:05:19.356487   62386 logs.go:276] 0 containers: []
	W0912 23:05:19.356498   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:19.356505   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:19.356587   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:19.390344   62386 cri.go:89] found id: ""
	I0912 23:05:19.390369   62386 logs.go:276] 0 containers: []
	W0912 23:05:19.390377   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:19.390382   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:19.390429   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:19.425481   62386 cri.go:89] found id: ""
	I0912 23:05:19.425507   62386 logs.go:276] 0 containers: []
	W0912 23:05:19.425528   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:19.425536   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:19.425553   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:19.482051   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:19.482081   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:19.495732   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:19.495758   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:19.565385   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:19.565411   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:19.565428   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:19.640053   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:19.640084   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:22.179292   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:22.191905   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:22.191979   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:22.231402   62386 cri.go:89] found id: ""
	I0912 23:05:22.231429   62386 logs.go:276] 0 containers: []
	W0912 23:05:22.231439   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:22.231446   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:22.231501   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:22.265310   62386 cri.go:89] found id: ""
	I0912 23:05:22.265343   62386 logs.go:276] 0 containers: []
	W0912 23:05:22.265351   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:22.265356   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:22.265425   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:22.297487   62386 cri.go:89] found id: ""
	I0912 23:05:22.297516   62386 logs.go:276] 0 containers: []
	W0912 23:05:22.297532   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:22.297540   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:22.297598   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:22.335344   62386 cri.go:89] found id: ""
	I0912 23:05:22.335374   62386 logs.go:276] 0 containers: []
	W0912 23:05:22.335384   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:22.335391   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:22.335449   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:22.376379   62386 cri.go:89] found id: ""
	I0912 23:05:22.376404   62386 logs.go:276] 0 containers: []
	W0912 23:05:22.376413   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:22.376421   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:22.376484   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:22.416121   62386 cri.go:89] found id: ""
	I0912 23:05:22.416147   62386 logs.go:276] 0 containers: []
	W0912 23:05:22.416154   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:22.416160   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:22.416217   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:22.475037   62386 cri.go:89] found id: ""
	I0912 23:05:22.475114   62386 logs.go:276] 0 containers: []
	W0912 23:05:22.475127   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:22.475143   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:22.475207   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:22.509756   62386 cri.go:89] found id: ""
	I0912 23:05:22.509784   62386 logs.go:276] 0 containers: []
	W0912 23:05:22.509794   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:22.509804   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:22.509823   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:22.559071   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:22.559112   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:22.571951   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:22.571980   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:22.643017   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:22.643034   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:22.643045   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:22.728074   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:22.728113   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:21.475126   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:23.975217   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:22.876979   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:24.877525   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:26.879248   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:24.235855   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:26.734384   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:25.268293   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:25.281825   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:25.281906   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:25.315282   62386 cri.go:89] found id: ""
	I0912 23:05:25.315318   62386 logs.go:276] 0 containers: []
	W0912 23:05:25.315328   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:25.315336   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:25.315385   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:25.348647   62386 cri.go:89] found id: ""
	I0912 23:05:25.348679   62386 logs.go:276] 0 containers: []
	W0912 23:05:25.348690   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:25.348697   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:25.348758   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:25.382266   62386 cri.go:89] found id: ""
	I0912 23:05:25.382294   62386 logs.go:276] 0 containers: []
	W0912 23:05:25.382304   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:25.382311   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:25.382378   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:25.420016   62386 cri.go:89] found id: ""
	I0912 23:05:25.420044   62386 logs.go:276] 0 containers: []
	W0912 23:05:25.420056   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:25.420063   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:25.420126   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:25.456435   62386 cri.go:89] found id: ""
	I0912 23:05:25.456457   62386 logs.go:276] 0 containers: []
	W0912 23:05:25.456465   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:25.456470   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:25.456539   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:25.491658   62386 cri.go:89] found id: ""
	I0912 23:05:25.491715   62386 logs.go:276] 0 containers: []
	W0912 23:05:25.491729   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:25.491737   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:25.491790   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:25.526948   62386 cri.go:89] found id: ""
	I0912 23:05:25.526980   62386 logs.go:276] 0 containers: []
	W0912 23:05:25.526991   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:25.526998   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:25.527064   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:25.560291   62386 cri.go:89] found id: ""
	I0912 23:05:25.560323   62386 logs.go:276] 0 containers: []
	W0912 23:05:25.560345   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:25.560357   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:25.560372   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:25.612232   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:25.612276   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:25.626991   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:25.627028   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:25.695005   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:25.695038   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:25.695055   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:25.784310   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:25.784345   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:28.331410   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:28.343903   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:28.343967   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:28.380946   62386 cri.go:89] found id: ""
	I0912 23:05:28.380973   62386 logs.go:276] 0 containers: []
	W0912 23:05:28.380979   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:28.380985   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:28.381039   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:28.415013   62386 cri.go:89] found id: ""
	I0912 23:05:28.415042   62386 logs.go:276] 0 containers: []
	W0912 23:05:28.415052   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:28.415059   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:28.415120   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:28.451060   62386 cri.go:89] found id: ""
	I0912 23:05:28.451093   62386 logs.go:276] 0 containers: []
	W0912 23:05:28.451105   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:28.451113   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:28.451171   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:28.485664   62386 cri.go:89] found id: ""
	I0912 23:05:28.485693   62386 logs.go:276] 0 containers: []
	W0912 23:05:28.485704   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:28.485712   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:28.485774   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:28.520307   62386 cri.go:89] found id: ""
	I0912 23:05:28.520338   62386 logs.go:276] 0 containers: []
	W0912 23:05:28.520349   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:28.520359   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:28.520417   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:28.553111   62386 cri.go:89] found id: ""
	I0912 23:05:28.553139   62386 logs.go:276] 0 containers: []
	W0912 23:05:28.553147   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:28.553152   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:28.553208   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:28.586778   62386 cri.go:89] found id: ""
	I0912 23:05:28.586808   62386 logs.go:276] 0 containers: []
	W0912 23:05:28.586816   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:28.586822   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:28.586874   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:28.620760   62386 cri.go:89] found id: ""
	I0912 23:05:28.620784   62386 logs.go:276] 0 containers: []
	W0912 23:05:28.620791   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:28.620799   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:28.620811   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:28.701431   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:28.701481   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:28.741398   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:28.741431   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:28.793431   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:28.793469   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:28.809572   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:28.809600   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:28.894914   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:26.473222   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:28.474342   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:29.377090   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:31.378238   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:29.234479   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:31.734265   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:31.395663   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:31.408079   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:31.408160   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:31.445176   62386 cri.go:89] found id: ""
	I0912 23:05:31.445207   62386 logs.go:276] 0 containers: []
	W0912 23:05:31.445215   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:31.445221   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:31.445280   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:31.483446   62386 cri.go:89] found id: ""
	I0912 23:05:31.483472   62386 logs.go:276] 0 containers: []
	W0912 23:05:31.483480   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:31.483486   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:31.483544   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:31.519958   62386 cri.go:89] found id: ""
	I0912 23:05:31.519989   62386 logs.go:276] 0 containers: []
	W0912 23:05:31.519997   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:31.520003   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:31.520057   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:31.556719   62386 cri.go:89] found id: ""
	I0912 23:05:31.556748   62386 logs.go:276] 0 containers: []
	W0912 23:05:31.556759   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:31.556771   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:31.556832   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:31.596465   62386 cri.go:89] found id: ""
	I0912 23:05:31.596491   62386 logs.go:276] 0 containers: []
	W0912 23:05:31.596502   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:31.596508   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:31.596572   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:31.634562   62386 cri.go:89] found id: ""
	I0912 23:05:31.634592   62386 logs.go:276] 0 containers: []
	W0912 23:05:31.634601   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:31.634607   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:31.634665   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:31.669305   62386 cri.go:89] found id: ""
	I0912 23:05:31.669337   62386 logs.go:276] 0 containers: []
	W0912 23:05:31.669348   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:31.669356   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:31.669422   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:31.703081   62386 cri.go:89] found id: ""
	I0912 23:05:31.703111   62386 logs.go:276] 0 containers: []
	W0912 23:05:31.703121   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:31.703133   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:31.703148   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:31.742613   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:31.742635   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:31.797827   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:31.797872   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:31.811970   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:31.811999   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:31.888872   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:31.888896   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:31.888910   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:30.974024   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:32.974606   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:35.473280   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:33.876698   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:35.877749   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:33.734760   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:36.233363   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:34.469724   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:34.483511   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:34.483579   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:34.516198   62386 cri.go:89] found id: ""
	I0912 23:05:34.516222   62386 logs.go:276] 0 containers: []
	W0912 23:05:34.516229   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:34.516235   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:34.516301   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:34.550166   62386 cri.go:89] found id: ""
	I0912 23:05:34.550199   62386 logs.go:276] 0 containers: []
	W0912 23:05:34.550210   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:34.550218   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:34.550274   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:34.593361   62386 cri.go:89] found id: ""
	I0912 23:05:34.593401   62386 logs.go:276] 0 containers: []
	W0912 23:05:34.593412   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:34.593420   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:34.593483   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:34.639593   62386 cri.go:89] found id: ""
	I0912 23:05:34.639633   62386 logs.go:276] 0 containers: []
	W0912 23:05:34.639653   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:34.639661   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:34.639729   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:34.690382   62386 cri.go:89] found id: ""
	I0912 23:05:34.690410   62386 logs.go:276] 0 containers: []
	W0912 23:05:34.690417   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:34.690423   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:34.690483   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:34.727943   62386 cri.go:89] found id: ""
	I0912 23:05:34.727970   62386 logs.go:276] 0 containers: []
	W0912 23:05:34.727978   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:34.727983   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:34.728051   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:34.765558   62386 cri.go:89] found id: ""
	I0912 23:05:34.765586   62386 logs.go:276] 0 containers: []
	W0912 23:05:34.765593   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:34.765598   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:34.765663   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:34.801455   62386 cri.go:89] found id: ""
	I0912 23:05:34.801484   62386 logs.go:276] 0 containers: []
	W0912 23:05:34.801492   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:34.801500   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:34.801511   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:34.880260   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:34.880295   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:34.922827   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:34.922855   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:34.974609   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:34.974639   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:34.987945   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:34.987972   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:35.062008   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:37.562965   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:37.575149   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:37.575226   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:37.611980   62386 cri.go:89] found id: ""
	I0912 23:05:37.612014   62386 logs.go:276] 0 containers: []
	W0912 23:05:37.612026   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:37.612035   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:37.612102   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:37.645664   62386 cri.go:89] found id: ""
	I0912 23:05:37.645693   62386 logs.go:276] 0 containers: []
	W0912 23:05:37.645703   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:37.645711   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:37.645771   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:37.685333   62386 cri.go:89] found id: ""
	I0912 23:05:37.685356   62386 logs.go:276] 0 containers: []
	W0912 23:05:37.685364   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:37.685369   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:37.685428   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:37.719017   62386 cri.go:89] found id: ""
	I0912 23:05:37.719052   62386 logs.go:276] 0 containers: []
	W0912 23:05:37.719063   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:37.719071   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:37.719133   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:37.751534   62386 cri.go:89] found id: ""
	I0912 23:05:37.751569   62386 logs.go:276] 0 containers: []
	W0912 23:05:37.751579   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:37.751588   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:37.751647   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:37.785583   62386 cri.go:89] found id: ""
	I0912 23:05:37.785608   62386 logs.go:276] 0 containers: []
	W0912 23:05:37.785635   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:37.785642   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:37.785702   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:37.818396   62386 cri.go:89] found id: ""
	I0912 23:05:37.818428   62386 logs.go:276] 0 containers: []
	W0912 23:05:37.818438   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:37.818445   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:37.818504   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:37.853767   62386 cri.go:89] found id: ""
	I0912 23:05:37.853798   62386 logs.go:276] 0 containers: []
	W0912 23:05:37.853806   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:37.853814   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:37.853830   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:37.926273   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:37.926300   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:37.926315   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:38.014243   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:38.014279   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:38.052431   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:38.052455   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:38.103154   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:38.103188   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:37.972774   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:39.973976   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:37.878631   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:40.378366   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:38.234131   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:40.733727   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:40.617399   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:40.629412   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:40.629483   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:40.666668   62386 cri.go:89] found id: ""
	I0912 23:05:40.666693   62386 logs.go:276] 0 containers: []
	W0912 23:05:40.666700   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:40.666706   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:40.666751   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:40.697548   62386 cri.go:89] found id: ""
	I0912 23:05:40.697573   62386 logs.go:276] 0 containers: []
	W0912 23:05:40.697580   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:40.697585   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:40.697659   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:40.729426   62386 cri.go:89] found id: ""
	I0912 23:05:40.729450   62386 logs.go:276] 0 containers: []
	W0912 23:05:40.729458   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:40.729468   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:40.729517   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:40.766769   62386 cri.go:89] found id: ""
	I0912 23:05:40.766793   62386 logs.go:276] 0 containers: []
	W0912 23:05:40.766800   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:40.766804   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:40.766860   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:40.801523   62386 cri.go:89] found id: ""
	I0912 23:05:40.801550   62386 logs.go:276] 0 containers: []
	W0912 23:05:40.801557   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:40.801563   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:40.801641   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:40.839943   62386 cri.go:89] found id: ""
	I0912 23:05:40.839975   62386 logs.go:276] 0 containers: []
	W0912 23:05:40.839987   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:40.839993   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:40.840055   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:40.873231   62386 cri.go:89] found id: ""
	I0912 23:05:40.873260   62386 logs.go:276] 0 containers: []
	W0912 23:05:40.873268   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:40.873276   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:40.873325   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:40.920007   62386 cri.go:89] found id: ""
	I0912 23:05:40.920040   62386 logs.go:276] 0 containers: []
	W0912 23:05:40.920049   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:40.920057   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:40.920069   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:40.972684   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:40.972716   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:40.986768   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:40.986802   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:41.052454   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:41.052479   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:41.052494   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:41.133810   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:41.133850   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:43.672432   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:43.684493   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:43.684552   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:43.718130   62386 cri.go:89] found id: ""
	I0912 23:05:43.718155   62386 logs.go:276] 0 containers: []
	W0912 23:05:43.718163   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:43.718169   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:43.718228   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:43.751866   62386 cri.go:89] found id: ""
	I0912 23:05:43.751895   62386 logs.go:276] 0 containers: []
	W0912 23:05:43.751905   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:43.751912   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:43.751974   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:43.785544   62386 cri.go:89] found id: ""
	I0912 23:05:43.785571   62386 logs.go:276] 0 containers: []
	W0912 23:05:43.785583   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:43.785589   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:43.785664   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:43.820588   62386 cri.go:89] found id: ""
	I0912 23:05:43.820616   62386 logs.go:276] 0 containers: []
	W0912 23:05:43.820624   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:43.820630   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:43.820677   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:43.853567   62386 cri.go:89] found id: ""
	I0912 23:05:43.853600   62386 logs.go:276] 0 containers: []
	W0912 23:05:43.853631   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:43.853640   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:43.853696   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:43.888646   62386 cri.go:89] found id: ""
	I0912 23:05:43.888671   62386 logs.go:276] 0 containers: []
	W0912 23:05:43.888679   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:43.888684   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:43.888731   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:43.922563   62386 cri.go:89] found id: ""
	I0912 23:05:43.922596   62386 logs.go:276] 0 containers: []
	W0912 23:05:43.922607   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:43.922614   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:43.922667   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:43.956786   62386 cri.go:89] found id: ""
	I0912 23:05:43.956817   62386 logs.go:276] 0 containers: []
	W0912 23:05:43.956825   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:43.956834   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:43.956845   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:44.035351   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:44.035388   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:44.073301   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:44.073338   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:44.124754   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:44.124788   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:44.138899   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:44.138924   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:44.208682   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:42.474139   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:44.974214   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:42.876306   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:44.877310   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:46.878568   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:43.233358   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:45.233823   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:47.234529   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:46.709822   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:46.722782   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:46.722905   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:46.767512   62386 cri.go:89] found id: ""
	I0912 23:05:46.767537   62386 logs.go:276] 0 containers: []
	W0912 23:05:46.767545   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:46.767551   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:46.767603   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:46.812486   62386 cri.go:89] found id: ""
	I0912 23:05:46.812523   62386 logs.go:276] 0 containers: []
	W0912 23:05:46.812533   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:46.812541   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:46.812602   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:46.855093   62386 cri.go:89] found id: ""
	I0912 23:05:46.855125   62386 logs.go:276] 0 containers: []
	W0912 23:05:46.855134   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:46.855141   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:46.855214   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:46.899067   62386 cri.go:89] found id: ""
	I0912 23:05:46.899101   62386 logs.go:276] 0 containers: []
	W0912 23:05:46.899113   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:46.899121   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:46.899184   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:46.939775   62386 cri.go:89] found id: ""
	I0912 23:05:46.939802   62386 logs.go:276] 0 containers: []
	W0912 23:05:46.939810   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:46.939816   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:46.939863   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:46.975288   62386 cri.go:89] found id: ""
	I0912 23:05:46.975319   62386 logs.go:276] 0 containers: []
	W0912 23:05:46.975329   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:46.975343   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:46.975426   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:47.012985   62386 cri.go:89] found id: ""
	I0912 23:05:47.013018   62386 logs.go:276] 0 containers: []
	W0912 23:05:47.013030   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:47.013038   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:47.013104   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:47.052124   62386 cri.go:89] found id: ""
	I0912 23:05:47.052154   62386 logs.go:276] 0 containers: []
	W0912 23:05:47.052164   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:47.052175   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:47.052189   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:47.108769   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:47.108811   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:47.124503   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:47.124530   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:47.195340   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:47.195362   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:47.195380   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:47.297155   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:47.297204   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:46.473252   61904 pod_ready.go:82] duration metric: took 4m0.006064954s for pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace to be "Ready" ...
	E0912 23:05:46.473275   61904 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0912 23:05:46.473282   61904 pod_ready.go:39] duration metric: took 4m4.576962836s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 23:05:46.473309   61904 api_server.go:52] waiting for apiserver process to appear ...
	I0912 23:05:46.473336   61904 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:46.473378   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:46.513731   61904 cri.go:89] found id: "115e1e7911747aa5930ece5298af89f0209bf07e0336cd598b8901e2a2af2e09"
	I0912 23:05:46.513759   61904 cri.go:89] found id: ""
	I0912 23:05:46.513768   61904 logs.go:276] 1 containers: [115e1e7911747aa5930ece5298af89f0209bf07e0336cd598b8901e2a2af2e09]
	I0912 23:05:46.513827   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:46.519031   61904 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:46.519099   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:46.560521   61904 cri.go:89] found id: "e099ac110cb9ec18f36a0fbbdb769c4bb0c2642bf081442d3054c280c663730f"
	I0912 23:05:46.560548   61904 cri.go:89] found id: ""
	I0912 23:05:46.560560   61904 logs.go:276] 1 containers: [e099ac110cb9ec18f36a0fbbdb769c4bb0c2642bf081442d3054c280c663730f]
	I0912 23:05:46.560623   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:46.564340   61904 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:46.564399   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:46.598825   61904 cri.go:89] found id: "7841230606dafeab68b501c35a871c84f440d0d9f4cde95a302770671152f168"
	I0912 23:05:46.598848   61904 cri.go:89] found id: ""
	I0912 23:05:46.598857   61904 logs.go:276] 1 containers: [7841230606dafeab68b501c35a871c84f440d0d9f4cde95a302770671152f168]
	I0912 23:05:46.598909   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:46.602944   61904 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:46.603005   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:46.640315   61904 cri.go:89] found id: "dc8c605cca940c298fda4fb0d3a0e55234ab799e5f4d684817c1c33aa6752880"
	I0912 23:05:46.640335   61904 cri.go:89] found id: ""
	I0912 23:05:46.640343   61904 logs.go:276] 1 containers: [dc8c605cca940c298fda4fb0d3a0e55234ab799e5f4d684817c1c33aa6752880]
	I0912 23:05:46.640395   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:46.644061   61904 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:46.644119   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:46.681114   61904 cri.go:89] found id: "0b058233860f229334437280ee5ccf5d203eaf352bae41a6af7d4602b55a3d64"
	I0912 23:05:46.681143   61904 cri.go:89] found id: ""
	I0912 23:05:46.681153   61904 logs.go:276] 1 containers: [0b058233860f229334437280ee5ccf5d203eaf352bae41a6af7d4602b55a3d64]
	I0912 23:05:46.681214   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:46.685151   61904 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:46.685223   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:46.723129   61904 cri.go:89] found id: "54dd60703518d12cf3cb62102bf8bec31d1e6b0f218f4c26391b234d58111e31"
	I0912 23:05:46.723150   61904 cri.go:89] found id: ""
	I0912 23:05:46.723160   61904 logs.go:276] 1 containers: [54dd60703518d12cf3cb62102bf8bec31d1e6b0f218f4c26391b234d58111e31]
	I0912 23:05:46.723208   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:46.727959   61904 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:46.728021   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:46.770194   61904 cri.go:89] found id: ""
	I0912 23:05:46.770219   61904 logs.go:276] 0 containers: []
	W0912 23:05:46.770229   61904 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:46.770236   61904 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0912 23:05:46.770296   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0912 23:05:46.819004   61904 cri.go:89] found id: "0e48efc9ba5a488b4984057a9785fbda6fcefcea047948ac3c83f09afd28efdb"
	I0912 23:05:46.819031   61904 cri.go:89] found id: "fdb0e5ac691d286cca5fe46e3435ece952c4b9cdc7df3481fec68adf1403689f"
	I0912 23:05:46.819037   61904 cri.go:89] found id: ""
	I0912 23:05:46.819045   61904 logs.go:276] 2 containers: [0e48efc9ba5a488b4984057a9785fbda6fcefcea047948ac3c83f09afd28efdb fdb0e5ac691d286cca5fe46e3435ece952c4b9cdc7df3481fec68adf1403689f]
	I0912 23:05:46.819105   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:46.824442   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:46.829336   61904 logs.go:123] Gathering logs for coredns [7841230606dafeab68b501c35a871c84f440d0d9f4cde95a302770671152f168] ...
	I0912 23:05:46.829367   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7841230606dafeab68b501c35a871c84f440d0d9f4cde95a302770671152f168"
	I0912 23:05:46.876170   61904 logs.go:123] Gathering logs for kube-controller-manager [54dd60703518d12cf3cb62102bf8bec31d1e6b0f218f4c26391b234d58111e31] ...
	I0912 23:05:46.876205   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54dd60703518d12cf3cb62102bf8bec31d1e6b0f218f4c26391b234d58111e31"
	I0912 23:05:46.944290   61904 logs.go:123] Gathering logs for storage-provisioner [0e48efc9ba5a488b4984057a9785fbda6fcefcea047948ac3c83f09afd28efdb] ...
	I0912 23:05:46.944336   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e48efc9ba5a488b4984057a9785fbda6fcefcea047948ac3c83f09afd28efdb"
	I0912 23:05:46.991117   61904 logs.go:123] Gathering logs for container status ...
	I0912 23:05:46.991154   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:47.041776   61904 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:47.041805   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:47.125682   61904 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:47.125720   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:47.141463   61904 logs.go:123] Gathering logs for etcd [e099ac110cb9ec18f36a0fbbdb769c4bb0c2642bf081442d3054c280c663730f] ...
	I0912 23:05:47.141505   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e099ac110cb9ec18f36a0fbbdb769c4bb0c2642bf081442d3054c280c663730f"
	I0912 23:05:47.193432   61904 logs.go:123] Gathering logs for kube-scheduler [dc8c605cca940c298fda4fb0d3a0e55234ab799e5f4d684817c1c33aa6752880] ...
	I0912 23:05:47.193477   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dc8c605cca940c298fda4fb0d3a0e55234ab799e5f4d684817c1c33aa6752880"
	I0912 23:05:47.238975   61904 logs.go:123] Gathering logs for kube-proxy [0b058233860f229334437280ee5ccf5d203eaf352bae41a6af7d4602b55a3d64] ...
	I0912 23:05:47.239000   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b058233860f229334437280ee5ccf5d203eaf352bae41a6af7d4602b55a3d64"
	I0912 23:05:47.282299   61904 logs.go:123] Gathering logs for storage-provisioner [fdb0e5ac691d286cca5fe46e3435ece952c4b9cdc7df3481fec68adf1403689f] ...
	I0912 23:05:47.282340   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fdb0e5ac691d286cca5fe46e3435ece952c4b9cdc7df3481fec68adf1403689f"
	I0912 23:05:47.322575   61904 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:47.322605   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:47.783079   61904 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:47.783116   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 23:05:47.909961   61904 logs.go:123] Gathering logs for kube-apiserver [115e1e7911747aa5930ece5298af89f0209bf07e0336cd598b8901e2a2af2e09] ...
	I0912 23:05:47.909994   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 115e1e7911747aa5930ece5298af89f0209bf07e0336cd598b8901e2a2af2e09"
	I0912 23:05:50.466816   61904 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:50.483164   61904 api_server.go:72] duration metric: took 4m15.815867821s to wait for apiserver process to appear ...
	I0912 23:05:50.483189   61904 api_server.go:88] waiting for apiserver healthz status ...
	I0912 23:05:50.483219   61904 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:50.483265   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:50.521905   61904 cri.go:89] found id: "115e1e7911747aa5930ece5298af89f0209bf07e0336cd598b8901e2a2af2e09"
	I0912 23:05:50.521932   61904 cri.go:89] found id: ""
	I0912 23:05:50.521942   61904 logs.go:276] 1 containers: [115e1e7911747aa5930ece5298af89f0209bf07e0336cd598b8901e2a2af2e09]
	I0912 23:05:50.522001   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:50.526289   61904 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:50.526355   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:50.565340   61904 cri.go:89] found id: "e099ac110cb9ec18f36a0fbbdb769c4bb0c2642bf081442d3054c280c663730f"
	I0912 23:05:50.565367   61904 cri.go:89] found id: ""
	I0912 23:05:50.565376   61904 logs.go:276] 1 containers: [e099ac110cb9ec18f36a0fbbdb769c4bb0c2642bf081442d3054c280c663730f]
	I0912 23:05:50.565434   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:50.569231   61904 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:50.569310   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:50.607696   61904 cri.go:89] found id: "7841230606dafeab68b501c35a871c84f440d0d9f4cde95a302770671152f168"
	I0912 23:05:50.607721   61904 cri.go:89] found id: ""
	I0912 23:05:50.607729   61904 logs.go:276] 1 containers: [7841230606dafeab68b501c35a871c84f440d0d9f4cde95a302770671152f168]
	I0912 23:05:50.607771   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:50.611696   61904 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:50.611753   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:50.647554   61904 cri.go:89] found id: "dc8c605cca940c298fda4fb0d3a0e55234ab799e5f4d684817c1c33aa6752880"
	I0912 23:05:50.647580   61904 cri.go:89] found id: ""
	I0912 23:05:50.647590   61904 logs.go:276] 1 containers: [dc8c605cca940c298fda4fb0d3a0e55234ab799e5f4d684817c1c33aa6752880]
	I0912 23:05:50.647649   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:50.652065   61904 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:50.652128   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:50.691276   61904 cri.go:89] found id: "0b058233860f229334437280ee5ccf5d203eaf352bae41a6af7d4602b55a3d64"
	I0912 23:05:50.691300   61904 cri.go:89] found id: ""
	I0912 23:05:50.691307   61904 logs.go:276] 1 containers: [0b058233860f229334437280ee5ccf5d203eaf352bae41a6af7d4602b55a3d64]
	I0912 23:05:50.691348   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:50.696475   61904 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:50.696537   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:50.732677   61904 cri.go:89] found id: "54dd60703518d12cf3cb62102bf8bec31d1e6b0f218f4c26391b234d58111e31"
	I0912 23:05:50.732704   61904 cri.go:89] found id: ""
	I0912 23:05:50.732714   61904 logs.go:276] 1 containers: [54dd60703518d12cf3cb62102bf8bec31d1e6b0f218f4c26391b234d58111e31]
	I0912 23:05:50.732771   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:50.737450   61904 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:50.737503   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:50.770732   61904 cri.go:89] found id: ""
	I0912 23:05:50.770762   61904 logs.go:276] 0 containers: []
	W0912 23:05:50.770773   61904 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:50.770781   61904 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0912 23:05:50.770830   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0912 23:05:49.376457   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:51.378141   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:49.732832   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:51.734674   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:49.841253   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:49.854221   62386 kubeadm.go:597] duration metric: took 4m1.913192999s to restartPrimaryControlPlane
	W0912 23:05:49.854297   62386 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0912 23:05:49.854335   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0912 23:05:51.221029   62386 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.366663525s)
	I0912 23:05:51.221129   62386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 23:05:51.238493   62386 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0912 23:05:51.250943   62386 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0912 23:05:51.264325   62386 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0912 23:05:51.264348   62386 kubeadm.go:157] found existing configuration files:
	
	I0912 23:05:51.264393   62386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0912 23:05:51.273514   62386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0912 23:05:51.273570   62386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0912 23:05:51.282967   62386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0912 23:05:51.291978   62386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0912 23:05:51.292037   62386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0912 23:05:51.301520   62386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0912 23:05:51.310439   62386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0912 23:05:51.310530   62386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0912 23:05:51.319803   62386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0912 23:05:51.328881   62386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0912 23:05:51.328956   62386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0912 23:05:51.337946   62386 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0912 23:05:51.565945   62386 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
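For anyone reproducing the v1.20.0 restart failure by hand, the reset path that the 62386 run records above reduces to roughly the following commands on the guest. This is a sketch reconstructed from the logged ssh_runner invocations, not the literal runner code; the binary path, CRI socket, and kubeadm config path are copied from the log, not independently verified:

	# sketch of the logged reset sequence (reconstructed from the log above; some intermediate steps omitted)
	sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force
	# the stale-config check finds none of the kubeconfig files, so they are removed defensively
	sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	sudo rm -f /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	# then the cluster is re-initialized from the generated kubeadm.yaml
	sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem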
	I0912 23:05:50.804311   61904 cri.go:89] found id: "0e48efc9ba5a488b4984057a9785fbda6fcefcea047948ac3c83f09afd28efdb"
	I0912 23:05:50.804337   61904 cri.go:89] found id: "fdb0e5ac691d286cca5fe46e3435ece952c4b9cdc7df3481fec68adf1403689f"
	I0912 23:05:50.804342   61904 cri.go:89] found id: ""
	I0912 23:05:50.804349   61904 logs.go:276] 2 containers: [0e48efc9ba5a488b4984057a9785fbda6fcefcea047948ac3c83f09afd28efdb fdb0e5ac691d286cca5fe46e3435ece952c4b9cdc7df3481fec68adf1403689f]
	I0912 23:05:50.804396   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:50.808236   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:50.812298   61904 logs.go:123] Gathering logs for storage-provisioner [fdb0e5ac691d286cca5fe46e3435ece952c4b9cdc7df3481fec68adf1403689f] ...
	I0912 23:05:50.812316   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fdb0e5ac691d286cca5fe46e3435ece952c4b9cdc7df3481fec68adf1403689f"
	I0912 23:05:50.846429   61904 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:50.846457   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:50.917118   61904 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:50.917152   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:50.931954   61904 logs.go:123] Gathering logs for kube-apiserver [115e1e7911747aa5930ece5298af89f0209bf07e0336cd598b8901e2a2af2e09] ...
	I0912 23:05:50.931992   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 115e1e7911747aa5930ece5298af89f0209bf07e0336cd598b8901e2a2af2e09"
	I0912 23:05:50.979688   61904 logs.go:123] Gathering logs for etcd [e099ac110cb9ec18f36a0fbbdb769c4bb0c2642bf081442d3054c280c663730f] ...
	I0912 23:05:50.979727   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e099ac110cb9ec18f36a0fbbdb769c4bb0c2642bf081442d3054c280c663730f"
	I0912 23:05:51.026392   61904 logs.go:123] Gathering logs for coredns [7841230606dafeab68b501c35a871c84f440d0d9f4cde95a302770671152f168] ...
	I0912 23:05:51.026421   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7841230606dafeab68b501c35a871c84f440d0d9f4cde95a302770671152f168"
	I0912 23:05:51.063302   61904 logs.go:123] Gathering logs for storage-provisioner [0e48efc9ba5a488b4984057a9785fbda6fcefcea047948ac3c83f09afd28efdb] ...
	I0912 23:05:51.063330   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e48efc9ba5a488b4984057a9785fbda6fcefcea047948ac3c83f09afd28efdb"
	I0912 23:05:51.096593   61904 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:51.096626   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 23:05:51.198824   61904 logs.go:123] Gathering logs for kube-scheduler [dc8c605cca940c298fda4fb0d3a0e55234ab799e5f4d684817c1c33aa6752880] ...
	I0912 23:05:51.198856   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dc8c605cca940c298fda4fb0d3a0e55234ab799e5f4d684817c1c33aa6752880"
	I0912 23:05:51.244247   61904 logs.go:123] Gathering logs for kube-proxy [0b058233860f229334437280ee5ccf5d203eaf352bae41a6af7d4602b55a3d64] ...
	I0912 23:05:51.244271   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b058233860f229334437280ee5ccf5d203eaf352bae41a6af7d4602b55a3d64"
	I0912 23:05:51.284694   61904 logs.go:123] Gathering logs for kube-controller-manager [54dd60703518d12cf3cb62102bf8bec31d1e6b0f218f4c26391b234d58111e31] ...
	I0912 23:05:51.284717   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54dd60703518d12cf3cb62102bf8bec31d1e6b0f218f4c26391b234d58111e31"
	I0912 23:05:51.340541   61904 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:51.340569   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:51.754823   61904 logs.go:123] Gathering logs for container status ...
	I0912 23:05:51.754864   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:54.294987   61904 api_server.go:253] Checking apiserver healthz at https://192.168.72.96:8443/healthz ...
	I0912 23:05:54.300314   61904 api_server.go:279] https://192.168.72.96:8443/healthz returned 200:
	ok
	I0912 23:05:54.301385   61904 api_server.go:141] control plane version: v1.31.1
	I0912 23:05:54.301413   61904 api_server.go:131] duration metric: took 3.818216539s to wait for apiserver health ...
	I0912 23:05:54.301421   61904 system_pods.go:43] waiting for kube-system pods to appear ...
	I0912 23:05:54.301441   61904 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:54.301491   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:54.342980   61904 cri.go:89] found id: "115e1e7911747aa5930ece5298af89f0209bf07e0336cd598b8901e2a2af2e09"
	I0912 23:05:54.343001   61904 cri.go:89] found id: ""
	I0912 23:05:54.343010   61904 logs.go:276] 1 containers: [115e1e7911747aa5930ece5298af89f0209bf07e0336cd598b8901e2a2af2e09]
	I0912 23:05:54.343063   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:54.347269   61904 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:54.347352   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:54.386656   61904 cri.go:89] found id: "e099ac110cb9ec18f36a0fbbdb769c4bb0c2642bf081442d3054c280c663730f"
	I0912 23:05:54.386674   61904 cri.go:89] found id: ""
	I0912 23:05:54.386681   61904 logs.go:276] 1 containers: [e099ac110cb9ec18f36a0fbbdb769c4bb0c2642bf081442d3054c280c663730f]
	I0912 23:05:54.386755   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:54.390707   61904 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:54.390769   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:54.433746   61904 cri.go:89] found id: "7841230606dafeab68b501c35a871c84f440d0d9f4cde95a302770671152f168"
	I0912 23:05:54.433773   61904 cri.go:89] found id: ""
	I0912 23:05:54.433782   61904 logs.go:276] 1 containers: [7841230606dafeab68b501c35a871c84f440d0d9f4cde95a302770671152f168]
	I0912 23:05:54.433844   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:54.438175   61904 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:54.438231   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:54.475067   61904 cri.go:89] found id: "dc8c605cca940c298fda4fb0d3a0e55234ab799e5f4d684817c1c33aa6752880"
	I0912 23:05:54.475095   61904 cri.go:89] found id: ""
	I0912 23:05:54.475105   61904 logs.go:276] 1 containers: [dc8c605cca940c298fda4fb0d3a0e55234ab799e5f4d684817c1c33aa6752880]
	I0912 23:05:54.475178   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:54.479308   61904 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:54.479367   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:54.524489   61904 cri.go:89] found id: "0b058233860f229334437280ee5ccf5d203eaf352bae41a6af7d4602b55a3d64"
	I0912 23:05:54.524513   61904 cri.go:89] found id: ""
	I0912 23:05:54.524521   61904 logs.go:276] 1 containers: [0b058233860f229334437280ee5ccf5d203eaf352bae41a6af7d4602b55a3d64]
	I0912 23:05:54.524583   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:54.528854   61904 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:54.528925   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:54.569776   61904 cri.go:89] found id: "54dd60703518d12cf3cb62102bf8bec31d1e6b0f218f4c26391b234d58111e31"
	I0912 23:05:54.569801   61904 cri.go:89] found id: ""
	I0912 23:05:54.569811   61904 logs.go:276] 1 containers: [54dd60703518d12cf3cb62102bf8bec31d1e6b0f218f4c26391b234d58111e31]
	I0912 23:05:54.569865   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:54.574000   61904 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:54.574070   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:54.613184   61904 cri.go:89] found id: ""
	I0912 23:05:54.613212   61904 logs.go:276] 0 containers: []
	W0912 23:05:54.613222   61904 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:54.613229   61904 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0912 23:05:54.613292   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0912 23:05:54.648971   61904 cri.go:89] found id: "0e48efc9ba5a488b4984057a9785fbda6fcefcea047948ac3c83f09afd28efdb"
	I0912 23:05:54.648992   61904 cri.go:89] found id: "fdb0e5ac691d286cca5fe46e3435ece952c4b9cdc7df3481fec68adf1403689f"
	I0912 23:05:54.648997   61904 cri.go:89] found id: ""
	I0912 23:05:54.649006   61904 logs.go:276] 2 containers: [0e48efc9ba5a488b4984057a9785fbda6fcefcea047948ac3c83f09afd28efdb fdb0e5ac691d286cca5fe46e3435ece952c4b9cdc7df3481fec68adf1403689f]
	I0912 23:05:54.649062   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:54.653671   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:54.657535   61904 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:54.657557   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 23:05:54.781055   61904 logs.go:123] Gathering logs for kube-controller-manager [54dd60703518d12cf3cb62102bf8bec31d1e6b0f218f4c26391b234d58111e31] ...
	I0912 23:05:54.781094   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54dd60703518d12cf3cb62102bf8bec31d1e6b0f218f4c26391b234d58111e31"
	I0912 23:05:54.832441   61904 logs.go:123] Gathering logs for container status ...
	I0912 23:05:54.832477   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:54.887662   61904 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:54.887695   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:54.958381   61904 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:54.958417   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:54.973583   61904 logs.go:123] Gathering logs for coredns [7841230606dafeab68b501c35a871c84f440d0d9f4cde95a302770671152f168] ...
	I0912 23:05:54.973609   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7841230606dafeab68b501c35a871c84f440d0d9f4cde95a302770671152f168"
	I0912 23:05:55.022192   61904 logs.go:123] Gathering logs for kube-scheduler [dc8c605cca940c298fda4fb0d3a0e55234ab799e5f4d684817c1c33aa6752880] ...
	I0912 23:05:55.022217   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dc8c605cca940c298fda4fb0d3a0e55234ab799e5f4d684817c1c33aa6752880"
	I0912 23:05:55.059878   61904 logs.go:123] Gathering logs for kube-proxy [0b058233860f229334437280ee5ccf5d203eaf352bae41a6af7d4602b55a3d64] ...
	I0912 23:05:55.059910   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b058233860f229334437280ee5ccf5d203eaf352bae41a6af7d4602b55a3d64"
	I0912 23:05:55.104371   61904 logs.go:123] Gathering logs for storage-provisioner [0e48efc9ba5a488b4984057a9785fbda6fcefcea047948ac3c83f09afd28efdb] ...
	I0912 23:05:55.104399   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e48efc9ba5a488b4984057a9785fbda6fcefcea047948ac3c83f09afd28efdb"
	I0912 23:05:55.139625   61904 logs.go:123] Gathering logs for storage-provisioner [fdb0e5ac691d286cca5fe46e3435ece952c4b9cdc7df3481fec68adf1403689f] ...
	I0912 23:05:55.139656   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fdb0e5ac691d286cca5fe46e3435ece952c4b9cdc7df3481fec68adf1403689f"
	I0912 23:05:55.172414   61904 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:55.172442   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:55.528482   61904 logs.go:123] Gathering logs for kube-apiserver [115e1e7911747aa5930ece5298af89f0209bf07e0336cd598b8901e2a2af2e09] ...
	I0912 23:05:55.528522   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 115e1e7911747aa5930ece5298af89f0209bf07e0336cd598b8901e2a2af2e09"
	I0912 23:05:55.572399   61904 logs.go:123] Gathering logs for etcd [e099ac110cb9ec18f36a0fbbdb769c4bb0c2642bf081442d3054c280c663730f] ...
	I0912 23:05:55.572433   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e099ac110cb9ec18f36a0fbbdb769c4bb0c2642bf081442d3054c280c663730f"
	I0912 23:05:53.876844   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:55.878108   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:54.235375   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:56.733525   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:58.125405   61904 system_pods.go:59] 8 kube-system pods found
	I0912 23:05:58.125436   61904 system_pods.go:61] "coredns-7c65d6cfc9-m8t6h" [93c63198-ebd2-4e88-9be8-912425b1eb84] Running
	I0912 23:05:58.125441   61904 system_pods.go:61] "etcd-embed-certs-378112" [cc716756-abda-447a-ad36-bfc89c129bdf] Running
	I0912 23:05:58.125445   61904 system_pods.go:61] "kube-apiserver-embed-certs-378112" [039a7348-41bf-481f-9218-3ea0c2ff1373] Running
	I0912 23:05:58.125449   61904 system_pods.go:61] "kube-controller-manager-embed-certs-378112" [9bcb8af0-6e4b-405a-94a1-5be70d737cfa] Running
	I0912 23:05:58.125452   61904 system_pods.go:61] "kube-proxy-fvbbq" [b172754e-bb5a-40ba-a9be-a7632081defc] Running
	I0912 23:05:58.125455   61904 system_pods.go:61] "kube-scheduler-embed-certs-378112" [f7cb022f-6c15-4c70-916f-39313199effe] Running
	I0912 23:05:58.125461   61904 system_pods.go:61] "metrics-server-6867b74b74-kvpqz" [04e47cfd-bada-4cbd-8792-db4edebfb282] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0912 23:05:58.125465   61904 system_pods.go:61] "storage-provisioner" [a1840d2a-8e08-4fa2-9ed5-ac96fb0baf4d] Running
	I0912 23:05:58.125472   61904 system_pods.go:74] duration metric: took 3.824046737s to wait for pod list to return data ...
	I0912 23:05:58.125478   61904 default_sa.go:34] waiting for default service account to be created ...
	I0912 23:05:58.128039   61904 default_sa.go:45] found service account: "default"
	I0912 23:05:58.128060   61904 default_sa.go:55] duration metric: took 2.576708ms for default service account to be created ...
	I0912 23:05:58.128067   61904 system_pods.go:116] waiting for k8s-apps to be running ...
	I0912 23:05:58.132607   61904 system_pods.go:86] 8 kube-system pods found
	I0912 23:05:58.132629   61904 system_pods.go:89] "coredns-7c65d6cfc9-m8t6h" [93c63198-ebd2-4e88-9be8-912425b1eb84] Running
	I0912 23:05:58.132634   61904 system_pods.go:89] "etcd-embed-certs-378112" [cc716756-abda-447a-ad36-bfc89c129bdf] Running
	I0912 23:05:58.132638   61904 system_pods.go:89] "kube-apiserver-embed-certs-378112" [039a7348-41bf-481f-9218-3ea0c2ff1373] Running
	I0912 23:05:58.132642   61904 system_pods.go:89] "kube-controller-manager-embed-certs-378112" [9bcb8af0-6e4b-405a-94a1-5be70d737cfa] Running
	I0912 23:05:58.132647   61904 system_pods.go:89] "kube-proxy-fvbbq" [b172754e-bb5a-40ba-a9be-a7632081defc] Running
	I0912 23:05:58.132652   61904 system_pods.go:89] "kube-scheduler-embed-certs-378112" [f7cb022f-6c15-4c70-916f-39313199effe] Running
	I0912 23:05:58.132661   61904 system_pods.go:89] "metrics-server-6867b74b74-kvpqz" [04e47cfd-bada-4cbd-8792-db4edebfb282] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0912 23:05:58.132671   61904 system_pods.go:89] "storage-provisioner" [a1840d2a-8e08-4fa2-9ed5-ac96fb0baf4d] Running
	I0912 23:05:58.132682   61904 system_pods.go:126] duration metric: took 4.609196ms to wait for k8s-apps to be running ...
	I0912 23:05:58.132694   61904 system_svc.go:44] waiting for kubelet service to be running ....
	I0912 23:05:58.132739   61904 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 23:05:58.149020   61904 system_svc.go:56] duration metric: took 16.317773ms WaitForService to wait for kubelet
	I0912 23:05:58.149048   61904 kubeadm.go:582] duration metric: took 4m23.481755577s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 23:05:58.149073   61904 node_conditions.go:102] verifying NodePressure condition ...
	I0912 23:05:58.152519   61904 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0912 23:05:58.152547   61904 node_conditions.go:123] node cpu capacity is 2
	I0912 23:05:58.152559   61904 node_conditions.go:105] duration metric: took 3.480407ms to run NodePressure ...
	I0912 23:05:58.152570   61904 start.go:241] waiting for startup goroutines ...
	I0912 23:05:58.152576   61904 start.go:246] waiting for cluster config update ...
	I0912 23:05:58.152587   61904 start.go:255] writing updated cluster config ...
	I0912 23:05:58.152833   61904 ssh_runner.go:195] Run: rm -f paused
	I0912 23:05:58.203069   61904 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0912 23:05:58.204904   61904 out.go:177] * Done! kubectl is now configured to use "embed-certs-378112" cluster and "default" namespace by default
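At this point the embed-certs-378112 profile has come up cleanly: the healthz probe against https://192.168.72.96:8443/healthz returned 200, and every kube-system pod except metrics-server-6867b74b74-kvpqz is Running. A rough manual equivalent of that final verification (illustrative kubectl calls, not taken from the test code) would be:

	# illustrative check; assumes the profile name doubles as the kubectl context, as the "Done!" line above implies
	kubectl --context embed-certs-378112 get --raw /healthz
	kubectl --context embed-certs-378112 -n kube-system get pods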
	I0912 23:05:58.376646   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:00.377105   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:58.733992   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:01.233920   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:02.877229   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:04.877926   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:03.733400   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:05.733949   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:07.377308   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:09.877459   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:08.234361   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:10.732480   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:12.376661   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:14.877753   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:16.877980   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:12.733231   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:14.734774   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:17.233456   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:19.376959   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:21.878279   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:19.234570   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:21.733406   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:24.376731   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:26.377122   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:23.733543   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:25.734296   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:28.877696   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:31.376778   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:28.232623   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:30.233670   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:32.234123   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:33.377208   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:35.877039   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:34.234158   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:36.234309   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:37.877566   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:40.376636   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:38.733567   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:40.734256   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:42.377148   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:44.377925   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:46.877563   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:42.734926   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:45.233731   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:45.727482   61354 pod_ready.go:82] duration metric: took 4m0.000232225s for pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace to be "Ready" ...
	E0912 23:06:45.727510   61354 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace to be "Ready" (will not retry!)
	I0912 23:06:45.727526   61354 pod_ready.go:39] duration metric: took 4m13.050011701s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 23:06:45.727553   61354 kubeadm.go:597] duration metric: took 4m21.402206535s to restartPrimaryControlPlane
	W0912 23:06:45.727638   61354 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0912 23:06:45.727686   61354 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0912 23:06:49.376346   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:51.376720   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:53.877426   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:56.377076   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:58.876146   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:07:00.876887   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:07:02.877032   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:07:04.877344   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:07:07.376495   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:07:09.377212   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:07:11.878788   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:07:11.920816   61354 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.193093675s)
	I0912 23:07:11.920900   61354 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 23:07:11.939101   61354 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0912 23:07:11.950330   61354 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0912 23:07:11.960727   61354 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0912 23:07:11.960753   61354 kubeadm.go:157] found existing configuration files:
	
	I0912 23:07:11.960802   61354 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0912 23:07:11.970932   61354 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0912 23:07:11.970988   61354 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0912 23:07:11.981111   61354 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0912 23:07:11.990384   61354 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0912 23:07:11.990455   61354 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0912 23:07:12.000218   61354 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0912 23:07:12.009191   61354 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0912 23:07:12.009266   61354 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0912 23:07:12.019270   61354 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0912 23:07:12.028102   61354 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0912 23:07:12.028165   61354 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0912 23:07:12.037512   61354 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0912 23:07:12.083528   61354 kubeadm.go:310] W0912 23:07:12.055244    2491 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0912 23:07:12.084358   61354 kubeadm.go:310] W0912 23:07:12.056267    2491 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0912 23:07:12.190683   61354 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0912 23:07:12.377757   62943 pod_ready.go:82] duration metric: took 4m0.007392806s for pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace to be "Ready" ...
	E0912 23:07:12.377785   62943 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0912 23:07:12.377794   62943 pod_ready.go:39] duration metric: took 4m2.807476708s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 23:07:12.377812   62943 api_server.go:52] waiting for apiserver process to appear ...
	I0912 23:07:12.377843   62943 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:07:12.377898   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:07:12.431934   62943 cri.go:89] found id: "3c73944a510413add414948e1c8fd8d71ef732d91b7ddc96646b7ba376f81416"
	I0912 23:07:12.431964   62943 cri.go:89] found id: "00f124dff0f77cf23377830c9dc5e2a8676d1a40ebf70730705d537ba7a4b8d3"
	I0912 23:07:12.431969   62943 cri.go:89] found id: ""
	I0912 23:07:12.431977   62943 logs.go:276] 2 containers: [3c73944a510413add414948e1c8fd8d71ef732d91b7ddc96646b7ba376f81416 00f124dff0f77cf23377830c9dc5e2a8676d1a40ebf70730705d537ba7a4b8d3]
	I0912 23:07:12.432043   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:12.436742   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:12.440569   62943 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:07:12.440626   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:07:12.476994   62943 cri.go:89] found id: "35282e97473f2f870f5366d221b9a40e0a456b6fd44fffe321c9fe160448cc29"
	I0912 23:07:12.477016   62943 cri.go:89] found id: ""
	I0912 23:07:12.477024   62943 logs.go:276] 1 containers: [35282e97473f2f870f5366d221b9a40e0a456b6fd44fffe321c9fe160448cc29]
	I0912 23:07:12.477076   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:12.481585   62943 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:07:12.481661   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:07:12.524772   62943 cri.go:89] found id: "e59d289c9afefef6dd746277d09550e5c7d708d7c1ba9f74b2dd91300f807189"
	I0912 23:07:12.524797   62943 cri.go:89] found id: ""
	I0912 23:07:12.524808   62943 logs.go:276] 1 containers: [e59d289c9afefef6dd746277d09550e5c7d708d7c1ba9f74b2dd91300f807189]
	I0912 23:07:12.524860   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:12.529988   62943 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:07:12.530052   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:07:12.573298   62943 cri.go:89] found id: "3187fdef2bd318bb335be822a4e75cfd0ac02a49607d8e398ab18102a83437ec"
	I0912 23:07:12.573329   62943 cri.go:89] found id: ""
	I0912 23:07:12.573340   62943 logs.go:276] 1 containers: [3187fdef2bd318bb335be822a4e75cfd0ac02a49607d8e398ab18102a83437ec]
	I0912 23:07:12.573400   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:12.579767   62943 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:07:12.579844   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:07:12.624696   62943 cri.go:89] found id: "4c4807559910184befc0b6a9ccb40b1cb9d4997f6b1405b76ba67049ce730f37"
	I0912 23:07:12.624723   62943 cri.go:89] found id: ""
	I0912 23:07:12.624733   62943 logs.go:276] 1 containers: [4c4807559910184befc0b6a9ccb40b1cb9d4997f6b1405b76ba67049ce730f37]
	I0912 23:07:12.624790   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:12.632367   62943 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:07:12.632430   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:07:12.667385   62943 cri.go:89] found id: "eb473fa0b2d91e95a8716ca3d1bad44b431a3dfbfae11984657c16c7392b58f0"
	I0912 23:07:12.667411   62943 cri.go:89] found id: "635fd2c2a6dd2558d88447c857c1c59b9cf6883a458ab9f62fbcd48fc94048f7"
	I0912 23:07:12.667415   62943 cri.go:89] found id: ""
	I0912 23:07:12.667422   62943 logs.go:276] 2 containers: [eb473fa0b2d91e95a8716ca3d1bad44b431a3dfbfae11984657c16c7392b58f0 635fd2c2a6dd2558d88447c857c1c59b9cf6883a458ab9f62fbcd48fc94048f7]
	I0912 23:07:12.667474   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:12.671688   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:12.675901   62943 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:07:12.675964   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:07:12.712909   62943 cri.go:89] found id: ""
	I0912 23:07:12.712944   62943 logs.go:276] 0 containers: []
	W0912 23:07:12.712955   62943 logs.go:278] No container was found matching "kindnet"
	I0912 23:07:12.712962   62943 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0912 23:07:12.713023   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0912 23:07:12.755865   62943 cri.go:89] found id: "3d117ed77ba5f8ccc22fe496a5f719999fd9e893623ba84686634f7b92c09713"
	I0912 23:07:12.755888   62943 cri.go:89] found id: "d40483dfc659456939d7965c1ecf1113c1c38e26fe985bf19f26a0123206ea3a"
	I0912 23:07:12.755894   62943 cri.go:89] found id: ""
	I0912 23:07:12.755903   62943 logs.go:276] 2 containers: [3d117ed77ba5f8ccc22fe496a5f719999fd9e893623ba84686634f7b92c09713 d40483dfc659456939d7965c1ecf1113c1c38e26fe985bf19f26a0123206ea3a]
	I0912 23:07:12.755958   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:12.760095   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:12.763682   62943 logs.go:123] Gathering logs for kube-apiserver [00f124dff0f77cf23377830c9dc5e2a8676d1a40ebf70730705d537ba7a4b8d3] ...
	I0912 23:07:12.763706   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 00f124dff0f77cf23377830c9dc5e2a8676d1a40ebf70730705d537ba7a4b8d3"
	I0912 23:07:12.811915   62943 logs.go:123] Gathering logs for kube-proxy [4c4807559910184befc0b6a9ccb40b1cb9d4997f6b1405b76ba67049ce730f37] ...
	I0912 23:07:12.811949   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c4807559910184befc0b6a9ccb40b1cb9d4997f6b1405b76ba67049ce730f37"
	I0912 23:07:12.846546   62943 logs.go:123] Gathering logs for kube-controller-manager [eb473fa0b2d91e95a8716ca3d1bad44b431a3dfbfae11984657c16c7392b58f0] ...
	I0912 23:07:12.846582   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb473fa0b2d91e95a8716ca3d1bad44b431a3dfbfae11984657c16c7392b58f0"
	I0912 23:07:12.904475   62943 logs.go:123] Gathering logs for kubelet ...
	I0912 23:07:12.904518   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:07:12.984863   62943 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:07:12.984898   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 23:07:13.116848   62943 logs.go:123] Gathering logs for etcd [35282e97473f2f870f5366d221b9a40e0a456b6fd44fffe321c9fe160448cc29] ...
	I0912 23:07:13.116879   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35282e97473f2f870f5366d221b9a40e0a456b6fd44fffe321c9fe160448cc29"
	I0912 23:07:13.165949   62943 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:07:13.165978   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:07:13.704372   62943 logs.go:123] Gathering logs for container status ...
	I0912 23:07:13.704424   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:07:13.757082   62943 logs.go:123] Gathering logs for kube-apiserver [3c73944a510413add414948e1c8fd8d71ef732d91b7ddc96646b7ba376f81416] ...
	I0912 23:07:13.757123   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c73944a510413add414948e1c8fd8d71ef732d91b7ddc96646b7ba376f81416"
	I0912 23:07:13.802951   62943 logs.go:123] Gathering logs for storage-provisioner [3d117ed77ba5f8ccc22fe496a5f719999fd9e893623ba84686634f7b92c09713] ...
	I0912 23:07:13.802988   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d117ed77ba5f8ccc22fe496a5f719999fd9e893623ba84686634f7b92c09713"
	I0912 23:07:13.838952   62943 logs.go:123] Gathering logs for dmesg ...
	I0912 23:07:13.838989   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:07:13.852983   62943 logs.go:123] Gathering logs for coredns [e59d289c9afefef6dd746277d09550e5c7d708d7c1ba9f74b2dd91300f807189] ...
	I0912 23:07:13.853015   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e59d289c9afefef6dd746277d09550e5c7d708d7c1ba9f74b2dd91300f807189"
	I0912 23:07:13.898651   62943 logs.go:123] Gathering logs for kube-scheduler [3187fdef2bd318bb335be822a4e75cfd0ac02a49607d8e398ab18102a83437ec] ...
	I0912 23:07:13.898679   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3187fdef2bd318bb335be822a4e75cfd0ac02a49607d8e398ab18102a83437ec"
	I0912 23:07:13.943800   62943 logs.go:123] Gathering logs for kube-controller-manager [635fd2c2a6dd2558d88447c857c1c59b9cf6883a458ab9f62fbcd48fc94048f7] ...
	I0912 23:07:13.943838   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 635fd2c2a6dd2558d88447c857c1c59b9cf6883a458ab9f62fbcd48fc94048f7"
	I0912 23:07:13.984960   62943 logs.go:123] Gathering logs for storage-provisioner [d40483dfc659456939d7965c1ecf1113c1c38e26fe985bf19f26a0123206ea3a] ...
	I0912 23:07:13.984996   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d40483dfc659456939d7965c1ecf1113c1c38e26fe985bf19f26a0123206ea3a"
	I0912 23:07:16.526061   62943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:07:16.547018   62943 api_server.go:72] duration metric: took 4m14.74025779s to wait for apiserver process to appear ...
	I0912 23:07:16.547046   62943 api_server.go:88] waiting for apiserver healthz status ...
	I0912 23:07:16.547085   62943 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:07:16.547134   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:07:16.589088   62943 cri.go:89] found id: "3c73944a510413add414948e1c8fd8d71ef732d91b7ddc96646b7ba376f81416"
	I0912 23:07:16.589124   62943 cri.go:89] found id: "00f124dff0f77cf23377830c9dc5e2a8676d1a40ebf70730705d537ba7a4b8d3"
	I0912 23:07:16.589130   62943 cri.go:89] found id: ""
	I0912 23:07:16.589138   62943 logs.go:276] 2 containers: [3c73944a510413add414948e1c8fd8d71ef732d91b7ddc96646b7ba376f81416 00f124dff0f77cf23377830c9dc5e2a8676d1a40ebf70730705d537ba7a4b8d3]
	I0912 23:07:16.589199   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:16.593386   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:16.597107   62943 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:07:16.597166   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:07:16.644456   62943 cri.go:89] found id: "35282e97473f2f870f5366d221b9a40e0a456b6fd44fffe321c9fe160448cc29"
	I0912 23:07:16.644482   62943 cri.go:89] found id: ""
	I0912 23:07:16.644491   62943 logs.go:276] 1 containers: [35282e97473f2f870f5366d221b9a40e0a456b6fd44fffe321c9fe160448cc29]
	I0912 23:07:16.644544   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:16.648617   62943 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:07:16.648693   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:07:16.688003   62943 cri.go:89] found id: "e59d289c9afefef6dd746277d09550e5c7d708d7c1ba9f74b2dd91300f807189"
	I0912 23:07:16.688027   62943 cri.go:89] found id: ""
	I0912 23:07:16.688037   62943 logs.go:276] 1 containers: [e59d289c9afefef6dd746277d09550e5c7d708d7c1ba9f74b2dd91300f807189]
	I0912 23:07:16.688093   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:16.692761   62943 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:07:16.692832   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:07:16.733490   62943 cri.go:89] found id: "3187fdef2bd318bb335be822a4e75cfd0ac02a49607d8e398ab18102a83437ec"
	I0912 23:07:16.733522   62943 cri.go:89] found id: ""
	I0912 23:07:16.733533   62943 logs.go:276] 1 containers: [3187fdef2bd318bb335be822a4e75cfd0ac02a49607d8e398ab18102a83437ec]
	I0912 23:07:16.733596   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:16.738566   62943 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:07:16.738641   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:07:16.785654   62943 cri.go:89] found id: "4c4807559910184befc0b6a9ccb40b1cb9d4997f6b1405b76ba67049ce730f37"
	I0912 23:07:16.785683   62943 cri.go:89] found id: ""
	I0912 23:07:16.785693   62943 logs.go:276] 1 containers: [4c4807559910184befc0b6a9ccb40b1cb9d4997f6b1405b76ba67049ce730f37]
	I0912 23:07:16.785753   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:16.791205   62943 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:07:16.791290   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:07:16.830707   62943 cri.go:89] found id: "eb473fa0b2d91e95a8716ca3d1bad44b431a3dfbfae11984657c16c7392b58f0"
	I0912 23:07:16.830739   62943 cri.go:89] found id: "635fd2c2a6dd2558d88447c857c1c59b9cf6883a458ab9f62fbcd48fc94048f7"
	I0912 23:07:16.830746   62943 cri.go:89] found id: ""
	I0912 23:07:16.830756   62943 logs.go:276] 2 containers: [eb473fa0b2d91e95a8716ca3d1bad44b431a3dfbfae11984657c16c7392b58f0 635fd2c2a6dd2558d88447c857c1c59b9cf6883a458ab9f62fbcd48fc94048f7]
	I0912 23:07:16.830819   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:16.835378   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:16.840600   62943 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:07:16.840670   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:07:20.225940   61354 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0912 23:07:20.226007   61354 kubeadm.go:310] [preflight] Running pre-flight checks
	I0912 23:07:20.226107   61354 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0912 23:07:20.226261   61354 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0912 23:07:20.226412   61354 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0912 23:07:20.226506   61354 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0912 23:07:20.228109   61354 out.go:235]   - Generating certificates and keys ...
	I0912 23:07:20.228211   61354 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0912 23:07:20.228297   61354 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0912 23:07:20.228412   61354 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0912 23:07:20.228493   61354 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0912 23:07:20.228621   61354 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0912 23:07:20.228699   61354 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0912 23:07:20.228788   61354 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0912 23:07:20.228875   61354 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0912 23:07:20.228987   61354 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0912 23:07:20.229123   61354 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0912 23:07:20.229177   61354 kubeadm.go:310] [certs] Using the existing "sa" key
	I0912 23:07:20.229273   61354 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0912 23:07:20.229365   61354 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0912 23:07:20.229454   61354 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0912 23:07:20.229533   61354 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0912 23:07:20.229644   61354 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0912 23:07:20.229723   61354 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0912 23:07:20.229833   61354 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0912 23:07:20.229922   61354 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0912 23:07:20.231172   61354 out.go:235]   - Booting up control plane ...
	I0912 23:07:20.231276   61354 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0912 23:07:20.231371   61354 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0912 23:07:20.231457   61354 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0912 23:07:20.231596   61354 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0912 23:07:20.231706   61354 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0912 23:07:20.231772   61354 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0912 23:07:20.231943   61354 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0912 23:07:20.232041   61354 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0912 23:07:20.232091   61354 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.452461ms
	I0912 23:07:20.232151   61354 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0912 23:07:20.232202   61354 kubeadm.go:310] [api-check] The API server is healthy after 5.00140085s
	I0912 23:07:20.232302   61354 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0912 23:07:20.232437   61354 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0912 23:07:20.232508   61354 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0912 23:07:20.232685   61354 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-702201 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0912 23:07:20.232764   61354 kubeadm.go:310] [bootstrap-token] Using token: uufjzd.0ysmpgh1j6e2l8hs
	I0912 23:07:20.234000   61354 out.go:235]   - Configuring RBAC rules ...
	I0912 23:07:20.234123   61354 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0912 23:07:20.234230   61354 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0912 23:07:20.234438   61354 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0912 23:07:20.234584   61354 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0912 23:07:20.234714   61354 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0912 23:07:20.234818   61354 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0912 23:07:20.234946   61354 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0912 23:07:20.235008   61354 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0912 23:07:20.235081   61354 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0912 23:07:20.235089   61354 kubeadm.go:310] 
	I0912 23:07:20.235152   61354 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0912 23:07:20.235163   61354 kubeadm.go:310] 
	I0912 23:07:20.235231   61354 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0912 23:07:20.235237   61354 kubeadm.go:310] 
	I0912 23:07:20.235258   61354 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0912 23:07:20.235346   61354 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0912 23:07:20.235424   61354 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0912 23:07:20.235433   61354 kubeadm.go:310] 
	I0912 23:07:20.235512   61354 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0912 23:07:20.235523   61354 kubeadm.go:310] 
	I0912 23:07:20.235587   61354 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0912 23:07:20.235596   61354 kubeadm.go:310] 
	I0912 23:07:20.235683   61354 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0912 23:07:20.235781   61354 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0912 23:07:20.235848   61354 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0912 23:07:20.235855   61354 kubeadm.go:310] 
	I0912 23:07:20.235924   61354 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0912 23:07:20.235988   61354 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0912 23:07:20.235994   61354 kubeadm.go:310] 
	I0912 23:07:20.236075   61354 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token uufjzd.0ysmpgh1j6e2l8hs \
	I0912 23:07:20.236168   61354 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e9285e6e7599a58febe9d174fa57ffa69a9b4bf818d01b703e61fc8c784ff29f \
	I0912 23:07:20.236188   61354 kubeadm.go:310] 	--control-plane 
	I0912 23:07:20.236195   61354 kubeadm.go:310] 
	I0912 23:07:20.236267   61354 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0912 23:07:20.236274   61354 kubeadm.go:310] 
	I0912 23:07:20.236345   61354 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token uufjzd.0ysmpgh1j6e2l8hs \
	I0912 23:07:20.236447   61354 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e9285e6e7599a58febe9d174fa57ffa69a9b4bf818d01b703e61fc8c784ff29f 
	I0912 23:07:20.236458   61354 cni.go:84] Creating CNI manager for ""
	I0912 23:07:20.236465   61354 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0912 23:07:20.237667   61354 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0912 23:07:16.892881   62943 cri.go:89] found id: ""
	I0912 23:07:16.892908   62943 logs.go:276] 0 containers: []
	W0912 23:07:16.892918   62943 logs.go:278] No container was found matching "kindnet"
	I0912 23:07:16.892926   62943 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0912 23:07:16.892986   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0912 23:07:16.938816   62943 cri.go:89] found id: "3d117ed77ba5f8ccc22fe496a5f719999fd9e893623ba84686634f7b92c09713"
	I0912 23:07:16.938856   62943 cri.go:89] found id: "d40483dfc659456939d7965c1ecf1113c1c38e26fe985bf19f26a0123206ea3a"
	I0912 23:07:16.938861   62943 cri.go:89] found id: ""
	I0912 23:07:16.938868   62943 logs.go:276] 2 containers: [3d117ed77ba5f8ccc22fe496a5f719999fd9e893623ba84686634f7b92c09713 d40483dfc659456939d7965c1ecf1113c1c38e26fe985bf19f26a0123206ea3a]
	I0912 23:07:16.938924   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:16.944985   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:16.950257   62943 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:07:16.950290   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 23:07:17.071942   62943 logs.go:123] Gathering logs for kube-apiserver [00f124dff0f77cf23377830c9dc5e2a8676d1a40ebf70730705d537ba7a4b8d3] ...
	I0912 23:07:17.071999   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 00f124dff0f77cf23377830c9dc5e2a8676d1a40ebf70730705d537ba7a4b8d3"
	I0912 23:07:17.120765   62943 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:07:17.120797   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:07:17.636341   62943 logs.go:123] Gathering logs for kubelet ...
	I0912 23:07:17.636387   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:07:17.714095   62943 logs.go:123] Gathering logs for kube-apiserver [3c73944a510413add414948e1c8fd8d71ef732d91b7ddc96646b7ba376f81416] ...
	I0912 23:07:17.714133   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c73944a510413add414948e1c8fd8d71ef732d91b7ddc96646b7ba376f81416"
	I0912 23:07:17.765583   62943 logs.go:123] Gathering logs for etcd [35282e97473f2f870f5366d221b9a40e0a456b6fd44fffe321c9fe160448cc29] ...
	I0912 23:07:17.765637   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35282e97473f2f870f5366d221b9a40e0a456b6fd44fffe321c9fe160448cc29"
	I0912 23:07:17.809278   62943 logs.go:123] Gathering logs for kube-proxy [4c4807559910184befc0b6a9ccb40b1cb9d4997f6b1405b76ba67049ce730f37] ...
	I0912 23:07:17.809309   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c4807559910184befc0b6a9ccb40b1cb9d4997f6b1405b76ba67049ce730f37"
	I0912 23:07:17.845960   62943 logs.go:123] Gathering logs for dmesg ...
	I0912 23:07:17.845984   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:07:17.860171   62943 logs.go:123] Gathering logs for kube-controller-manager [eb473fa0b2d91e95a8716ca3d1bad44b431a3dfbfae11984657c16c7392b58f0] ...
	I0912 23:07:17.860201   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb473fa0b2d91e95a8716ca3d1bad44b431a3dfbfae11984657c16c7392b58f0"
	I0912 23:07:17.926666   62943 logs.go:123] Gathering logs for kube-controller-manager [635fd2c2a6dd2558d88447c857c1c59b9cf6883a458ab9f62fbcd48fc94048f7] ...
	I0912 23:07:17.926711   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 635fd2c2a6dd2558d88447c857c1c59b9cf6883a458ab9f62fbcd48fc94048f7"
	I0912 23:07:17.976830   62943 logs.go:123] Gathering logs for storage-provisioner [3d117ed77ba5f8ccc22fe496a5f719999fd9e893623ba84686634f7b92c09713] ...
	I0912 23:07:17.976862   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d117ed77ba5f8ccc22fe496a5f719999fd9e893623ba84686634f7b92c09713"
	I0912 23:07:18.029551   62943 logs.go:123] Gathering logs for storage-provisioner [d40483dfc659456939d7965c1ecf1113c1c38e26fe985bf19f26a0123206ea3a] ...
	I0912 23:07:18.029590   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d40483dfc659456939d7965c1ecf1113c1c38e26fe985bf19f26a0123206ea3a"
	I0912 23:07:18.089974   62943 logs.go:123] Gathering logs for container status ...
	I0912 23:07:18.090007   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:07:18.151149   62943 logs.go:123] Gathering logs for coredns [e59d289c9afefef6dd746277d09550e5c7d708d7c1ba9f74b2dd91300f807189] ...
	I0912 23:07:18.151175   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e59d289c9afefef6dd746277d09550e5c7d708d7c1ba9f74b2dd91300f807189"
	I0912 23:07:18.191616   62943 logs.go:123] Gathering logs for kube-scheduler [3187fdef2bd318bb335be822a4e75cfd0ac02a49607d8e398ab18102a83437ec] ...
	I0912 23:07:18.191645   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3187fdef2bd318bb335be822a4e75cfd0ac02a49607d8e398ab18102a83437ec"
	I0912 23:07:20.735505   62943 api_server.go:253] Checking apiserver healthz at https://192.168.50.253:8443/healthz ...
	I0912 23:07:20.740261   62943 api_server.go:279] https://192.168.50.253:8443/healthz returned 200:
	ok
	I0912 23:07:20.741163   62943 api_server.go:141] control plane version: v1.31.1
	I0912 23:07:20.741184   62943 api_server.go:131] duration metric: took 4.194131154s to wait for apiserver health ...
	I0912 23:07:20.741193   62943 system_pods.go:43] waiting for kube-system pods to appear ...
	I0912 23:07:20.741219   62943 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:07:20.741275   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:07:20.778572   62943 cri.go:89] found id: "3c73944a510413add414948e1c8fd8d71ef732d91b7ddc96646b7ba376f81416"
	I0912 23:07:20.778596   62943 cri.go:89] found id: "00f124dff0f77cf23377830c9dc5e2a8676d1a40ebf70730705d537ba7a4b8d3"
	I0912 23:07:20.778600   62943 cri.go:89] found id: ""
	I0912 23:07:20.778613   62943 logs.go:276] 2 containers: [3c73944a510413add414948e1c8fd8d71ef732d91b7ddc96646b7ba376f81416 00f124dff0f77cf23377830c9dc5e2a8676d1a40ebf70730705d537ba7a4b8d3]
	I0912 23:07:20.778656   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:20.782575   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:20.786177   62943 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:07:20.786235   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:07:20.822848   62943 cri.go:89] found id: "35282e97473f2f870f5366d221b9a40e0a456b6fd44fffe321c9fe160448cc29"
	I0912 23:07:20.822869   62943 cri.go:89] found id: ""
	I0912 23:07:20.822877   62943 logs.go:276] 1 containers: [35282e97473f2f870f5366d221b9a40e0a456b6fd44fffe321c9fe160448cc29]
	I0912 23:07:20.822930   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:20.827081   62943 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:07:20.827150   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:07:20.862327   62943 cri.go:89] found id: "e59d289c9afefef6dd746277d09550e5c7d708d7c1ba9f74b2dd91300f807189"
	I0912 23:07:20.862358   62943 cri.go:89] found id: ""
	I0912 23:07:20.862369   62943 logs.go:276] 1 containers: [e59d289c9afefef6dd746277d09550e5c7d708d7c1ba9f74b2dd91300f807189]
	I0912 23:07:20.862437   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:20.866899   62943 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:07:20.866974   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:07:20.903397   62943 cri.go:89] found id: "3187fdef2bd318bb335be822a4e75cfd0ac02a49607d8e398ab18102a83437ec"
	I0912 23:07:20.903423   62943 cri.go:89] found id: ""
	I0912 23:07:20.903433   62943 logs.go:276] 1 containers: [3187fdef2bd318bb335be822a4e75cfd0ac02a49607d8e398ab18102a83437ec]
	I0912 23:07:20.903497   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:20.908223   62943 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:07:20.908322   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:07:20.961886   62943 cri.go:89] found id: "4c4807559910184befc0b6a9ccb40b1cb9d4997f6b1405b76ba67049ce730f37"
	I0912 23:07:20.961912   62943 cri.go:89] found id: ""
	I0912 23:07:20.961923   62943 logs.go:276] 1 containers: [4c4807559910184befc0b6a9ccb40b1cb9d4997f6b1405b76ba67049ce730f37]
	I0912 23:07:20.961983   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:20.965943   62943 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:07:20.966005   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:07:21.003792   62943 cri.go:89] found id: "eb473fa0b2d91e95a8716ca3d1bad44b431a3dfbfae11984657c16c7392b58f0"
	I0912 23:07:21.003818   62943 cri.go:89] found id: "635fd2c2a6dd2558d88447c857c1c59b9cf6883a458ab9f62fbcd48fc94048f7"
	I0912 23:07:21.003825   62943 cri.go:89] found id: ""
	I0912 23:07:21.003835   62943 logs.go:276] 2 containers: [eb473fa0b2d91e95a8716ca3d1bad44b431a3dfbfae11984657c16c7392b58f0 635fd2c2a6dd2558d88447c857c1c59b9cf6883a458ab9f62fbcd48fc94048f7]
	I0912 23:07:21.003892   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:21.008651   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:21.012614   62943 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:07:21.012675   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:07:21.051013   62943 cri.go:89] found id: ""
	I0912 23:07:21.051044   62943 logs.go:276] 0 containers: []
	W0912 23:07:21.051055   62943 logs.go:278] No container was found matching "kindnet"
	I0912 23:07:21.051063   62943 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0912 23:07:21.051121   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0912 23:07:21.091038   62943 cri.go:89] found id: "3d117ed77ba5f8ccc22fe496a5f719999fd9e893623ba84686634f7b92c09713"
	I0912 23:07:21.091060   62943 cri.go:89] found id: "d40483dfc659456939d7965c1ecf1113c1c38e26fe985bf19f26a0123206ea3a"
	I0912 23:07:21.091065   62943 cri.go:89] found id: ""
	I0912 23:07:21.091072   62943 logs.go:276] 2 containers: [3d117ed77ba5f8ccc22fe496a5f719999fd9e893623ba84686634f7b92c09713 d40483dfc659456939d7965c1ecf1113c1c38e26fe985bf19f26a0123206ea3a]
	I0912 23:07:21.091126   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:21.095923   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:21.100100   62943 logs.go:123] Gathering logs for dmesg ...
	I0912 23:07:21.100125   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:07:21.113873   62943 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:07:21.113906   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 23:07:21.215199   62943 logs.go:123] Gathering logs for kube-apiserver [3c73944a510413add414948e1c8fd8d71ef732d91b7ddc96646b7ba376f81416] ...
	I0912 23:07:21.215228   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c73944a510413add414948e1c8fd8d71ef732d91b7ddc96646b7ba376f81416"
	I0912 23:07:21.266873   62943 logs.go:123] Gathering logs for kube-apiserver [00f124dff0f77cf23377830c9dc5e2a8676d1a40ebf70730705d537ba7a4b8d3] ...
	I0912 23:07:21.266903   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 00f124dff0f77cf23377830c9dc5e2a8676d1a40ebf70730705d537ba7a4b8d3"
	I0912 23:07:21.307509   62943 logs.go:123] Gathering logs for storage-provisioner [3d117ed77ba5f8ccc22fe496a5f719999fd9e893623ba84686634f7b92c09713] ...
	I0912 23:07:21.307537   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d117ed77ba5f8ccc22fe496a5f719999fd9e893623ba84686634f7b92c09713"
	I0912 23:07:21.349480   62943 logs.go:123] Gathering logs for kubelet ...
	I0912 23:07:21.349505   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:07:21.428721   62943 logs.go:123] Gathering logs for kube-scheduler [3187fdef2bd318bb335be822a4e75cfd0ac02a49607d8e398ab18102a83437ec] ...
	I0912 23:07:21.428754   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3187fdef2bd318bb335be822a4e75cfd0ac02a49607d8e398ab18102a83437ec"
	I0912 23:07:21.469645   62943 logs.go:123] Gathering logs for kube-proxy [4c4807559910184befc0b6a9ccb40b1cb9d4997f6b1405b76ba67049ce730f37] ...
	I0912 23:07:21.469677   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c4807559910184befc0b6a9ccb40b1cb9d4997f6b1405b76ba67049ce730f37"
	I0912 23:07:21.517502   62943 logs.go:123] Gathering logs for kube-controller-manager [eb473fa0b2d91e95a8716ca3d1bad44b431a3dfbfae11984657c16c7392b58f0] ...
	I0912 23:07:21.517529   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb473fa0b2d91e95a8716ca3d1bad44b431a3dfbfae11984657c16c7392b58f0"
	I0912 23:07:21.582523   62943 logs.go:123] Gathering logs for coredns [e59d289c9afefef6dd746277d09550e5c7d708d7c1ba9f74b2dd91300f807189] ...
	I0912 23:07:21.582556   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e59d289c9afefef6dd746277d09550e5c7d708d7c1ba9f74b2dd91300f807189"
	I0912 23:07:21.623846   62943 logs.go:123] Gathering logs for storage-provisioner [d40483dfc659456939d7965c1ecf1113c1c38e26fe985bf19f26a0123206ea3a] ...
	I0912 23:07:21.623885   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d40483dfc659456939d7965c1ecf1113c1c38e26fe985bf19f26a0123206ea3a"
	I0912 23:07:21.670643   62943 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:07:21.670675   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:07:20.238639   61354 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0912 23:07:20.248752   61354 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0912 23:07:20.269785   61354 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0912 23:07:20.269853   61354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 23:07:20.269874   61354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-702201 minikube.k8s.io/updated_at=2024_09_12T23_07_20_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f6bc674a17941874d4e5b792b09c1791d30622b8 minikube.k8s.io/name=default-k8s-diff-port-702201 minikube.k8s.io/primary=true
	I0912 23:07:20.296361   61354 ops.go:34] apiserver oom_adj: -16
	I0912 23:07:20.492168   61354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 23:07:20.992549   61354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 23:07:21.492765   61354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 23:07:21.992850   61354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 23:07:22.492720   61354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 23:07:22.993154   61354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 23:07:23.493116   61354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 23:07:23.992629   61354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 23:07:24.077486   61354 kubeadm.go:1113] duration metric: took 3.807690368s to wait for elevateKubeSystemPrivileges
	I0912 23:07:24.077525   61354 kubeadm.go:394] duration metric: took 4m59.803121736s to StartCluster
	I0912 23:07:24.077547   61354 settings.go:142] acquiring lock: {Name:mk9c957feafb8d7ccd833ad0c106ef81ecfe5ba8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 23:07:24.077652   61354 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19616-5891/kubeconfig
	I0912 23:07:24.080127   61354 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/kubeconfig: {Name:mkffb46c3e9d2b8baebc7237b48bf41bccf1a52c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 23:07:24.080453   61354 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.214 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0912 23:07:24.080486   61354 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0912 23:07:24.080582   61354 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-702201"
	I0912 23:07:24.080556   61354 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-702201"
	I0912 23:07:24.080594   61354 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-702201"
	I0912 23:07:24.080627   61354 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-702201"
	I0912 23:07:24.080650   61354 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-702201"
	W0912 23:07:24.080659   61354 addons.go:243] addon metrics-server should already be in state true
	I0912 23:07:24.080664   61354 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-702201"
	I0912 23:07:24.080691   61354 host.go:66] Checking if "default-k8s-diff-port-702201" exists ...
	I0912 23:07:24.080668   61354 config.go:182] Loaded profile config "default-k8s-diff-port-702201": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	W0912 23:07:24.080691   61354 addons.go:243] addon storage-provisioner should already be in state true
	I0912 23:07:24.080830   61354 host.go:66] Checking if "default-k8s-diff-port-702201" exists ...
	I0912 23:07:24.081061   61354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:07:24.081060   61354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:07:24.081101   61354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:07:24.081144   61354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:07:24.081188   61354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:07:24.081214   61354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:07:24.081973   61354 out.go:177] * Verifying Kubernetes components...
	I0912 23:07:24.083133   61354 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 23:07:24.097005   61354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46703
	I0912 23:07:24.097025   61354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36033
	I0912 23:07:24.097096   61354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41949
	I0912 23:07:24.097438   61354 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:07:24.097464   61354 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:07:24.097525   61354 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:07:24.097994   61354 main.go:141] libmachine: Using API Version  1
	I0912 23:07:24.098015   61354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:07:24.098141   61354 main.go:141] libmachine: Using API Version  1
	I0912 23:07:24.098165   61354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:07:24.098290   61354 main.go:141] libmachine: Using API Version  1
	I0912 23:07:24.098309   61354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:07:24.098399   61354 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:07:24.098545   61354 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:07:24.098726   61354 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:07:24.098731   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetState
	I0912 23:07:24.098994   61354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:07:24.099040   61354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:07:24.099251   61354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:07:24.099283   61354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:07:24.102412   61354 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-702201"
	W0912 23:07:24.102432   61354 addons.go:243] addon default-storageclass should already be in state true
	I0912 23:07:24.102459   61354 host.go:66] Checking if "default-k8s-diff-port-702201" exists ...
	I0912 23:07:24.102797   61354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:07:24.102835   61354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:07:24.117429   61354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46351
	I0912 23:07:24.117980   61354 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:07:24.118513   61354 main.go:141] libmachine: Using API Version  1
	I0912 23:07:24.118533   61354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:07:24.119059   61354 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:07:24.119577   61354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35337
	I0912 23:07:24.119621   61354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:07:24.119656   61354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:07:24.119717   61354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41229
	I0912 23:07:24.120047   61354 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:07:24.120129   61354 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:07:24.120532   61354 main.go:141] libmachine: Using API Version  1
	I0912 23:07:24.120553   61354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:07:24.120810   61354 main.go:141] libmachine: Using API Version  1
	I0912 23:07:24.120834   61354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:07:24.121017   61354 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:07:24.121201   61354 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:07:24.121216   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetState
	I0912 23:07:24.121347   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetState
	I0912 23:07:24.123069   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .DriverName
	I0912 23:07:24.123254   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .DriverName
	I0912 23:07:24.125055   61354 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 23:07:24.125065   61354 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0912 23:07:22.059555   62943 logs.go:123] Gathering logs for container status ...
	I0912 23:07:22.059602   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:07:22.104001   62943 logs.go:123] Gathering logs for etcd [35282e97473f2f870f5366d221b9a40e0a456b6fd44fffe321c9fe160448cc29] ...
	I0912 23:07:22.104039   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35282e97473f2f870f5366d221b9a40e0a456b6fd44fffe321c9fe160448cc29"
	I0912 23:07:22.146304   62943 logs.go:123] Gathering logs for kube-controller-manager [635fd2c2a6dd2558d88447c857c1c59b9cf6883a458ab9f62fbcd48fc94048f7] ...
	I0912 23:07:22.146342   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 635fd2c2a6dd2558d88447c857c1c59b9cf6883a458ab9f62fbcd48fc94048f7"
	I0912 23:07:24.689925   62943 system_pods.go:59] 8 kube-system pods found
	I0912 23:07:24.689959   62943 system_pods.go:61] "coredns-7c65d6cfc9-twck7" [2fb00aff-8a30-4634-a804-1419eabfe727] Running
	I0912 23:07:24.689967   62943 system_pods.go:61] "etcd-no-preload-380092" [69b6be54-dd29-47c7-b990-a64335dd6d7b] Running
	I0912 23:07:24.689974   62943 system_pods.go:61] "kube-apiserver-no-preload-380092" [10ff70db-3c74-42ad-841d-d2241de4b98e] Running
	I0912 23:07:24.689980   62943 system_pods.go:61] "kube-controller-manager-no-preload-380092" [6e91c5b2-36fc-404e-9f09-c1bc9da46774] Running
	I0912 23:07:24.689987   62943 system_pods.go:61] "kube-proxy-z4rcx" [d17caa2e-d0fe-45e8-a96c-d1cc1b55e665] Running
	I0912 23:07:24.689992   62943 system_pods.go:61] "kube-scheduler-no-preload-380092" [5c634cac-6b28-4757-ba85-891c4c2fa34e] Running
	I0912 23:07:24.690002   62943 system_pods.go:61] "metrics-server-6867b74b74-4v7f5" [10c8c536-9ca6-4e75-96f2-7324f3d3d379] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0912 23:07:24.690009   62943 system_pods.go:61] "storage-provisioner" [f173a1f6-3772-4f08-8e40-2215cc9d2878] Running
	I0912 23:07:24.690020   62943 system_pods.go:74] duration metric: took 3.948819191s to wait for pod list to return data ...
	I0912 23:07:24.690031   62943 default_sa.go:34] waiting for default service account to be created ...
	I0912 23:07:24.692936   62943 default_sa.go:45] found service account: "default"
	I0912 23:07:24.692964   62943 default_sa.go:55] duration metric: took 2.925808ms for default service account to be created ...
	I0912 23:07:24.692975   62943 system_pods.go:116] waiting for k8s-apps to be running ...
	I0912 23:07:24.699123   62943 system_pods.go:86] 8 kube-system pods found
	I0912 23:07:24.699155   62943 system_pods.go:89] "coredns-7c65d6cfc9-twck7" [2fb00aff-8a30-4634-a804-1419eabfe727] Running
	I0912 23:07:24.699164   62943 system_pods.go:89] "etcd-no-preload-380092" [69b6be54-dd29-47c7-b990-a64335dd6d7b] Running
	I0912 23:07:24.699170   62943 system_pods.go:89] "kube-apiserver-no-preload-380092" [10ff70db-3c74-42ad-841d-d2241de4b98e] Running
	I0912 23:07:24.699176   62943 system_pods.go:89] "kube-controller-manager-no-preload-380092" [6e91c5b2-36fc-404e-9f09-c1bc9da46774] Running
	I0912 23:07:24.699182   62943 system_pods.go:89] "kube-proxy-z4rcx" [d17caa2e-d0fe-45e8-a96c-d1cc1b55e665] Running
	I0912 23:07:24.699187   62943 system_pods.go:89] "kube-scheduler-no-preload-380092" [5c634cac-6b28-4757-ba85-891c4c2fa34e] Running
	I0912 23:07:24.699197   62943 system_pods.go:89] "metrics-server-6867b74b74-4v7f5" [10c8c536-9ca6-4e75-96f2-7324f3d3d379] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0912 23:07:24.699206   62943 system_pods.go:89] "storage-provisioner" [f173a1f6-3772-4f08-8e40-2215cc9d2878] Running
	I0912 23:07:24.699220   62943 system_pods.go:126] duration metric: took 6.23727ms to wait for k8s-apps to be running ...
	I0912 23:07:24.699232   62943 system_svc.go:44] waiting for kubelet service to be running ....
	I0912 23:07:24.699281   62943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 23:07:24.716425   62943 system_svc.go:56] duration metric: took 17.184595ms WaitForService to wait for kubelet
	I0912 23:07:24.716456   62943 kubeadm.go:582] duration metric: took 4m22.909700986s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 23:07:24.716480   62943 node_conditions.go:102] verifying NodePressure condition ...
	I0912 23:07:24.719606   62943 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0912 23:07:24.719632   62943 node_conditions.go:123] node cpu capacity is 2
	I0912 23:07:24.719645   62943 node_conditions.go:105] duration metric: took 3.158655ms to run NodePressure ...
	I0912 23:07:24.719660   62943 start.go:241] waiting for startup goroutines ...
	I0912 23:07:24.719669   62943 start.go:246] waiting for cluster config update ...
	I0912 23:07:24.719683   62943 start.go:255] writing updated cluster config ...
	I0912 23:07:24.719959   62943 ssh_runner.go:195] Run: rm -f paused
	I0912 23:07:24.782144   62943 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0912 23:07:24.783614   62943 out.go:177] * Done! kubectl is now configured to use "no-preload-380092" cluster and "default" namespace by default
	I0912 23:07:24.126360   61354 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0912 23:07:24.126378   61354 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0912 23:07:24.126401   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHHostname
	I0912 23:07:24.126445   61354 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0912 23:07:24.126458   61354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0912 23:07:24.126472   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHHostname
	I0912 23:07:24.130177   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:07:24.130678   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:07:24.130719   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:07:24.130730   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:07:24.130919   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:07:24.130949   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:07:24.131134   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHPort
	I0912 23:07:24.131203   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHPort
	I0912 23:07:24.131447   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHKeyPath
	I0912 23:07:24.131494   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHKeyPath
	I0912 23:07:24.131659   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHUsername
	I0912 23:07:24.131677   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHUsername
	I0912 23:07:24.131817   61354 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/default-k8s-diff-port-702201/id_rsa Username:docker}
	I0912 23:07:24.131857   61354 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/default-k8s-diff-port-702201/id_rsa Username:docker}
	I0912 23:07:24.139030   61354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35007
	I0912 23:07:24.139501   61354 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:07:24.139949   61354 main.go:141] libmachine: Using API Version  1
	I0912 23:07:24.139973   61354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:07:24.140287   61354 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:07:24.140441   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetState
	I0912 23:07:24.141751   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .DriverName
	I0912 23:07:24.141942   61354 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0912 23:07:24.141957   61354 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0912 23:07:24.141977   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHHostname
	I0912 23:07:24.144033   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:07:24.144415   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:07:24.144563   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHPort
	I0912 23:07:24.144623   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:07:24.144723   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHKeyPath
	I0912 23:07:24.145002   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHUsername
	I0912 23:07:24.145132   61354 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/default-k8s-diff-port-702201/id_rsa Username:docker}
	I0912 23:07:24.279582   61354 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0912 23:07:24.294072   61354 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-702201" to be "Ready" ...
	I0912 23:07:24.304565   61354 node_ready.go:49] node "default-k8s-diff-port-702201" has status "Ready":"True"
	I0912 23:07:24.304588   61354 node_ready.go:38] duration metric: took 10.479351ms for node "default-k8s-diff-port-702201" to be "Ready" ...
	I0912 23:07:24.304599   61354 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 23:07:24.310618   61354 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-702201" in "kube-system" namespace to be "Ready" ...
	I0912 23:07:24.359086   61354 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0912 23:07:24.390490   61354 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0912 23:07:24.409964   61354 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0912 23:07:24.409990   61354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0912 23:07:24.445852   61354 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0912 23:07:24.445880   61354 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0912 23:07:24.502567   61354 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0912 23:07:24.502591   61354 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0912 23:07:24.578857   61354 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0912 23:07:25.348387   61354 main.go:141] libmachine: Making call to close driver server
	I0912 23:07:25.348415   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .Close
	I0912 23:07:25.348715   61354 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:07:25.348732   61354 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:07:25.348740   61354 main.go:141] libmachine: Making call to close driver server
	I0912 23:07:25.348748   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .Close
	I0912 23:07:25.348766   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | Closing plugin on server side
	I0912 23:07:25.348869   61354 main.go:141] libmachine: Making call to close driver server
	I0912 23:07:25.348880   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .Close
	I0912 23:07:25.349007   61354 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:07:25.349022   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | Closing plugin on server side
	I0912 23:07:25.349026   61354 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:07:25.349181   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | Closing plugin on server side
	I0912 23:07:25.349209   61354 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:07:25.349216   61354 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:07:25.349224   61354 main.go:141] libmachine: Making call to close driver server
	I0912 23:07:25.349231   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .Close
	I0912 23:07:25.349497   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | Closing plugin on server side
	I0912 23:07:25.349513   61354 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:07:25.349520   61354 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:07:25.377320   61354 main.go:141] libmachine: Making call to close driver server
	I0912 23:07:25.377345   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .Close
	I0912 23:07:25.377662   61354 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:07:25.377683   61354 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:07:25.377685   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | Closing plugin on server side
	I0912 23:07:25.851960   61354 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.273059994s)
	I0912 23:07:25.852019   61354 main.go:141] libmachine: Making call to close driver server
	I0912 23:07:25.852037   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .Close
	I0912 23:07:25.852373   61354 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:07:25.852398   61354 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:07:25.852408   61354 main.go:141] libmachine: Making call to close driver server
	I0912 23:07:25.852417   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .Close
	I0912 23:07:25.852671   61354 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:07:25.852690   61354 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:07:25.852701   61354 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-702201"
	I0912 23:07:25.854523   61354 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0912 23:07:25.855764   61354 addons.go:510] duration metric: took 1.775274823s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0912 23:07:26.343219   61354 pod_ready.go:103] pod "etcd-default-k8s-diff-port-702201" in "kube-system" namespace has status "Ready":"False"
	I0912 23:07:26.817338   61354 pod_ready.go:93] pod "etcd-default-k8s-diff-port-702201" in "kube-system" namespace has status "Ready":"True"
	I0912 23:07:26.817361   61354 pod_ready.go:82] duration metric: took 2.506720235s for pod "etcd-default-k8s-diff-port-702201" in "kube-system" namespace to be "Ready" ...
	I0912 23:07:26.817371   61354 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-702201" in "kube-system" namespace to be "Ready" ...
	I0912 23:07:28.823968   61354 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-702201" in "kube-system" namespace has status "Ready":"False"
	I0912 23:07:31.324504   61354 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-702201" in "kube-system" namespace has status "Ready":"False"
	I0912 23:07:33.824198   61354 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-702201" in "kube-system" namespace has status "Ready":"True"
	I0912 23:07:33.824218   61354 pod_ready.go:82] duration metric: took 7.006841754s for pod "kube-apiserver-default-k8s-diff-port-702201" in "kube-system" namespace to be "Ready" ...
	I0912 23:07:33.824228   61354 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-702201" in "kube-system" namespace to be "Ready" ...
	I0912 23:07:33.829882   61354 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-702201" in "kube-system" namespace has status "Ready":"True"
	I0912 23:07:33.829903   61354 pod_ready.go:82] duration metric: took 5.668963ms for pod "kube-controller-manager-default-k8s-diff-port-702201" in "kube-system" namespace to be "Ready" ...
	I0912 23:07:33.829912   61354 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-702201" in "kube-system" namespace to be "Ready" ...
	I0912 23:07:33.834773   61354 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-702201" in "kube-system" namespace has status "Ready":"True"
	I0912 23:07:33.834796   61354 pod_ready.go:82] duration metric: took 4.8776ms for pod "kube-scheduler-default-k8s-diff-port-702201" in "kube-system" namespace to be "Ready" ...
	I0912 23:07:33.834805   61354 pod_ready.go:39] duration metric: took 9.530195098s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 23:07:33.834819   61354 api_server.go:52] waiting for apiserver process to appear ...
	I0912 23:07:33.834864   61354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:07:33.850650   61354 api_server.go:72] duration metric: took 9.770155376s to wait for apiserver process to appear ...
	I0912 23:07:33.850671   61354 api_server.go:88] waiting for apiserver healthz status ...
	I0912 23:07:33.850686   61354 api_server.go:253] Checking apiserver healthz at https://192.168.39.214:8444/healthz ...
	I0912 23:07:33.855112   61354 api_server.go:279] https://192.168.39.214:8444/healthz returned 200:
	ok
	I0912 23:07:33.856195   61354 api_server.go:141] control plane version: v1.31.1
	I0912 23:07:33.856213   61354 api_server.go:131] duration metric: took 5.535983ms to wait for apiserver health ...
	I0912 23:07:33.856220   61354 system_pods.go:43] waiting for kube-system pods to appear ...
	I0912 23:07:33.861385   61354 system_pods.go:59] 9 kube-system pods found
	I0912 23:07:33.861415   61354 system_pods.go:61] "coredns-7c65d6cfc9-f5spz" [6a0f69e9-66eb-4e59-a173-1d6f638e2211] Running
	I0912 23:07:33.861422   61354 system_pods.go:61] "coredns-7c65d6cfc9-qhbgf" [0af4199f-b09c-4ab8-8170-b8941d3ece7a] Running
	I0912 23:07:33.861429   61354 system_pods.go:61] "etcd-default-k8s-diff-port-702201" [d8d2e9bb-c8de-4aac-9373-ac9b6d3ec96a] Running
	I0912 23:07:33.861435   61354 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-702201" [7c26cd67-e192-4e8c-a3e1-e7e76a87fae4] Running
	I0912 23:07:33.861440   61354 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-702201" [53553f06-02d5-4603-8418-6bf2ff7b6a25] Running
	I0912 23:07:33.861451   61354 system_pods.go:61] "kube-proxy-mv8ws" [51cb20c3-8445-4ce9-8484-5138f3d0ed57] Running
	I0912 23:07:33.861457   61354 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-702201" [cc25c635-37f2-4186-b5ea-958e95fc4ab2] Running
	I0912 23:07:33.861466   61354 system_pods.go:61] "metrics-server-6867b74b74-w2dvn" [778a4742-5b80-4485-956e-8f169e6dcf8f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0912 23:07:33.861476   61354 system_pods.go:61] "storage-provisioner" [66bc6f77-b774-4478-80d0-a1027802e179] Running
	I0912 23:07:33.861486   61354 system_pods.go:74] duration metric: took 5.260046ms to wait for pod list to return data ...
	I0912 23:07:33.861497   61354 default_sa.go:34] waiting for default service account to be created ...
	I0912 23:07:33.864254   61354 default_sa.go:45] found service account: "default"
	I0912 23:07:33.864272   61354 default_sa.go:55] duration metric: took 2.766344ms for default service account to be created ...
	I0912 23:07:33.864280   61354 system_pods.go:116] waiting for k8s-apps to be running ...
	I0912 23:07:33.869281   61354 system_pods.go:86] 9 kube-system pods found
	I0912 23:07:33.869310   61354 system_pods.go:89] "coredns-7c65d6cfc9-f5spz" [6a0f69e9-66eb-4e59-a173-1d6f638e2211] Running
	I0912 23:07:33.869315   61354 system_pods.go:89] "coredns-7c65d6cfc9-qhbgf" [0af4199f-b09c-4ab8-8170-b8941d3ece7a] Running
	I0912 23:07:33.869320   61354 system_pods.go:89] "etcd-default-k8s-diff-port-702201" [d8d2e9bb-c8de-4aac-9373-ac9b6d3ec96a] Running
	I0912 23:07:33.869324   61354 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-702201" [7c26cd67-e192-4e8c-a3e1-e7e76a87fae4] Running
	I0912 23:07:33.869328   61354 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-702201" [53553f06-02d5-4603-8418-6bf2ff7b6a25] Running
	I0912 23:07:33.869332   61354 system_pods.go:89] "kube-proxy-mv8ws" [51cb20c3-8445-4ce9-8484-5138f3d0ed57] Running
	I0912 23:07:33.869335   61354 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-702201" [cc25c635-37f2-4186-b5ea-958e95fc4ab2] Running
	I0912 23:07:33.869341   61354 system_pods.go:89] "metrics-server-6867b74b74-w2dvn" [778a4742-5b80-4485-956e-8f169e6dcf8f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0912 23:07:33.869349   61354 system_pods.go:89] "storage-provisioner" [66bc6f77-b774-4478-80d0-a1027802e179] Running
	I0912 23:07:33.869362   61354 system_pods.go:126] duration metric: took 5.073128ms to wait for k8s-apps to be running ...
	I0912 23:07:33.869371   61354 system_svc.go:44] waiting for kubelet service to be running ....
	I0912 23:07:33.869410   61354 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 23:07:33.885244   61354 system_svc.go:56] duration metric: took 15.863852ms WaitForService to wait for kubelet
	I0912 23:07:33.885284   61354 kubeadm.go:582] duration metric: took 9.804792247s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 23:07:33.885302   61354 node_conditions.go:102] verifying NodePressure condition ...
	I0912 23:07:33.889009   61354 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0912 23:07:33.889041   61354 node_conditions.go:123] node cpu capacity is 2
	I0912 23:07:33.889054   61354 node_conditions.go:105] duration metric: took 3.746289ms to run NodePressure ...
	I0912 23:07:33.889069   61354 start.go:241] waiting for startup goroutines ...
	I0912 23:07:33.889079   61354 start.go:246] waiting for cluster config update ...
	I0912 23:07:33.889092   61354 start.go:255] writing updated cluster config ...
	I0912 23:07:33.889427   61354 ssh_runner.go:195] Run: rm -f paused
	I0912 23:07:33.940577   61354 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0912 23:07:33.942471   61354 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-702201" cluster and "default" namespace by default
	I0912 23:07:47.603025   62386 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0912 23:07:47.603235   62386 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0912 23:07:47.604779   62386 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0912 23:07:47.604883   62386 kubeadm.go:310] [preflight] Running pre-flight checks
	I0912 23:07:47.605084   62386 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0912 23:07:47.605337   62386 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0912 23:07:47.605566   62386 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0912 23:07:47.605831   62386 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0912 23:07:47.607788   62386 out.go:235]   - Generating certificates and keys ...
	I0912 23:07:47.607900   62386 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0912 23:07:47.608013   62386 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0912 23:07:47.608164   62386 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0912 23:07:47.608343   62386 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0912 23:07:47.608510   62386 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0912 23:07:47.608593   62386 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0912 23:07:47.608669   62386 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0912 23:07:47.608742   62386 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0912 23:07:47.608833   62386 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0912 23:07:47.608899   62386 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0912 23:07:47.608932   62386 kubeadm.go:310] [certs] Using the existing "sa" key
	I0912 23:07:47.608991   62386 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0912 23:07:47.609042   62386 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0912 23:07:47.609118   62386 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0912 23:07:47.609216   62386 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0912 23:07:47.609310   62386 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0912 23:07:47.609448   62386 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0912 23:07:47.609540   62386 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0912 23:07:47.609604   62386 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0912 23:07:47.609731   62386 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0912 23:07:47.611516   62386 out.go:235]   - Booting up control plane ...
	I0912 23:07:47.611622   62386 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0912 23:07:47.611724   62386 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0912 23:07:47.611811   62386 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0912 23:07:47.611912   62386 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0912 23:07:47.612092   62386 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0912 23:07:47.612156   62386 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0912 23:07:47.612234   62386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0912 23:07:47.612485   62386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0912 23:07:47.612557   62386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0912 23:07:47.612746   62386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0912 23:07:47.612836   62386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0912 23:07:47.613060   62386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0912 23:07:47.613145   62386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0912 23:07:47.613347   62386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0912 23:07:47.613406   62386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0912 23:07:47.613573   62386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0912 23:07:47.613583   62386 kubeadm.go:310] 
	I0912 23:07:47.613646   62386 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0912 23:07:47.613700   62386 kubeadm.go:310] 		timed out waiting for the condition
	I0912 23:07:47.613712   62386 kubeadm.go:310] 
	I0912 23:07:47.613756   62386 kubeadm.go:310] 	This error is likely caused by:
	I0912 23:07:47.613804   62386 kubeadm.go:310] 		- The kubelet is not running
	I0912 23:07:47.613912   62386 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0912 23:07:47.613924   62386 kubeadm.go:310] 
	I0912 23:07:47.614027   62386 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0912 23:07:47.614062   62386 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0912 23:07:47.614110   62386 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0912 23:07:47.614123   62386 kubeadm.go:310] 
	I0912 23:07:47.614256   62386 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0912 23:07:47.614381   62386 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0912 23:07:47.614393   62386 kubeadm.go:310] 
	I0912 23:07:47.614480   62386 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0912 23:07:47.614626   62386 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0912 23:07:47.614724   62386 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0912 23:07:47.614825   62386 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0912 23:07:47.614854   62386 kubeadm.go:310] 
	W0912 23:07:47.614957   62386 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0912 23:07:47.615000   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0912 23:07:48.085695   62386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 23:07:48.100416   62386 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0912 23:07:48.109607   62386 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0912 23:07:48.109635   62386 kubeadm.go:157] found existing configuration files:
	
	I0912 23:07:48.109686   62386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0912 23:07:48.118174   62386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0912 23:07:48.118235   62386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0912 23:07:48.127100   62386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0912 23:07:48.135945   62386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0912 23:07:48.136006   62386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0912 23:07:48.145057   62386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0912 23:07:48.153832   62386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0912 23:07:48.153899   62386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0912 23:07:48.163261   62386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0912 23:07:48.172155   62386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0912 23:07:48.172208   62386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0912 23:07:48.181592   62386 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0912 23:07:48.253671   62386 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0912 23:07:48.253728   62386 kubeadm.go:310] [preflight] Running pre-flight checks
	I0912 23:07:48.394463   62386 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0912 23:07:48.394622   62386 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0912 23:07:48.394773   62386 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0912 23:07:48.581336   62386 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0912 23:07:48.583286   62386 out.go:235]   - Generating certificates and keys ...
	I0912 23:07:48.583391   62386 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0912 23:07:48.583461   62386 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0912 23:07:48.583576   62386 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0912 23:07:48.583668   62386 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0912 23:07:48.583751   62386 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0912 23:07:48.583830   62386 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0912 23:07:48.583935   62386 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0912 23:07:48.584060   62386 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0912 23:07:48.584176   62386 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0912 23:07:48.584291   62386 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0912 23:07:48.584349   62386 kubeadm.go:310] [certs] Using the existing "sa" key
	I0912 23:07:48.584433   62386 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0912 23:07:48.823726   62386 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0912 23:07:49.148359   62386 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0912 23:07:49.679842   62386 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0912 23:07:50.116403   62386 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0912 23:07:50.137409   62386 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0912 23:07:50.137512   62386 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0912 23:07:50.137586   62386 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0912 23:07:50.279387   62386 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0912 23:07:50.281202   62386 out.go:235]   - Booting up control plane ...
	I0912 23:07:50.281311   62386 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0912 23:07:50.284914   62386 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0912 23:07:50.285938   62386 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0912 23:07:50.286646   62386 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0912 23:07:50.288744   62386 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0912 23:08:30.291301   62386 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0912 23:08:30.291387   62386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0912 23:08:30.291586   62386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0912 23:08:35.292084   62386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0912 23:08:35.292299   62386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0912 23:08:45.293141   62386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0912 23:08:45.293363   62386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0912 23:09:05.293977   62386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0912 23:09:05.294218   62386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0912 23:09:45.292498   62386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0912 23:09:45.292713   62386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0912 23:09:45.292752   62386 kubeadm.go:310] 
	I0912 23:09:45.292839   62386 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0912 23:09:45.292884   62386 kubeadm.go:310] 		timed out waiting for the condition
	I0912 23:09:45.292892   62386 kubeadm.go:310] 
	I0912 23:09:45.292944   62386 kubeadm.go:310] 	This error is likely caused by:
	I0912 23:09:45.292998   62386 kubeadm.go:310] 		- The kubelet is not running
	I0912 23:09:45.293153   62386 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0912 23:09:45.293165   62386 kubeadm.go:310] 
	I0912 23:09:45.293277   62386 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0912 23:09:45.293333   62386 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0912 23:09:45.293361   62386 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0912 23:09:45.293378   62386 kubeadm.go:310] 
	I0912 23:09:45.293528   62386 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0912 23:09:45.293668   62386 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0912 23:09:45.293679   62386 kubeadm.go:310] 
	I0912 23:09:45.293840   62386 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0912 23:09:45.293962   62386 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0912 23:09:45.294033   62386 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0912 23:09:45.294142   62386 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0912 23:09:45.294155   62386 kubeadm.go:310] 
	I0912 23:09:45.294801   62386 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0912 23:09:45.294914   62386 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0912 23:09:45.295004   62386 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0912 23:09:45.295097   62386 kubeadm.go:394] duration metric: took 7m57.408601522s to StartCluster
	I0912 23:09:45.295168   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:09:45.295233   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:09:45.336726   62386 cri.go:89] found id: ""
	I0912 23:09:45.336767   62386 logs.go:276] 0 containers: []
	W0912 23:09:45.336777   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:09:45.336785   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:09:45.336847   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:09:45.374528   62386 cri.go:89] found id: ""
	I0912 23:09:45.374555   62386 logs.go:276] 0 containers: []
	W0912 23:09:45.374576   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:09:45.374584   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:09:45.374649   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:09:45.409321   62386 cri.go:89] found id: ""
	I0912 23:09:45.409462   62386 logs.go:276] 0 containers: []
	W0912 23:09:45.409497   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:09:45.409508   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:09:45.409582   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:09:45.442204   62386 cri.go:89] found id: ""
	I0912 23:09:45.442228   62386 logs.go:276] 0 containers: []
	W0912 23:09:45.442238   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:09:45.442279   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:09:45.442339   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:09:45.478874   62386 cri.go:89] found id: ""
	I0912 23:09:45.478897   62386 logs.go:276] 0 containers: []
	W0912 23:09:45.478904   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:09:45.478909   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:09:45.478961   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:09:45.520162   62386 cri.go:89] found id: ""
	I0912 23:09:45.520191   62386 logs.go:276] 0 containers: []
	W0912 23:09:45.520199   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:09:45.520205   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:09:45.520251   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:09:45.551580   62386 cri.go:89] found id: ""
	I0912 23:09:45.551611   62386 logs.go:276] 0 containers: []
	W0912 23:09:45.551622   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:09:45.551629   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:09:45.551693   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:09:45.585468   62386 cri.go:89] found id: ""
	I0912 23:09:45.585498   62386 logs.go:276] 0 containers: []
	W0912 23:09:45.585505   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:09:45.585514   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:09:45.585525   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:09:45.640731   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:09:45.640782   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:09:45.656797   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:09:45.656833   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:09:45.735064   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:09:45.735083   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:09:45.735100   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:09:45.848695   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:09:45.848739   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0912 23:09:45.907495   62386 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0912 23:09:45.907561   62386 out.go:270] * 
	W0912 23:09:45.907628   62386 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0912 23:09:45.907646   62386 out.go:270] * 
	W0912 23:09:45.908494   62386 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0912 23:09:45.911502   62386 out.go:201] 
	W0912 23:09:45.912387   62386 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0912 23:09:45.912424   62386 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0912 23:09:45.912442   62386 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0912 23:09:45.913632   62386 out.go:201] 
	
	
	==> CRI-O <==
	Sep 12 23:18:51 old-k8s-version-642238 crio[632]: time="2024-09-12 23:18:51.230890053Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726183131230859327,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=da5c2124-8baf-4f4b-8bd0-35d51da96d0f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 23:18:51 old-k8s-version-642238 crio[632]: time="2024-09-12 23:18:51.231504483Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3a43c4f3-2e05-423b-a62f-fa5ef1f7b6b7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 23:18:51 old-k8s-version-642238 crio[632]: time="2024-09-12 23:18:51.231585202Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3a43c4f3-2e05-423b-a62f-fa5ef1f7b6b7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 23:18:51 old-k8s-version-642238 crio[632]: time="2024-09-12 23:18:51.231624984Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=3a43c4f3-2e05-423b-a62f-fa5ef1f7b6b7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 23:18:51 old-k8s-version-642238 crio[632]: time="2024-09-12 23:18:51.261575280Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bcf9e966-4268-4498-83ab-3059bb56e55c name=/runtime.v1.RuntimeService/Version
	Sep 12 23:18:51 old-k8s-version-642238 crio[632]: time="2024-09-12 23:18:51.261671500Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bcf9e966-4268-4498-83ab-3059bb56e55c name=/runtime.v1.RuntimeService/Version
	Sep 12 23:18:51 old-k8s-version-642238 crio[632]: time="2024-09-12 23:18:51.262760844Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5a537c28-de81-4a3a-9880-59f8eb2a4bda name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 23:18:51 old-k8s-version-642238 crio[632]: time="2024-09-12 23:18:51.263248363Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726183131263219513,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5a537c28-de81-4a3a-9880-59f8eb2a4bda name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 23:18:51 old-k8s-version-642238 crio[632]: time="2024-09-12 23:18:51.263826916Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=57f02186-b9bb-40cb-9700-05e61b1ae682 name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 23:18:51 old-k8s-version-642238 crio[632]: time="2024-09-12 23:18:51.263887639Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=57f02186-b9bb-40cb-9700-05e61b1ae682 name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 23:18:51 old-k8s-version-642238 crio[632]: time="2024-09-12 23:18:51.263921732Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=57f02186-b9bb-40cb-9700-05e61b1ae682 name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 23:18:51 old-k8s-version-642238 crio[632]: time="2024-09-12 23:18:51.293770698Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d1b58285-c2cf-42b8-b3e9-006558810f70 name=/runtime.v1.RuntimeService/Version
	Sep 12 23:18:51 old-k8s-version-642238 crio[632]: time="2024-09-12 23:18:51.293838712Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d1b58285-c2cf-42b8-b3e9-006558810f70 name=/runtime.v1.RuntimeService/Version
	Sep 12 23:18:51 old-k8s-version-642238 crio[632]: time="2024-09-12 23:18:51.295149650Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=011961e1-31c2-444f-82d1-c604d5a1e42c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 23:18:51 old-k8s-version-642238 crio[632]: time="2024-09-12 23:18:51.295623513Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726183131295586497,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=011961e1-31c2-444f-82d1-c604d5a1e42c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 23:18:51 old-k8s-version-642238 crio[632]: time="2024-09-12 23:18:51.296363282Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=af4f9844-2df3-4dfa-aef9-f3425fc02f7e name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 23:18:51 old-k8s-version-642238 crio[632]: time="2024-09-12 23:18:51.296417437Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=af4f9844-2df3-4dfa-aef9-f3425fc02f7e name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 23:18:51 old-k8s-version-642238 crio[632]: time="2024-09-12 23:18:51.296456211Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=af4f9844-2df3-4dfa-aef9-f3425fc02f7e name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 23:18:51 old-k8s-version-642238 crio[632]: time="2024-09-12 23:18:51.330921347Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e1e70068-0ab9-4349-bd5b-aaf670f869df name=/runtime.v1.RuntimeService/Version
	Sep 12 23:18:51 old-k8s-version-642238 crio[632]: time="2024-09-12 23:18:51.330998634Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e1e70068-0ab9-4349-bd5b-aaf670f869df name=/runtime.v1.RuntimeService/Version
	Sep 12 23:18:51 old-k8s-version-642238 crio[632]: time="2024-09-12 23:18:51.333145202Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0460316b-c463-4f27-98d3-f0ef53fe820a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 23:18:51 old-k8s-version-642238 crio[632]: time="2024-09-12 23:18:51.333580133Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726183131333558257,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0460316b-c463-4f27-98d3-f0ef53fe820a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 23:18:51 old-k8s-version-642238 crio[632]: time="2024-09-12 23:18:51.334298677Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=04d92864-e2d8-4bc1-a6da-14dfab82e667 name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 23:18:51 old-k8s-version-642238 crio[632]: time="2024-09-12 23:18:51.334368166Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=04d92864-e2d8-4bc1-a6da-14dfab82e667 name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 23:18:51 old-k8s-version-642238 crio[632]: time="2024-09-12 23:18:51.334407701Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=04d92864-e2d8-4bc1-a6da-14dfab82e667 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Sep12 23:01] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050669] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039909] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.881907] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.909528] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.539678] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.094180] systemd-fstab-generator[560]: Ignoring "noauto" option for root device
	[  +0.073198] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.070849] systemd-fstab-generator[572]: Ignoring "noauto" option for root device
	[  +0.223496] systemd-fstab-generator[586]: Ignoring "noauto" option for root device
	[  +0.134982] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.261562] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +6.482703] systemd-fstab-generator[882]: Ignoring "noauto" option for root device
	[  +0.067645] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.600190] systemd-fstab-generator[1006]: Ignoring "noauto" option for root device
	[Sep12 23:02] kauditd_printk_skb: 46 callbacks suppressed
	[Sep12 23:05] systemd-fstab-generator[5025]: Ignoring "noauto" option for root device
	[Sep12 23:07] systemd-fstab-generator[5303]: Ignoring "noauto" option for root device
	[  +0.064469] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 23:18:51 up 17 min,  0 users,  load average: 0.00, 0.02, 0.03
	Linux old-k8s-version-642238 5.10.207 #1 SMP Thu Sep 12 19:03:33 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Sep 12 23:18:51 old-k8s-version-642238 kubelet[6491]: net.(*Resolver).resolveAddrList(0x70c5740, 0x4f7fe40, 0xc0000e04e0, 0x48abf6d, 0x4, 0x48ab5d6, 0x3, 0xc0005d1920, 0x24, 0x0, ...)
	Sep 12 23:18:51 old-k8s-version-642238 kubelet[6491]:         /usr/local/go/src/net/dial.go:221 +0x47d
	Sep 12 23:18:51 old-k8s-version-642238 kubelet[6491]: net.(*Dialer).DialContext(0xc000be51a0, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc0005d1920, 0x24, 0x0, 0x0, 0x0, ...)
	Sep 12 23:18:51 old-k8s-version-642238 kubelet[6491]:         /usr/local/go/src/net/dial.go:403 +0x22b
	Sep 12 23:18:51 old-k8s-version-642238 kubelet[6491]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc000bede00, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc0005d1920, 0x24, 0x60, 0x7f634d6e1df0, 0x118, ...)
	Sep 12 23:18:51 old-k8s-version-642238 kubelet[6491]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Sep 12 23:18:51 old-k8s-version-642238 kubelet[6491]: net/http.(*Transport).dial(0xc0008e92c0, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc0005d1920, 0x24, 0x0, 0x0, 0x0, ...)
	Sep 12 23:18:51 old-k8s-version-642238 kubelet[6491]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Sep 12 23:18:51 old-k8s-version-642238 kubelet[6491]: net/http.(*Transport).dialConn(0xc0008e92c0, 0x4f7fe00, 0xc000120018, 0x0, 0xc000014f00, 0x5, 0xc0005d1920, 0x24, 0x0, 0xc0006c4360, ...)
	Sep 12 23:18:51 old-k8s-version-642238 kubelet[6491]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Sep 12 23:18:51 old-k8s-version-642238 kubelet[6491]: net/http.(*Transport).dialConnFor(0xc0008e92c0, 0xc000018160)
	Sep 12 23:18:51 old-k8s-version-642238 kubelet[6491]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Sep 12 23:18:51 old-k8s-version-642238 kubelet[6491]: created by net/http.(*Transport).queueForDial
	Sep 12 23:18:51 old-k8s-version-642238 kubelet[6491]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Sep 12 23:18:51 old-k8s-version-642238 kubelet[6491]: goroutine 162 [runnable]:
	Sep 12 23:18:51 old-k8s-version-642238 kubelet[6491]: runtime.Gosched(...)
	Sep 12 23:18:51 old-k8s-version-642238 kubelet[6491]:         /usr/local/go/src/runtime/proc.go:271
	Sep 12 23:18:51 old-k8s-version-642238 kubelet[6491]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*loopyWriter).run(0xc0000e1020, 0x0, 0x0)
	Sep 12 23:18:51 old-k8s-version-642238 kubelet[6491]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:549 +0x1a5
	Sep 12 23:18:51 old-k8s-version-642238 kubelet[6491]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc0009fc8c0)
	Sep 12 23:18:51 old-k8s-version-642238 kubelet[6491]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:346 +0x7b
	Sep 12 23:18:51 old-k8s-version-642238 kubelet[6491]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Sep 12 23:18:51 old-k8s-version-642238 kubelet[6491]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	Sep 12 23:18:51 old-k8s-version-642238 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Sep 12 23:18:51 old-k8s-version-642238 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-642238 -n old-k8s-version-642238
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-642238 -n old-k8s-version-642238: exit status 2 (227.804001ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-642238" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.51s)
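The kubeadm output captured above already names the troubleshooting commands, and minikube's own suggestion line points at the kubelet cgroup driver. The sketch below simply collects those commands for a manual follow-up on the node; it assumes the same profile name, driver and Kubernetes version as the failed run (old-k8s-version-642238, kvm2, crio, v1.20.0) and is not part of the recorded test output.

# inspect the kubelet on the node (commands quoted from the kubeadm advice above)
systemctl status kubelet
journalctl -xeu kubelet
crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause

# retry the start with the workaround suggested in the log above
minikube start -p old-k8s-version-642238 --driver=kvm2 --container-runtime=crio \
  --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd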

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (456.06s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-378112 -n embed-certs-378112
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-09-12 23:22:36.35214106 +0000 UTC m=+6823.200523987
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-378112 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-378112 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.184µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-378112 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
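As a manual cross-check of the condition this test asserts, the commands below list the dashboard pods by the k8s-app=kubernetes-dashboard label and read the scraper deployment's image, which the test expects to contain registry.k8s.io/echoserver:1.4. This is a hand-run sketch against the embed-certs-378112 context, not output captured by the harness.

kubectl --context embed-certs-378112 get pods -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard
kubectl --context embed-certs-378112 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
kubectl --context embed-certs-378112 get deploy/dashboard-metrics-scraper -n kubernetes-dashboard \
  -o jsonpath='{.spec.template.spec.containers[*].image}'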
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-378112 -n embed-certs-378112
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-378112 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-378112 logs -n 25: (1.580224883s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable dashboard -p newest-cni-837491                  | newest-cni-837491            | jenkins | v1.34.0 | 12 Sep 24 22:55 UTC | 12 Sep 24 22:55 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-837491 --memory=2200 --alsologtostderr   | newest-cni-837491            | jenkins | v1.34.0 | 12 Sep 24 22:55 UTC | 12 Sep 24 22:55 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| image   | newest-cni-837491 image list                           | newest-cni-837491            | jenkins | v1.34.0 | 12 Sep 24 22:55 UTC | 12 Sep 24 22:55 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-837491                                   | newest-cni-837491            | jenkins | v1.34.0 | 12 Sep 24 22:55 UTC | 12 Sep 24 22:55 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-837491                                   | newest-cni-837491            | jenkins | v1.34.0 | 12 Sep 24 22:55 UTC | 12 Sep 24 22:55 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-837491                                   | newest-cni-837491            | jenkins | v1.34.0 | 12 Sep 24 22:55 UTC | 12 Sep 24 22:55 UTC |
	| delete  | -p newest-cni-837491                                   | newest-cni-837491            | jenkins | v1.34.0 | 12 Sep 24 22:55 UTC | 12 Sep 24 22:55 UTC |
	| delete  | -p                                                     | disable-driver-mounts-457722 | jenkins | v1.34.0 | 12 Sep 24 22:55 UTC | 12 Sep 24 22:55 UTC |
	|         | disable-driver-mounts-457722                           |                              |         |         |                     |                     |
	| start   | -p no-preload-380092                                   | no-preload-380092            | jenkins | v1.34.0 | 12 Sep 24 22:55 UTC | 12 Sep 24 22:57 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-702201       | default-k8s-diff-port-702201 | jenkins | v1.34.0 | 12 Sep 24 22:56 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-702201 | jenkins | v1.34.0 | 12 Sep 24 22:56 UTC | 12 Sep 24 23:07 UTC |
	|         | default-k8s-diff-port-702201                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-642238        | old-k8s-version-642238       | jenkins | v1.34.0 | 12 Sep 24 22:56 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-378112                 | embed-certs-378112           | jenkins | v1.34.0 | 12 Sep 24 22:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-378112                                  | embed-certs-378112           | jenkins | v1.34.0 | 12 Sep 24 22:57 UTC | 12 Sep 24 23:05 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-380092             | no-preload-380092            | jenkins | v1.34.0 | 12 Sep 24 22:57 UTC | 12 Sep 24 22:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-380092                                   | no-preload-380092            | jenkins | v1.34.0 | 12 Sep 24 22:57 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-642238                              | old-k8s-version-642238       | jenkins | v1.34.0 | 12 Sep 24 22:58 UTC | 12 Sep 24 22:58 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-642238             | old-k8s-version-642238       | jenkins | v1.34.0 | 12 Sep 24 22:58 UTC | 12 Sep 24 22:58 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-642238                              | old-k8s-version-642238       | jenkins | v1.34.0 | 12 Sep 24 22:58 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-380092                  | no-preload-380092            | jenkins | v1.34.0 | 12 Sep 24 23:00 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-380092                                   | no-preload-380092            | jenkins | v1.34.0 | 12 Sep 24 23:00 UTC | 12 Sep 24 23:07 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-642238                              | old-k8s-version-642238       | jenkins | v1.34.0 | 12 Sep 24 23:22 UTC | 12 Sep 24 23:22 UTC |
	| start   | -p auto-938961 --memory=3072                           | auto-938961                  | jenkins | v1.34.0 | 12 Sep 24 23:22 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-702201 | jenkins | v1.34.0 | 12 Sep 24 23:22 UTC | 12 Sep 24 23:22 UTC |
	|         | default-k8s-diff-port-702201                           |                              |         |         |                     |                     |
	| start   | -p kindnet-938961                                      | kindnet-938961               | jenkins | v1.34.0 | 12 Sep 24 23:22 UTC |                     |
	|         | --memory=3072                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --cni=kindnet --driver=kvm2                            |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/12 23:22:16
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0912 23:22:16.385412   69622 out.go:345] Setting OutFile to fd 1 ...
	I0912 23:22:16.385722   69622 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 23:22:16.385734   69622 out.go:358] Setting ErrFile to fd 2...
	I0912 23:22:16.385740   69622 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 23:22:16.386030   69622 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-5891/.minikube/bin
	I0912 23:22:16.386764   69622 out.go:352] Setting JSON to false
	I0912 23:22:16.388011   69622 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7478,"bootTime":1726175858,"procs":224,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0912 23:22:16.388095   69622 start.go:139] virtualization: kvm guest
	I0912 23:22:16.390558   69622 out.go:177] * [kindnet-938961] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0912 23:22:16.391897   69622 out.go:177]   - MINIKUBE_LOCATION=19616
	I0912 23:22:16.391941   69622 notify.go:220] Checking for updates...
	I0912 23:22:16.394550   69622 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 23:22:16.395724   69622 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19616-5891/kubeconfig
	I0912 23:22:16.396949   69622 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19616-5891/.minikube
	I0912 23:22:16.398566   69622 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0912 23:22:16.399911   69622 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 23:22:16.401783   69622 config.go:182] Loaded profile config "auto-938961": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 23:22:16.401909   69622 config.go:182] Loaded profile config "embed-certs-378112": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 23:22:16.402016   69622 config.go:182] Loaded profile config "no-preload-380092": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 23:22:16.402115   69622 driver.go:394] Setting default libvirt URI to qemu:///system
	I0912 23:22:16.440634   69622 out.go:177] * Using the kvm2 driver based on user configuration
	I0912 23:22:16.442359   69622 start.go:297] selected driver: kvm2
	I0912 23:22:16.442401   69622 start.go:901] validating driver "kvm2" against <nil>
	I0912 23:22:16.442429   69622 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 23:22:16.443213   69622 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 23:22:16.443348   69622 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19616-5891/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0912 23:22:16.459808   69622 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0912 23:22:16.459862   69622 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0912 23:22:16.460091   69622 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 23:22:16.460167   69622 cni.go:84] Creating CNI manager for "kindnet"
	I0912 23:22:16.460182   69622 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0912 23:22:16.460249   69622 start.go:340] cluster config:
	{Name:kindnet-938961 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-938961 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 23:22:16.460357   69622 iso.go:125] acquiring lock: {Name:mk3ec3c4afd4210b7425f6425f55e7f581d9a5a9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 23:22:16.462418   69622 out.go:177] * Starting "kindnet-938961" primary control-plane node in "kindnet-938961" cluster
	I0912 23:22:16.357153   69283 main.go:141] libmachine: (auto-938961) DBG | domain auto-938961 has defined MAC address 52:54:00:5a:c5:74 in network mk-auto-938961
	I0912 23:22:16.357839   69283 main.go:141] libmachine: (auto-938961) DBG | unable to find current IP address of domain auto-938961 in network mk-auto-938961
	I0912 23:22:16.357868   69283 main.go:141] libmachine: (auto-938961) DBG | I0912 23:22:16.357813   69305 retry.go:31] will retry after 1.426845477s: waiting for machine to come up
	I0912 23:22:17.785878   69283 main.go:141] libmachine: (auto-938961) DBG | domain auto-938961 has defined MAC address 52:54:00:5a:c5:74 in network mk-auto-938961
	I0912 23:22:17.786283   69283 main.go:141] libmachine: (auto-938961) DBG | unable to find current IP address of domain auto-938961 in network mk-auto-938961
	I0912 23:22:17.786311   69283 main.go:141] libmachine: (auto-938961) DBG | I0912 23:22:17.786247   69305 retry.go:31] will retry after 1.273581755s: waiting for machine to come up
	I0912 23:22:19.061794   69283 main.go:141] libmachine: (auto-938961) DBG | domain auto-938961 has defined MAC address 52:54:00:5a:c5:74 in network mk-auto-938961
	I0912 23:22:19.062297   69283 main.go:141] libmachine: (auto-938961) DBG | unable to find current IP address of domain auto-938961 in network mk-auto-938961
	I0912 23:22:19.062322   69283 main.go:141] libmachine: (auto-938961) DBG | I0912 23:22:19.062243   69305 retry.go:31] will retry after 1.468830562s: waiting for machine to come up
	I0912 23:22:20.533036   69283 main.go:141] libmachine: (auto-938961) DBG | domain auto-938961 has defined MAC address 52:54:00:5a:c5:74 in network mk-auto-938961
	I0912 23:22:20.533516   69283 main.go:141] libmachine: (auto-938961) DBG | unable to find current IP address of domain auto-938961 in network mk-auto-938961
	I0912 23:22:20.533546   69283 main.go:141] libmachine: (auto-938961) DBG | I0912 23:22:20.533461   69305 retry.go:31] will retry after 2.236540014s: waiting for machine to come up
	I0912 23:22:16.463513   69622 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0912 23:22:16.463554   69622 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0912 23:22:16.463564   69622 cache.go:56] Caching tarball of preloaded images
	I0912 23:22:16.463634   69622 preload.go:172] Found /home/jenkins/minikube-integration/19616-5891/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0912 23:22:16.463647   69622 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0912 23:22:16.463783   69622 profile.go:143] Saving config to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/kindnet-938961/config.json ...
	I0912 23:22:16.463809   69622 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/kindnet-938961/config.json: {Name:mk01c8cebd3def157ffd6e0af943a0480108a051 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 23:22:16.463964   69622 start.go:360] acquireMachinesLock for kindnet-938961: {Name:mkbb0a9e58b1349e86a63b6069c42d4248d92c3b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 23:22:22.773077   69283 main.go:141] libmachine: (auto-938961) DBG | domain auto-938961 has defined MAC address 52:54:00:5a:c5:74 in network mk-auto-938961
	I0912 23:22:22.773525   69283 main.go:141] libmachine: (auto-938961) DBG | unable to find current IP address of domain auto-938961 in network mk-auto-938961
	I0912 23:22:22.773550   69283 main.go:141] libmachine: (auto-938961) DBG | I0912 23:22:22.773487   69305 retry.go:31] will retry after 2.660643157s: waiting for machine to come up
	I0912 23:22:25.437299   69283 main.go:141] libmachine: (auto-938961) DBG | domain auto-938961 has defined MAC address 52:54:00:5a:c5:74 in network mk-auto-938961
	I0912 23:22:25.437814   69283 main.go:141] libmachine: (auto-938961) DBG | unable to find current IP address of domain auto-938961 in network mk-auto-938961
	I0912 23:22:25.437850   69283 main.go:141] libmachine: (auto-938961) DBG | I0912 23:22:25.437761   69305 retry.go:31] will retry after 3.856544325s: waiting for machine to come up
	I0912 23:22:29.298275   69283 main.go:141] libmachine: (auto-938961) DBG | domain auto-938961 has defined MAC address 52:54:00:5a:c5:74 in network mk-auto-938961
	I0912 23:22:29.298774   69283 main.go:141] libmachine: (auto-938961) DBG | unable to find current IP address of domain auto-938961 in network mk-auto-938961
	I0912 23:22:29.298804   69283 main.go:141] libmachine: (auto-938961) DBG | I0912 23:22:29.298727   69305 retry.go:31] will retry after 3.755275635s: waiting for machine to come up
	I0912 23:22:34.818616   69622 start.go:364] duration metric: took 18.354604016s to acquireMachinesLock for "kindnet-938961"
	I0912 23:22:34.818669   69622 start.go:93] Provisioning new machine with config: &{Name:kindnet-938961 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-938961 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0912 23:22:34.818781   69622 start.go:125] createHost starting for "" (driver="kvm2")
	I0912 23:22:33.056503   69283 main.go:141] libmachine: (auto-938961) DBG | domain auto-938961 has defined MAC address 52:54:00:5a:c5:74 in network mk-auto-938961
	I0912 23:22:33.056918   69283 main.go:141] libmachine: (auto-938961) Found IP for machine: 192.168.61.65
	I0912 23:22:33.056943   69283 main.go:141] libmachine: (auto-938961) Reserving static IP address...
	I0912 23:22:33.056958   69283 main.go:141] libmachine: (auto-938961) DBG | domain auto-938961 has current primary IP address 192.168.61.65 and MAC address 52:54:00:5a:c5:74 in network mk-auto-938961
	I0912 23:22:33.057353   69283 main.go:141] libmachine: (auto-938961) DBG | unable to find host DHCP lease matching {name: "auto-938961", mac: "52:54:00:5a:c5:74", ip: "192.168.61.65"} in network mk-auto-938961
	I0912 23:22:33.135297   69283 main.go:141] libmachine: (auto-938961) DBG | Getting to WaitForSSH function...
	I0912 23:22:33.135329   69283 main.go:141] libmachine: (auto-938961) Reserved static IP address: 192.168.61.65
	I0912 23:22:33.135339   69283 main.go:141] libmachine: (auto-938961) Waiting for SSH to be available...
	I0912 23:22:33.138578   69283 main.go:141] libmachine: (auto-938961) DBG | domain auto-938961 has defined MAC address 52:54:00:5a:c5:74 in network mk-auto-938961
	I0912 23:22:33.139004   69283 main.go:141] libmachine: (auto-938961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:c5:74", ip: ""} in network mk-auto-938961: {Iface:virbr3 ExpiryTime:2024-09-13 00:22:25 +0000 UTC Type:0 Mac:52:54:00:5a:c5:74 Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:minikube Clientid:01:52:54:00:5a:c5:74}
	I0912 23:22:33.139034   69283 main.go:141] libmachine: (auto-938961) DBG | domain auto-938961 has defined IP address 192.168.61.65 and MAC address 52:54:00:5a:c5:74 in network mk-auto-938961
	I0912 23:22:33.139149   69283 main.go:141] libmachine: (auto-938961) DBG | Using SSH client type: external
	I0912 23:22:33.139177   69283 main.go:141] libmachine: (auto-938961) DBG | Using SSH private key: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/auto-938961/id_rsa (-rw-------)
	I0912 23:22:33.139213   69283 main.go:141] libmachine: (auto-938961) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.65 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19616-5891/.minikube/machines/auto-938961/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0912 23:22:33.139223   69283 main.go:141] libmachine: (auto-938961) DBG | About to run SSH command:
	I0912 23:22:33.139238   69283 main.go:141] libmachine: (auto-938961) DBG | exit 0
	I0912 23:22:33.261700   69283 main.go:141] libmachine: (auto-938961) DBG | SSH cmd err, output: <nil>: 
	I0912 23:22:33.261986   69283 main.go:141] libmachine: (auto-938961) KVM machine creation complete!
	I0912 23:22:33.262284   69283 main.go:141] libmachine: (auto-938961) Calling .GetConfigRaw
	I0912 23:22:33.263013   69283 main.go:141] libmachine: (auto-938961) Calling .DriverName
	I0912 23:22:33.263220   69283 main.go:141] libmachine: (auto-938961) Calling .DriverName
	I0912 23:22:33.263433   69283 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0912 23:22:33.263447   69283 main.go:141] libmachine: (auto-938961) Calling .GetState
	I0912 23:22:33.264810   69283 main.go:141] libmachine: Detecting operating system of created instance...
	I0912 23:22:33.264822   69283 main.go:141] libmachine: Waiting for SSH to be available...
	I0912 23:22:33.264828   69283 main.go:141] libmachine: Getting to WaitForSSH function...
	I0912 23:22:33.264834   69283 main.go:141] libmachine: (auto-938961) Calling .GetSSHHostname
	I0912 23:22:33.267423   69283 main.go:141] libmachine: (auto-938961) DBG | domain auto-938961 has defined MAC address 52:54:00:5a:c5:74 in network mk-auto-938961
	I0912 23:22:33.267885   69283 main.go:141] libmachine: (auto-938961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:c5:74", ip: ""} in network mk-auto-938961: {Iface:virbr3 ExpiryTime:2024-09-13 00:22:25 +0000 UTC Type:0 Mac:52:54:00:5a:c5:74 Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:auto-938961 Clientid:01:52:54:00:5a:c5:74}
	I0912 23:22:33.267898   69283 main.go:141] libmachine: (auto-938961) DBG | domain auto-938961 has defined IP address 192.168.61.65 and MAC address 52:54:00:5a:c5:74 in network mk-auto-938961
	I0912 23:22:33.268071   69283 main.go:141] libmachine: (auto-938961) Calling .GetSSHPort
	I0912 23:22:33.268227   69283 main.go:141] libmachine: (auto-938961) Calling .GetSSHKeyPath
	I0912 23:22:33.268375   69283 main.go:141] libmachine: (auto-938961) Calling .GetSSHKeyPath
	I0912 23:22:33.268514   69283 main.go:141] libmachine: (auto-938961) Calling .GetSSHUsername
	I0912 23:22:33.268666   69283 main.go:141] libmachine: Using SSH client type: native
	I0912 23:22:33.268857   69283 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.61.65 22 <nil> <nil>}
	I0912 23:22:33.268872   69283 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0912 23:22:33.369147   69283 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0912 23:22:33.369173   69283 main.go:141] libmachine: Detecting the provisioner...
	I0912 23:22:33.369182   69283 main.go:141] libmachine: (auto-938961) Calling .GetSSHHostname
	I0912 23:22:33.372523   69283 main.go:141] libmachine: (auto-938961) DBG | domain auto-938961 has defined MAC address 52:54:00:5a:c5:74 in network mk-auto-938961
	I0912 23:22:33.372909   69283 main.go:141] libmachine: (auto-938961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:c5:74", ip: ""} in network mk-auto-938961: {Iface:virbr3 ExpiryTime:2024-09-13 00:22:25 +0000 UTC Type:0 Mac:52:54:00:5a:c5:74 Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:auto-938961 Clientid:01:52:54:00:5a:c5:74}
	I0912 23:22:33.372934   69283 main.go:141] libmachine: (auto-938961) DBG | domain auto-938961 has defined IP address 192.168.61.65 and MAC address 52:54:00:5a:c5:74 in network mk-auto-938961
	I0912 23:22:33.373083   69283 main.go:141] libmachine: (auto-938961) Calling .GetSSHPort
	I0912 23:22:33.373264   69283 main.go:141] libmachine: (auto-938961) Calling .GetSSHKeyPath
	I0912 23:22:33.373468   69283 main.go:141] libmachine: (auto-938961) Calling .GetSSHKeyPath
	I0912 23:22:33.373609   69283 main.go:141] libmachine: (auto-938961) Calling .GetSSHUsername
	I0912 23:22:33.373829   69283 main.go:141] libmachine: Using SSH client type: native
	I0912 23:22:33.374012   69283 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.61.65 22 <nil> <nil>}
	I0912 23:22:33.374025   69283 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0912 23:22:33.474190   69283 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0912 23:22:33.474264   69283 main.go:141] libmachine: found compatible host: buildroot
	I0912 23:22:33.474276   69283 main.go:141] libmachine: Provisioning with buildroot...
	I0912 23:22:33.474289   69283 main.go:141] libmachine: (auto-938961) Calling .GetMachineName
	I0912 23:22:33.474600   69283 buildroot.go:166] provisioning hostname "auto-938961"
	I0912 23:22:33.474624   69283 main.go:141] libmachine: (auto-938961) Calling .GetMachineName
	I0912 23:22:33.474808   69283 main.go:141] libmachine: (auto-938961) Calling .GetSSHHostname
	I0912 23:22:33.477463   69283 main.go:141] libmachine: (auto-938961) DBG | domain auto-938961 has defined MAC address 52:54:00:5a:c5:74 in network mk-auto-938961
	I0912 23:22:33.477861   69283 main.go:141] libmachine: (auto-938961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:c5:74", ip: ""} in network mk-auto-938961: {Iface:virbr3 ExpiryTime:2024-09-13 00:22:25 +0000 UTC Type:0 Mac:52:54:00:5a:c5:74 Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:auto-938961 Clientid:01:52:54:00:5a:c5:74}
	I0912 23:22:33.477890   69283 main.go:141] libmachine: (auto-938961) DBG | domain auto-938961 has defined IP address 192.168.61.65 and MAC address 52:54:00:5a:c5:74 in network mk-auto-938961
	I0912 23:22:33.478095   69283 main.go:141] libmachine: (auto-938961) Calling .GetSSHPort
	I0912 23:22:33.478325   69283 main.go:141] libmachine: (auto-938961) Calling .GetSSHKeyPath
	I0912 23:22:33.478553   69283 main.go:141] libmachine: (auto-938961) Calling .GetSSHKeyPath
	I0912 23:22:33.478720   69283 main.go:141] libmachine: (auto-938961) Calling .GetSSHUsername
	I0912 23:22:33.478893   69283 main.go:141] libmachine: Using SSH client type: native
	I0912 23:22:33.479056   69283 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.61.65 22 <nil> <nil>}
	I0912 23:22:33.479069   69283 main.go:141] libmachine: About to run SSH command:
	sudo hostname auto-938961 && echo "auto-938961" | sudo tee /etc/hostname
	I0912 23:22:33.596446   69283 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-938961
	
	I0912 23:22:33.596472   69283 main.go:141] libmachine: (auto-938961) Calling .GetSSHHostname
	I0912 23:22:33.599934   69283 main.go:141] libmachine: (auto-938961) DBG | domain auto-938961 has defined MAC address 52:54:00:5a:c5:74 in network mk-auto-938961
	I0912 23:22:33.600504   69283 main.go:141] libmachine: (auto-938961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:c5:74", ip: ""} in network mk-auto-938961: {Iface:virbr3 ExpiryTime:2024-09-13 00:22:25 +0000 UTC Type:0 Mac:52:54:00:5a:c5:74 Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:auto-938961 Clientid:01:52:54:00:5a:c5:74}
	I0912 23:22:33.600542   69283 main.go:141] libmachine: (auto-938961) DBG | domain auto-938961 has defined IP address 192.168.61.65 and MAC address 52:54:00:5a:c5:74 in network mk-auto-938961
	I0912 23:22:33.600691   69283 main.go:141] libmachine: (auto-938961) Calling .GetSSHPort
	I0912 23:22:33.600905   69283 main.go:141] libmachine: (auto-938961) Calling .GetSSHKeyPath
	I0912 23:22:33.601117   69283 main.go:141] libmachine: (auto-938961) Calling .GetSSHKeyPath
	I0912 23:22:33.601271   69283 main.go:141] libmachine: (auto-938961) Calling .GetSSHUsername
	I0912 23:22:33.601439   69283 main.go:141] libmachine: Using SSH client type: native
	I0912 23:22:33.601608   69283 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.61.65 22 <nil> <nil>}
	I0912 23:22:33.601650   69283 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-938961' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-938961/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-938961' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0912 23:22:33.710898   69283 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0912 23:22:33.710940   69283 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19616-5891/.minikube CaCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19616-5891/.minikube}
	I0912 23:22:33.710988   69283 buildroot.go:174] setting up certificates
	I0912 23:22:33.711000   69283 provision.go:84] configureAuth start
	I0912 23:22:33.711016   69283 main.go:141] libmachine: (auto-938961) Calling .GetMachineName
	I0912 23:22:33.711318   69283 main.go:141] libmachine: (auto-938961) Calling .GetIP
	I0912 23:22:33.714214   69283 main.go:141] libmachine: (auto-938961) DBG | domain auto-938961 has defined MAC address 52:54:00:5a:c5:74 in network mk-auto-938961
	I0912 23:22:33.714668   69283 main.go:141] libmachine: (auto-938961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:c5:74", ip: ""} in network mk-auto-938961: {Iface:virbr3 ExpiryTime:2024-09-13 00:22:25 +0000 UTC Type:0 Mac:52:54:00:5a:c5:74 Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:auto-938961 Clientid:01:52:54:00:5a:c5:74}
	I0912 23:22:33.714697   69283 main.go:141] libmachine: (auto-938961) DBG | domain auto-938961 has defined IP address 192.168.61.65 and MAC address 52:54:00:5a:c5:74 in network mk-auto-938961
	I0912 23:22:33.714828   69283 main.go:141] libmachine: (auto-938961) Calling .GetSSHHostname
	I0912 23:22:33.717361   69283 main.go:141] libmachine: (auto-938961) DBG | domain auto-938961 has defined MAC address 52:54:00:5a:c5:74 in network mk-auto-938961
	I0912 23:22:33.717734   69283 main.go:141] libmachine: (auto-938961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:c5:74", ip: ""} in network mk-auto-938961: {Iface:virbr3 ExpiryTime:2024-09-13 00:22:25 +0000 UTC Type:0 Mac:52:54:00:5a:c5:74 Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:auto-938961 Clientid:01:52:54:00:5a:c5:74}
	I0912 23:22:33.717772   69283 main.go:141] libmachine: (auto-938961) DBG | domain auto-938961 has defined IP address 192.168.61.65 and MAC address 52:54:00:5a:c5:74 in network mk-auto-938961
	I0912 23:22:33.717870   69283 provision.go:143] copyHostCerts
	I0912 23:22:33.717935   69283 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem, removing ...
	I0912 23:22:33.717951   69283 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem
	I0912 23:22:33.718036   69283 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem (1082 bytes)
	I0912 23:22:33.718167   69283 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem, removing ...
	I0912 23:22:33.718178   69283 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem
	I0912 23:22:33.718225   69283 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem (1123 bytes)
	I0912 23:22:33.718321   69283 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem, removing ...
	I0912 23:22:33.718331   69283 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem
	I0912 23:22:33.718365   69283 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem (1679 bytes)
	I0912 23:22:33.718446   69283 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem org=jenkins.auto-938961 san=[127.0.0.1 192.168.61.65 auto-938961 localhost minikube]
	I0912 23:22:34.198680   69283 provision.go:177] copyRemoteCerts
	I0912 23:22:34.198736   69283 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0912 23:22:34.198766   69283 main.go:141] libmachine: (auto-938961) Calling .GetSSHHostname
	I0912 23:22:34.201560   69283 main.go:141] libmachine: (auto-938961) DBG | domain auto-938961 has defined MAC address 52:54:00:5a:c5:74 in network mk-auto-938961
	I0912 23:22:34.201920   69283 main.go:141] libmachine: (auto-938961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:c5:74", ip: ""} in network mk-auto-938961: {Iface:virbr3 ExpiryTime:2024-09-13 00:22:25 +0000 UTC Type:0 Mac:52:54:00:5a:c5:74 Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:auto-938961 Clientid:01:52:54:00:5a:c5:74}
	I0912 23:22:34.201948   69283 main.go:141] libmachine: (auto-938961) DBG | domain auto-938961 has defined IP address 192.168.61.65 and MAC address 52:54:00:5a:c5:74 in network mk-auto-938961
	I0912 23:22:34.202103   69283 main.go:141] libmachine: (auto-938961) Calling .GetSSHPort
	I0912 23:22:34.202285   69283 main.go:141] libmachine: (auto-938961) Calling .GetSSHKeyPath
	I0912 23:22:34.202456   69283 main.go:141] libmachine: (auto-938961) Calling .GetSSHUsername
	I0912 23:22:34.202551   69283 sshutil.go:53] new ssh client: &{IP:192.168.61.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/auto-938961/id_rsa Username:docker}
	I0912 23:22:34.284223   69283 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0912 23:22:34.307962   69283 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0912 23:22:34.332594   69283 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0912 23:22:34.356451   69283 provision.go:87] duration metric: took 645.43534ms to configureAuth
	I0912 23:22:34.356496   69283 buildroot.go:189] setting minikube options for container-runtime
	I0912 23:22:34.356656   69283 config.go:182] Loaded profile config "auto-938961": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 23:22:34.356734   69283 main.go:141] libmachine: (auto-938961) Calling .GetSSHHostname
	I0912 23:22:34.359387   69283 main.go:141] libmachine: (auto-938961) DBG | domain auto-938961 has defined MAC address 52:54:00:5a:c5:74 in network mk-auto-938961
	I0912 23:22:34.359691   69283 main.go:141] libmachine: (auto-938961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:c5:74", ip: ""} in network mk-auto-938961: {Iface:virbr3 ExpiryTime:2024-09-13 00:22:25 +0000 UTC Type:0 Mac:52:54:00:5a:c5:74 Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:auto-938961 Clientid:01:52:54:00:5a:c5:74}
	I0912 23:22:34.359722   69283 main.go:141] libmachine: (auto-938961) DBG | domain auto-938961 has defined IP address 192.168.61.65 and MAC address 52:54:00:5a:c5:74 in network mk-auto-938961
	I0912 23:22:34.359859   69283 main.go:141] libmachine: (auto-938961) Calling .GetSSHPort
	I0912 23:22:34.360035   69283 main.go:141] libmachine: (auto-938961) Calling .GetSSHKeyPath
	I0912 23:22:34.360195   69283 main.go:141] libmachine: (auto-938961) Calling .GetSSHKeyPath
	I0912 23:22:34.360300   69283 main.go:141] libmachine: (auto-938961) Calling .GetSSHUsername
	I0912 23:22:34.360434   69283 main.go:141] libmachine: Using SSH client type: native
	I0912 23:22:34.360626   69283 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.61.65 22 <nil> <nil>}
	I0912 23:22:34.360643   69283 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0912 23:22:34.580454   69283 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0912 23:22:34.580477   69283 main.go:141] libmachine: Checking connection to Docker...
	I0912 23:22:34.580485   69283 main.go:141] libmachine: (auto-938961) Calling .GetURL
	I0912 23:22:34.581928   69283 main.go:141] libmachine: (auto-938961) DBG | Using libvirt version 6000000
	I0912 23:22:34.584321   69283 main.go:141] libmachine: (auto-938961) DBG | domain auto-938961 has defined MAC address 52:54:00:5a:c5:74 in network mk-auto-938961
	I0912 23:22:34.584752   69283 main.go:141] libmachine: (auto-938961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:c5:74", ip: ""} in network mk-auto-938961: {Iface:virbr3 ExpiryTime:2024-09-13 00:22:25 +0000 UTC Type:0 Mac:52:54:00:5a:c5:74 Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:auto-938961 Clientid:01:52:54:00:5a:c5:74}
	I0912 23:22:34.584782   69283 main.go:141] libmachine: (auto-938961) DBG | domain auto-938961 has defined IP address 192.168.61.65 and MAC address 52:54:00:5a:c5:74 in network mk-auto-938961
	I0912 23:22:34.584958   69283 main.go:141] libmachine: Docker is up and running!
	I0912 23:22:34.584978   69283 main.go:141] libmachine: Reticulating splines...
	I0912 23:22:34.584986   69283 client.go:171] duration metric: took 23.776611434s to LocalClient.Create
	I0912 23:22:34.585013   69283 start.go:167] duration metric: took 23.776685059s to libmachine.API.Create "auto-938961"
	I0912 23:22:34.585024   69283 start.go:293] postStartSetup for "auto-938961" (driver="kvm2")
	I0912 23:22:34.585036   69283 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0912 23:22:34.585052   69283 main.go:141] libmachine: (auto-938961) Calling .DriverName
	I0912 23:22:34.585287   69283 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0912 23:22:34.585327   69283 main.go:141] libmachine: (auto-938961) Calling .GetSSHHostname
	I0912 23:22:34.587601   69283 main.go:141] libmachine: (auto-938961) DBG | domain auto-938961 has defined MAC address 52:54:00:5a:c5:74 in network mk-auto-938961
	I0912 23:22:34.587964   69283 main.go:141] libmachine: (auto-938961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:c5:74", ip: ""} in network mk-auto-938961: {Iface:virbr3 ExpiryTime:2024-09-13 00:22:25 +0000 UTC Type:0 Mac:52:54:00:5a:c5:74 Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:auto-938961 Clientid:01:52:54:00:5a:c5:74}
	I0912 23:22:34.587991   69283 main.go:141] libmachine: (auto-938961) DBG | domain auto-938961 has defined IP address 192.168.61.65 and MAC address 52:54:00:5a:c5:74 in network mk-auto-938961
	I0912 23:22:34.588354   69283 main.go:141] libmachine: (auto-938961) Calling .GetSSHPort
	I0912 23:22:34.588554   69283 main.go:141] libmachine: (auto-938961) Calling .GetSSHKeyPath
	I0912 23:22:34.588715   69283 main.go:141] libmachine: (auto-938961) Calling .GetSSHUsername
	I0912 23:22:34.588904   69283 sshutil.go:53] new ssh client: &{IP:192.168.61.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/auto-938961/id_rsa Username:docker}
	I0912 23:22:34.668674   69283 ssh_runner.go:195] Run: cat /etc/os-release
	I0912 23:22:34.673127   69283 info.go:137] Remote host: Buildroot 2023.02.9
	I0912 23:22:34.673155   69283 filesync.go:126] Scanning /home/jenkins/minikube-integration/19616-5891/.minikube/addons for local assets ...
	I0912 23:22:34.673227   69283 filesync.go:126] Scanning /home/jenkins/minikube-integration/19616-5891/.minikube/files for local assets ...
	I0912 23:22:34.673346   69283 filesync.go:149] local asset: /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem -> 130832.pem in /etc/ssl/certs
	I0912 23:22:34.673476   69283 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0912 23:22:34.683091   69283 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem --> /etc/ssl/certs/130832.pem (1708 bytes)
	I0912 23:22:34.708772   69283 start.go:296] duration metric: took 123.731569ms for postStartSetup
	I0912 23:22:34.708831   69283 main.go:141] libmachine: (auto-938961) Calling .GetConfigRaw
	I0912 23:22:34.709577   69283 main.go:141] libmachine: (auto-938961) Calling .GetIP
	I0912 23:22:34.712878   69283 main.go:141] libmachine: (auto-938961) DBG | domain auto-938961 has defined MAC address 52:54:00:5a:c5:74 in network mk-auto-938961
	I0912 23:22:34.713293   69283 main.go:141] libmachine: (auto-938961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:c5:74", ip: ""} in network mk-auto-938961: {Iface:virbr3 ExpiryTime:2024-09-13 00:22:25 +0000 UTC Type:0 Mac:52:54:00:5a:c5:74 Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:auto-938961 Clientid:01:52:54:00:5a:c5:74}
	I0912 23:22:34.713323   69283 main.go:141] libmachine: (auto-938961) DBG | domain auto-938961 has defined IP address 192.168.61.65 and MAC address 52:54:00:5a:c5:74 in network mk-auto-938961
	I0912 23:22:34.713688   69283 profile.go:143] Saving config to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/auto-938961/config.json ...
	I0912 23:22:34.713915   69283 start.go:128] duration metric: took 23.924485787s to createHost
	I0912 23:22:34.713945   69283 main.go:141] libmachine: (auto-938961) Calling .GetSSHHostname
	I0912 23:22:34.716620   69283 main.go:141] libmachine: (auto-938961) DBG | domain auto-938961 has defined MAC address 52:54:00:5a:c5:74 in network mk-auto-938961
	I0912 23:22:34.717079   69283 main.go:141] libmachine: (auto-938961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:c5:74", ip: ""} in network mk-auto-938961: {Iface:virbr3 ExpiryTime:2024-09-13 00:22:25 +0000 UTC Type:0 Mac:52:54:00:5a:c5:74 Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:auto-938961 Clientid:01:52:54:00:5a:c5:74}
	I0912 23:22:34.717138   69283 main.go:141] libmachine: (auto-938961) DBG | domain auto-938961 has defined IP address 192.168.61.65 and MAC address 52:54:00:5a:c5:74 in network mk-auto-938961
	I0912 23:22:34.717247   69283 main.go:141] libmachine: (auto-938961) Calling .GetSSHPort
	I0912 23:22:34.717445   69283 main.go:141] libmachine: (auto-938961) Calling .GetSSHKeyPath
	I0912 23:22:34.717606   69283 main.go:141] libmachine: (auto-938961) Calling .GetSSHKeyPath
	I0912 23:22:34.717755   69283 main.go:141] libmachine: (auto-938961) Calling .GetSSHUsername
	I0912 23:22:34.717932   69283 main.go:141] libmachine: Using SSH client type: native
	I0912 23:22:34.718143   69283 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.61.65 22 <nil> <nil>}
	I0912 23:22:34.718161   69283 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0912 23:22:34.818443   69283 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726183354.791116651
	
	I0912 23:22:34.818466   69283 fix.go:216] guest clock: 1726183354.791116651
	I0912 23:22:34.818477   69283 fix.go:229] Guest: 2024-09-12 23:22:34.791116651 +0000 UTC Remote: 2024-09-12 23:22:34.713931495 +0000 UTC m=+24.037239978 (delta=77.185156ms)
	I0912 23:22:34.818502   69283 fix.go:200] guest clock delta is within tolerance: 77.185156ms
	I0912 23:22:34.818509   69283 start.go:83] releasing machines lock for "auto-938961", held for 24.029203542s
	I0912 23:22:34.818542   69283 main.go:141] libmachine: (auto-938961) Calling .DriverName
	I0912 23:22:34.818853   69283 main.go:141] libmachine: (auto-938961) Calling .GetIP
	I0912 23:22:34.822034   69283 main.go:141] libmachine: (auto-938961) DBG | domain auto-938961 has defined MAC address 52:54:00:5a:c5:74 in network mk-auto-938961
	I0912 23:22:34.822406   69283 main.go:141] libmachine: (auto-938961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:c5:74", ip: ""} in network mk-auto-938961: {Iface:virbr3 ExpiryTime:2024-09-13 00:22:25 +0000 UTC Type:0 Mac:52:54:00:5a:c5:74 Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:auto-938961 Clientid:01:52:54:00:5a:c5:74}
	I0912 23:22:34.822436   69283 main.go:141] libmachine: (auto-938961) DBG | domain auto-938961 has defined IP address 192.168.61.65 and MAC address 52:54:00:5a:c5:74 in network mk-auto-938961
	I0912 23:22:34.822593   69283 main.go:141] libmachine: (auto-938961) Calling .DriverName
	I0912 23:22:34.823217   69283 main.go:141] libmachine: (auto-938961) Calling .DriverName
	I0912 23:22:34.823443   69283 main.go:141] libmachine: (auto-938961) Calling .DriverName
	I0912 23:22:34.823548   69283 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0912 23:22:34.823601   69283 main.go:141] libmachine: (auto-938961) Calling .GetSSHHostname
	I0912 23:22:34.823662   69283 ssh_runner.go:195] Run: cat /version.json
	I0912 23:22:34.823688   69283 main.go:141] libmachine: (auto-938961) Calling .GetSSHHostname
	I0912 23:22:34.826639   69283 main.go:141] libmachine: (auto-938961) DBG | domain auto-938961 has defined MAC address 52:54:00:5a:c5:74 in network mk-auto-938961
	I0912 23:22:34.826668   69283 main.go:141] libmachine: (auto-938961) DBG | domain auto-938961 has defined MAC address 52:54:00:5a:c5:74 in network mk-auto-938961
	I0912 23:22:34.826965   69283 main.go:141] libmachine: (auto-938961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:c5:74", ip: ""} in network mk-auto-938961: {Iface:virbr3 ExpiryTime:2024-09-13 00:22:25 +0000 UTC Type:0 Mac:52:54:00:5a:c5:74 Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:auto-938961 Clientid:01:52:54:00:5a:c5:74}
	I0912 23:22:34.826990   69283 main.go:141] libmachine: (auto-938961) DBG | domain auto-938961 has defined IP address 192.168.61.65 and MAC address 52:54:00:5a:c5:74 in network mk-auto-938961
	I0912 23:22:34.827044   69283 main.go:141] libmachine: (auto-938961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:c5:74", ip: ""} in network mk-auto-938961: {Iface:virbr3 ExpiryTime:2024-09-13 00:22:25 +0000 UTC Type:0 Mac:52:54:00:5a:c5:74 Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:auto-938961 Clientid:01:52:54:00:5a:c5:74}
	I0912 23:22:34.827068   69283 main.go:141] libmachine: (auto-938961) DBG | domain auto-938961 has defined IP address 192.168.61.65 and MAC address 52:54:00:5a:c5:74 in network mk-auto-938961
	I0912 23:22:34.827111   69283 main.go:141] libmachine: (auto-938961) Calling .GetSSHPort
	I0912 23:22:34.827323   69283 main.go:141] libmachine: (auto-938961) Calling .GetSSHPort
	I0912 23:22:34.827324   69283 main.go:141] libmachine: (auto-938961) Calling .GetSSHKeyPath
	I0912 23:22:34.827522   69283 main.go:141] libmachine: (auto-938961) Calling .GetSSHKeyPath
	I0912 23:22:34.827528   69283 main.go:141] libmachine: (auto-938961) Calling .GetSSHUsername
	I0912 23:22:34.827702   69283 main.go:141] libmachine: (auto-938961) Calling .GetSSHUsername
	I0912 23:22:34.827694   69283 sshutil.go:53] new ssh client: &{IP:192.168.61.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/auto-938961/id_rsa Username:docker}
	I0912 23:22:34.827856   69283 sshutil.go:53] new ssh client: &{IP:192.168.61.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/auto-938961/id_rsa Username:docker}
	I0912 23:22:34.942304   69283 ssh_runner.go:195] Run: systemctl --version
	I0912 23:22:34.949758   69283 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0912 23:22:35.117780   69283 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0912 23:22:35.124110   69283 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0912 23:22:35.124187   69283 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0912 23:22:35.140822   69283 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0912 23:22:35.140853   69283 start.go:495] detecting cgroup driver to use...
	I0912 23:22:35.140919   69283 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0912 23:22:35.158805   69283 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0912 23:22:35.174571   69283 docker.go:217] disabling cri-docker service (if available) ...
	I0912 23:22:35.174638   69283 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0912 23:22:35.190356   69283 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0912 23:22:35.204880   69283 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0912 23:22:35.324699   69283 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0912 23:22:35.473433   69283 docker.go:233] disabling docker service ...
	I0912 23:22:35.473525   69283 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0912 23:22:35.488021   69283 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0912 23:22:35.502672   69283 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0912 23:22:35.663787   69283 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0912 23:22:35.790193   69283 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0912 23:22:35.805031   69283 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0912 23:22:35.824504   69283 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0912 23:22:35.824572   69283 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:22:35.837461   69283 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0912 23:22:35.837537   69283 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:22:35.850803   69283 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:22:35.862156   69283 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:22:35.875485   69283 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0912 23:22:35.888044   69283 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:22:35.900371   69283 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:22:35.922782   69283 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:22:35.935041   69283 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0912 23:22:35.946691   69283 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0912 23:22:35.946750   69283 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0912 23:22:35.962443   69283 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0912 23:22:35.974584   69283 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 23:22:36.116176   69283 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0912 23:22:36.240569   69283 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0912 23:22:36.240629   69283 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0912 23:22:36.247054   69283 start.go:563] Will wait 60s for crictl version
	I0912 23:22:36.247103   69283 ssh_runner.go:195] Run: which crictl
	I0912 23:22:36.251703   69283 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0912 23:22:36.305740   69283 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0912 23:22:36.305820   69283 ssh_runner.go:195] Run: crio --version
	I0912 23:22:36.341054   69283 ssh_runner.go:195] Run: crio --version
	I0912 23:22:36.378197   69283 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0912 23:22:34.821307   69622 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0912 23:22:34.821513   69622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:22:34.821585   69622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:22:34.841882   69622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42035
	I0912 23:22:34.842411   69622 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:22:34.843073   69622 main.go:141] libmachine: Using API Version  1
	I0912 23:22:34.843099   69622 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:22:34.843505   69622 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:22:34.843770   69622 main.go:141] libmachine: (kindnet-938961) Calling .GetMachineName
	I0912 23:22:34.844018   69622 main.go:141] libmachine: (kindnet-938961) Calling .DriverName
	I0912 23:22:34.844256   69622 start.go:159] libmachine.API.Create for "kindnet-938961" (driver="kvm2")
	I0912 23:22:34.844287   69622 client.go:168] LocalClient.Create starting
	I0912 23:22:34.844325   69622 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem
	I0912 23:22:34.844428   69622 main.go:141] libmachine: Decoding PEM data...
	I0912 23:22:34.844454   69622 main.go:141] libmachine: Parsing certificate...
	I0912 23:22:34.844528   69622 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem
	I0912 23:22:34.844558   69622 main.go:141] libmachine: Decoding PEM data...
	I0912 23:22:34.844575   69622 main.go:141] libmachine: Parsing certificate...
	I0912 23:22:34.844604   69622 main.go:141] libmachine: Running pre-create checks...
	I0912 23:22:34.844617   69622 main.go:141] libmachine: (kindnet-938961) Calling .PreCreateCheck
	I0912 23:22:34.844988   69622 main.go:141] libmachine: (kindnet-938961) Calling .GetConfigRaw
	I0912 23:22:34.845395   69622 main.go:141] libmachine: Creating machine...
	I0912 23:22:34.845415   69622 main.go:141] libmachine: (kindnet-938961) Calling .Create
	I0912 23:22:34.845577   69622 main.go:141] libmachine: (kindnet-938961) Creating KVM machine...
	I0912 23:22:34.847194   69622 main.go:141] libmachine: (kindnet-938961) DBG | found existing default KVM network
	I0912 23:22:34.849167   69622 main.go:141] libmachine: (kindnet-938961) DBG | I0912 23:22:34.849000   69746 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00026e0e0}
	I0912 23:22:34.849208   69622 main.go:141] libmachine: (kindnet-938961) DBG | created network xml: 
	I0912 23:22:34.849226   69622 main.go:141] libmachine: (kindnet-938961) DBG | <network>
	I0912 23:22:34.849236   69622 main.go:141] libmachine: (kindnet-938961) DBG |   <name>mk-kindnet-938961</name>
	I0912 23:22:34.849244   69622 main.go:141] libmachine: (kindnet-938961) DBG |   <dns enable='no'/>
	I0912 23:22:34.849252   69622 main.go:141] libmachine: (kindnet-938961) DBG |   
	I0912 23:22:34.849280   69622 main.go:141] libmachine: (kindnet-938961) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0912 23:22:34.849288   69622 main.go:141] libmachine: (kindnet-938961) DBG |     <dhcp>
	I0912 23:22:34.849301   69622 main.go:141] libmachine: (kindnet-938961) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0912 23:22:34.849310   69622 main.go:141] libmachine: (kindnet-938961) DBG |     </dhcp>
	I0912 23:22:34.849324   69622 main.go:141] libmachine: (kindnet-938961) DBG |   </ip>
	I0912 23:22:34.849335   69622 main.go:141] libmachine: (kindnet-938961) DBG |   
	I0912 23:22:34.849345   69622 main.go:141] libmachine: (kindnet-938961) DBG | </network>
	I0912 23:22:34.849355   69622 main.go:141] libmachine: (kindnet-938961) DBG | 
	I0912 23:22:34.855419   69622 main.go:141] libmachine: (kindnet-938961) DBG | trying to create private KVM network mk-kindnet-938961 192.168.39.0/24...
	I0912 23:22:34.932077   69622 main.go:141] libmachine: (kindnet-938961) DBG | private KVM network mk-kindnet-938961 192.168.39.0/24 created
	I0912 23:22:34.932107   69622 main.go:141] libmachine: (kindnet-938961) DBG | I0912 23:22:34.932018   69746 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19616-5891/.minikube
	I0912 23:22:34.932124   69622 main.go:141] libmachine: (kindnet-938961) Setting up store path in /home/jenkins/minikube-integration/19616-5891/.minikube/machines/kindnet-938961 ...
	I0912 23:22:34.932141   69622 main.go:141] libmachine: (kindnet-938961) Building disk image from file:///home/jenkins/minikube-integration/19616-5891/.minikube/cache/iso/amd64/minikube-v1.34.0-1726156389-19616-amd64.iso
	I0912 23:22:34.932156   69622 main.go:141] libmachine: (kindnet-938961) Downloading /home/jenkins/minikube-integration/19616-5891/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19616-5891/.minikube/cache/iso/amd64/minikube-v1.34.0-1726156389-19616-amd64.iso...
	I0912 23:22:35.184497   69622 main.go:141] libmachine: (kindnet-938961) DBG | I0912 23:22:35.184363   69746 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/kindnet-938961/id_rsa...
	I0912 23:22:35.340023   69622 main.go:141] libmachine: (kindnet-938961) DBG | I0912 23:22:35.339876   69746 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/kindnet-938961/kindnet-938961.rawdisk...
	I0912 23:22:35.340049   69622 main.go:141] libmachine: (kindnet-938961) DBG | Writing magic tar header
	I0912 23:22:35.340063   69622 main.go:141] libmachine: (kindnet-938961) DBG | Writing SSH key tar header
	I0912 23:22:35.340076   69622 main.go:141] libmachine: (kindnet-938961) DBG | I0912 23:22:35.340030   69746 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19616-5891/.minikube/machines/kindnet-938961 ...
	I0912 23:22:35.340173   69622 main.go:141] libmachine: (kindnet-938961) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/kindnet-938961
	I0912 23:22:35.340220   69622 main.go:141] libmachine: (kindnet-938961) Setting executable bit set on /home/jenkins/minikube-integration/19616-5891/.minikube/machines/kindnet-938961 (perms=drwx------)
	I0912 23:22:35.340243   69622 main.go:141] libmachine: (kindnet-938961) Setting executable bit set on /home/jenkins/minikube-integration/19616-5891/.minikube/machines (perms=drwxr-xr-x)
	I0912 23:22:35.340258   69622 main.go:141] libmachine: (kindnet-938961) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19616-5891/.minikube/machines
	I0912 23:22:35.340273   69622 main.go:141] libmachine: (kindnet-938961) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19616-5891/.minikube
	I0912 23:22:35.340291   69622 main.go:141] libmachine: (kindnet-938961) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19616-5891
	I0912 23:22:35.340301   69622 main.go:141] libmachine: (kindnet-938961) Setting executable bit set on /home/jenkins/minikube-integration/19616-5891/.minikube (perms=drwxr-xr-x)
	I0912 23:22:35.340312   69622 main.go:141] libmachine: (kindnet-938961) Setting executable bit set on /home/jenkins/minikube-integration/19616-5891 (perms=drwxrwxr-x)
	I0912 23:22:35.340325   69622 main.go:141] libmachine: (kindnet-938961) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0912 23:22:35.340338   69622 main.go:141] libmachine: (kindnet-938961) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0912 23:22:35.340363   69622 main.go:141] libmachine: (kindnet-938961) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0912 23:22:35.340383   69622 main.go:141] libmachine: (kindnet-938961) Creating domain...
	I0912 23:22:35.340398   69622 main.go:141] libmachine: (kindnet-938961) DBG | Checking permissions on dir: /home/jenkins
	I0912 23:22:35.340419   69622 main.go:141] libmachine: (kindnet-938961) DBG | Checking permissions on dir: /home
	I0912 23:22:35.340437   69622 main.go:141] libmachine: (kindnet-938961) DBG | Skipping /home - not owner
	I0912 23:22:35.341725   69622 main.go:141] libmachine: (kindnet-938961) define libvirt domain using xml: 
	I0912 23:22:35.341748   69622 main.go:141] libmachine: (kindnet-938961) <domain type='kvm'>
	I0912 23:22:35.341756   69622 main.go:141] libmachine: (kindnet-938961)   <name>kindnet-938961</name>
	I0912 23:22:35.341762   69622 main.go:141] libmachine: (kindnet-938961)   <memory unit='MiB'>3072</memory>
	I0912 23:22:35.341771   69622 main.go:141] libmachine: (kindnet-938961)   <vcpu>2</vcpu>
	I0912 23:22:35.341777   69622 main.go:141] libmachine: (kindnet-938961)   <features>
	I0912 23:22:35.341785   69622 main.go:141] libmachine: (kindnet-938961)     <acpi/>
	I0912 23:22:35.341800   69622 main.go:141] libmachine: (kindnet-938961)     <apic/>
	I0912 23:22:35.341809   69622 main.go:141] libmachine: (kindnet-938961)     <pae/>
	I0912 23:22:35.341820   69622 main.go:141] libmachine: (kindnet-938961)     
	I0912 23:22:35.341833   69622 main.go:141] libmachine: (kindnet-938961)   </features>
	I0912 23:22:35.341841   69622 main.go:141] libmachine: (kindnet-938961)   <cpu mode='host-passthrough'>
	I0912 23:22:35.341847   69622 main.go:141] libmachine: (kindnet-938961)   
	I0912 23:22:35.341851   69622 main.go:141] libmachine: (kindnet-938961)   </cpu>
	I0912 23:22:35.341856   69622 main.go:141] libmachine: (kindnet-938961)   <os>
	I0912 23:22:35.341863   69622 main.go:141] libmachine: (kindnet-938961)     <type>hvm</type>
	I0912 23:22:35.341869   69622 main.go:141] libmachine: (kindnet-938961)     <boot dev='cdrom'/>
	I0912 23:22:35.341880   69622 main.go:141] libmachine: (kindnet-938961)     <boot dev='hd'/>
	I0912 23:22:35.341891   69622 main.go:141] libmachine: (kindnet-938961)     <bootmenu enable='no'/>
	I0912 23:22:35.341901   69622 main.go:141] libmachine: (kindnet-938961)   </os>
	I0912 23:22:35.341910   69622 main.go:141] libmachine: (kindnet-938961)   <devices>
	I0912 23:22:35.341920   69622 main.go:141] libmachine: (kindnet-938961)     <disk type='file' device='cdrom'>
	I0912 23:22:35.341934   69622 main.go:141] libmachine: (kindnet-938961)       <source file='/home/jenkins/minikube-integration/19616-5891/.minikube/machines/kindnet-938961/boot2docker.iso'/>
	I0912 23:22:35.341944   69622 main.go:141] libmachine: (kindnet-938961)       <target dev='hdc' bus='scsi'/>
	I0912 23:22:35.341959   69622 main.go:141] libmachine: (kindnet-938961)       <readonly/>
	I0912 23:22:35.341968   69622 main.go:141] libmachine: (kindnet-938961)     </disk>
	I0912 23:22:35.341977   69622 main.go:141] libmachine: (kindnet-938961)     <disk type='file' device='disk'>
	I0912 23:22:35.341995   69622 main.go:141] libmachine: (kindnet-938961)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0912 23:22:35.342016   69622 main.go:141] libmachine: (kindnet-938961)       <source file='/home/jenkins/minikube-integration/19616-5891/.minikube/machines/kindnet-938961/kindnet-938961.rawdisk'/>
	I0912 23:22:35.342031   69622 main.go:141] libmachine: (kindnet-938961)       <target dev='hda' bus='virtio'/>
	I0912 23:22:35.342041   69622 main.go:141] libmachine: (kindnet-938961)     </disk>
	I0912 23:22:35.342051   69622 main.go:141] libmachine: (kindnet-938961)     <interface type='network'>
	I0912 23:22:35.342063   69622 main.go:141] libmachine: (kindnet-938961)       <source network='mk-kindnet-938961'/>
	I0912 23:22:35.342072   69622 main.go:141] libmachine: (kindnet-938961)       <model type='virtio'/>
	I0912 23:22:35.342080   69622 main.go:141] libmachine: (kindnet-938961)     </interface>
	I0912 23:22:35.342090   69622 main.go:141] libmachine: (kindnet-938961)     <interface type='network'>
	I0912 23:22:35.342099   69622 main.go:141] libmachine: (kindnet-938961)       <source network='default'/>
	I0912 23:22:35.342109   69622 main.go:141] libmachine: (kindnet-938961)       <model type='virtio'/>
	I0912 23:22:35.342118   69622 main.go:141] libmachine: (kindnet-938961)     </interface>
	I0912 23:22:35.342132   69622 main.go:141] libmachine: (kindnet-938961)     <serial type='pty'>
	I0912 23:22:35.342143   69622 main.go:141] libmachine: (kindnet-938961)       <target port='0'/>
	I0912 23:22:35.342154   69622 main.go:141] libmachine: (kindnet-938961)     </serial>
	I0912 23:22:35.342162   69622 main.go:141] libmachine: (kindnet-938961)     <console type='pty'>
	I0912 23:22:35.342173   69622 main.go:141] libmachine: (kindnet-938961)       <target type='serial' port='0'/>
	I0912 23:22:35.342182   69622 main.go:141] libmachine: (kindnet-938961)     </console>
	I0912 23:22:35.342191   69622 main.go:141] libmachine: (kindnet-938961)     <rng model='virtio'>
	I0912 23:22:35.342220   69622 main.go:141] libmachine: (kindnet-938961)       <backend model='random'>/dev/random</backend>
	I0912 23:22:35.342247   69622 main.go:141] libmachine: (kindnet-938961)     </rng>
	I0912 23:22:35.342259   69622 main.go:141] libmachine: (kindnet-938961)     
	I0912 23:22:35.342266   69622 main.go:141] libmachine: (kindnet-938961)     
	I0912 23:22:35.342275   69622 main.go:141] libmachine: (kindnet-938961)   </devices>
	I0912 23:22:35.342288   69622 main.go:141] libmachine: (kindnet-938961) </domain>
	I0912 23:22:35.342301   69622 main.go:141] libmachine: (kindnet-938961) 
	I0912 23:22:35.347011   69622 main.go:141] libmachine: (kindnet-938961) DBG | domain kindnet-938961 has defined MAC address 52:54:00:41:3d:ee in network default
	I0912 23:22:35.347698   69622 main.go:141] libmachine: (kindnet-938961) DBG | domain kindnet-938961 has defined MAC address 52:54:00:03:65:01 in network mk-kindnet-938961
	I0912 23:22:35.347727   69622 main.go:141] libmachine: (kindnet-938961) Ensuring networks are active...
	I0912 23:22:35.348629   69622 main.go:141] libmachine: (kindnet-938961) Ensuring network default is active
	I0912 23:22:35.349046   69622 main.go:141] libmachine: (kindnet-938961) Ensuring network mk-kindnet-938961 is active
	I0912 23:22:35.349786   69622 main.go:141] libmachine: (kindnet-938961) Getting domain xml...
	I0912 23:22:35.350626   69622 main.go:141] libmachine: (kindnet-938961) Creating domain...
	
	
	==> CRI-O <==
	Sep 12 23:22:37 embed-certs-378112 crio[714]: time="2024-09-12 23:22:37.239486690Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726183357239439300,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=210da3e6-9ed0-4b47-858f-13c8301f6609 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 23:22:37 embed-certs-378112 crio[714]: time="2024-09-12 23:22:37.240321581Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=697365d4-ae57-4eb3-9460-03899db53dc7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 23:22:37 embed-certs-378112 crio[714]: time="2024-09-12 23:22:37.240428490Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=697365d4-ae57-4eb3-9460-03899db53dc7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 23:22:37 embed-certs-378112 crio[714]: time="2024-09-12 23:22:37.240771822Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0e48efc9ba5a488b4984057a9785fbda6fcefcea047948ac3c83f09afd28efdb,PodSandboxId:2fb05fcc4e0e9920e2d59727a2cc76564e7d79c6fa20bb4360c55a088b1d3be4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726182123153255983,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1840d2a-8e08-4fa2-9ed5-ac96fb0baf4d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:580f45d8e367ee2eb48f1a7950e3f57eb992f6ed5e039800e7b69459dc172d25,PodSandboxId:01bfe26a78e45f77488fc831b37f2ece2ba5826151a49d77cc85132fa5292880,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1726182103061405869,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 68c26c3e-1c5b-4b9c-8316-020988da7706,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7841230606dafeab68b501c35a871c84f440d0d9f4cde95a302770671152f168,PodSandboxId:8f96256aac3db0033853f6deee9a8ce0e888a33743507d6efd873689491e7a5f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726182100061946712,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-m8t6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93c63198-ebd2-4e88-9be8-912425b1eb84,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b058233860f229334437280ee5ccf5d203eaf352bae41a6af7d4602b55a3d64,PodSandboxId:dbdcc135a5ea52851aaa4633c8f13d8d827a9ec52abf10d66dd1cf255f1327e4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726182092323356857,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fvbbq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b172754e-bb5a-40ba-a
9be-a7632081defc,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdb0e5ac691d286cca5fe46e3435ece952c4b9cdc7df3481fec68adf1403689f,PodSandboxId:2fb05fcc4e0e9920e2d59727a2cc76564e7d79c6fa20bb4360c55a088b1d3be4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726182092301286319,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1840d2a-8e08-4fa2-9ed5-ac96fb0ba
f4d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:115e1e7911747aa5930ece5298af89f0209bf07e0336cd598b8901e2a2af2e09,PodSandboxId:9bcfe02b74318c91cb7753956f427d79a4071e45141830c9959f59e49bb3419c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726182088642330869,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-378112,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9ed9552d16c564610caec50232e36dc,},Annota
tions:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc8c605cca940c298fda4fb0d3a0e55234ab799e5f4d684817c1c33aa6752880,PodSandboxId:2aaeb742345d1afdd923ef084f1923fff9f772f7a9881851bba29c3e952d05bc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726182088638381922,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-378112,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4afeaa41ef3d550a5d04908f01cf2197,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54dd60703518d12cf3cb62102bf8bec31d1e6b0f218f4c26391b234d58111e31,PodSandboxId:c884f0f2f98b0f1784695585a6347618f05884233214587af251a66ba47cfeb3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726182088606320699,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-378112,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16b799bcd1cc9be5e956c3ddd45af143,},Ann
otations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e099ac110cb9ec18f36a0fbbdb769c4bb0c2642bf081442d3054c280c663730f,PodSandboxId:a42aeaf3e710a4ec4209796224494d9e1920866a81e68dee43aee7dcc6871eed,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726182088617828827,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-378112,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc0bb257a34a1f166fb9f89281b2e1d6,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=697365d4-ae57-4eb3-9460-03899db53dc7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 23:22:37 embed-certs-378112 crio[714]: time="2024-09-12 23:22:37.252878970Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=23f6ff45-b6d0-4e8f-9a8c-b565f84770ce name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 23:22:37 embed-certs-378112 crio[714]: time="2024-09-12 23:22:37.253005483Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=23f6ff45-b6d0-4e8f-9a8c-b565f84770ce name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 23:22:37 embed-certs-378112 crio[714]: time="2024-09-12 23:22:37.253294154Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0e48efc9ba5a488b4984057a9785fbda6fcefcea047948ac3c83f09afd28efdb,PodSandboxId:2fb05fcc4e0e9920e2d59727a2cc76564e7d79c6fa20bb4360c55a088b1d3be4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726182123153255983,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1840d2a-8e08-4fa2-9ed5-ac96fb0baf4d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:580f45d8e367ee2eb48f1a7950e3f57eb992f6ed5e039800e7b69459dc172d25,PodSandboxId:01bfe26a78e45f77488fc831b37f2ece2ba5826151a49d77cc85132fa5292880,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1726182103061405869,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 68c26c3e-1c5b-4b9c-8316-020988da7706,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7841230606dafeab68b501c35a871c84f440d0d9f4cde95a302770671152f168,PodSandboxId:8f96256aac3db0033853f6deee9a8ce0e888a33743507d6efd873689491e7a5f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726182100061946712,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-m8t6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93c63198-ebd2-4e88-9be8-912425b1eb84,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b058233860f229334437280ee5ccf5d203eaf352bae41a6af7d4602b55a3d64,PodSandboxId:dbdcc135a5ea52851aaa4633c8f13d8d827a9ec52abf10d66dd1cf255f1327e4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726182092323356857,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fvbbq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b172754e-bb5a-40ba-a
9be-a7632081defc,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdb0e5ac691d286cca5fe46e3435ece952c4b9cdc7df3481fec68adf1403689f,PodSandboxId:2fb05fcc4e0e9920e2d59727a2cc76564e7d79c6fa20bb4360c55a088b1d3be4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726182092301286319,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1840d2a-8e08-4fa2-9ed5-ac96fb0ba
f4d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:115e1e7911747aa5930ece5298af89f0209bf07e0336cd598b8901e2a2af2e09,PodSandboxId:9bcfe02b74318c91cb7753956f427d79a4071e45141830c9959f59e49bb3419c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726182088642330869,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-378112,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9ed9552d16c564610caec50232e36dc,},Annota
tions:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc8c605cca940c298fda4fb0d3a0e55234ab799e5f4d684817c1c33aa6752880,PodSandboxId:2aaeb742345d1afdd923ef084f1923fff9f772f7a9881851bba29c3e952d05bc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726182088638381922,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-378112,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4afeaa41ef3d550a5d04908f01cf2197,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54dd60703518d12cf3cb62102bf8bec31d1e6b0f218f4c26391b234d58111e31,PodSandboxId:c884f0f2f98b0f1784695585a6347618f05884233214587af251a66ba47cfeb3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726182088606320699,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-378112,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16b799bcd1cc9be5e956c3ddd45af143,},Ann
otations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e099ac110cb9ec18f36a0fbbdb769c4bb0c2642bf081442d3054c280c663730f,PodSandboxId:a42aeaf3e710a4ec4209796224494d9e1920866a81e68dee43aee7dcc6871eed,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726182088617828827,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-378112,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc0bb257a34a1f166fb9f89281b2e1d6,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=23f6ff45-b6d0-4e8f-9a8c-b565f84770ce name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 23:22:37 embed-certs-378112 crio[714]: time="2024-09-12 23:22:37.254485339Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:0e48efc9ba5a488b4984057a9785fbda6fcefcea047948ac3c83f09afd28efdb,Verbose:false,}" file="otel-collector/interceptors.go:62" id=ced0da40-e695-4f77-b5bd-5529a3591e24 name=/runtime.v1.RuntimeService/ContainerStatus
	Sep 12 23:22:37 embed-certs-378112 crio[714]: time="2024-09-12 23:22:37.254797420Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:0e48efc9ba5a488b4984057a9785fbda6fcefcea047948ac3c83f09afd28efdb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},State:CONTAINER_RUNNING,CreatedAt:1726182123210360176,StartedAt:1726182123237827733,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:gcr.io/k8s-minikube/storage-provisioner:v5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1840d2a-8e08-4fa2-9ed5-ac96fb0baf4d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/ter
mination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/tmp,HostPath:/tmp,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/a1840d2a-8e08-4fa2-9ed5-ac96fb0baf4d/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/a1840d2a-8e08-4fa2-9ed5-ac96fb0baf4d/containers/storage-provisioner/e5638d20,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/a1840d2a-8e08-4fa2-9ed5-ac96fb0baf4d/volumes/kubernetes.io~projected/kube-api-access-p6s47,Readonly:true,SelinuxRelabel:fal
se,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_storage-provisioner_a1840d2a-8e08-4fa2-9ed5-ac96fb0baf4d/storage-provisioner/2.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:2,MemoryLimitInBytes:0,OomScoreAdj:1000,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=ced0da40-e695-4f77-b5bd-5529a3591e24 name=/runtime.v1.RuntimeService/ContainerStatus
	Sep 12 23:22:37 embed-certs-378112 crio[714]: time="2024-09-12 23:22:37.256043380Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:580f45d8e367ee2eb48f1a7950e3f57eb992f6ed5e039800e7b69459dc172d25,Verbose:false,}" file="otel-collector/interceptors.go:62" id=fe8e08b2-6548-46c6-81e9-0c88787aacf1 name=/runtime.v1.RuntimeService/ContainerStatus
	Sep 12 23:22:37 embed-certs-378112 crio[714]: time="2024-09-12 23:22:37.256236565Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:580f45d8e367ee2eb48f1a7950e3f57eb992f6ed5e039800e7b69459dc172d25,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},State:CONTAINER_RUNNING,CreatedAt:1726182103111989666,StartedAt:1726182103145331725,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox:1.28.4-glibc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 68c26c3e-1c5b-4b9c-8316-020988da7706,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termi
nationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/68c26c3e-1c5b-4b9c-8316-020988da7706/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/68c26c3e-1c5b-4b9c-8316-020988da7706/containers/busybox/07cbcb5c,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/68c26c3e-1c5b-4b9c-8316-020988da7706/volumes/kubernetes.io~projected/kube-api-access-vkllg,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/default_busybox_68c26c3e-1c5b-4b9c-8316-020988da7706/busybox/1.log,Resources:&ContainerResources{Linux:
&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:2,MemoryLimitInBytes:0,OomScoreAdj:1000,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=fe8e08b2-6548-46c6-81e9-0c88787aacf1 name=/runtime.v1.RuntimeService/ContainerStatus
	Sep 12 23:22:37 embed-certs-378112 crio[714]: time="2024-09-12 23:22:37.257633078Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:7841230606dafeab68b501c35a871c84f440d0d9f4cde95a302770671152f168,Verbose:false,}" file="otel-collector/interceptors.go:62" id=572763ad-888b-4623-8ca1-794b722bf7c5 name=/runtime.v1.RuntimeService/ContainerStatus
	Sep 12 23:22:37 embed-certs-378112 crio[714]: time="2024-09-12 23:22:37.258364510Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:7841230606dafeab68b501c35a871c84f440d0d9f4cde95a302770671152f168,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},State:CONTAINER_RUNNING,CreatedAt:1726182100125052831,StartedAt:1726182100157022492,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/coredns/coredns:v1.11.3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-m8t6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93c63198-ebd2-4e88-9be8-912425b1eb84,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"c
ontainerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/coredns,HostPath:/var/lib/kubelet/pods/93c63198-ebd2-4e88-9be8-912425b1eb84/volumes/kubernetes.io~configmap/config-volume,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/93c63198-ebd2-4e88-9be8-912425b1eb84/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/93c63198-ebd2-4e88-9be8-912425b1eb84/containers/coredns/7f2eea56,Readonly:false,SelinuxRelabel:false,Propagatio
n:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/93c63198-ebd2-4e88-9be8-912425b1eb84/volumes/kubernetes.io~projected/kube-api-access-t64r8,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_coredns-7c65d6cfc9-m8t6h_93c63198-ebd2-4e88-9be8-912425b1eb84/coredns/1.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:178257920,OomScoreAdj:967,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:178257920,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=572763ad-888b-4623-8ca1-794b722bf7c5 name=/runtime.v1.RuntimeService/ContainerStatus
	Sep 12 23:22:37 embed-certs-378112 crio[714]: time="2024-09-12 23:22:37.258727578Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6d540fc0-3a17-4630-b15f-eb221d424644 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 23:22:37 embed-certs-378112 crio[714]: time="2024-09-12 23:22:37.259275771Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726183357259249863,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6d540fc0-3a17-4630-b15f-eb221d424644 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 23:22:37 embed-certs-378112 crio[714]: time="2024-09-12 23:22:37.259923951Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:0b058233860f229334437280ee5ccf5d203eaf352bae41a6af7d4602b55a3d64,Verbose:false,}" file="otel-collector/interceptors.go:62" id=f4f04a09-44b5-45f5-8fdc-7d8955924b65 name=/runtime.v1.RuntimeService/ContainerStatus
	Sep 12 23:22:37 embed-certs-378112 crio[714]: time="2024-09-12 23:22:37.260050075Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:0b058233860f229334437280ee5ccf5d203eaf352bae41a6af7d4602b55a3d64,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},State:CONTAINER_RUNNING,CreatedAt:1726182092451145770,StartedAt:1726182092484847743,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-proxy:v1.31.1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fvbbq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b172754e-bb5a-40ba-a9be-a7632081defc,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/run/xtables.lock,HostPath:/run/xtables.lock,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/lib/modules,HostPath:/lib/modules,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/b172754e-bb5a-40ba-a9be-a7632081defc/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/b172754e-bb5a-40ba-a9be-a7632081defc/containers/kube-proxy/2a99e436,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/kube-proxy,HostPath:/var
/lib/kubelet/pods/b172754e-bb5a-40ba-a9be-a7632081defc/volumes/kubernetes.io~configmap/kube-proxy,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/b172754e-bb5a-40ba-a9be-a7632081defc/volumes/kubernetes.io~projected/kube-api-access-9576h,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-proxy-fvbbq_b172754e-bb5a-40ba-a9be-a7632081defc/kube-proxy/1.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:2,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-
collector/interceptors.go:74" id=f4f04a09-44b5-45f5-8fdc-7d8955924b65 name=/runtime.v1.RuntimeService/ContainerStatus
	Sep 12 23:22:37 embed-certs-378112 crio[714]: time="2024-09-12 23:22:37.260466283Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:115e1e7911747aa5930ece5298af89f0209bf07e0336cd598b8901e2a2af2e09,Verbose:false,}" file="otel-collector/interceptors.go:62" id=1d393f2e-8ce6-43fb-b3d0-beff7fb70c6d name=/runtime.v1.RuntimeService/ContainerStatus
	Sep 12 23:22:37 embed-certs-378112 crio[714]: time="2024-09-12 23:22:37.260678344Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:115e1e7911747aa5930ece5298af89f0209bf07e0336cd598b8901e2a2af2e09,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},State:CONTAINER_RUNNING,CreatedAt:1726182088757361790,StartedAt:1726182088845745518,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-apiserver:v1.31.1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-378112,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9ed9552d16c564610caec50232e36dc,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termina
tion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/d9ed9552d16c564610caec50232e36dc/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/d9ed9552d16c564610caec50232e36dc/containers/kube-apiserver/a45ac93c,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{Conta
inerPath:/var/lib/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-apiserver-embed-certs-378112_d9ed9552d16c564610caec50232e36dc/kube-apiserver/1.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:256,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=1d393f2e-8ce6-43fb-b3d0-beff7fb70c6d name=/runtime.v1.RuntimeService/ContainerStatus
	Sep 12 23:22:37 embed-certs-378112 crio[714]: time="2024-09-12 23:22:37.261307764Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:dc8c605cca940c298fda4fb0d3a0e55234ab799e5f4d684817c1c33aa6752880,Verbose:false,}" file="otel-collector/interceptors.go:62" id=228ed7af-5ef5-4563-bd7f-b9400e544fec name=/runtime.v1.RuntimeService/ContainerStatus
	Sep 12 23:22:37 embed-certs-378112 crio[714]: time="2024-09-12 23:22:37.261425935Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:dc8c605cca940c298fda4fb0d3a0e55234ab799e5f4d684817c1c33aa6752880,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},State:CONTAINER_RUNNING,CreatedAt:1726182088744637256,StartedAt:1726182088875134953,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-scheduler:v1.31.1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-378112,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4afeaa41ef3d550a5d04908f01cf2197,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termina
tion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/4afeaa41ef3d550a5d04908f01cf2197/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/4afeaa41ef3d550a5d04908f01cf2197/containers/kube-scheduler/18b14bda,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/kubernetes/scheduler.conf,HostPath:/etc/kubernetes/scheduler.conf,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-scheduler-embed-certs-378112_4afeaa41ef3d550a5d04908f01cf2197/kube-scheduler/1.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{Cp
uPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=228ed7af-5ef5-4563-bd7f-b9400e544fec name=/runtime.v1.RuntimeService/ContainerStatus
	Sep 12 23:22:37 embed-certs-378112 crio[714]: time="2024-09-12 23:22:37.261908701Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:54dd60703518d12cf3cb62102bf8bec31d1e6b0f218f4c26391b234d58111e31,Verbose:false,}" file="otel-collector/interceptors.go:62" id=eb78f121-5ed4-4b4d-b9f1-15c10cc094ca name=/runtime.v1.RuntimeService/ContainerStatus
	Sep 12 23:22:37 embed-certs-378112 crio[714]: time="2024-09-12 23:22:37.262066418Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:54dd60703518d12cf3cb62102bf8bec31d1e6b0f218f4c26391b234d58111e31,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},State:CONTAINER_RUNNING,CreatedAt:1726182088724279796,StartedAt:1726182088809252247,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-controller-manager:v1.31.1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-378112,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16b799bcd1cc9be5e956c3ddd45af143,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/16b799bcd1cc9be5e956c3ddd45af143/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/16b799bcd1cc9be5e956c3ddd45af143/containers/kube-controller-manager/40a1afe2,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/kubernetes/controller-manager.conf,HostPath:/etc/kubernetes/controller-manager.conf,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVA
TE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,HostPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-controller-manager-embed-certs-378112_16b799bcd1cc9be5e956c3ddd45af143/kube-controller-manager/1.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:204,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,Cpus
etMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=eb78f121-5ed4-4b4d-b9f1-15c10cc094ca name=/runtime.v1.RuntimeService/ContainerStatus
	Sep 12 23:22:37 embed-certs-378112 crio[714]: time="2024-09-12 23:22:37.262472461Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:e099ac110cb9ec18f36a0fbbdb769c4bb0c2642bf081442d3054c280c663730f,Verbose:false,}" file="otel-collector/interceptors.go:62" id=049ca4e4-8ffb-4460-a2ff-54721b0eac9b name=/runtime.v1.RuntimeService/ContainerStatus
	Sep 12 23:22:37 embed-certs-378112 crio[714]: time="2024-09-12 23:22:37.263562514Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:e099ac110cb9ec18f36a0fbbdb769c4bb0c2642bf081442d3054c280c663730f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},State:CONTAINER_RUNNING,CreatedAt:1726182088686873199,StartedAt:1726182088767674955,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/etcd:3.5.15-0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-378112,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc0bb257a34a1f166fb9f89281b2e1d6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termin
ationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/fc0bb257a34a1f166fb9f89281b2e1d6/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/fc0bb257a34a1f166fb9f89281b2e1d6/containers/etcd/337a8204,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/etcd,HostPath:/var/lib/minikube/etcd,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/certs/etcd,HostPath:/var/lib/minikube/certs/etcd,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_etc
d-embed-certs-378112_fc0bb257a34a1f166fb9f89281b2e1d6/etcd/1.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=049ca4e4-8ffb-4460-a2ff-54721b0eac9b name=/runtime.v1.RuntimeService/ContainerStatus
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0e48efc9ba5a4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      20 minutes ago      Running             storage-provisioner       2                   2fb05fcc4e0e9       storage-provisioner
	580f45d8e367e       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   20 minutes ago      Running             busybox                   1                   01bfe26a78e45       busybox
	7841230606daf       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      20 minutes ago      Running             coredns                   1                   8f96256aac3db       coredns-7c65d6cfc9-m8t6h
	0b058233860f2       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      21 minutes ago      Running             kube-proxy                1                   dbdcc135a5ea5       kube-proxy-fvbbq
	fdb0e5ac691d2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      21 minutes ago      Exited              storage-provisioner       1                   2fb05fcc4e0e9       storage-provisioner
	115e1e7911747       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      21 minutes ago      Running             kube-apiserver            1                   9bcfe02b74318       kube-apiserver-embed-certs-378112
	dc8c605cca940       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      21 minutes ago      Running             kube-scheduler            1                   2aaeb742345d1       kube-scheduler-embed-certs-378112
	e099ac110cb9e       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      21 minutes ago      Running             etcd                      1                   a42aeaf3e710a       etcd-embed-certs-378112
	54dd60703518d       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      21 minutes ago      Running             kube-controller-manager   1                   c884f0f2f98b0       kube-controller-manager-embed-certs-378112
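	The ListContainers, ContainerStatus and ImageFsInfo RPCs traced in the CRI-O debug log above, and the container status table just shown, can be reproduced on the node with crictl (a sketch; run on the embed-certs-378112 VM, container ID prefix taken from the table above):
	
	  sudo crictl ps -a                  # all containers, matching the ListContainers response
	  sudo crictl inspect 0e48efc9ba5a4  # full ContainerStatus for one container, by ID prefix
	  sudo crictl imagefsinfo            # image filesystem usage, matching the ImageFsInfo response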
	
	
	==> coredns [7841230606dafeab68b501c35a871c84f440d0d9f4cde95a302770671152f168] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:44653 - 22529 "HINFO IN 3919299564452992292.7808051720423804999. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.016593259s
	
	
	==> describe nodes <==
	Name:               embed-certs-378112
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-378112
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f6bc674a17941874d4e5b792b09c1791d30622b8
	                    minikube.k8s.io/name=embed-certs-378112
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_12T22_53_49_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 12 Sep 2024 22:53:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-378112
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 12 Sep 2024 23:22:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 12 Sep 2024 23:22:26 +0000   Thu, 12 Sep 2024 22:53:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 12 Sep 2024 23:22:26 +0000   Thu, 12 Sep 2024 22:53:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 12 Sep 2024 23:22:26 +0000   Thu, 12 Sep 2024 22:53:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 12 Sep 2024 23:22:26 +0000   Thu, 12 Sep 2024 23:01:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.96
	  Hostname:    embed-certs-378112
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9369d9e2546b42da98f24b39f498ebc3
	  System UUID:                9369d9e2-546b-42da-98f2-4b39f498ebc3
	  Boot ID:                    06852740-91cc-48d4-a2c3-758e0899e521
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-7c65d6cfc9-m8t6h                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 etcd-embed-certs-378112                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         28m
	  kube-system                 kube-apiserver-embed-certs-378112             250m (12%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-controller-manager-embed-certs-378112    200m (10%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-proxy-fvbbq                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-embed-certs-378112             100m (5%)     0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 metrics-server-6867b74b74-kvpqz               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         27m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28m                kube-proxy       
	  Normal  Starting                 21m                kube-proxy       
	  Normal  Starting                 28m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  28m (x8 over 28m)  kubelet          Node embed-certs-378112 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m (x8 over 28m)  kubelet          Node embed-certs-378112 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28m (x7 over 28m)  kubelet          Node embed-certs-378112 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    28m                kubelet          Node embed-certs-378112 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  28m                kubelet          Node embed-certs-378112 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     28m                kubelet          Node embed-certs-378112 status is now: NodeHasSufficientPID
	  Normal  Starting                 28m                kubelet          Starting kubelet.
	  Normal  NodeReady                28m                kubelet          Node embed-certs-378112 status is now: NodeReady
	  Normal  RegisteredNode           28m                node-controller  Node embed-certs-378112 event: Registered Node embed-certs-378112 in Controller
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node embed-certs-378112 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node embed-certs-378112 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node embed-certs-378112 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           21m                node-controller  Node embed-certs-378112 event: Registered Node embed-certs-378112 in Controller
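	The node description above can be regenerated against this cluster with kubectl (a sketch, assuming the kubeconfig context is named after the minikube profile):
	
	  kubectl --context embed-certs-378112 describe node embed-certs-378112
	  kubectl --context embed-certs-378112 get pods -n kube-system -o wide   # per-pod view of the non-terminated pods listed above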
	
	
	==> dmesg <==
	[Sep12 23:01] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050893] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037907] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.752233] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.943136] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +1.519348] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.912279] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.059973] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060822] systemd-fstab-generator[647]: Ignoring "noauto" option for root device
	[  +0.191597] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[  +0.146916] systemd-fstab-generator[674]: Ignoring "noauto" option for root device
	[  +0.299291] systemd-fstab-generator[704]: Ignoring "noauto" option for root device
	[  +3.920395] systemd-fstab-generator[794]: Ignoring "noauto" option for root device
	[  +1.643387] systemd-fstab-generator[912]: Ignoring "noauto" option for root device
	[  +0.062512] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.515203] kauditd_printk_skb: 69 callbacks suppressed
	[  +2.494025] systemd-fstab-generator[1549]: Ignoring "noauto" option for root device
	[  +3.325406] kauditd_printk_skb: 80 callbacks suppressed
	[  +5.041834] kauditd_printk_skb: 28 callbacks suppressed
	
	
	==> etcd [e099ac110cb9ec18f36a0fbbdb769c4bb0c2642bf081442d3054c280c663730f] <==
	{"level":"info","ts":"2024-09-12T23:01:45.439460Z","caller":"traceutil/trace.go:171","msg":"trace[1635517596] transaction","detail":"{read_only:false; response_revision:651; number_of_response:1; }","duration":"115.628835ms","start":"2024-09-12T23:01:45.323813Z","end":"2024-09-12T23:01:45.439442Z","steps":["trace[1635517596] 'process raft request'  (duration: 115.347888ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-12T23:01:46.430050Z","caller":"traceutil/trace.go:171","msg":"trace[517608903] linearizableReadLoop","detail":"{readStateIndex:690; appliedIndex:689; }","duration":"240.032926ms","start":"2024-09-12T23:01:46.189999Z","end":"2024-09-12T23:01:46.430032Z","steps":["trace[517608903] 'read index received'  (duration: 239.829559ms)","trace[517608903] 'applied index is now lower than readState.Index'  (duration: 202.647µs)"],"step_count":2}
	{"level":"info","ts":"2024-09-12T23:01:46.430255Z","caller":"traceutil/trace.go:171","msg":"trace[1572406179] transaction","detail":"{read_only:false; response_revision:653; number_of_response:1; }","duration":"288.053964ms","start":"2024-09-12T23:01:46.142191Z","end":"2024-09-12T23:01:46.430245Z","steps":["trace[1572406179] 'process raft request'  (duration: 287.692576ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-12T23:01:46.430441Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"240.425827ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-12T23:01:46.430525Z","caller":"traceutil/trace.go:171","msg":"trace[1748289074] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:653; }","duration":"240.539133ms","start":"2024-09-12T23:01:46.189976Z","end":"2024-09-12T23:01:46.430515Z","steps":["trace[1748289074] 'agreement among raft nodes before linearized reading'  (duration: 240.418352ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-12T23:01:47.056708Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"238.83119ms","expected-duration":"100ms","prefix":"","request":"header:<ID:1888013096436442791 > lease_revoke:<id:1a3391e871484941>","response":"size:27"}
	{"level":"info","ts":"2024-09-12T23:01:47.056782Z","caller":"traceutil/trace.go:171","msg":"trace[1497464155] linearizableReadLoop","detail":"{readStateIndex:691; appliedIndex:690; }","duration":"381.337182ms","start":"2024-09-12T23:01:46.675432Z","end":"2024-09-12T23:01:47.056769Z","steps":["trace[1497464155] 'read index received'  (duration: 142.187766ms)","trace[1497464155] 'applied index is now lower than readState.Index'  (duration: 239.148278ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-12T23:01:47.056966Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"381.488695ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/embed-certs-378112\" ","response":"range_response_count:1 size:5486"}
	{"level":"info","ts":"2024-09-12T23:01:47.056999Z","caller":"traceutil/trace.go:171","msg":"trace[469274308] range","detail":"{range_begin:/registry/minions/embed-certs-378112; range_end:; response_count:1; response_revision:653; }","duration":"381.561469ms","start":"2024-09-12T23:01:46.675428Z","end":"2024-09-12T23:01:47.056989Z","steps":["trace[469274308] 'agreement among raft nodes before linearized reading'  (duration: 381.405277ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-12T23:01:47.057026Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-12T23:01:46.675386Z","time spent":"381.6328ms","remote":"127.0.0.1:43478","response type":"/etcdserverpb.KV/Range","request count":0,"request size":38,"response count":1,"response size":5508,"request content":"key:\"/registry/minions/embed-certs-378112\" "}
	{"level":"warn","ts":"2024-09-12T23:01:47.057228Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"152.836698ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-12T23:01:47.057255Z","caller":"traceutil/trace.go:171","msg":"trace[1108297500] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:653; }","duration":"152.864604ms","start":"2024-09-12T23:01:46.904382Z","end":"2024-09-12T23:01:47.057247Z","steps":["trace[1108297500] 'agreement among raft nodes before linearized reading'  (duration: 152.818115ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-12T23:01:48.214290Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"137.762963ms","expected-duration":"100ms","prefix":"","request":"header:<ID:1888013096436442803 > lease_revoke:<id:1a3391e878612528>","response":"size:27"}
	{"level":"info","ts":"2024-09-12T23:02:26.894868Z","caller":"traceutil/trace.go:171","msg":"trace[2058425542] transaction","detail":"{read_only:false; response_revision:687; number_of_response:1; }","duration":"181.147525ms","start":"2024-09-12T23:02:26.713696Z","end":"2024-09-12T23:02:26.894843Z","steps":["trace[2058425542] 'process raft request'  (duration: 181.033849ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-12T23:02:27.324793Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"135.309613ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-12T23:02:27.324930Z","caller":"traceutil/trace.go:171","msg":"trace[1163119828] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:687; }","duration":"135.499004ms","start":"2024-09-12T23:02:27.189421Z","end":"2024-09-12T23:02:27.324920Z","steps":["trace[1163119828] 'range keys from in-memory index tree'  (duration: 135.253546ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-12T23:11:30.141259Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":897}
	{"level":"info","ts":"2024-09-12T23:11:30.151012Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":897,"took":"9.429136ms","hash":929492379,"current-db-size-bytes":2760704,"current-db-size":"2.8 MB","current-db-size-in-use-bytes":2760704,"current-db-size-in-use":"2.8 MB"}
	{"level":"info","ts":"2024-09-12T23:11:30.151081Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":929492379,"revision":897,"compact-revision":-1}
	{"level":"info","ts":"2024-09-12T23:16:30.148200Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1139}
	{"level":"info","ts":"2024-09-12T23:16:30.154383Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1139,"took":"4.405199ms","hash":3210515519,"current-db-size-bytes":2760704,"current-db-size":"2.8 MB","current-db-size-in-use-bytes":1667072,"current-db-size-in-use":"1.7 MB"}
	{"level":"info","ts":"2024-09-12T23:16:30.154466Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3210515519,"revision":1139,"compact-revision":897}
	{"level":"info","ts":"2024-09-12T23:21:30.154475Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1383}
	{"level":"info","ts":"2024-09-12T23:21:30.158078Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1383,"took":"3.251967ms","hash":988461341,"current-db-size-bytes":2760704,"current-db-size":"2.8 MB","current-db-size-in-use-bytes":1597440,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-09-12T23:21:30.158127Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":988461341,"revision":1383,"compact-revision":1139}
	
	
	==> kernel <==
	 23:22:37 up 21 min,  0 users,  load average: 0.14, 0.14, 0.13
	Linux embed-certs-378112 5.10.207 #1 SMP Thu Sep 12 19:03:33 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [115e1e7911747aa5930ece5298af89f0209bf07e0336cd598b8901e2a2af2e09] <==
	I0912 23:19:32.429941       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0912 23:19:32.429989       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0912 23:21:31.427201       1 handler_proxy.go:99] no RequestInfo found in the context
	E0912 23:21:31.427320       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0912 23:21:32.429455       1 handler_proxy.go:99] no RequestInfo found in the context
	E0912 23:21:32.429540       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0912 23:21:32.429469       1 handler_proxy.go:99] no RequestInfo found in the context
	E0912 23:21:32.429625       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0912 23:21:32.430754       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0912 23:21:32.430789       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0912 23:22:32.431704       1 handler_proxy.go:99] no RequestInfo found in the context
	W0912 23:22:32.431704       1 handler_proxy.go:99] no RequestInfo found in the context
	E0912 23:22:32.431802       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0912 23:22:32.431850       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0912 23:22:32.432988       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0912 23:22:32.433063       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [54dd60703518d12cf3cb62102bf8bec31d1e6b0f218f4c26391b234d58111e31] <==
	E0912 23:17:35.158121       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0912 23:17:35.668270       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0912 23:17:43.983653       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="236.361µs"
	I0912 23:17:54.979197       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="53.2µs"
	E0912 23:18:05.164620       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0912 23:18:05.677177       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0912 23:18:35.170840       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0912 23:18:35.684838       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0912 23:19:05.176813       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0912 23:19:05.691634       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0912 23:19:35.182705       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0912 23:19:35.699027       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0912 23:20:05.187942       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0912 23:20:05.706079       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0912 23:20:35.195108       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0912 23:20:35.714866       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0912 23:21:05.201958       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0912 23:21:05.722156       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0912 23:21:35.208804       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0912 23:21:35.730560       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0912 23:22:05.215405       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0912 23:22:05.737974       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0912 23:22:26.349875       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-378112"
	E0912 23:22:35.223011       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0912 23:22:35.747188       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [0b058233860f229334437280ee5ccf5d203eaf352bae41a6af7d4602b55a3d64] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0912 23:01:32.663972       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0912 23:01:32.673692       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.72.96"]
	E0912 23:01:32.673789       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0912 23:01:32.702258       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0912 23:01:32.702316       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0912 23:01:32.702339       1 server_linux.go:169] "Using iptables Proxier"
	I0912 23:01:32.704505       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0912 23:01:32.704869       1 server.go:483] "Version info" version="v1.31.1"
	I0912 23:01:32.704890       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0912 23:01:32.706254       1 config.go:199] "Starting service config controller"
	I0912 23:01:32.706299       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0912 23:01:32.706327       1 config.go:105] "Starting endpoint slice config controller"
	I0912 23:01:32.706345       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0912 23:01:32.706898       1 config.go:328] "Starting node config controller"
	I0912 23:01:32.706922       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0912 23:01:32.806375       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0912 23:01:32.806436       1 shared_informer.go:320] Caches are synced for service config
	I0912 23:01:32.807146       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [dc8c605cca940c298fda4fb0d3a0e55234ab799e5f4d684817c1c33aa6752880] <==
	I0912 23:01:29.698093       1 serving.go:386] Generated self-signed cert in-memory
	W0912 23:01:31.359665       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0912 23:01:31.359833       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0912 23:01:31.359863       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0912 23:01:31.359931       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0912 23:01:31.429556       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0912 23:01:31.429716       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0912 23:01:31.440887       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0912 23:01:31.441061       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0912 23:01:31.441108       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0912 23:01:31.441140       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0912 23:01:31.541805       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 12 23:21:27 embed-certs-378112 kubelet[920]: E0912 23:21:27.239499     920 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726183287238898071,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 23:21:28 embed-certs-378112 kubelet[920]: E0912 23:21:28.965973     920 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-kvpqz" podUID="04e47cfd-bada-4cbd-8792-db4edebfb282"
	Sep 12 23:21:37 embed-certs-378112 kubelet[920]: E0912 23:21:37.242216     920 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726183297241545911,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 23:21:37 embed-certs-378112 kubelet[920]: E0912 23:21:37.242657     920 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726183297241545911,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 23:21:41 embed-certs-378112 kubelet[920]: E0912 23:21:41.966039     920 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-kvpqz" podUID="04e47cfd-bada-4cbd-8792-db4edebfb282"
	Sep 12 23:21:47 embed-certs-378112 kubelet[920]: E0912 23:21:47.243795     920 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726183307243507692,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 23:21:47 embed-certs-378112 kubelet[920]: E0912 23:21:47.243818     920 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726183307243507692,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 23:21:53 embed-certs-378112 kubelet[920]: E0912 23:21:53.965496     920 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-kvpqz" podUID="04e47cfd-bada-4cbd-8792-db4edebfb282"
	Sep 12 23:21:57 embed-certs-378112 kubelet[920]: E0912 23:21:57.246178     920 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726183317245854479,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 23:21:57 embed-certs-378112 kubelet[920]: E0912 23:21:57.246217     920 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726183317245854479,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 23:22:07 embed-certs-378112 kubelet[920]: E0912 23:22:07.252413     920 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726183327248540445,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 23:22:07 embed-certs-378112 kubelet[920]: E0912 23:22:07.252894     920 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726183327248540445,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 23:22:07 embed-certs-378112 kubelet[920]: E0912 23:22:07.966851     920 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-kvpqz" podUID="04e47cfd-bada-4cbd-8792-db4edebfb282"
	Sep 12 23:22:17 embed-certs-378112 kubelet[920]: E0912 23:22:17.254750     920 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726183337254200673,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 23:22:17 embed-certs-378112 kubelet[920]: E0912 23:22:17.255161     920 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726183337254200673,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 23:22:22 embed-certs-378112 kubelet[920]: E0912 23:22:22.967857     920 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-kvpqz" podUID="04e47cfd-bada-4cbd-8792-db4edebfb282"
	Sep 12 23:22:26 embed-certs-378112 kubelet[920]: E0912 23:22:26.979036     920 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 12 23:22:26 embed-certs-378112 kubelet[920]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 12 23:22:26 embed-certs-378112 kubelet[920]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 12 23:22:26 embed-certs-378112 kubelet[920]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 12 23:22:26 embed-certs-378112 kubelet[920]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 12 23:22:27 embed-certs-378112 kubelet[920]: E0912 23:22:27.256493     920 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726183347256263335,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 23:22:27 embed-certs-378112 kubelet[920]: E0912 23:22:27.256516     920 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726183347256263335,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 23:22:37 embed-certs-378112 kubelet[920]: E0912 23:22:37.259708     920 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726183357259249863,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 23:22:37 embed-certs-378112 kubelet[920]: E0912 23:22:37.259773     920 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726183357259249863,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [0e48efc9ba5a488b4984057a9785fbda6fcefcea047948ac3c83f09afd28efdb] <==
	I0912 23:02:03.256714       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0912 23:02:03.267838       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0912 23:02:03.268069       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0912 23:02:20.669516       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0912 23:02:20.669976       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-378112_cbcafbff-e733-4f79-bc74-7b6f663e2c37!
	I0912 23:02:20.670285       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0a2a2cd0-d331-47b6-b689-eee87ed80181", APIVersion:"v1", ResourceVersion:"680", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-378112_cbcafbff-e733-4f79-bc74-7b6f663e2c37 became leader
	I0912 23:02:20.770495       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-378112_cbcafbff-e733-4f79-bc74-7b6f663e2c37!
	
	
	==> storage-provisioner [fdb0e5ac691d286cca5fe46e3435ece952c4b9cdc7df3481fec68adf1403689f] <==
	I0912 23:01:32.503844       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0912 23:02:02.510510       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-378112 -n embed-certs-378112
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-378112 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-kvpqz
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-378112 describe pod metrics-server-6867b74b74-kvpqz
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-378112 describe pod metrics-server-6867b74b74-kvpqz: exit status 1 (83.321735ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-kvpqz" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-378112 describe pod metrics-server-6867b74b74-kvpqz: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (456.06s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (543.18s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-380092 -n no-preload-380092
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-09-12 23:25:29.718369035 +0000 UTC m=+6996.566751963
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-380092 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-380092 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (84.906475ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): namespaces "kubernetes-dashboard" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-380092 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-380092 -n no-preload-380092
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-380092 logs -n 25
E0912 23:25:30.468659   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-380092 logs -n 25: (1.963805774s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |        Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p custom-flannel-938961                             | custom-flannel-938961 | jenkins | v1.34.0 | 12 Sep 24 23:25 UTC | 12 Sep 24 23:25 UTC |
	|         | sudo systemctl cat kubelet                           |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-938961 sudo                        | custom-flannel-938961 | jenkins | v1.34.0 | 12 Sep 24 23:25 UTC | 12 Sep 24 23:25 UTC |
	|         | journalctl -xeu kubelet --all                        |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-938961                             | custom-flannel-938961 | jenkins | v1.34.0 | 12 Sep 24 23:25 UTC | 12 Sep 24 23:25 UTC |
	|         | sudo cat                                             |                       |         |         |                     |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-938961                             | custom-flannel-938961 | jenkins | v1.34.0 | 12 Sep 24 23:25 UTC | 12 Sep 24 23:25 UTC |
	|         | sudo cat                                             |                       |         |         |                     |                     |
	|         | /var/lib/kubelet/config.yaml                         |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-938961 sudo                        | custom-flannel-938961 | jenkins | v1.34.0 | 12 Sep 24 23:25 UTC |                     |
	|         | systemctl status docker --all                        |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-938961                             | custom-flannel-938961 | jenkins | v1.34.0 | 12 Sep 24 23:25 UTC | 12 Sep 24 23:25 UTC |
	|         | sudo systemctl cat docker                            |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-938961 sudo                        | custom-flannel-938961 | jenkins | v1.34.0 | 12 Sep 24 23:25 UTC | 12 Sep 24 23:25 UTC |
	|         | cat /etc/docker/daemon.json                          |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-938961 sudo                        | custom-flannel-938961 | jenkins | v1.34.0 | 12 Sep 24 23:25 UTC |                     |
	|         | docker system info                                   |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-938961 sudo                        | custom-flannel-938961 | jenkins | v1.34.0 | 12 Sep 24 23:25 UTC |                     |
	|         | systemctl status cri-docker                          |                       |         |         |                     |                     |
	|         | --all --full --no-pager                              |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-938961                             | custom-flannel-938961 | jenkins | v1.34.0 | 12 Sep 24 23:25 UTC | 12 Sep 24 23:25 UTC |
	|         | sudo systemctl cat cri-docker                        |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-938961 sudo cat                    | custom-flannel-938961 | jenkins | v1.34.0 | 12 Sep 24 23:25 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-938961 sudo cat                    | custom-flannel-938961 | jenkins | v1.34.0 | 12 Sep 24 23:25 UTC | 12 Sep 24 23:25 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-938961 sudo                        | custom-flannel-938961 | jenkins | v1.34.0 | 12 Sep 24 23:25 UTC | 12 Sep 24 23:25 UTC |
	|         | cri-dockerd --version                                |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-938961 sudo                        | custom-flannel-938961 | jenkins | v1.34.0 | 12 Sep 24 23:25 UTC |                     |
	|         | systemctl status containerd                          |                       |         |         |                     |                     |
	|         | --all --full --no-pager                              |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-938961                             | custom-flannel-938961 | jenkins | v1.34.0 | 12 Sep 24 23:25 UTC | 12 Sep 24 23:25 UTC |
	|         | sudo systemctl cat containerd                        |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-938961 sudo cat                    | custom-flannel-938961 | jenkins | v1.34.0 | 12 Sep 24 23:25 UTC | 12 Sep 24 23:25 UTC |
	|         | /lib/systemd/system/containerd.service               |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-938961                             | custom-flannel-938961 | jenkins | v1.34.0 | 12 Sep 24 23:25 UTC | 12 Sep 24 23:25 UTC |
	|         | sudo cat                                             |                       |         |         |                     |                     |
	|         | /etc/containerd/config.toml                          |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-938961 sudo                        | custom-flannel-938961 | jenkins | v1.34.0 | 12 Sep 24 23:25 UTC | 12 Sep 24 23:25 UTC |
	|         | containerd config dump                               |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-938961 sudo                        | custom-flannel-938961 | jenkins | v1.34.0 | 12 Sep 24 23:25 UTC | 12 Sep 24 23:25 UTC |
	|         | systemctl status crio --all                          |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-938961 sudo                        | custom-flannel-938961 | jenkins | v1.34.0 | 12 Sep 24 23:25 UTC | 12 Sep 24 23:25 UTC |
	|         | systemctl cat crio --no-pager                        |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-938961 sudo                        | custom-flannel-938961 | jenkins | v1.34.0 | 12 Sep 24 23:25 UTC | 12 Sep 24 23:25 UTC |
	|         | find /etc/crio -type f -exec                         |                       |         |         |                     |                     |
	|         | sh -c 'echo {}; cat {}' \;                           |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-938961 sudo                        | custom-flannel-938961 | jenkins | v1.34.0 | 12 Sep 24 23:25 UTC | 12 Sep 24 23:25 UTC |
	|         | crio config                                          |                       |         |         |                     |                     |
	| delete  | -p custom-flannel-938961                             | custom-flannel-938961 | jenkins | v1.34.0 | 12 Sep 24 23:25 UTC | 12 Sep 24 23:25 UTC |
	| start   | -p bridge-938961 --memory=3072                       | bridge-938961         | jenkins | v1.34.0 | 12 Sep 24 23:25 UTC |                     |
	|         | --alsologtostderr --wait=true                        |                       |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                       |         |         |                     |                     |
	|         | --cni=bridge --driver=kvm2                           |                       |         |         |                     |                     |
	|         | --container-runtime=crio                             |                       |         |         |                     |                     |
	| ssh     | -p flannel-938961 pgrep -a                           | flannel-938961        | jenkins | v1.34.0 | 12 Sep 24 23:25 UTC | 12 Sep 24 23:25 UTC |
	|         | kubelet                                              |                       |         |         |                     |                     |
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/12 23:25:14
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0912 23:25:14.868701   77120 out.go:345] Setting OutFile to fd 1 ...
	I0912 23:25:14.868959   77120 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 23:25:14.868970   77120 out.go:358] Setting ErrFile to fd 2...
	I0912 23:25:14.868975   77120 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 23:25:14.869257   77120 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-5891/.minikube/bin
	I0912 23:25:14.869954   77120 out.go:352] Setting JSON to false
	I0912 23:25:14.871118   77120 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7657,"bootTime":1726175858,"procs":305,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0912 23:25:14.871180   77120 start.go:139] virtualization: kvm guest
	I0912 23:25:14.872884   77120 out.go:177] * [bridge-938961] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0912 23:25:14.875322   77120 out.go:177]   - MINIKUBE_LOCATION=19616
	I0912 23:25:14.875328   77120 notify.go:220] Checking for updates...
	I0912 23:25:14.877358   77120 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 23:25:14.878907   77120 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19616-5891/kubeconfig
	I0912 23:25:14.880361   77120 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19616-5891/.minikube
	I0912 23:25:14.881827   77120 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0912 23:25:14.883335   77120 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 23:25:14.885428   77120 config.go:182] Loaded profile config "enable-default-cni-938961": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 23:25:14.885541   77120 config.go:182] Loaded profile config "flannel-938961": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 23:25:14.885654   77120 config.go:182] Loaded profile config "no-preload-380092": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 23:25:14.885788   77120 driver.go:394] Setting default libvirt URI to qemu:///system
	I0912 23:25:14.928401   77120 out.go:177] * Using the kvm2 driver based on user configuration
	I0912 23:25:14.930063   77120 start.go:297] selected driver: kvm2
	I0912 23:25:14.930085   77120 start.go:901] validating driver "kvm2" against <nil>
	I0912 23:25:14.930099   77120 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 23:25:14.930947   77120 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 23:25:14.931049   77120 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19616-5891/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0912 23:25:14.947036   77120 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0912 23:25:14.947088   77120 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0912 23:25:14.947329   77120 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 23:25:14.947398   77120 cni.go:84] Creating CNI manager for "bridge"
	I0912 23:25:14.947413   77120 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0912 23:25:14.947481   77120 start.go:340] cluster config:
	{Name:bridge-938961 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:bridge-938961 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 23:25:14.947595   77120 iso.go:125] acquiring lock: {Name:mk3ec3c4afd4210b7425f6425f55e7f581d9a5a9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 23:25:14.949274   77120 out.go:177] * Starting "bridge-938961" primary control-plane node in "bridge-938961" cluster
	I0912 23:25:11.947851   75574 main.go:141] libmachine: (enable-default-cni-938961) DBG | domain enable-default-cni-938961 has defined MAC address 52:54:00:24:ec:69 in network mk-enable-default-cni-938961
	I0912 23:25:11.948381   75574 main.go:141] libmachine: (enable-default-cni-938961) DBG | unable to find current IP address of domain enable-default-cni-938961 in network mk-enable-default-cni-938961
	I0912 23:25:11.948413   75574 main.go:141] libmachine: (enable-default-cni-938961) DBG | I0912 23:25:11.948300   75597 retry.go:31] will retry after 2.062363522s: waiting for machine to come up
	I0912 23:25:14.011984   75574 main.go:141] libmachine: (enable-default-cni-938961) DBG | domain enable-default-cni-938961 has defined MAC address 52:54:00:24:ec:69 in network mk-enable-default-cni-938961
	I0912 23:25:14.038349   75574 main.go:141] libmachine: (enable-default-cni-938961) DBG | unable to find current IP address of domain enable-default-cni-938961 in network mk-enable-default-cni-938961
	I0912 23:25:14.038400   75574 main.go:141] libmachine: (enable-default-cni-938961) DBG | I0912 23:25:14.038277   75597 retry.go:31] will retry after 2.34964389s: waiting for machine to come up
	I0912 23:25:12.383949   73683 node_ready.go:53] node "flannel-938961" has status "Ready":"False"
	I0912 23:25:14.384476   73683 node_ready.go:53] node "flannel-938961" has status "Ready":"False"
	I0912 23:25:16.388062   73683 node_ready.go:53] node "flannel-938961" has status "Ready":"False"
	I0912 23:25:14.951087   77120 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0912 23:25:14.951129   77120 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0912 23:25:14.951141   77120 cache.go:56] Caching tarball of preloaded images
	I0912 23:25:14.951247   77120 preload.go:172] Found /home/jenkins/minikube-integration/19616-5891/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0912 23:25:14.951266   77120 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0912 23:25:14.951422   77120 profile.go:143] Saving config to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/bridge-938961/config.json ...
	I0912 23:25:14.951451   77120 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/bridge-938961/config.json: {Name:mk595a0415f4cd8d6776770ec09be23bd8737dd0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 23:25:14.951622   77120 start.go:360] acquireMachinesLock for bridge-938961: {Name:mkbb0a9e58b1349e86a63b6069c42d4248d92c3b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
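
The preload lines a few entries above show the start path checking whether the preloaded image tarball already exists in the local cache before attempting any download. A minimal sketch of that kind of existence check is below; the path is lifted from the log for illustration and the logic is an assumption, not minikube's actual preload code.

    // preloadcheck.go - sketch of "use the cached tarball if it is already present".
    // The cache path mirrors the one reported in the log above and is an assumption.
    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        tarball := os.ExpandEnv("$HOME/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4")
        if info, err := os.Stat(tarball); err == nil && info.Size() > 0 {
            fmt.Println("found local preload, skipping download:", tarball)
            return
        }
        fmt.Println("preload not cached; a real implementation would download it here")
    }
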
	I0912 23:25:17.383807   73683 node_ready.go:49] node "flannel-938961" has status "Ready":"True"
	I0912 23:25:17.383832   73683 node_ready.go:38] duration metric: took 9.004133669s for node "flannel-938961" to be "Ready" ...
	I0912 23:25:17.383842   73683 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 23:25:17.390784   73683 pod_ready.go:79] waiting up to 15m0s for pod "coredns-7c65d6cfc9-bb6x9" in "kube-system" namespace to be "Ready" ...
	I0912 23:25:19.398032   73683 pod_ready.go:103] pod "coredns-7c65d6cfc9-bb6x9" in "kube-system" namespace has status "Ready":"False"
	I0912 23:25:19.899107   73683 pod_ready.go:93] pod "coredns-7c65d6cfc9-bb6x9" in "kube-system" namespace has status "Ready":"True"
	I0912 23:25:19.899129   73683 pod_ready.go:82] duration metric: took 2.508317509s for pod "coredns-7c65d6cfc9-bb6x9" in "kube-system" namespace to be "Ready" ...
	I0912 23:25:19.899140   73683 pod_ready.go:79] waiting up to 15m0s for pod "etcd-flannel-938961" in "kube-system" namespace to be "Ready" ...
	I0912 23:25:19.906187   73683 pod_ready.go:93] pod "etcd-flannel-938961" in "kube-system" namespace has status "Ready":"True"
	I0912 23:25:19.906208   73683 pod_ready.go:82] duration metric: took 7.059817ms for pod "etcd-flannel-938961" in "kube-system" namespace to be "Ready" ...
	I0912 23:25:19.906219   73683 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-flannel-938961" in "kube-system" namespace to be "Ready" ...
	I0912 23:25:19.913388   73683 pod_ready.go:93] pod "kube-apiserver-flannel-938961" in "kube-system" namespace has status "Ready":"True"
	I0912 23:25:19.913410   73683 pod_ready.go:82] duration metric: took 7.183895ms for pod "kube-apiserver-flannel-938961" in "kube-system" namespace to be "Ready" ...
	I0912 23:25:19.913422   73683 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-flannel-938961" in "kube-system" namespace to be "Ready" ...
	I0912 23:25:19.918242   73683 pod_ready.go:93] pod "kube-controller-manager-flannel-938961" in "kube-system" namespace has status "Ready":"True"
	I0912 23:25:19.918263   73683 pod_ready.go:82] duration metric: took 4.834209ms for pod "kube-controller-manager-flannel-938961" in "kube-system" namespace to be "Ready" ...
	I0912 23:25:19.918274   73683 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-kw6bv" in "kube-system" namespace to be "Ready" ...
	I0912 23:25:19.922807   73683 pod_ready.go:93] pod "kube-proxy-kw6bv" in "kube-system" namespace has status "Ready":"True"
	I0912 23:25:19.922826   73683 pod_ready.go:82] duration metric: took 4.544784ms for pod "kube-proxy-kw6bv" in "kube-system" namespace to be "Ready" ...
	I0912 23:25:19.922835   73683 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-flannel-938961" in "kube-system" namespace to be "Ready" ...
	I0912 23:25:20.294837   73683 pod_ready.go:93] pod "kube-scheduler-flannel-938961" in "kube-system" namespace has status "Ready":"True"
	I0912 23:25:20.294861   73683 pod_ready.go:82] duration metric: took 372.018236ms for pod "kube-scheduler-flannel-938961" in "kube-system" namespace to be "Ready" ...
	I0912 23:25:20.294875   73683 pod_ready.go:39] duration metric: took 2.911004845s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
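
The block above waits for each system-critical pod (CoreDNS, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler) to report the Ready condition. A minimal client-go sketch of that check follows; the kubeconfig location and pod name are assumptions taken from defaults and from the log, and this is illustrative only, not minikube's pod_ready implementation.

    // podready.go - sketch: ask the API server whether a kube-system pod is Ready.
    // Kubeconfig path (default ~/.kube/config) and the pod name are assumptions.
    package main

    import (
        "context"
        "fmt"
        "log"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's Ready condition is True.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        // Pod name taken from the log above; adjust for your own cluster.
        pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(), "coredns-7c65d6cfc9-bb6x9", metav1.GetOptions{})
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("pod %s Ready=%v\n", pod.Name, isPodReady(pod))
    }

A polling wrapper around this check, with a deadline like the 15m0s budget in the log, gives the same "wait for Ready" behaviour the test exercises.
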
	I0912 23:25:20.294891   73683 api_server.go:52] waiting for apiserver process to appear ...
	I0912 23:25:20.294948   73683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:25:20.309869   73683 api_server.go:72] duration metric: took 12.353302311s to wait for apiserver process to appear ...
	I0912 23:25:20.309894   73683 api_server.go:88] waiting for apiserver healthz status ...
	I0912 23:25:20.309916   73683 api_server.go:253] Checking apiserver healthz at https://192.168.39.20:8443/healthz ...
	I0912 23:25:20.314838   73683 api_server.go:279] https://192.168.39.20:8443/healthz returned 200:
	ok
	I0912 23:25:20.315788   73683 api_server.go:141] control plane version: v1.31.1
	I0912 23:25:20.315812   73683 api_server.go:131] duration metric: took 5.910477ms to wait for apiserver health ...
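
The healthz probe above is a plain HTTPS GET against the API server, expected to return 200 with the body "ok". A minimal Go sketch of such a probe is below; the address comes from the log, and skipping certificate verification is a shortcut for illustration only (the real check trusts the cluster CA instead).

    // healthz.go - sketch of probing the apiserver /healthz endpoint over HTTPS.
    // Address taken from the log above; InsecureSkipVerify is an illustration-only
    // shortcut, not how a production client should verify the API server.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "log"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://192.168.39.20:8443/healthz")
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("healthz: %d %s\n", resp.StatusCode, string(body)) // expect: 200 ok
    }
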
	I0912 23:25:20.315823   73683 system_pods.go:43] waiting for kube-system pods to appear ...
	I0912 23:25:20.497669   73683 system_pods.go:59] 7 kube-system pods found
	I0912 23:25:20.497714   73683 system_pods.go:61] "coredns-7c65d6cfc9-bb6x9" [4d196933-9a33-4b5c-a6e9-838de78b6e36] Running
	I0912 23:25:20.497725   73683 system_pods.go:61] "etcd-flannel-938961" [9c622192-4f09-41ab-b27d-2f6a204879bc] Running
	I0912 23:25:20.497731   73683 system_pods.go:61] "kube-apiserver-flannel-938961" [4eb13c14-7c48-4f10-b601-83eee7cd8e15] Running
	I0912 23:25:20.497736   73683 system_pods.go:61] "kube-controller-manager-flannel-938961" [9b86dfe2-ee43-44ff-9f0d-d19f04729967] Running
	I0912 23:25:20.497742   73683 system_pods.go:61] "kube-proxy-kw6bv" [f6a81168-20bf-4f8a-bc13-599a92ea4584] Running
	I0912 23:25:20.497750   73683 system_pods.go:61] "kube-scheduler-flannel-938961" [108a190d-8f63-40d7-89ef-24724a817dac] Running
	I0912 23:25:20.497759   73683 system_pods.go:61] "storage-provisioner" [e43ed8d5-92ca-45a4-95b5-8894eabebd93] Running
	I0912 23:25:20.497770   73683 system_pods.go:74] duration metric: took 181.94046ms to wait for pod list to return data ...
	I0912 23:25:20.497782   73683 default_sa.go:34] waiting for default service account to be created ...
	I0912 23:25:20.695643   73683 default_sa.go:45] found service account: "default"
	I0912 23:25:20.695672   73683 default_sa.go:55] duration metric: took 197.879821ms for default service account to be created ...
	I0912 23:25:20.695682   73683 system_pods.go:116] waiting for k8s-apps to be running ...
	I0912 23:25:20.897712   73683 system_pods.go:86] 7 kube-system pods found
	I0912 23:25:20.897739   73683 system_pods.go:89] "coredns-7c65d6cfc9-bb6x9" [4d196933-9a33-4b5c-a6e9-838de78b6e36] Running
	I0912 23:25:20.897744   73683 system_pods.go:89] "etcd-flannel-938961" [9c622192-4f09-41ab-b27d-2f6a204879bc] Running
	I0912 23:25:20.897749   73683 system_pods.go:89] "kube-apiserver-flannel-938961" [4eb13c14-7c48-4f10-b601-83eee7cd8e15] Running
	I0912 23:25:20.897752   73683 system_pods.go:89] "kube-controller-manager-flannel-938961" [9b86dfe2-ee43-44ff-9f0d-d19f04729967] Running
	I0912 23:25:20.897756   73683 system_pods.go:89] "kube-proxy-kw6bv" [f6a81168-20bf-4f8a-bc13-599a92ea4584] Running
	I0912 23:25:20.897760   73683 system_pods.go:89] "kube-scheduler-flannel-938961" [108a190d-8f63-40d7-89ef-24724a817dac] Running
	I0912 23:25:20.897763   73683 system_pods.go:89] "storage-provisioner" [e43ed8d5-92ca-45a4-95b5-8894eabebd93] Running
	I0912 23:25:20.897769   73683 system_pods.go:126] duration metric: took 202.081353ms to wait for k8s-apps to be running ...
	I0912 23:25:20.897777   73683 system_svc.go:44] waiting for kubelet service to be running ....
	I0912 23:25:20.897814   73683 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 23:25:20.913211   73683 system_svc.go:56] duration metric: took 15.42803ms WaitForService to wait for kubelet
	I0912 23:25:20.913242   73683 kubeadm.go:582] duration metric: took 12.956676621s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 23:25:20.913291   73683 node_conditions.go:102] verifying NodePressure condition ...
	I0912 23:25:21.095579   73683 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0912 23:25:21.095610   73683 node_conditions.go:123] node cpu capacity is 2
	I0912 23:25:21.095625   73683 node_conditions.go:105] duration metric: took 182.324428ms to run NodePressure ...
	I0912 23:25:21.095640   73683 start.go:241] waiting for startup goroutines ...
	I0912 23:25:21.095650   73683 start.go:246] waiting for cluster config update ...
	I0912 23:25:21.095662   73683 start.go:255] writing updated cluster config ...
	I0912 23:25:21.095964   73683 ssh_runner.go:195] Run: rm -f paused
	I0912 23:25:21.143044   73683 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0912 23:25:21.145577   73683 out.go:177] * Done! kubectl is now configured to use "flannel-938961" cluster and "default" namespace by default
	I0912 23:25:16.390801   75574 main.go:141] libmachine: (enable-default-cni-938961) DBG | domain enable-default-cni-938961 has defined MAC address 52:54:00:24:ec:69 in network mk-enable-default-cni-938961
	I0912 23:25:16.391310   75574 main.go:141] libmachine: (enable-default-cni-938961) DBG | unable to find current IP address of domain enable-default-cni-938961 in network mk-enable-default-cni-938961
	I0912 23:25:16.391351   75574 main.go:141] libmachine: (enable-default-cni-938961) DBG | I0912 23:25:16.391241   75597 retry.go:31] will retry after 3.777325323s: waiting for machine to come up
	I0912 23:25:20.172511   75574 main.go:141] libmachine: (enable-default-cni-938961) DBG | domain enable-default-cni-938961 has defined MAC address 52:54:00:24:ec:69 in network mk-enable-default-cni-938961
	I0912 23:25:20.173037   75574 main.go:141] libmachine: (enable-default-cni-938961) DBG | unable to find current IP address of domain enable-default-cni-938961 in network mk-enable-default-cni-938961
	I0912 23:25:20.173065   75574 main.go:141] libmachine: (enable-default-cni-938961) DBG | I0912 23:25:20.172995   75597 retry.go:31] will retry after 4.639331652s: waiting for machine to come up
	I0912 23:25:24.814761   75574 main.go:141] libmachine: (enable-default-cni-938961) DBG | domain enable-default-cni-938961 has defined MAC address 52:54:00:24:ec:69 in network mk-enable-default-cni-938961
	I0912 23:25:24.815230   75574 main.go:141] libmachine: (enable-default-cni-938961) DBG | domain enable-default-cni-938961 has current primary IP address 192.168.72.237 and MAC address 52:54:00:24:ec:69 in network mk-enable-default-cni-938961
	I0912 23:25:24.815263   75574 main.go:141] libmachine: (enable-default-cni-938961) Found IP for machine: 192.168.72.237
	I0912 23:25:24.815274   75574 main.go:141] libmachine: (enable-default-cni-938961) Reserving static IP address...
	I0912 23:25:24.815628   75574 main.go:141] libmachine: (enable-default-cni-938961) DBG | unable to find host DHCP lease matching {name: "enable-default-cni-938961", mac: "52:54:00:24:ec:69", ip: "192.168.72.237"} in network mk-enable-default-cni-938961
	I0912 23:25:24.896120   75574 main.go:141] libmachine: (enable-default-cni-938961) DBG | Getting to WaitForSSH function...
	I0912 23:25:24.896153   75574 main.go:141] libmachine: (enable-default-cni-938961) Reserved static IP address: 192.168.72.237
	I0912 23:25:24.896168   75574 main.go:141] libmachine: (enable-default-cni-938961) Waiting for SSH to be available...
	I0912 23:25:24.899681   75574 main.go:141] libmachine: (enable-default-cni-938961) DBG | domain enable-default-cni-938961 has defined MAC address 52:54:00:24:ec:69 in network mk-enable-default-cni-938961
	I0912 23:25:24.900125   75574 main.go:141] libmachine: (enable-default-cni-938961) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:24:ec:69", ip: ""} in network mk-enable-default-cni-938961
	I0912 23:25:24.900152   75574 main.go:141] libmachine: (enable-default-cni-938961) DBG | unable to find defined IP address of network mk-enable-default-cni-938961 interface with MAC address 52:54:00:24:ec:69
	I0912 23:25:24.900340   75574 main.go:141] libmachine: (enable-default-cni-938961) DBG | Using SSH client type: external
	I0912 23:25:24.900368   75574 main.go:141] libmachine: (enable-default-cni-938961) DBG | Using SSH private key: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/enable-default-cni-938961/id_rsa (-rw-------)
	I0912 23:25:24.900395   75574 main.go:141] libmachine: (enable-default-cni-938961) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19616-5891/.minikube/machines/enable-default-cni-938961/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0912 23:25:24.900413   75574 main.go:141] libmachine: (enable-default-cni-938961) DBG | About to run SSH command:
	I0912 23:25:24.900426   75574 main.go:141] libmachine: (enable-default-cni-938961) DBG | exit 0
	I0912 23:25:24.904084   75574 main.go:141] libmachine: (enable-default-cni-938961) DBG | SSH cmd err, output: exit status 255: 
	I0912 23:25:24.904107   75574 main.go:141] libmachine: (enable-default-cni-938961) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0912 23:25:24.904115   75574 main.go:141] libmachine: (enable-default-cni-938961) DBG | command : exit 0
	I0912 23:25:24.904121   75574 main.go:141] libmachine: (enable-default-cni-938961) DBG | err     : exit status 255
	I0912 23:25:24.904129   75574 main.go:141] libmachine: (enable-default-cni-938961) DBG | output  : 
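
The lines above show the driver waiting for SSH by repeatedly running the external ssh binary with the command "exit 0" and retrying while it fails; exit status 255 is ssh's generic connection/protocol error, which is expected until the VM finishes booting. Below is a minimal sketch of that retry loop; the host, key path and retry budget are assumptions for illustration, not libmachine's actual values.

    // sshwait.go - sketch: retry `ssh ... exit 0` until the machine answers,
    // mirroring the wait loop in the log above. Key path and retry budget are
    // hypothetical; the IP is the one reserved for the machine in the log.
    package main

    import (
        "log"
        "os/exec"
        "time"
    )

    func main() {
        args := []string{
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "ConnectTimeout=10",
            "-i", "/home/jenkins/.minikube/machines/example/id_rsa", // hypothetical key path
            "docker@192.168.72.237",
            "exit", "0",
        }
        for attempt := 1; attempt <= 30; attempt++ {
            err := exec.Command("/usr/bin/ssh", args...).Run()
            if err == nil {
                log.Println("SSH is available")
                return
            }
            log.Printf("attempt %d: ssh not ready yet (%v), retrying", attempt, err)
            time.Sleep(5 * time.Second)
        }
        log.Fatal("timed out waiting for SSH")
    }
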
	I0912 23:25:29.363433   77120 start.go:364] duration metric: took 14.411777822s to acquireMachinesLock for "bridge-938961"
	I0912 23:25:29.363502   77120 start.go:93] Provisioning new machine with config: &{Name:bridge-938961 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:bridge-938961 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountP
ort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0912 23:25:29.363649   77120 start.go:125] createHost starting for "" (driver="kvm2")
	I0912 23:25:29.365843   77120 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0912 23:25:29.366088   77120 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:25:29.366145   77120 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:25:29.384571   77120 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39839
	I0912 23:25:29.385125   77120 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:25:29.385815   77120 main.go:141] libmachine: Using API Version  1
	I0912 23:25:29.385838   77120 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:25:29.386236   77120 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:25:29.386446   77120 main.go:141] libmachine: (bridge-938961) Calling .GetMachineName
	I0912 23:25:29.386608   77120 main.go:141] libmachine: (bridge-938961) Calling .DriverName
	I0912 23:25:29.386773   77120 start.go:159] libmachine.API.Create for "bridge-938961" (driver="kvm2")
	I0912 23:25:29.386802   77120 client.go:168] LocalClient.Create starting
	I0912 23:25:29.386834   77120 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem
	I0912 23:25:29.386876   77120 main.go:141] libmachine: Decoding PEM data...
	I0912 23:25:29.386904   77120 main.go:141] libmachine: Parsing certificate...
	I0912 23:25:29.386975   77120 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem
	I0912 23:25:29.387002   77120 main.go:141] libmachine: Decoding PEM data...
	I0912 23:25:29.387020   77120 main.go:141] libmachine: Parsing certificate...
	I0912 23:25:29.387040   77120 main.go:141] libmachine: Running pre-create checks...
	I0912 23:25:29.387054   77120 main.go:141] libmachine: (bridge-938961) Calling .PreCreateCheck
	I0912 23:25:29.387505   77120 main.go:141] libmachine: (bridge-938961) Calling .GetConfigRaw
	I0912 23:25:29.388037   77120 main.go:141] libmachine: Creating machine...
	I0912 23:25:29.388061   77120 main.go:141] libmachine: (bridge-938961) Calling .Create
	I0912 23:25:29.388201   77120 main.go:141] libmachine: (bridge-938961) Creating KVM machine...
	I0912 23:25:29.389697   77120 main.go:141] libmachine: (bridge-938961) DBG | found existing default KVM network
	I0912 23:25:29.391461   77120 main.go:141] libmachine: (bridge-938961) DBG | I0912 23:25:29.391272   77290 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:9c:64:db} reservation:<nil>}
	I0912 23:25:29.392478   77120 main.go:141] libmachine: (bridge-938961) DBG | I0912 23:25:29.392363   77290 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:35:78:da} reservation:<nil>}
	I0912 23:25:29.394014   77120 main.go:141] libmachine: (bridge-938961) DBG | I0912 23:25:29.393888   77290 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000113c20}
	I0912 23:25:29.394040   77120 main.go:141] libmachine: (bridge-938961) DBG | created network xml: 
	I0912 23:25:29.394052   77120 main.go:141] libmachine: (bridge-938961) DBG | <network>
	I0912 23:25:29.394064   77120 main.go:141] libmachine: (bridge-938961) DBG |   <name>mk-bridge-938961</name>
	I0912 23:25:29.394077   77120 main.go:141] libmachine: (bridge-938961) DBG |   <dns enable='no'/>
	I0912 23:25:29.394084   77120 main.go:141] libmachine: (bridge-938961) DBG |   
	I0912 23:25:29.394093   77120 main.go:141] libmachine: (bridge-938961) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0912 23:25:29.394107   77120 main.go:141] libmachine: (bridge-938961) DBG |     <dhcp>
	I0912 23:25:29.394183   77120 main.go:141] libmachine: (bridge-938961) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0912 23:25:29.394210   77120 main.go:141] libmachine: (bridge-938961) DBG |     </dhcp>
	I0912 23:25:29.394270   77120 main.go:141] libmachine: (bridge-938961) DBG |   </ip>
	I0912 23:25:29.394298   77120 main.go:141] libmachine: (bridge-938961) DBG |   
	I0912 23:25:29.394310   77120 main.go:141] libmachine: (bridge-938961) DBG | </network>
	I0912 23:25:29.394323   77120 main.go:141] libmachine: (bridge-938961) DBG | 
	I0912 23:25:29.401074   77120 main.go:141] libmachine: (bridge-938961) DBG | trying to create private KVM network mk-bridge-938961 192.168.61.0/24...
	I0912 23:25:29.497927   77120 main.go:141] libmachine: (bridge-938961) Setting up store path in /home/jenkins/minikube-integration/19616-5891/.minikube/machines/bridge-938961 ...
	I0912 23:25:29.497967   77120 main.go:141] libmachine: (bridge-938961) Building disk image from file:///home/jenkins/minikube-integration/19616-5891/.minikube/cache/iso/amd64/minikube-v1.34.0-1726156389-19616-amd64.iso
	I0912 23:25:29.497991   77120 main.go:141] libmachine: (bridge-938961) DBG | private KVM network mk-bridge-938961 192.168.61.0/24 created
	I0912 23:25:29.498003   77120 main.go:141] libmachine: (bridge-938961) DBG | I0912 23:25:29.497769   77290 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19616-5891/.minikube
	I0912 23:25:29.498061   77120 main.go:141] libmachine: (bridge-938961) Downloading /home/jenkins/minikube-integration/19616-5891/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19616-5891/.minikube/cache/iso/amd64/minikube-v1.34.0-1726156389-19616-amd64.iso...
	I0912 23:25:29.765920   77120 main.go:141] libmachine: (bridge-938961) DBG | I0912 23:25:29.765775   77290 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/bridge-938961/id_rsa...
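
The XML printed above is the private libvirt network (mk-bridge-938961, 192.168.61.0/24 with a DHCP range) that the kvm2 driver defines for the new machine. The sketch below shows the equivalent operation done by shelling out to virsh from Go; the actual driver talks to libvirt directly, and the XML file name here is an assumption.

    // netcreate.go - sketch: define, start and autostart a private libvirt network
    // from an XML file via virsh. Only an illustration of the operation logged
    // above; the real driver uses the libvirt API rather than the virsh CLI.
    package main

    import (
        "log"
        "os/exec"
    )

    func run(name string, args ...string) {
        out, err := exec.Command(name, args...).CombinedOutput()
        if err != nil {
            log.Fatalf("%s %v failed: %v\n%s", name, args, err, out)
        }
        log.Printf("%s %v:\n%s", name, args, out)
    }

    func main() {
        // mk-bridge-938961.xml would contain the <network> document shown in the log above.
        run("virsh", "net-define", "mk-bridge-938961.xml")
        run("virsh", "net-start", "mk-bridge-938961")
        run("virsh", "net-autostart", "mk-bridge-938961")
    }
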
	
	
	==> CRI-O <==
	Sep 12 23:25:30 no-preload-380092 crio[704]: time="2024-09-12 23:25:30.460966384Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726183530460931790,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=369d8477-dab9-491a-a671-934681dae1fc name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 23:25:30 no-preload-380092 crio[704]: time="2024-09-12 23:25:30.461832703Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dcbb525f-15a2-4b54-88d3-285f3fe916ac name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 23:25:30 no-preload-380092 crio[704]: time="2024-09-12 23:25:30.461919478Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dcbb525f-15a2-4b54-88d3-285f3fe916ac name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 23:25:30 no-preload-380092 crio[704]: time="2024-09-12 23:25:30.462260550Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3d117ed77ba5f8ccc22fe496a5f719999fd9e893623ba84686634f7b92c09713,PodSandboxId:88a25c57dc5657c04a7eefc946b1a9f50aca508e69469eb9cf99c0b62934957b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726182210356961708,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f173a1f6-3772-4f08-8e40-2215cc9d2878,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e47feca4846d958586814293099955bcc8353124c34ec4bde8012da2a0564bf3,PodSandboxId:b3b07d8fb160c889b6c8bff184a5c37a69d0f4fcadb25c2858d711ac86ffb972,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1726182190213252979,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3d6e0a88-c74b-4cce-b218-5f7cdb45fc70,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e59d289c9afefef6dd746277d09550e5c7d708d7c1ba9f74b2dd91300f807189,PodSandboxId:75dad9f5541516fbf87a8c6de9e222111e9e4a3ca4b5e8d16e98a9d2f4124940,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726182186838585504,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-twck7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fb00aff-8a30-4634-a804-1419eabfe727,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c4807559910184befc0b6a9ccb40b1cb9d4997f6b1405b76ba67049ce730f37,PodSandboxId:566addd15dd3e980d75c0a8ea07a3a85983efac22937a4610e066d0c3629c849,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726182179476007740,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z4rcx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d17caa2e-d0fe-45e8-a9
6c-d1cc1b55e665,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d40483dfc659456939d7965c1ecf1113c1c38e26fe985bf19f26a0123206ea3a,PodSandboxId:88a25c57dc5657c04a7eefc946b1a9f50aca508e69469eb9cf99c0b62934957b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726182179439405223,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f173a1f6-3772-4f08-8e40-2215cc9d28
78,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb473fa0b2d91e95a8716ca3d1bad44b431a3dfbfae11984657c16c7392b58f0,PodSandboxId:bc4e3cf733a3ead6997642f5626c451468c996cff98a0085021a02e15161622a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726182179188564101,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-380092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d1efe73ad279e8
ddd7a8b93f476624,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35282e97473f2f870f5366d221b9a40e0a456b6fd44fffe321c9fe160448cc29,PodSandboxId:9b9c63eaf40efa04bbffb16c44659876175e166a2f14a629e990220fd1036e9a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726182170642979462,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-380092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5b8dbad2d5a7cd172ad5c2fef02d4f2,},Annotations:map[string]s
tring{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c73944a510413add414948e1c8fd8d71ef732d91b7ddc96646b7ba376f81416,PodSandboxId:febe058d23f6428b75af213b96a1101fcf865f3cdac48508a664d37b9bee26e3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726182160125464160,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-380092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a09dcc580279d4b8f7494570bf7f82a,},Annotations:map[string]string{io.kube
rnetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00f124dff0f77cf23377830c9dc5e2a8676d1a40ebf70730705d537ba7a4b8d3,PodSandboxId:febe058d23f6428b75af213b96a1101fcf865f3cdac48508a664d37b9bee26e3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726182138690142656,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-380092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a09dcc580279d4b8f7494570bf7f82a,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:635fd2c2a6dd2558d88447c857c1c59b9cf6883a458ab9f62fbcd48fc94048f7,PodSandboxId:bc4e3cf733a3ead6997642f5626c451468c996cff98a0085021a02e15161622a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726182138634173526,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-380092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d1efe73ad279e8ddd7a8b93f476624,},Annotations:map[string]string{io.kuber
netes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3187fdef2bd318bb335be822a4e75cfd0ac02a49607d8e398ab18102a83437ec,PodSandboxId:cde6a78bdfb6c3b4e3629acccff0cc9698404a5fda13b87c5643c62d19cad503,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726182138606250790,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-380092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a3192af1ac01d559c47e957931bf1bc,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dcbb525f-15a2-4b54-88d3-285f3fe916ac name=/runtime.v1.RuntimeService/ListContainers
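
The CRI-O debug entries above are the CRI gRPC calls (Version, ImageFsInfo, ListContainers) that clients such as the kubelet, or crictl during log collection, issue against the runtime socket; an empty filter produces the full container list ("No filters were applied"). A minimal sketch of issuing the same Version and ListContainers RPCs against CRI-O's default socket is below; the socket path may differ on other setups and the call typically requires root.

    // crilist.go - sketch: call the CRI Version and ListContainers RPCs on the
    // CRI-O socket, the same RPCs visible in the debug log above.
    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        rt := runtimeapi.NewRuntimeServiceClient(conn)

        ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("runtime: %s %s\n", ver.RuntimeName, ver.RuntimeVersion)

        // Empty filter == full container list, as in the log lines above.
        list, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
        if err != nil {
            log.Fatal(err)
        }
        for _, c := range list.Containers {
            fmt.Printf("%s  %s  %s\n", c.Id, c.Metadata.Name, c.State)
        }
    }

From the command line, `crictl ps -a` and `crictl version` report the same information without any code.
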
	Sep 12 23:25:30 no-preload-380092 crio[704]: time="2024-09-12 23:25:30.511377316Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0d180332-d16c-4de5-994a-4d507d958757 name=/runtime.v1.RuntimeService/Version
	Sep 12 23:25:30 no-preload-380092 crio[704]: time="2024-09-12 23:25:30.511448829Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0d180332-d16c-4de5-994a-4d507d958757 name=/runtime.v1.RuntimeService/Version
	Sep 12 23:25:30 no-preload-380092 crio[704]: time="2024-09-12 23:25:30.513406085Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=87a03fe1-95c0-4332-9dfe-ba470019dc8a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 23:25:30 no-preload-380092 crio[704]: time="2024-09-12 23:25:30.514184747Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726183530514157756,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=87a03fe1-95c0-4332-9dfe-ba470019dc8a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 23:25:30 no-preload-380092 crio[704]: time="2024-09-12 23:25:30.514933142Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a9bde998-24be-4b09-9e21-829b971083cf name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 23:25:30 no-preload-380092 crio[704]: time="2024-09-12 23:25:30.515190885Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a9bde998-24be-4b09-9e21-829b971083cf name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 23:25:30 no-preload-380092 crio[704]: time="2024-09-12 23:25:30.515644025Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3d117ed77ba5f8ccc22fe496a5f719999fd9e893623ba84686634f7b92c09713,PodSandboxId:88a25c57dc5657c04a7eefc946b1a9f50aca508e69469eb9cf99c0b62934957b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726182210356961708,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f173a1f6-3772-4f08-8e40-2215cc9d2878,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e47feca4846d958586814293099955bcc8353124c34ec4bde8012da2a0564bf3,PodSandboxId:b3b07d8fb160c889b6c8bff184a5c37a69d0f4fcadb25c2858d711ac86ffb972,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1726182190213252979,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3d6e0a88-c74b-4cce-b218-5f7cdb45fc70,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e59d289c9afefef6dd746277d09550e5c7d708d7c1ba9f74b2dd91300f807189,PodSandboxId:75dad9f5541516fbf87a8c6de9e222111e9e4a3ca4b5e8d16e98a9d2f4124940,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726182186838585504,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-twck7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fb00aff-8a30-4634-a804-1419eabfe727,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c4807559910184befc0b6a9ccb40b1cb9d4997f6b1405b76ba67049ce730f37,PodSandboxId:566addd15dd3e980d75c0a8ea07a3a85983efac22937a4610e066d0c3629c849,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726182179476007740,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z4rcx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d17caa2e-d0fe-45e8-a9
6c-d1cc1b55e665,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d40483dfc659456939d7965c1ecf1113c1c38e26fe985bf19f26a0123206ea3a,PodSandboxId:88a25c57dc5657c04a7eefc946b1a9f50aca508e69469eb9cf99c0b62934957b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726182179439405223,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f173a1f6-3772-4f08-8e40-2215cc9d28
78,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb473fa0b2d91e95a8716ca3d1bad44b431a3dfbfae11984657c16c7392b58f0,PodSandboxId:bc4e3cf733a3ead6997642f5626c451468c996cff98a0085021a02e15161622a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726182179188564101,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-380092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d1efe73ad279e8
ddd7a8b93f476624,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35282e97473f2f870f5366d221b9a40e0a456b6fd44fffe321c9fe160448cc29,PodSandboxId:9b9c63eaf40efa04bbffb16c44659876175e166a2f14a629e990220fd1036e9a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726182170642979462,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-380092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5b8dbad2d5a7cd172ad5c2fef02d4f2,},Annotations:map[string]s
tring{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c73944a510413add414948e1c8fd8d71ef732d91b7ddc96646b7ba376f81416,PodSandboxId:febe058d23f6428b75af213b96a1101fcf865f3cdac48508a664d37b9bee26e3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726182160125464160,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-380092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a09dcc580279d4b8f7494570bf7f82a,},Annotations:map[string]string{io.kube
rnetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00f124dff0f77cf23377830c9dc5e2a8676d1a40ebf70730705d537ba7a4b8d3,PodSandboxId:febe058d23f6428b75af213b96a1101fcf865f3cdac48508a664d37b9bee26e3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726182138690142656,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-380092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a09dcc580279d4b8f7494570bf7f82a,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:635fd2c2a6dd2558d88447c857c1c59b9cf6883a458ab9f62fbcd48fc94048f7,PodSandboxId:bc4e3cf733a3ead6997642f5626c451468c996cff98a0085021a02e15161622a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726182138634173526,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-380092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d1efe73ad279e8ddd7a8b93f476624,},Annotations:map[string]string{io.kuber
netes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3187fdef2bd318bb335be822a4e75cfd0ac02a49607d8e398ab18102a83437ec,PodSandboxId:cde6a78bdfb6c3b4e3629acccff0cc9698404a5fda13b87c5643c62d19cad503,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726182138606250790,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-380092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a3192af1ac01d559c47e957931bf1bc,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a9bde998-24be-4b09-9e21-829b971083cf name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 23:25:30 no-preload-380092 crio[704]: time="2024-09-12 23:25:30.564363311Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=025ecb45-daf9-4844-83dd-42d3b0860e29 name=/runtime.v1.RuntimeService/Version
	Sep 12 23:25:30 no-preload-380092 crio[704]: time="2024-09-12 23:25:30.564451609Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=025ecb45-daf9-4844-83dd-42d3b0860e29 name=/runtime.v1.RuntimeService/Version
	Sep 12 23:25:30 no-preload-380092 crio[704]: time="2024-09-12 23:25:30.566146383Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=586bddc9-5cfd-4065-b59a-e059cc7bce27 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 23:25:30 no-preload-380092 crio[704]: time="2024-09-12 23:25:30.566796421Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726183530566761214,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=586bddc9-5cfd-4065-b59a-e059cc7bce27 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 23:25:30 no-preload-380092 crio[704]: time="2024-09-12 23:25:30.567724884Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=915916e4-a6be-4354-b720-11e6c24dd49e name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 23:25:30 no-preload-380092 crio[704]: time="2024-09-12 23:25:30.567798521Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=915916e4-a6be-4354-b720-11e6c24dd49e name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 23:25:30 no-preload-380092 crio[704]: time="2024-09-12 23:25:30.568128117Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3d117ed77ba5f8ccc22fe496a5f719999fd9e893623ba84686634f7b92c09713,PodSandboxId:88a25c57dc5657c04a7eefc946b1a9f50aca508e69469eb9cf99c0b62934957b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726182210356961708,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f173a1f6-3772-4f08-8e40-2215cc9d2878,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e47feca4846d958586814293099955bcc8353124c34ec4bde8012da2a0564bf3,PodSandboxId:b3b07d8fb160c889b6c8bff184a5c37a69d0f4fcadb25c2858d711ac86ffb972,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1726182190213252979,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3d6e0a88-c74b-4cce-b218-5f7cdb45fc70,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e59d289c9afefef6dd746277d09550e5c7d708d7c1ba9f74b2dd91300f807189,PodSandboxId:75dad9f5541516fbf87a8c6de9e222111e9e4a3ca4b5e8d16e98a9d2f4124940,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726182186838585504,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-twck7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fb00aff-8a30-4634-a804-1419eabfe727,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c4807559910184befc0b6a9ccb40b1cb9d4997f6b1405b76ba67049ce730f37,PodSandboxId:566addd15dd3e980d75c0a8ea07a3a85983efac22937a4610e066d0c3629c849,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726182179476007740,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z4rcx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d17caa2e-d0fe-45e8-a9
6c-d1cc1b55e665,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d40483dfc659456939d7965c1ecf1113c1c38e26fe985bf19f26a0123206ea3a,PodSandboxId:88a25c57dc5657c04a7eefc946b1a9f50aca508e69469eb9cf99c0b62934957b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726182179439405223,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f173a1f6-3772-4f08-8e40-2215cc9d28
78,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb473fa0b2d91e95a8716ca3d1bad44b431a3dfbfae11984657c16c7392b58f0,PodSandboxId:bc4e3cf733a3ead6997642f5626c451468c996cff98a0085021a02e15161622a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726182179188564101,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-380092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d1efe73ad279e8
ddd7a8b93f476624,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35282e97473f2f870f5366d221b9a40e0a456b6fd44fffe321c9fe160448cc29,PodSandboxId:9b9c63eaf40efa04bbffb16c44659876175e166a2f14a629e990220fd1036e9a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726182170642979462,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-380092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5b8dbad2d5a7cd172ad5c2fef02d4f2,},Annotations:map[string]s
tring{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c73944a510413add414948e1c8fd8d71ef732d91b7ddc96646b7ba376f81416,PodSandboxId:febe058d23f6428b75af213b96a1101fcf865f3cdac48508a664d37b9bee26e3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726182160125464160,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-380092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a09dcc580279d4b8f7494570bf7f82a,},Annotations:map[string]string{io.kube
rnetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00f124dff0f77cf23377830c9dc5e2a8676d1a40ebf70730705d537ba7a4b8d3,PodSandboxId:febe058d23f6428b75af213b96a1101fcf865f3cdac48508a664d37b9bee26e3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726182138690142656,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-380092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a09dcc580279d4b8f7494570bf7f82a,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:635fd2c2a6dd2558d88447c857c1c59b9cf6883a458ab9f62fbcd48fc94048f7,PodSandboxId:bc4e3cf733a3ead6997642f5626c451468c996cff98a0085021a02e15161622a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726182138634173526,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-380092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d1efe73ad279e8ddd7a8b93f476624,},Annotations:map[string]string{io.kuber
netes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3187fdef2bd318bb335be822a4e75cfd0ac02a49607d8e398ab18102a83437ec,PodSandboxId:cde6a78bdfb6c3b4e3629acccff0cc9698404a5fda13b87c5643c62d19cad503,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726182138606250790,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-380092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a3192af1ac01d559c47e957931bf1bc,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=915916e4-a6be-4354-b720-11e6c24dd49e name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 23:25:30 no-preload-380092 crio[704]: time="2024-09-12 23:25:30.621008424Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9216fe58-0e8c-4d09-a4c6-acdc0f4f5cd0 name=/runtime.v1.RuntimeService/Version
	Sep 12 23:25:30 no-preload-380092 crio[704]: time="2024-09-12 23:25:30.621114702Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9216fe58-0e8c-4d09-a4c6-acdc0f4f5cd0 name=/runtime.v1.RuntimeService/Version
	Sep 12 23:25:30 no-preload-380092 crio[704]: time="2024-09-12 23:25:30.623019980Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a02e2799-2433-4e62-b141-6553c0f4866b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 23:25:30 no-preload-380092 crio[704]: time="2024-09-12 23:25:30.623784823Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726183530623500695,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a02e2799-2433-4e62-b141-6553c0f4866b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 23:25:30 no-preload-380092 crio[704]: time="2024-09-12 23:25:30.624713199Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e73448a5-8070-4e21-b2cc-7fd2da36b89f name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 23:25:30 no-preload-380092 crio[704]: time="2024-09-12 23:25:30.624787495Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e73448a5-8070-4e21-b2cc-7fd2da36b89f name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 23:25:30 no-preload-380092 crio[704]: time="2024-09-12 23:25:30.625073049Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3d117ed77ba5f8ccc22fe496a5f719999fd9e893623ba84686634f7b92c09713,PodSandboxId:88a25c57dc5657c04a7eefc946b1a9f50aca508e69469eb9cf99c0b62934957b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726182210356961708,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f173a1f6-3772-4f08-8e40-2215cc9d2878,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e47feca4846d958586814293099955bcc8353124c34ec4bde8012da2a0564bf3,PodSandboxId:b3b07d8fb160c889b6c8bff184a5c37a69d0f4fcadb25c2858d711ac86ffb972,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1726182190213252979,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3d6e0a88-c74b-4cce-b218-5f7cdb45fc70,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e59d289c9afefef6dd746277d09550e5c7d708d7c1ba9f74b2dd91300f807189,PodSandboxId:75dad9f5541516fbf87a8c6de9e222111e9e4a3ca4b5e8d16e98a9d2f4124940,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726182186838585504,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-twck7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fb00aff-8a30-4634-a804-1419eabfe727,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c4807559910184befc0b6a9ccb40b1cb9d4997f6b1405b76ba67049ce730f37,PodSandboxId:566addd15dd3e980d75c0a8ea07a3a85983efac22937a4610e066d0c3629c849,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726182179476007740,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z4rcx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d17caa2e-d0fe-45e8-a9
6c-d1cc1b55e665,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d40483dfc659456939d7965c1ecf1113c1c38e26fe985bf19f26a0123206ea3a,PodSandboxId:88a25c57dc5657c04a7eefc946b1a9f50aca508e69469eb9cf99c0b62934957b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726182179439405223,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f173a1f6-3772-4f08-8e40-2215cc9d28
78,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb473fa0b2d91e95a8716ca3d1bad44b431a3dfbfae11984657c16c7392b58f0,PodSandboxId:bc4e3cf733a3ead6997642f5626c451468c996cff98a0085021a02e15161622a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726182179188564101,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-380092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d1efe73ad279e8
ddd7a8b93f476624,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35282e97473f2f870f5366d221b9a40e0a456b6fd44fffe321c9fe160448cc29,PodSandboxId:9b9c63eaf40efa04bbffb16c44659876175e166a2f14a629e990220fd1036e9a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726182170642979462,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-380092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5b8dbad2d5a7cd172ad5c2fef02d4f2,},Annotations:map[string]s
tring{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c73944a510413add414948e1c8fd8d71ef732d91b7ddc96646b7ba376f81416,PodSandboxId:febe058d23f6428b75af213b96a1101fcf865f3cdac48508a664d37b9bee26e3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726182160125464160,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-380092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a09dcc580279d4b8f7494570bf7f82a,},Annotations:map[string]string{io.kube
rnetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00f124dff0f77cf23377830c9dc5e2a8676d1a40ebf70730705d537ba7a4b8d3,PodSandboxId:febe058d23f6428b75af213b96a1101fcf865f3cdac48508a664d37b9bee26e3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726182138690142656,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-380092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a09dcc580279d4b8f7494570bf7f82a,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:635fd2c2a6dd2558d88447c857c1c59b9cf6883a458ab9f62fbcd48fc94048f7,PodSandboxId:bc4e3cf733a3ead6997642f5626c451468c996cff98a0085021a02e15161622a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726182138634173526,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-380092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d1efe73ad279e8ddd7a8b93f476624,},Annotations:map[string]string{io.kuber
netes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3187fdef2bd318bb335be822a4e75cfd0ac02a49607d8e398ab18102a83437ec,PodSandboxId:cde6a78bdfb6c3b4e3629acccff0cc9698404a5fda13b87c5643c62d19cad503,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726182138606250790,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-380092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a3192af1ac01d559c47e957931bf1bc,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e73448a5-8070-4e21-b2cc-7fd2da36b89f name=/runtime.v1.RuntimeService/ListContainers
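
The entries above are CRI-O's gRPC debug traces of the kubelet's CRI calls (Version, ImageFsInfo, ListContainers). As a minimal sketch, assuming the no-preload-380092 profile is still running, the same journal can be pulled directly from the node:

    # Tail the CRI-O unit journal on the minikube node
    minikube -p no-preload-380092 ssh -- sudo journalctl -u crio --no-pager -n 50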
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3d117ed77ba5f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      22 minutes ago      Running             storage-provisioner       3                   88a25c57dc565       storage-provisioner
	e47feca4846d9       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   22 minutes ago      Running             busybox                   1                   b3b07d8fb160c       busybox
	e59d289c9afef       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      22 minutes ago      Running             coredns                   1                   75dad9f554151       coredns-7c65d6cfc9-twck7
	4c48075599101       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      22 minutes ago      Running             kube-proxy                1                   566addd15dd3e       kube-proxy-z4rcx
	d40483dfc6594       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      22 minutes ago      Exited              storage-provisioner       2                   88a25c57dc565       storage-provisioner
	eb473fa0b2d91       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      22 minutes ago      Running             kube-controller-manager   2                   bc4e3cf733a3e       kube-controller-manager-no-preload-380092
	35282e97473f2       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      22 minutes ago      Running             etcd                      1                   9b9c63eaf40ef       etcd-no-preload-380092
	3c73944a51041       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      22 minutes ago      Running             kube-apiserver            2                   febe058d23f64       kube-apiserver-no-preload-380092
	00f124dff0f77       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      23 minutes ago      Exited              kube-apiserver            1                   febe058d23f64       kube-apiserver-no-preload-380092
	635fd2c2a6dd2       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      23 minutes ago      Exited              kube-controller-manager   1                   bc4e3cf733a3e       kube-controller-manager-no-preload-380092
	3187fdef2bd31       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      23 minutes ago      Running             kube-scheduler            1                   cde6a78bdfb6c       kube-scheduler-no-preload-380092
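
The table above is the CRI-O view of all containers on the node, including the exited first attempts of kube-apiserver and kube-controller-manager. A minimal sketch for reproducing it against a live profile (the profile name is taken from this log):

    # List running and exited containers as seen by the CRI-O runtime
    minikube -p no-preload-380092 ssh -- sudo crictl ps -a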
	
	
	==> coredns [e59d289c9afefef6dd746277d09550e5c7d708d7c1ba9f74b2dd91300f807189] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:35582 - 58395 "HINFO IN 7798790501937056755.3744919700464143285. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012134309s
	
	
	==> describe nodes <==
	Name:               no-preload-380092
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-380092
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f6bc674a17941874d4e5b792b09c1791d30622b8
	                    minikube.k8s.io/name=no-preload-380092
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_12T22_56_54_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 12 Sep 2024 22:56:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-380092
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 12 Sep 2024 23:25:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 12 Sep 2024 23:23:44 +0000   Thu, 12 Sep 2024 22:56:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 12 Sep 2024 23:23:44 +0000   Thu, 12 Sep 2024 22:56:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 12 Sep 2024 23:23:44 +0000   Thu, 12 Sep 2024 22:56:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 12 Sep 2024 23:23:44 +0000   Thu, 12 Sep 2024 23:03:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.253
	  Hostname:    no-preload-380092
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0b588c397551428f813ee867d317e221
	  System UUID:                0b588c39-7551-428f-813e-e867d317e221
	  Boot ID:                    2c55225c-09f7-400c-8d96-cd46f6eb1084
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 coredns-7c65d6cfc9-twck7                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 etcd-no-preload-380092                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         28m
	  kube-system                 kube-apiserver-no-preload-380092             250m (12%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-controller-manager-no-preload-380092    200m (10%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-proxy-z4rcx                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-no-preload-380092             100m (5%)     0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 metrics-server-6867b74b74-4v7f5              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         27m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28m                kube-proxy       
	  Normal  Starting                 22m                kube-proxy       
	  Normal  Starting                 28m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     28m                kubelet          Node no-preload-380092 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  28m                kubelet          Node no-preload-380092 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m                kubelet          Node no-preload-380092 status is now: NodeHasNoDiskPressure
	  Normal  NodeReady                28m                kubelet          Node no-preload-380092 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           28m                node-controller  Node no-preload-380092 event: Registered Node no-preload-380092 in Controller
	  Normal  Starting                 23m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  23m (x8 over 23m)  kubelet          Node no-preload-380092 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23m (x8 over 23m)  kubelet          Node no-preload-380092 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23m (x7 over 23m)  kubelet          Node no-preload-380092 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           22m                node-controller  Node no-preload-380092 event: Registered Node no-preload-380092 in Controller
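
The node description above shows no-preload-380092 Ready with the expected control-plane pods plus metrics-server-6867b74b74-4v7f5 scheduled in kube-system. A hedged way to reproduce this view, assuming the kubeconfig context matches the profile name:

    # Re-check node conditions, capacity, and non-terminated pods
    kubectl --context no-preload-380092 describe node no-preload-380092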
	
	
	==> dmesg <==
	[Sep12 23:01] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052672] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036850] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.942555] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.806223] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.364826] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.422725] systemd-fstab-generator[627]: Ignoring "noauto" option for root device
	[  +0.057848] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.071599] systemd-fstab-generator[639]: Ignoring "noauto" option for root device
	[  +0.216811] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[  +0.120267] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +0.298945] systemd-fstab-generator[695]: Ignoring "noauto" option for root device
	[Sep12 23:02] systemd-fstab-generator[1226]: Ignoring "noauto" option for root device
	[  +0.061075] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.993787] systemd-fstab-generator[1347]: Ignoring "noauto" option for root device
	[  +4.140445] kauditd_printk_skb: 87 callbacks suppressed
	[Sep12 23:03] systemd-fstab-generator[2082]: Ignoring "noauto" option for root device
	[  +2.373199] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.972776] kauditd_printk_skb: 25 callbacks suppressed
	
	
	==> etcd [35282e97473f2f870f5366d221b9a40e0a456b6fd44fffe321c9fe160448cc29] <==
	{"level":"info","ts":"2024-09-12T23:02:52.386869Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-12T23:02:52.387953Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-12T23:02:52.388088Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-12T23:02:52.389031Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-12T23:02:52.389110Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.253:2379"}
	{"level":"info","ts":"2024-09-12T23:02:52.389232Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-12T23:02:52.389254Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-12T23:12:57.255033Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":863}
	{"level":"info","ts":"2024-09-12T23:12:57.264439Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":863,"took":"9.087041ms","hash":3567789701,"current-db-size-bytes":2756608,"current-db-size":"2.8 MB","current-db-size-in-use-bytes":2756608,"current-db-size-in-use":"2.8 MB"}
	{"level":"info","ts":"2024-09-12T23:12:57.264509Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3567789701,"revision":863,"compact-revision":-1}
	{"level":"info","ts":"2024-09-12T23:17:57.265775Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1105}
	{"level":"info","ts":"2024-09-12T23:17:57.269864Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1105,"took":"3.740664ms","hash":1018279237,"current-db-size-bytes":2756608,"current-db-size":"2.8 MB","current-db-size-in-use-bytes":1617920,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-09-12T23:17:57.269920Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1018279237,"revision":1105,"compact-revision":863}
	{"level":"info","ts":"2024-09-12T23:22:57.276301Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1347}
	{"level":"info","ts":"2024-09-12T23:22:57.280285Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1347,"took":"3.613207ms","hash":4211921065,"current-db-size-bytes":2756608,"current-db-size":"2.8 MB","current-db-size-in-use-bytes":1593344,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-09-12T23:22:57.280343Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4211921065,"revision":1347,"compact-revision":1105}
	{"level":"info","ts":"2024-09-12T23:23:07.940375Z","caller":"traceutil/trace.go:171","msg":"trace[907016994] transaction","detail":"{read_only:false; response_revision:1600; number_of_response:1; }","duration":"206.402413ms","start":"2024-09-12T23:23:07.733942Z","end":"2024-09-12T23:23:07.940344Z","steps":["trace[907016994] 'process raft request'  (duration: 206.238685ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-12T23:23:08.120240Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"110.070302ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/horizontalpodautoscalers/\" range_end:\"/registry/horizontalpodautoscalers0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-12T23:23:08.120509Z","caller":"traceutil/trace.go:171","msg":"trace[166391422] range","detail":"{range_begin:/registry/horizontalpodautoscalers/; range_end:/registry/horizontalpodautoscalers0; response_count:0; response_revision:1600; }","duration":"110.410366ms","start":"2024-09-12T23:23:08.010061Z","end":"2024-09-12T23:23:08.120471Z","steps":["trace[166391422] 'count revisions from in-memory index tree'  (duration: 109.949527ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-12T23:23:10.056377Z","caller":"traceutil/trace.go:171","msg":"trace[222801721] transaction","detail":"{read_only:false; response_revision:1601; number_of_response:1; }","duration":"108.262832ms","start":"2024-09-12T23:23:09.948095Z","end":"2024-09-12T23:23:10.056358Z","steps":["trace[222801721] 'process raft request'  (duration: 108.122517ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-12T23:23:35.533289Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"125.286677ms","expected-duration":"100ms","prefix":"","request":"header:<ID:2383972005421564222 > lease_revoke:<id:211591e879a0c0dd>","response":"size:29"}
	{"level":"warn","ts":"2024-09-12T23:24:05.753419Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"197.996912ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-12T23:24:05.753691Z","caller":"traceutil/trace.go:171","msg":"trace[1804218343] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1646; }","duration":"198.269053ms","start":"2024-09-12T23:24:05.555379Z","end":"2024-09-12T23:24:05.753648Z","steps":["trace[1804218343] 'range keys from in-memory index tree'  (duration: 197.970854ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-12T23:24:05.753465Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"323.119757ms","expected-duration":"100ms","prefix":"","request":"header:<ID:2383972005421564406 > lease_revoke:<id:211591e879a0c190>","response":"size:29"}
	{"level":"info","ts":"2024-09-12T23:24:06.189139Z","caller":"traceutil/trace.go:171","msg":"trace[2099733253] transaction","detail":"{read_only:false; response_revision:1647; number_of_response:1; }","duration":"251.805106ms","start":"2024-09-12T23:24:05.937314Z","end":"2024-09-12T23:24:06.189119Z","steps":["trace[2099733253] 'process raft request'  (duration: 251.678457ms)"],"step_count":1}
	
	
	==> kernel <==
	 23:25:31 up 23 min,  0 users,  load average: 0.10, 0.12, 0.10
	Linux no-preload-380092 5.10.207 #1 SMP Thu Sep 12 19:03:33 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [00f124dff0f77cf23377830c9dc5e2a8676d1a40ebf70730705d537ba7a4b8d3] <==
	I0912 23:02:18.960952       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0912 23:02:19.331162       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	W0912 23:02:19.339708       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 23:02:19.340367       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0912 23:02:19.367591       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0912 23:02:19.374745       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0912 23:02:19.376550       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0912 23:02:19.376863       1 instance.go:232] Using reconciler: lease
	W0912 23:02:19.379719       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 23:02:20.340789       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 23:02:20.340851       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 23:02:20.380336       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 23:02:21.703220       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 23:02:21.988583       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 23:02:22.058333       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 23:02:23.981238       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 23:02:24.285985       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 23:02:25.038974       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 23:02:27.721483       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 23:02:29.182817       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 23:02:29.185391       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 23:02:33.754598       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 23:02:34.852824       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 23:02:36.339368       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	F0912 23:02:39.378001       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
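
This first kube-apiserver attempt kept failing to reach etcd on 127.0.0.1:2379 (connection refused) and finally exited with "Error creating leases: ... context deadline exceeded"; the container status table above shows it as Exited, with attempt 2 now Running. A quick hedged check that the replacement apiserver is actually serving:

    # Hit the apiserver readiness endpoint through the current context
    kubectl --context no-preload-380092 get --raw /readyz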
	
	
	==> kube-apiserver [3c73944a510413add414948e1c8fd8d71ef732d91b7ddc96646b7ba376f81416] <==
	I0912 23:20:59.586587       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0912 23:20:59.586625       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0912 23:22:58.584087       1 handler_proxy.go:99] no RequestInfo found in the context
	E0912 23:22:58.584454       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0912 23:22:59.586673       1 handler_proxy.go:99] no RequestInfo found in the context
	E0912 23:22:59.586742       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0912 23:22:59.586779       1 handler_proxy.go:99] no RequestInfo found in the context
	E0912 23:22:59.586952       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0912 23:22:59.588027       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0912 23:22:59.588035       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0912 23:23:59.588634       1 handler_proxy.go:99] no RequestInfo found in the context
	E0912 23:23:59.588713       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0912 23:23:59.588788       1 handler_proxy.go:99] no RequestInfo found in the context
	E0912 23:23:59.588849       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0912 23:23:59.590033       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0912 23:23:59.590100       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
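
The repeated 503s for v1beta1.metrics.k8s.io mean the aggregated metrics API is registered but its backing service is unavailable, which matches the metrics-server related failures in this run. A hedged pair of checks (the k8s-app=metrics-server label is the addon's usual selector):

    # Is the APIService marked Available, and is the metrics-server pod actually ready?
    kubectl --context no-preload-380092 get apiservice v1beta1.metrics.k8s.io
    kubectl --context no-preload-380092 -n kube-system get pods -l k8s-app=metrics-server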
	
	
	==> kube-controller-manager [635fd2c2a6dd2558d88447c857c1c59b9cf6883a458ab9f62fbcd48fc94048f7] <==
	I0912 23:02:19.302013       1 serving.go:386] Generated self-signed cert in-memory
	I0912 23:02:19.806963       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I0912 23:02:19.807056       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0912 23:02:19.808585       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0912 23:02:19.808653       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0912 23:02:19.808779       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0912 23:02:19.808984       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0912 23:02:58.503407       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: forbidden: User \"system:kube-controller-manager\" cannot get path \"/healthz\""
	
	
	==> kube-controller-manager [eb473fa0b2d91e95a8716ca3d1bad44b431a3dfbfae11984657c16c7392b58f0] <==
	E0912 23:20:02.565383       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0912 23:20:03.092900       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0912 23:20:32.571869       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0912 23:20:33.101940       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0912 23:21:02.578333       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0912 23:21:03.110842       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0912 23:21:32.584241       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0912 23:21:33.117583       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0912 23:22:02.591302       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0912 23:22:03.126461       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0912 23:22:32.597237       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0912 23:22:33.133887       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0912 23:23:02.604727       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0912 23:23:03.144339       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0912 23:23:32.616493       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0912 23:23:33.155148       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0912 23:23:44.096497       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-380092"
	E0912 23:24:02.624245       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0912 23:24:03.165154       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0912 23:24:32.631775       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0912 23:24:33.172288       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0912 23:24:41.053100       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="348.959µs"
	I0912 23:24:56.053864       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="128.284µs"
	E0912 23:25:02.640036       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0912 23:25:03.185396       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
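
The resource-quota and garbage-collector errors above are a downstream symptom of the same unavailable metrics API: discovery of metrics.k8s.io/v1beta1 keeps failing, so both controllers log it every sync period. A sketch of the raw discovery call that is failing, assuming the context name matches the profile:

    # Returns the group's resource list when metrics-server is healthy, a 503 otherwise
    kubectl --context no-preload-380092 get --raw /apis/metrics.k8s.io/v1beta1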
	
	
	==> kube-proxy [4c4807559910184befc0b6a9ccb40b1cb9d4997f6b1405b76ba67049ce730f37] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0912 23:03:00.348025       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0912 23:03:00.379307       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.253"]
	E0912 23:03:00.379468       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0912 23:03:00.459620       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0912 23:03:00.459714       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0912 23:03:00.459749       1 server_linux.go:169] "Using iptables Proxier"
	I0912 23:03:00.462665       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0912 23:03:00.463202       1 server.go:483] "Version info" version="v1.31.1"
	I0912 23:03:00.463255       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0912 23:03:00.487251       1 config.go:199] "Starting service config controller"
	I0912 23:03:00.488361       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0912 23:03:00.489041       1 config.go:105] "Starting endpoint slice config controller"
	I0912 23:03:00.489877       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0912 23:03:00.490144       1 config.go:328] "Starting node config controller"
	I0912 23:03:00.490197       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0912 23:03:00.590617       1 shared_informer.go:320] Caches are synced for node config
	I0912 23:03:00.590661       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0912 23:03:00.590760       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [3187fdef2bd318bb335be822a4e75cfd0ac02a49607d8e398ab18102a83437ec] <==
	W0912 23:02:58.508680       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0912 23:02:58.521626       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0912 23:02:58.508766       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0912 23:02:58.521991       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0912 23:02:58.508828       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0912 23:02:58.522143       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0912 23:02:58.508894       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0912 23:02:58.522434       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0912 23:02:58.508958       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0912 23:02:58.522889       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0912 23:02:58.509014       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0912 23:02:58.523644       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0912 23:02:58.509074       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0912 23:02:58.523816       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0912 23:02:58.509145       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0912 23:02:58.523943       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0912 23:02:58.509198       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0912 23:02:58.524259       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0912 23:02:58.509253       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0912 23:02:58.524361       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0912 23:02:58.509319       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0912 23:02:58.524468       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0912 23:02:58.509450       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0912 23:02:58.524500       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0912 23:03:00.099613       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 12 23:24:28 no-preload-380092 kubelet[1354]: E0912 23:24:28.050225    1354 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kchmx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:
nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdi
n:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-6867b74b74-4v7f5_kube-system(10c8c536-9ca6-4e75-96f2-7324f3d3d379): ErrImagePull: pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" logger="UnhandledError"
	Sep 12 23:24:28 no-preload-380092 kubelet[1354]: E0912 23:24:28.051885    1354 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-6867b74b74-4v7f5" podUID="10c8c536-9ca6-4e75-96f2-7324f3d3d379"
	Sep 12 23:24:28 no-preload-380092 kubelet[1354]: E0912 23:24:28.358270    1354 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726183468356485294,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 23:24:28 no-preload-380092 kubelet[1354]: E0912 23:24:28.359089    1354 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726183468356485294,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 23:24:38 no-preload-380092 kubelet[1354]: E0912 23:24:38.360492    1354 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726183478360148432,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 23:24:38 no-preload-380092 kubelet[1354]: E0912 23:24:38.360621    1354 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726183478360148432,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 23:24:41 no-preload-380092 kubelet[1354]: E0912 23:24:41.033574    1354 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-4v7f5" podUID="10c8c536-9ca6-4e75-96f2-7324f3d3d379"
	Sep 12 23:24:48 no-preload-380092 kubelet[1354]: E0912 23:24:48.362655    1354 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726183488362319268,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 23:24:48 no-preload-380092 kubelet[1354]: E0912 23:24:48.362724    1354 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726183488362319268,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 23:24:56 no-preload-380092 kubelet[1354]: E0912 23:24:56.033290    1354 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-4v7f5" podUID="10c8c536-9ca6-4e75-96f2-7324f3d3d379"
	Sep 12 23:24:58 no-preload-380092 kubelet[1354]: E0912 23:24:58.365736    1354 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726183498365220624,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 23:24:58 no-preload-380092 kubelet[1354]: E0912 23:24:58.366197    1354 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726183498365220624,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 23:25:08 no-preload-380092 kubelet[1354]: E0912 23:25:08.372825    1354 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726183508370882830,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 23:25:08 no-preload-380092 kubelet[1354]: E0912 23:25:08.373504    1354 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726183508370882830,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 23:25:10 no-preload-380092 kubelet[1354]: E0912 23:25:10.033229    1354 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-4v7f5" podUID="10c8c536-9ca6-4e75-96f2-7324f3d3d379"
	Sep 12 23:25:18 no-preload-380092 kubelet[1354]: E0912 23:25:18.050089    1354 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 12 23:25:18 no-preload-380092 kubelet[1354]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 12 23:25:18 no-preload-380092 kubelet[1354]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 12 23:25:18 no-preload-380092 kubelet[1354]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 12 23:25:18 no-preload-380092 kubelet[1354]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 12 23:25:18 no-preload-380092 kubelet[1354]: E0912 23:25:18.375240    1354 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726183518374908457,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 23:25:18 no-preload-380092 kubelet[1354]: E0912 23:25:18.375265    1354 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726183518374908457,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 23:25:22 no-preload-380092 kubelet[1354]: E0912 23:25:22.033813    1354 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-4v7f5" podUID="10c8c536-9ca6-4e75-96f2-7324f3d3d379"
	Sep 12 23:25:28 no-preload-380092 kubelet[1354]: E0912 23:25:28.376779    1354 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726183528376324536,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 23:25:28 no-preload-380092 kubelet[1354]: E0912 23:25:28.377233    1354 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726183528376324536,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [3d117ed77ba5f8ccc22fe496a5f719999fd9e893623ba84686634f7b92c09713] <==
	I0912 23:03:30.442505       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0912 23:03:30.457600       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0912 23:03:30.457808       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0912 23:03:47.858919       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0912 23:03:47.859070       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-380092_9ae72ac6-a0ac-4b5c-a75c-7b86ec689983!
	I0912 23:03:47.864390       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3827ca0e-7f06-42b4-b440-3352dbbaadc3", APIVersion:"v1", ResourceVersion:"645", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-380092_9ae72ac6-a0ac-4b5c-a75c-7b86ec689983 became leader
	I0912 23:03:47.960117       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-380092_9ae72ac6-a0ac-4b5c-a75c-7b86ec689983!
	
	
	==> storage-provisioner [d40483dfc659456939d7965c1ecf1113c1c38e26fe985bf19f26a0123206ea3a] <==
	I0912 23:02:59.842121       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0912 23:03:29.846488       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-380092 -n no-preload-380092
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-380092 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-4v7f5
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-380092 describe pod metrics-server-6867b74b74-4v7f5
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-380092 describe pod metrics-server-6867b74b74-4v7f5: exit status 1 (98.109673ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-4v7f5" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-380092 describe pod metrics-server-6867b74b74-4v7f5: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (543.18s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (336.81s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-702201 -n default-k8s-diff-port-702201
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-09-12 23:22:13.374732501 +0000 UTC m=+6800.223115433
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-702201 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-702201 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.858µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-702201 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-702201 -n default-k8s-diff-port-702201
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-702201 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-702201 logs -n 25: (1.221475682s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p newest-cni-837491             | newest-cni-837491            | jenkins | v1.34.0 | 12 Sep 24 22:55 UTC | 12 Sep 24 22:55 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-837491                                   | newest-cni-837491            | jenkins | v1.34.0 | 12 Sep 24 22:55 UTC | 12 Sep 24 22:55 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-837491                  | newest-cni-837491            | jenkins | v1.34.0 | 12 Sep 24 22:55 UTC | 12 Sep 24 22:55 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-837491 --memory=2200 --alsologtostderr   | newest-cni-837491            | jenkins | v1.34.0 | 12 Sep 24 22:55 UTC | 12 Sep 24 22:55 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| image   | newest-cni-837491 image list                           | newest-cni-837491            | jenkins | v1.34.0 | 12 Sep 24 22:55 UTC | 12 Sep 24 22:55 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-837491                                   | newest-cni-837491            | jenkins | v1.34.0 | 12 Sep 24 22:55 UTC | 12 Sep 24 22:55 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-837491                                   | newest-cni-837491            | jenkins | v1.34.0 | 12 Sep 24 22:55 UTC | 12 Sep 24 22:55 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-837491                                   | newest-cni-837491            | jenkins | v1.34.0 | 12 Sep 24 22:55 UTC | 12 Sep 24 22:55 UTC |
	| delete  | -p newest-cni-837491                                   | newest-cni-837491            | jenkins | v1.34.0 | 12 Sep 24 22:55 UTC | 12 Sep 24 22:55 UTC |
	| delete  | -p                                                     | disable-driver-mounts-457722 | jenkins | v1.34.0 | 12 Sep 24 22:55 UTC | 12 Sep 24 22:55 UTC |
	|         | disable-driver-mounts-457722                           |                              |         |         |                     |                     |
	| start   | -p no-preload-380092                                   | no-preload-380092            | jenkins | v1.34.0 | 12 Sep 24 22:55 UTC | 12 Sep 24 22:57 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-702201       | default-k8s-diff-port-702201 | jenkins | v1.34.0 | 12 Sep 24 22:56 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-702201 | jenkins | v1.34.0 | 12 Sep 24 22:56 UTC | 12 Sep 24 23:07 UTC |
	|         | default-k8s-diff-port-702201                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-642238        | old-k8s-version-642238       | jenkins | v1.34.0 | 12 Sep 24 22:56 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-378112                 | embed-certs-378112           | jenkins | v1.34.0 | 12 Sep 24 22:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-378112                                  | embed-certs-378112           | jenkins | v1.34.0 | 12 Sep 24 22:57 UTC | 12 Sep 24 23:05 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-380092             | no-preload-380092            | jenkins | v1.34.0 | 12 Sep 24 22:57 UTC | 12 Sep 24 22:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-380092                                   | no-preload-380092            | jenkins | v1.34.0 | 12 Sep 24 22:57 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-642238                              | old-k8s-version-642238       | jenkins | v1.34.0 | 12 Sep 24 22:58 UTC | 12 Sep 24 22:58 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-642238             | old-k8s-version-642238       | jenkins | v1.34.0 | 12 Sep 24 22:58 UTC | 12 Sep 24 22:58 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-642238                              | old-k8s-version-642238       | jenkins | v1.34.0 | 12 Sep 24 22:58 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-380092                  | no-preload-380092            | jenkins | v1.34.0 | 12 Sep 24 23:00 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-380092                                   | no-preload-380092            | jenkins | v1.34.0 | 12 Sep 24 23:00 UTC | 12 Sep 24 23:07 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-642238                              | old-k8s-version-642238       | jenkins | v1.34.0 | 12 Sep 24 23:22 UTC | 12 Sep 24 23:22 UTC |
	| start   | -p auto-938961 --memory=3072                           | auto-938961                  | jenkins | v1.34.0 | 12 Sep 24 23:22 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/12 23:22:10
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0912 23:22:10.713699   69283 out.go:345] Setting OutFile to fd 1 ...
	I0912 23:22:10.713921   69283 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 23:22:10.713929   69283 out.go:358] Setting ErrFile to fd 2...
	I0912 23:22:10.713933   69283 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 23:22:10.714161   69283 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-5891/.minikube/bin
	I0912 23:22:10.714929   69283 out.go:352] Setting JSON to false
	I0912 23:22:10.715997   69283 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7473,"bootTime":1726175858,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0912 23:22:10.716063   69283 start.go:139] virtualization: kvm guest
	I0912 23:22:10.719260   69283 out.go:177] * [auto-938961] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0912 23:22:10.720508   69283 out.go:177]   - MINIKUBE_LOCATION=19616
	I0912 23:22:10.720556   69283 notify.go:220] Checking for updates...
	I0912 23:22:10.722976   69283 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 23:22:10.724108   69283 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19616-5891/kubeconfig
	I0912 23:22:10.725038   69283 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19616-5891/.minikube
	I0912 23:22:10.725990   69283 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0912 23:22:10.727066   69283 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 23:22:10.728706   69283 config.go:182] Loaded profile config "default-k8s-diff-port-702201": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 23:22:10.728838   69283 config.go:182] Loaded profile config "embed-certs-378112": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 23:22:10.728964   69283 config.go:182] Loaded profile config "no-preload-380092": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 23:22:10.729068   69283 driver.go:394] Setting default libvirt URI to qemu:///system
	I0912 23:22:10.767785   69283 out.go:177] * Using the kvm2 driver based on user configuration
	I0912 23:22:10.768955   69283 start.go:297] selected driver: kvm2
	I0912 23:22:10.768968   69283 start.go:901] validating driver "kvm2" against <nil>
	I0912 23:22:10.768980   69283 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 23:22:10.769686   69283 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 23:22:10.769755   69283 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19616-5891/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0912 23:22:10.785678   69283 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0912 23:22:10.785748   69283 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0912 23:22:10.786038   69283 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 23:22:10.786076   69283 cni.go:84] Creating CNI manager for ""
	I0912 23:22:10.786086   69283 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0912 23:22:10.786100   69283 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0912 23:22:10.786168   69283 start.go:340] cluster config:
	{Name:auto-938961 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-938961 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgent
PID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 23:22:10.786279   69283 iso.go:125] acquiring lock: {Name:mk3ec3c4afd4210b7425f6425f55e7f581d9a5a9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 23:22:10.787914   69283 out.go:177] * Starting "auto-938961" primary control-plane node in "auto-938961" cluster
	I0912 23:22:10.788846   69283 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0912 23:22:10.788874   69283 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0912 23:22:10.788886   69283 cache.go:56] Caching tarball of preloaded images
	I0912 23:22:10.788958   69283 preload.go:172] Found /home/jenkins/minikube-integration/19616-5891/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0912 23:22:10.788971   69283 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0912 23:22:10.789074   69283 profile.go:143] Saving config to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/auto-938961/config.json ...
	I0912 23:22:10.789106   69283 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/auto-938961/config.json: {Name:mk57e6dabf862afff7fde4ad1df09aa40edd9501 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 23:22:10.789255   69283 start.go:360] acquireMachinesLock for auto-938961: {Name:mkbb0a9e58b1349e86a63b6069c42d4248d92c3b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 23:22:10.789299   69283 start.go:364] duration metric: took 25.604µs to acquireMachinesLock for "auto-938961"
	I0912 23:22:10.789324   69283 start.go:93] Provisioning new machine with config: &{Name:auto-938961 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuberne
tesVersion:v1.31.1 ClusterName:auto-938961 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0912 23:22:10.789418   69283 start.go:125] createHost starting for "" (driver="kvm2")
	
	
	==> CRI-O <==
	Sep 12 23:22:13 default-k8s-diff-port-702201 crio[675]: time="2024-09-12 23:22:13.975889825Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726183333975860116,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=818b0211-a8b0-44a5-83ec-f4f2ac9a79b9 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 23:22:13 default-k8s-diff-port-702201 crio[675]: time="2024-09-12 23:22:13.976403470Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=82b8bd8d-e913-4516-8c89-eb29fd638560 name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 23:22:13 default-k8s-diff-port-702201 crio[675]: time="2024-09-12 23:22:13.976461748Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=82b8bd8d-e913-4516-8c89-eb29fd638560 name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 23:22:13 default-k8s-diff-port-702201 crio[675]: time="2024-09-12 23:22:13.976857754Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9417a075a215d15881535a74e5318ea52a2b3531b44aff69d0ebe207c55d4919,PodSandboxId:cd1f45061a9f43ac4a43b719885af71ec2cbde1be4f7bc6bbfd6782319a32242,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726182446067649087,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66bc6f77-b774-4478-80d0-a1027802e179,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20706af79dcbaa7d5887f8ef9d050c28cab70a7fe3ebeecf461b8bfd322783ab,PodSandboxId:723f2e0c6feebc367313a6e95d3f3def14527e2f5cc8e278357499d68f091c6e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726182446081370755,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-f5spz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a0f69e9-66eb-4e59-a173-1d6f638e2211,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c97784cdf55f3986d87c6e305563900f3a96c2bba5062a0483f100c926085e93,PodSandboxId:ff0416be2d8f6ea4cfdb4c4f58c9fc79a8e8636ea75cb96cc486a18fea87a2de,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726182445908633772,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qhbgf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 0af4199f-b09c-4ab8-8170-b8941d3ece7a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48f2900449cb249d3be1b5ed896fcc919865fb5352c4c2c3c2900fd81676042c,PodSandboxId:bd0f2307e697fa09018da3eb0a93c51f92d164a3259bcc557fb83103bb3c018f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING
,CreatedAt:1726182445280631876,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mv8ws,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51cb20c3-8445-4ce9-8484-5138f3d0ed57,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e600ca01711fc20a87d3df1c72dbd42d43e8be7591cc12568a99eaa737899e3,PodSandboxId:485522c01c095e00180f0d0841b5c584e28fee37565988b2ad60c2702ecfc43b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:172618243420523124
3,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-702201,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43028e788886f74e0519634e413ab4c9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b71cf03e9cdba6bc875bb84ece81fbe6c0e9b459c6374709445b4c9bb7bb0ebd,PodSandboxId:d7cbf207c6b9c78938a79fce04721431590f37869b08eb550ef72b7ea78da905,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:17261824342
01938076,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-702201,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dda44320478814b6fd88ddd2d5df796e,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:427d0b9d288b2b76c528c890623d31727060834c9aa26564bbe690b6b1f82670,PodSandboxId:ebc86e8a7cafa9197f18c2f43d8ba55b0ff3fd39db7f32cb083b7001c14ffc26,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,Creat
edAt:1726182434174712127,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-702201,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbd4a0e7905e7c213ee5ee3845aa51fb,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c658afc2ee1fb91c89cafc962fb5892d95d31210a1eca7b2568040858991263,PodSandboxId:c968c24fe11a8a3dce3414cd1f543e14d1a8e725b63667f003ccfd588a8c8c3a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726182
434144004604,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-702201,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88145041c3602cf15db12b393eabc4cc,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9585d55eb79b377ad2e35b4ff9f7f963cdf06188855e938f8db345f378246c5d,PodSandboxId:e6ff569f1a42dabfb64a22e4f7e6fa83aa461619c6af1645de7802a4b31daf7b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726182147468721483,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-702201,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbd4a0e7905e7c213ee5ee3845aa51fb,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=82b8bd8d-e913-4516-8c89-eb29fd638560 name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 23:22:14 default-k8s-diff-port-702201 crio[675]: time="2024-09-12 23:22:14.016375565Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=381924e2-c072-46f5-abd0-4ddbfbdf075c name=/runtime.v1.RuntimeService/Version
	Sep 12 23:22:14 default-k8s-diff-port-702201 crio[675]: time="2024-09-12 23:22:14.016448602Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=381924e2-c072-46f5-abd0-4ddbfbdf075c name=/runtime.v1.RuntimeService/Version
	Sep 12 23:22:14 default-k8s-diff-port-702201 crio[675]: time="2024-09-12 23:22:14.017923925Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e4c94c5d-f4f1-4c12-896c-c1a3047fe22a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 23:22:14 default-k8s-diff-port-702201 crio[675]: time="2024-09-12 23:22:14.018330194Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726183334018305493,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e4c94c5d-f4f1-4c12-896c-c1a3047fe22a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 23:22:14 default-k8s-diff-port-702201 crio[675]: time="2024-09-12 23:22:14.019038954Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7cb23ed7-3015-47b3-b846-b5adb75e9020 name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 23:22:14 default-k8s-diff-port-702201 crio[675]: time="2024-09-12 23:22:14.019108327Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7cb23ed7-3015-47b3-b846-b5adb75e9020 name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 23:22:14 default-k8s-diff-port-702201 crio[675]: time="2024-09-12 23:22:14.019672044Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9417a075a215d15881535a74e5318ea52a2b3531b44aff69d0ebe207c55d4919,PodSandboxId:cd1f45061a9f43ac4a43b719885af71ec2cbde1be4f7bc6bbfd6782319a32242,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726182446067649087,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66bc6f77-b774-4478-80d0-a1027802e179,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20706af79dcbaa7d5887f8ef9d050c28cab70a7fe3ebeecf461b8bfd322783ab,PodSandboxId:723f2e0c6feebc367313a6e95d3f3def14527e2f5cc8e278357499d68f091c6e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726182446081370755,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-f5spz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a0f69e9-66eb-4e59-a173-1d6f638e2211,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c97784cdf55f3986d87c6e305563900f3a96c2bba5062a0483f100c926085e93,PodSandboxId:ff0416be2d8f6ea4cfdb4c4f58c9fc79a8e8636ea75cb96cc486a18fea87a2de,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726182445908633772,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qhbgf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 0af4199f-b09c-4ab8-8170-b8941d3ece7a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48f2900449cb249d3be1b5ed896fcc919865fb5352c4c2c3c2900fd81676042c,PodSandboxId:bd0f2307e697fa09018da3eb0a93c51f92d164a3259bcc557fb83103bb3c018f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING
,CreatedAt:1726182445280631876,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mv8ws,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51cb20c3-8445-4ce9-8484-5138f3d0ed57,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e600ca01711fc20a87d3df1c72dbd42d43e8be7591cc12568a99eaa737899e3,PodSandboxId:485522c01c095e00180f0d0841b5c584e28fee37565988b2ad60c2702ecfc43b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:172618243420523124
3,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-702201,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43028e788886f74e0519634e413ab4c9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b71cf03e9cdba6bc875bb84ece81fbe6c0e9b459c6374709445b4c9bb7bb0ebd,PodSandboxId:d7cbf207c6b9c78938a79fce04721431590f37869b08eb550ef72b7ea78da905,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:17261824342
01938076,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-702201,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dda44320478814b6fd88ddd2d5df796e,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:427d0b9d288b2b76c528c890623d31727060834c9aa26564bbe690b6b1f82670,PodSandboxId:ebc86e8a7cafa9197f18c2f43d8ba55b0ff3fd39db7f32cb083b7001c14ffc26,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,Creat
edAt:1726182434174712127,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-702201,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbd4a0e7905e7c213ee5ee3845aa51fb,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c658afc2ee1fb91c89cafc962fb5892d95d31210a1eca7b2568040858991263,PodSandboxId:c968c24fe11a8a3dce3414cd1f543e14d1a8e725b63667f003ccfd588a8c8c3a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726182
434144004604,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-702201,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88145041c3602cf15db12b393eabc4cc,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9585d55eb79b377ad2e35b4ff9f7f963cdf06188855e938f8db345f378246c5d,PodSandboxId:e6ff569f1a42dabfb64a22e4f7e6fa83aa461619c6af1645de7802a4b31daf7b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726182147468721483,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-702201,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbd4a0e7905e7c213ee5ee3845aa51fb,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7cb23ed7-3015-47b3-b846-b5adb75e9020 name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 23:22:14 default-k8s-diff-port-702201 crio[675]: time="2024-09-12 23:22:14.062769749Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a57a3b27-23db-4e34-a691-14c8628a3571 name=/runtime.v1.RuntimeService/Version
	Sep 12 23:22:14 default-k8s-diff-port-702201 crio[675]: time="2024-09-12 23:22:14.062890873Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a57a3b27-23db-4e34-a691-14c8628a3571 name=/runtime.v1.RuntimeService/Version
	Sep 12 23:22:14 default-k8s-diff-port-702201 crio[675]: time="2024-09-12 23:22:14.064385325Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dcea232f-4e26-4150-b70d-77d3077ec327 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 23:22:14 default-k8s-diff-port-702201 crio[675]: time="2024-09-12 23:22:14.064949213Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726183334064923311,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dcea232f-4e26-4150-b70d-77d3077ec327 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 23:22:14 default-k8s-diff-port-702201 crio[675]: time="2024-09-12 23:22:14.065592724Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d340f388-30df-41a4-b4cb-1c0722044b4e name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 23:22:14 default-k8s-diff-port-702201 crio[675]: time="2024-09-12 23:22:14.065665481Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d340f388-30df-41a4-b4cb-1c0722044b4e name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 23:22:14 default-k8s-diff-port-702201 crio[675]: time="2024-09-12 23:22:14.065911034Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9417a075a215d15881535a74e5318ea52a2b3531b44aff69d0ebe207c55d4919,PodSandboxId:cd1f45061a9f43ac4a43b719885af71ec2cbde1be4f7bc6bbfd6782319a32242,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726182446067649087,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66bc6f77-b774-4478-80d0-a1027802e179,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20706af79dcbaa7d5887f8ef9d050c28cab70a7fe3ebeecf461b8bfd322783ab,PodSandboxId:723f2e0c6feebc367313a6e95d3f3def14527e2f5cc8e278357499d68f091c6e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726182446081370755,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-f5spz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a0f69e9-66eb-4e59-a173-1d6f638e2211,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c97784cdf55f3986d87c6e305563900f3a96c2bba5062a0483f100c926085e93,PodSandboxId:ff0416be2d8f6ea4cfdb4c4f58c9fc79a8e8636ea75cb96cc486a18fea87a2de,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726182445908633772,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qhbgf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 0af4199f-b09c-4ab8-8170-b8941d3ece7a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48f2900449cb249d3be1b5ed896fcc919865fb5352c4c2c3c2900fd81676042c,PodSandboxId:bd0f2307e697fa09018da3eb0a93c51f92d164a3259bcc557fb83103bb3c018f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING
,CreatedAt:1726182445280631876,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mv8ws,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51cb20c3-8445-4ce9-8484-5138f3d0ed57,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e600ca01711fc20a87d3df1c72dbd42d43e8be7591cc12568a99eaa737899e3,PodSandboxId:485522c01c095e00180f0d0841b5c584e28fee37565988b2ad60c2702ecfc43b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:172618243420523124
3,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-702201,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43028e788886f74e0519634e413ab4c9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b71cf03e9cdba6bc875bb84ece81fbe6c0e9b459c6374709445b4c9bb7bb0ebd,PodSandboxId:d7cbf207c6b9c78938a79fce04721431590f37869b08eb550ef72b7ea78da905,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:17261824342
01938076,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-702201,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dda44320478814b6fd88ddd2d5df796e,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:427d0b9d288b2b76c528c890623d31727060834c9aa26564bbe690b6b1f82670,PodSandboxId:ebc86e8a7cafa9197f18c2f43d8ba55b0ff3fd39db7f32cb083b7001c14ffc26,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,Creat
edAt:1726182434174712127,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-702201,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbd4a0e7905e7c213ee5ee3845aa51fb,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c658afc2ee1fb91c89cafc962fb5892d95d31210a1eca7b2568040858991263,PodSandboxId:c968c24fe11a8a3dce3414cd1f543e14d1a8e725b63667f003ccfd588a8c8c3a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726182
434144004604,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-702201,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88145041c3602cf15db12b393eabc4cc,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9585d55eb79b377ad2e35b4ff9f7f963cdf06188855e938f8db345f378246c5d,PodSandboxId:e6ff569f1a42dabfb64a22e4f7e6fa83aa461619c6af1645de7802a4b31daf7b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726182147468721483,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-702201,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbd4a0e7905e7c213ee5ee3845aa51fb,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d340f388-30df-41a4-b4cb-1c0722044b4e name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 23:22:14 default-k8s-diff-port-702201 crio[675]: time="2024-09-12 23:22:14.098285946Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=842878c5-09ea-484d-9fae-e49f13209f8f name=/runtime.v1.RuntimeService/Version
	Sep 12 23:22:14 default-k8s-diff-port-702201 crio[675]: time="2024-09-12 23:22:14.098358122Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=842878c5-09ea-484d-9fae-e49f13209f8f name=/runtime.v1.RuntimeService/Version
	Sep 12 23:22:14 default-k8s-diff-port-702201 crio[675]: time="2024-09-12 23:22:14.099225734Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0b8f2393-d36e-4dbe-a9fe-7797047dc19c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 23:22:14 default-k8s-diff-port-702201 crio[675]: time="2024-09-12 23:22:14.099717186Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726183334099691290,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0b8f2393-d36e-4dbe-a9fe-7797047dc19c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 23:22:14 default-k8s-diff-port-702201 crio[675]: time="2024-09-12 23:22:14.100260905Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=832ca1ef-8d4d-4d5c-89df-5eedfcbeec88 name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 23:22:14 default-k8s-diff-port-702201 crio[675]: time="2024-09-12 23:22:14.100317144Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=832ca1ef-8d4d-4d5c-89df-5eedfcbeec88 name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 23:22:14 default-k8s-diff-port-702201 crio[675]: time="2024-09-12 23:22:14.100518612Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9417a075a215d15881535a74e5318ea52a2b3531b44aff69d0ebe207c55d4919,PodSandboxId:cd1f45061a9f43ac4a43b719885af71ec2cbde1be4f7bc6bbfd6782319a32242,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726182446067649087,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66bc6f77-b774-4478-80d0-a1027802e179,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20706af79dcbaa7d5887f8ef9d050c28cab70a7fe3ebeecf461b8bfd322783ab,PodSandboxId:723f2e0c6feebc367313a6e95d3f3def14527e2f5cc8e278357499d68f091c6e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726182446081370755,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-f5spz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a0f69e9-66eb-4e59-a173-1d6f638e2211,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c97784cdf55f3986d87c6e305563900f3a96c2bba5062a0483f100c926085e93,PodSandboxId:ff0416be2d8f6ea4cfdb4c4f58c9fc79a8e8636ea75cb96cc486a18fea87a2de,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726182445908633772,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qhbgf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 0af4199f-b09c-4ab8-8170-b8941d3ece7a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48f2900449cb249d3be1b5ed896fcc919865fb5352c4c2c3c2900fd81676042c,PodSandboxId:bd0f2307e697fa09018da3eb0a93c51f92d164a3259bcc557fb83103bb3c018f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING
,CreatedAt:1726182445280631876,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mv8ws,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51cb20c3-8445-4ce9-8484-5138f3d0ed57,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e600ca01711fc20a87d3df1c72dbd42d43e8be7591cc12568a99eaa737899e3,PodSandboxId:485522c01c095e00180f0d0841b5c584e28fee37565988b2ad60c2702ecfc43b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:172618243420523124
3,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-702201,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43028e788886f74e0519634e413ab4c9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b71cf03e9cdba6bc875bb84ece81fbe6c0e9b459c6374709445b4c9bb7bb0ebd,PodSandboxId:d7cbf207c6b9c78938a79fce04721431590f37869b08eb550ef72b7ea78da905,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:17261824342
01938076,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-702201,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dda44320478814b6fd88ddd2d5df796e,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:427d0b9d288b2b76c528c890623d31727060834c9aa26564bbe690b6b1f82670,PodSandboxId:ebc86e8a7cafa9197f18c2f43d8ba55b0ff3fd39db7f32cb083b7001c14ffc26,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,Creat
edAt:1726182434174712127,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-702201,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbd4a0e7905e7c213ee5ee3845aa51fb,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c658afc2ee1fb91c89cafc962fb5892d95d31210a1eca7b2568040858991263,PodSandboxId:c968c24fe11a8a3dce3414cd1f543e14d1a8e725b63667f003ccfd588a8c8c3a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726182
434144004604,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-702201,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88145041c3602cf15db12b393eabc4cc,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9585d55eb79b377ad2e35b4ff9f7f963cdf06188855e938f8db345f378246c5d,PodSandboxId:e6ff569f1a42dabfb64a22e4f7e6fa83aa461619c6af1645de7802a4b31daf7b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726182147468721483,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-702201,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbd4a0e7905e7c213ee5ee3845aa51fb,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=832ca1ef-8d4d-4d5c-89df-5eedfcbeec88 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	20706af79dcba       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   14 minutes ago      Running             coredns                   0                   723f2e0c6feeb       coredns-7c65d6cfc9-f5spz
	9417a075a215d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 minutes ago      Running             storage-provisioner       0                   cd1f45061a9f4       storage-provisioner
	c97784cdf55f3       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   14 minutes ago      Running             coredns                   0                   ff0416be2d8f6       coredns-7c65d6cfc9-qhbgf
	48f2900449cb2       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   14 minutes ago      Running             kube-proxy                0                   bd0f2307e697f       kube-proxy-mv8ws
	8e600ca01711f       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   14 minutes ago      Running             kube-scheduler            2                   485522c01c095       kube-scheduler-default-k8s-diff-port-702201
	b71cf03e9cdba       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   14 minutes ago      Running             kube-controller-manager   2                   d7cbf207c6b9c       kube-controller-manager-default-k8s-diff-port-702201
	427d0b9d288b2       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   15 minutes ago      Running             kube-apiserver            2                   ebc86e8a7cafa       kube-apiserver-default-k8s-diff-port-702201
	9c658afc2ee1f       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   15 minutes ago      Running             etcd                      2                   c968c24fe11a8       etcd-default-k8s-diff-port-702201
	9585d55eb79b3       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   19 minutes ago      Exited              kube-apiserver            1                   e6ff569f1a42d       kube-apiserver-default-k8s-diff-port-702201
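The table above is CRI-O's view of the node at collection time; the only exited entry is the earlier kube-apiserver attempt (9585d55eb79b3). If the VM is still running, the same view and that container's logs can be pulled directly. A minimal sketch, assuming the minikube profile is named after the node shown here (not confirmed by this log):

  minikube -p default-k8s-diff-port-702201 ssh "sudo crictl ps -a"
  minikube -p default-k8s-diff-port-702201 ssh "sudo crictl logs 9585d55eb79b3"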
	
	
	==> coredns [20706af79dcbaa7d5887f8ef9d050c28cab70a7fe3ebeecf461b8bfd322783ab] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [c97784cdf55f3986d87c6e305563900f3a96c2bba5062a0483f100c926085e93] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
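Both CoreDNS replicas log only their startup banner, so nothing in this capture implicates DNS itself. A throwaway in-cluster lookup is a cheap way to confirm resolution; the kubectl context name and the busybox image tag here are assumptions, not taken from this log:

  kubectl --context default-k8s-diff-port-702201 run --rm -it dns-probe --image=busybox:1.36 --restart=Never -- nslookup kubernetes.default.svc.cluster.local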
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-702201
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-702201
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f6bc674a17941874d4e5b792b09c1791d30622b8
	                    minikube.k8s.io/name=default-k8s-diff-port-702201
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_12T23_07_20_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 12 Sep 2024 23:07:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-702201
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 12 Sep 2024 23:22:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 12 Sep 2024 23:17:42 +0000   Thu, 12 Sep 2024 23:07:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 12 Sep 2024 23:17:42 +0000   Thu, 12 Sep 2024 23:07:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 12 Sep 2024 23:17:42 +0000   Thu, 12 Sep 2024 23:07:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 12 Sep 2024 23:17:42 +0000   Thu, 12 Sep 2024 23:07:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.214
	  Hostname:    default-k8s-diff-port-702201
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d1296c84ac184068bb634b575db84e62
	  System UUID:                d1296c84-ac18-4068-bb63-4b575db84e62
	  Boot ID:                    c844185b-24b6-480f-b865-8643f988a7a3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-f5spz                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 coredns-7c65d6cfc9-qhbgf                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-default-k8s-diff-port-702201                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kube-apiserver-default-k8s-diff-port-702201             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-702201    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-mv8ws                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-default-k8s-diff-port-702201             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 metrics-server-6867b74b74-w2dvn                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         14m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 14m   kube-proxy       
	  Normal  Starting                 14m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  14m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m   kubelet          Node default-k8s-diff-port-702201 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m   kubelet          Node default-k8s-diff-port-702201 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m   kubelet          Node default-k8s-diff-port-702201 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14m   node-controller  Node default-k8s-diff-port-702201 event: Registered Node default-k8s-diff-port-702201 in Controller
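All pods above are about 14 minutes old and the node has reported Ready since 23:07, so scheduling looks healthy; the pod most worth a second look is metrics-server-6867b74b74-w2dvn, since it backs the v1beta1.metrics.k8s.io aggregated API that the apiserver log below keeps failing to reach. A quick status check, assuming a kubectl context named after the profile (an assumption):

  kubectl --context default-k8s-diff-port-702201 get apiservice v1beta1.metrics.k8s.io
  kubectl --context default-k8s-diff-port-702201 -n kube-system get pod metrics-server-6867b74b74-w2dvn -o wide
  kubectl --context default-k8s-diff-port-702201 -n kube-system logs metrics-server-6867b74b74-w2dvn --tail=50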
	
	
	==> dmesg <==
	[  +0.051308] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038190] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.963271] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.998418] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.573381] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.083325] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.060016] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057841] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.207341] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.151332] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.312200] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +3.942546] systemd-fstab-generator[758]: Ignoring "noauto" option for root device
	[  +1.790725] systemd-fstab-generator[879]: Ignoring "noauto" option for root device
	[  +0.067278] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.541396] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.572445] kauditd_printk_skb: 85 callbacks suppressed
	[Sep12 23:07] kauditd_printk_skb: 2 callbacks suppressed
	[  +1.582149] systemd-fstab-generator[2517]: Ignoring "noauto" option for root device
	[  +4.383574] kauditd_printk_skb: 58 callbacks suppressed
	[  +1.663713] systemd-fstab-generator[2841]: Ignoring "noauto" option for root device
	[  +4.901127] systemd-fstab-generator[2951]: Ignoring "noauto" option for root device
	[  +0.097232] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.959392] kauditd_printk_skb: 84 callbacks suppressed
	
	
	==> etcd [9c658afc2ee1fb91c89cafc962fb5892d95d31210a1eca7b2568040858991263] <==
	{"level":"info","ts":"2024-09-12T23:07:14.429011Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"9910392473c15cf3","initial-advertise-peer-urls":["https://192.168.39.214:2380"],"listen-peer-urls":["https://192.168.39.214:2380"],"advertise-client-urls":["https://192.168.39.214:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.214:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-12T23:07:14.429035Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-12T23:07:14.583858Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9910392473c15cf3 is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-12T23:07:14.583964Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9910392473c15cf3 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-12T23:07:14.584050Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9910392473c15cf3 received MsgPreVoteResp from 9910392473c15cf3 at term 1"}
	{"level":"info","ts":"2024-09-12T23:07:14.584087Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9910392473c15cf3 became candidate at term 2"}
	{"level":"info","ts":"2024-09-12T23:07:14.584112Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9910392473c15cf3 received MsgVoteResp from 9910392473c15cf3 at term 2"}
	{"level":"info","ts":"2024-09-12T23:07:14.584139Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9910392473c15cf3 became leader at term 2"}
	{"level":"info","ts":"2024-09-12T23:07:14.584165Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9910392473c15cf3 elected leader 9910392473c15cf3 at term 2"}
	{"level":"info","ts":"2024-09-12T23:07:14.586226Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"9910392473c15cf3","local-member-attributes":"{Name:default-k8s-diff-port-702201 ClientURLs:[https://192.168.39.214:2379]}","request-path":"/0/members/9910392473c15cf3/attributes","cluster-id":"437e955a662fe33","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-12T23:07:14.586300Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-12T23:07:14.586707Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-12T23:07:14.588367Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-12T23:07:14.590645Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-12T23:07:14.590678Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-12T23:07:14.591264Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-12T23:07:14.592051Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.214:2379"}
	{"level":"info","ts":"2024-09-12T23:07:14.597749Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-12T23:07:14.607688Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"437e955a662fe33","local-member-id":"9910392473c15cf3","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-12T23:07:14.607820Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-12T23:07:14.607870Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-12T23:07:14.616597Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-12T23:17:15.010743Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":683}
	{"level":"info","ts":"2024-09-12T23:17:15.019479Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":683,"took":"8.418475ms","hash":4176962314,"current-db-size-bytes":2359296,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2359296,"current-db-size-in-use":"2.4 MB"}
	{"level":"info","ts":"2024-09-12T23:17:15.019593Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4176962314,"revision":683,"compact-revision":-1}
	
	
	==> kernel <==
	 23:22:14 up 20 min,  0 users,  load average: 0.12, 0.19, 0.17
	Linux default-k8s-diff-port-702201 5.10.207 #1 SMP Thu Sep 12 19:03:33 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [427d0b9d288b2b76c528c890623d31727060834c9aa26564bbe690b6b1f82670] <==
	E0912 23:17:17.973714       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0912 23:17:17.973811       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0912 23:17:17.975033       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0912 23:17:17.975106       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0912 23:18:17.976065       1 handler_proxy.go:99] no RequestInfo found in the context
	E0912 23:18:17.976159       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0912 23:18:17.976083       1 handler_proxy.go:99] no RequestInfo found in the context
	E0912 23:18:17.976251       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0912 23:18:17.977388       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0912 23:18:17.977431       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0912 23:20:17.978011       1 handler_proxy.go:99] no RequestInfo found in the context
	W0912 23:20:17.978041       1 handler_proxy.go:99] no RequestInfo found in the context
	E0912 23:20:17.978266       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0912 23:20:17.978309       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0912 23:20:17.979574       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0912 23:20:17.979572       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [9585d55eb79b377ad2e35b4ff9f7f963cdf06188855e938f8db345f378246c5d] <==
	W0912 23:07:07.475746       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 23:07:07.558922       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 23:07:07.559005       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 23:07:07.561349       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 23:07:07.573085       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 23:07:07.575808       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 23:07:07.580299       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 23:07:07.584854       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 23:07:07.589354       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 23:07:07.615853       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 23:07:07.619384       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 23:07:07.620787       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 23:07:07.652393       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 23:07:07.707968       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 23:07:07.773660       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 23:07:07.837176       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 23:07:07.847118       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 23:07:07.877923       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 23:07:07.906027       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 23:07:07.935989       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 23:07:08.117958       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 23:07:08.225496       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 23:07:08.268944       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 23:07:08.282999       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 23:07:11.537940       1 logging.go:55] [core] [Channel #208 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [b71cf03e9cdba6bc875bb84ece81fbe6c0e9b459c6374709445b4c9bb7bb0ebd] <==
	E0912 23:16:54.048291       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0912 23:16:54.493130       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0912 23:17:24.056184       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0912 23:17:24.500703       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0912 23:17:42.039041       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-702201"
	E0912 23:17:54.062485       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0912 23:17:54.508216       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0912 23:18:08.535344       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="106.346µs"
	I0912 23:18:20.530085       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="45.575µs"
	E0912 23:18:24.068363       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0912 23:18:24.515780       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0912 23:18:54.075432       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0912 23:18:54.523507       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0912 23:19:24.082904       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0912 23:19:24.532626       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0912 23:19:54.089376       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0912 23:19:54.541281       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0912 23:20:24.095604       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0912 23:20:24.548738       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0912 23:20:54.103721       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0912 23:20:54.556490       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0912 23:21:24.110965       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0912 23:21:24.563943       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0912 23:21:54.117236       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0912 23:21:54.572142       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [48f2900449cb249d3be1b5ed896fcc919865fb5352c4c2c3c2900fd81676042c] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0912 23:07:26.079769       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0912 23:07:26.103404       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.214"]
	E0912 23:07:26.103493       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0912 23:07:26.407160       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0912 23:07:26.407202       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0912 23:07:26.407229       1 server_linux.go:169] "Using iptables Proxier"
	I0912 23:07:26.409868       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0912 23:07:26.410257       1 server.go:483] "Version info" version="v1.31.1"
	I0912 23:07:26.410334       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0912 23:07:26.412278       1 config.go:199] "Starting service config controller"
	I0912 23:07:26.412380       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0912 23:07:26.412432       1 config.go:105] "Starting endpoint slice config controller"
	I0912 23:07:26.412449       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0912 23:07:26.413099       1 config.go:328] "Starting node config controller"
	I0912 23:07:26.413145       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0912 23:07:26.512604       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0912 23:07:26.512633       1 shared_informer.go:320] Caches are synced for service config
	I0912 23:07:26.513184       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [8e600ca01711fc20a87d3df1c72dbd42d43e8be7591cc12568a99eaa737899e3] <==
	W0912 23:07:16.978485       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0912 23:07:16.978595       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0912 23:07:16.979812       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0912 23:07:16.979901       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0912 23:07:17.858080       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0912 23:07:17.859051       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0912 23:07:17.919149       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0912 23:07:17.919397       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0912 23:07:17.919712       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0912 23:07:17.920278       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0912 23:07:17.924180       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0912 23:07:17.924227       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0912 23:07:17.934853       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0912 23:07:17.934901       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0912 23:07:17.976104       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0912 23:07:17.976175       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0912 23:07:18.209923       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0912 23:07:18.209973       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0912 23:07:18.294398       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0912 23:07:18.294456       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0912 23:07:18.309240       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0912 23:07:18.309291       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0912 23:07:18.395119       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0912 23:07:18.395217       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0912 23:07:20.469428       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 12 23:21:04 default-k8s-diff-port-702201 kubelet[2848]: E0912 23:21:04.516674    2848 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-w2dvn" podUID="778a4742-5b80-4485-956e-8f169e6dcf8f"
	Sep 12 23:21:09 default-k8s-diff-port-702201 kubelet[2848]: E0912 23:21:09.809738    2848 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726183269809265509,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 23:21:09 default-k8s-diff-port-702201 kubelet[2848]: E0912 23:21:09.810063    2848 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726183269809265509,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 23:21:18 default-k8s-diff-port-702201 kubelet[2848]: E0912 23:21:18.515672    2848 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-w2dvn" podUID="778a4742-5b80-4485-956e-8f169e6dcf8f"
	Sep 12 23:21:19 default-k8s-diff-port-702201 kubelet[2848]: E0912 23:21:19.528876    2848 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 12 23:21:19 default-k8s-diff-port-702201 kubelet[2848]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 12 23:21:19 default-k8s-diff-port-702201 kubelet[2848]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 12 23:21:19 default-k8s-diff-port-702201 kubelet[2848]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 12 23:21:19 default-k8s-diff-port-702201 kubelet[2848]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 12 23:21:19 default-k8s-diff-port-702201 kubelet[2848]: E0912 23:21:19.812506    2848 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726183279811950422,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 23:21:19 default-k8s-diff-port-702201 kubelet[2848]: E0912 23:21:19.812607    2848 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726183279811950422,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 23:21:29 default-k8s-diff-port-702201 kubelet[2848]: E0912 23:21:29.814223    2848 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726183289813985143,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 23:21:29 default-k8s-diff-port-702201 kubelet[2848]: E0912 23:21:29.814274    2848 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726183289813985143,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 23:21:30 default-k8s-diff-port-702201 kubelet[2848]: E0912 23:21:30.515128    2848 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-w2dvn" podUID="778a4742-5b80-4485-956e-8f169e6dcf8f"
	Sep 12 23:21:39 default-k8s-diff-port-702201 kubelet[2848]: E0912 23:21:39.816802    2848 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726183299816166240,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 23:21:39 default-k8s-diff-port-702201 kubelet[2848]: E0912 23:21:39.817177    2848 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726183299816166240,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 23:21:42 default-k8s-diff-port-702201 kubelet[2848]: E0912 23:21:42.515045    2848 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-w2dvn" podUID="778a4742-5b80-4485-956e-8f169e6dcf8f"
	Sep 12 23:21:49 default-k8s-diff-port-702201 kubelet[2848]: E0912 23:21:49.818369    2848 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726183309818016069,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 23:21:49 default-k8s-diff-port-702201 kubelet[2848]: E0912 23:21:49.818447    2848 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726183309818016069,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 23:21:57 default-k8s-diff-port-702201 kubelet[2848]: E0912 23:21:57.515465    2848 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-w2dvn" podUID="778a4742-5b80-4485-956e-8f169e6dcf8f"
	Sep 12 23:21:59 default-k8s-diff-port-702201 kubelet[2848]: E0912 23:21:59.820367    2848 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726183319820102263,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 23:21:59 default-k8s-diff-port-702201 kubelet[2848]: E0912 23:21:59.820406    2848 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726183319820102263,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 23:22:09 default-k8s-diff-port-702201 kubelet[2848]: E0912 23:22:09.822592    2848 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726183329822277916,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 23:22:09 default-k8s-diff-port-702201 kubelet[2848]: E0912 23:22:09.822643    2848 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726183329822277916,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 12 23:22:12 default-k8s-diff-port-702201 kubelet[2848]: E0912 23:22:12.515252    2848 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-w2dvn" podUID="778a4742-5b80-4485-956e-8f169e6dcf8f"
	
	
	==> storage-provisioner [9417a075a215d15881535a74e5318ea52a2b3531b44aff69d0ebe207c55d4919] <==
	I0912 23:07:26.355481       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0912 23:07:26.410634       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0912 23:07:26.410704       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0912 23:07:26.427510       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0912 23:07:26.427926       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-702201_4ac8217f-4748-4046-bd95-d8a4314d0af6!
	I0912 23:07:26.429603       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d7d48e53-c995-4c9e-a3c1-270a7c2c2207", APIVersion:"v1", ResourceVersion:"394", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-702201_4ac8217f-4748-4046-bd95-d8a4314d0af6 became leader
	I0912 23:07:26.528993       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-702201_4ac8217f-4748-4046-bd95-d8a4314d0af6!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-702201 -n default-k8s-diff-port-702201
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-702201 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-w2dvn
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-702201 describe pod metrics-server-6867b74b74-w2dvn
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-702201 describe pod metrics-server-6867b74b74-w2dvn: exit status 1 (68.318769ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-w2dvn" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-702201 describe pod metrics-server-6867b74b74-w2dvn: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (336.81s)
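
The failure above matches the kubelet entries earlier in the log: metrics-server never became ready because its container image is set to fake.domain/registry.k8s.io/echoserver:1.4, which cannot be pulled. A minimal sketch of how one might re-run the post-mortem checks by hand, assuming the default-k8s-diff-port-702201 profile is still running and that the addon carries the standard k8s-app=metrics-server label (both are assumptions, not taken from this report):

	# list every pod that is not Running, across all namespaces (mirrors the helper's field selector)
	kubectl --context default-k8s-diff-port-702201 get pods -A --field-selector=status.phase!=Running
	# confirm which image the metrics-server deployment is actually pointing at
	kubectl --context default-k8s-diff-port-702201 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'
	# describe by label instead of pod name, so a replaced pod does not produce NotFound
	kubectl --context default-k8s-diff-port-702201 -n kube-system describe pod -l k8s-app=metrics-server
	# recent events, newest last
	kubectl --context default-k8s-diff-port-702201 -n kube-system get events --sort-by=.lastTimestamp | tail -n 20

Selecting by label avoids the NotFound error seen above, where metrics-server-6867b74b74-w2dvn was already gone by the time the helper ran describe.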

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (196.71s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
[the previous WARNING line repeated 38 more times]
E0912 23:20:05.704457   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/functional-657409/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.69:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.69:8443: connect: connection refused
[the previous WARNING line repeated 120 more times]
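Every one of the warnings above fails the same way: the TCP connection to 192.168.61.69:8443 is refused, i.e. nothing is listening on the apiserver port of old-k8s-version-642238 while the test polls. The following is a minimal, hypothetical probe (not part of minikube's test suite; only the host and port are taken from the log) that reproduces that distinction between "port closed" and "port open":

// probe.go: illustrative sketch only; addr comes from the warnings above,
// everything else is assumed for the example.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	addr := "192.168.61.69:8443" // apiserver endpoint from the WARNING lines
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		// "connect: connection refused" means the port is closed, i.e.
		// kube-apiserver is not (yet) serving on that address.
		fmt.Println("apiserver not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("apiserver port is open")
}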
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-642238 -n old-k8s-version-642238
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-642238 -n old-k8s-version-642238: exit status 2 (227.874866ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-642238" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-642238 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-642238 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.825µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-642238 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
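For context, the repeated WARNING lines and the final "context deadline exceeded" after 9m0s are the signature of a label-selector pod poll that never sees a Running pod. The sketch below is an assumed illustration of that pattern using a standard client-go setup; it is not the actual helpers_test.go logic, and the kubeconfig path, poll interval, and namespace/label are taken from the log or chosen for the example.

// waitpods.go: illustrative sketch of polling pods by label until Running
// or until the context deadline expires (assumed code, not minikube's).
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ctx, cancel := context.WithTimeout(context.Background(), 9*time.Minute)
	defer cancel()

	for {
		pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(ctx,
			metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
		if err != nil {
			// While the apiserver is down, every iteration fails like the
			// WARNING lines above ("connection refused") and the loop retries.
			fmt.Println("WARNING: pod list returned:", err)
		} else {
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					fmt.Println("pod running:", p.Name)
					return
				}
			}
		}
		select {
		case <-ctx.Done():
			fmt.Println("context deadline exceeded") // matches the 9m0s failure above
			return
		case <-time.After(3 * time.Second):
		}
	}
}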
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-642238 -n old-k8s-version-642238
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-642238 -n old-k8s-version-642238: exit status 2 (224.185022ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-642238 logs -n 25
E0912 23:22:07.199382   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-642238 logs -n 25: (1.608425856s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p embed-certs-378112            | embed-certs-378112           | jenkins | v1.34.0 | 12 Sep 24 22:54 UTC | 12 Sep 24 22:54 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-378112                                  | embed-certs-378112           | jenkins | v1.34.0 | 12 Sep 24 22:54 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-837491             | newest-cni-837491            | jenkins | v1.34.0 | 12 Sep 24 22:55 UTC | 12 Sep 24 22:55 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-837491                                   | newest-cni-837491            | jenkins | v1.34.0 | 12 Sep 24 22:55 UTC | 12 Sep 24 22:55 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-837491                  | newest-cni-837491            | jenkins | v1.34.0 | 12 Sep 24 22:55 UTC | 12 Sep 24 22:55 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-837491 --memory=2200 --alsologtostderr   | newest-cni-837491            | jenkins | v1.34.0 | 12 Sep 24 22:55 UTC | 12 Sep 24 22:55 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| image   | newest-cni-837491 image list                           | newest-cni-837491            | jenkins | v1.34.0 | 12 Sep 24 22:55 UTC | 12 Sep 24 22:55 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-837491                                   | newest-cni-837491            | jenkins | v1.34.0 | 12 Sep 24 22:55 UTC | 12 Sep 24 22:55 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-837491                                   | newest-cni-837491            | jenkins | v1.34.0 | 12 Sep 24 22:55 UTC | 12 Sep 24 22:55 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-837491                                   | newest-cni-837491            | jenkins | v1.34.0 | 12 Sep 24 22:55 UTC | 12 Sep 24 22:55 UTC |
	| delete  | -p newest-cni-837491                                   | newest-cni-837491            | jenkins | v1.34.0 | 12 Sep 24 22:55 UTC | 12 Sep 24 22:55 UTC |
	| delete  | -p                                                     | disable-driver-mounts-457722 | jenkins | v1.34.0 | 12 Sep 24 22:55 UTC | 12 Sep 24 22:55 UTC |
	|         | disable-driver-mounts-457722                           |                              |         |         |                     |                     |
	| start   | -p no-preload-380092                                   | no-preload-380092            | jenkins | v1.34.0 | 12 Sep 24 22:55 UTC | 12 Sep 24 22:57 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-702201       | default-k8s-diff-port-702201 | jenkins | v1.34.0 | 12 Sep 24 22:56 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-702201 | jenkins | v1.34.0 | 12 Sep 24 22:56 UTC | 12 Sep 24 23:07 UTC |
	|         | default-k8s-diff-port-702201                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-642238        | old-k8s-version-642238       | jenkins | v1.34.0 | 12 Sep 24 22:56 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-378112                 | embed-certs-378112           | jenkins | v1.34.0 | 12 Sep 24 22:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-378112                                  | embed-certs-378112           | jenkins | v1.34.0 | 12 Sep 24 22:57 UTC | 12 Sep 24 23:05 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-380092             | no-preload-380092            | jenkins | v1.34.0 | 12 Sep 24 22:57 UTC | 12 Sep 24 22:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-380092                                   | no-preload-380092            | jenkins | v1.34.0 | 12 Sep 24 22:57 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-642238                              | old-k8s-version-642238       | jenkins | v1.34.0 | 12 Sep 24 22:58 UTC | 12 Sep 24 22:58 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-642238             | old-k8s-version-642238       | jenkins | v1.34.0 | 12 Sep 24 22:58 UTC | 12 Sep 24 22:58 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-642238                              | old-k8s-version-642238       | jenkins | v1.34.0 | 12 Sep 24 22:58 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-380092                  | no-preload-380092            | jenkins | v1.34.0 | 12 Sep 24 23:00 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-380092                                   | no-preload-380092            | jenkins | v1.34.0 | 12 Sep 24 23:00 UTC | 12 Sep 24 23:07 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/12 23:00:21
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0912 23:00:21.889769   62943 out.go:345] Setting OutFile to fd 1 ...
	I0912 23:00:21.889990   62943 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 23:00:21.889999   62943 out.go:358] Setting ErrFile to fd 2...
	I0912 23:00:21.890003   62943 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 23:00:21.890181   62943 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-5891/.minikube/bin
	I0912 23:00:21.890675   62943 out.go:352] Setting JSON to false
	I0912 23:00:21.891538   62943 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6164,"bootTime":1726175858,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0912 23:00:21.891596   62943 start.go:139] virtualization: kvm guest
	I0912 23:00:21.894002   62943 out.go:177] * [no-preload-380092] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0912 23:00:21.895257   62943 notify.go:220] Checking for updates...
	I0912 23:00:21.895266   62943 out.go:177]   - MINIKUBE_LOCATION=19616
	I0912 23:00:21.896598   62943 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 23:00:21.898297   62943 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19616-5891/kubeconfig
	I0912 23:00:21.899605   62943 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19616-5891/.minikube
	I0912 23:00:21.900705   62943 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0912 23:00:21.901754   62943 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 23:00:21.903264   62943 config.go:182] Loaded profile config "no-preload-380092": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 23:00:21.903642   62943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:00:21.903699   62943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:00:21.918497   62943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39967
	I0912 23:00:21.918953   62943 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:00:21.919516   62943 main.go:141] libmachine: Using API Version  1
	I0912 23:00:21.919536   62943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:00:21.919831   62943 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:00:21.920002   62943 main.go:141] libmachine: (no-preload-380092) Calling .DriverName
	I0912 23:00:21.920213   62943 driver.go:394] Setting default libvirt URI to qemu:///system
	I0912 23:00:21.920527   62943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:00:21.920570   62943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:00:21.935755   62943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39641
	I0912 23:00:21.936135   62943 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:00:21.936625   62943 main.go:141] libmachine: Using API Version  1
	I0912 23:00:21.936643   62943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:00:21.936958   62943 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:00:21.937168   62943 main.go:141] libmachine: (no-preload-380092) Calling .DriverName
	I0912 23:00:21.971089   62943 out.go:177] * Using the kvm2 driver based on existing profile
	I0912 23:00:21.972555   62943 start.go:297] selected driver: kvm2
	I0912 23:00:21.972578   62943 start.go:901] validating driver "kvm2" against &{Name:no-preload-380092 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:no-preload-380092 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.253 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:262
80h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 23:00:21.972702   62943 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 23:00:21.973408   62943 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 23:00:21.973490   62943 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19616-5891/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0912 23:00:21.988802   62943 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0912 23:00:21.989203   62943 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 23:00:21.989290   62943 cni.go:84] Creating CNI manager for ""
	I0912 23:00:21.989305   62943 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0912 23:00:21.989357   62943 start.go:340] cluster config:
	{Name:no-preload-380092 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-380092 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.253 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p
2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 23:00:21.989504   62943 iso.go:125] acquiring lock: {Name:mk3ec3c4afd4210b7425f6425f55e7f581d9a5a9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 23:00:21.991829   62943 out.go:177] * Starting "no-preload-380092" primary control-plane node in "no-preload-380092" cluster
	I0912 23:00:20.185851   61354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.214:22: connect: no route to host
	I0912 23:00:21.993075   62943 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0912 23:00:21.993194   62943 profile.go:143] Saving config to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/no-preload-380092/config.json ...
	I0912 23:00:21.993282   62943 cache.go:107] acquiring lock: {Name:mk132f7515993883658c6f8f8c277c05a18c2bcb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 23:00:21.993282   62943 cache.go:107] acquiring lock: {Name:mkbf0dc68d9098b66db2e6425e6a1c64daedf32d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 23:00:21.993308   62943 cache.go:107] acquiring lock: {Name:mkb2372a7853b8fee762991ee2019645e77be1f6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 23:00:21.993360   62943 cache.go:115] /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 exists
	I0912 23:00:21.993376   62943 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.1" -> "/home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1" took 102.242µs
	I0912 23:00:21.993387   62943 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.1 -> /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 succeeded
	I0912 23:00:21.993346   62943 cache.go:107] acquiring lock: {Name:mkd3ef79aab2589c236ea8b2933d7ed6f90a65ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 23:00:21.993393   62943 cache.go:115] /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 exists
	I0912 23:00:21.993376   62943 cache.go:107] acquiring lock: {Name:mk1d88a2deb95bcad015d500fc00ce4b81f27038 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 23:00:21.993405   62943 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.1" -> "/home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1" took 112.903µs
	I0912 23:00:21.993415   62943 cache.go:115] /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 exists
	I0912 23:00:21.993421   62943 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.1 -> /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 succeeded
	I0912 23:00:21.993424   62943 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.1" -> "/home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1" took 90.812µs
	I0912 23:00:21.993432   62943 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.1 -> /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 succeeded
	I0912 23:00:21.993403   62943 cache.go:107] acquiring lock: {Name:mk9c879437d533fd75b73d75524fea14942316d9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 23:00:21.993435   62943 start.go:360] acquireMachinesLock for no-preload-380092: {Name:mkbb0a9e58b1349e86a63b6069c42d4248d92c3b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 23:00:21.993452   62943 cache.go:115] /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 exists
	I0912 23:00:21.993472   62943 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10" took 97.778µs
	I0912 23:00:21.993486   62943 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 succeeded
	I0912 23:00:21.993474   62943 cache.go:107] acquiring lock: {Name:mkd1cb269a32e304848dd20e7b275430f4a6b15a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 23:00:21.993496   62943 cache.go:115] /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0912 23:00:21.993526   62943 cache.go:115] /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 exists
	I0912 23:00:21.993545   62943 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0" took 179.269µs
	I0912 23:00:21.993568   62943 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0912 23:00:21.993520   62943 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 236.598µs
	I0912 23:00:21.993587   62943 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0912 23:00:21.993522   62943 cache.go:107] acquiring lock: {Name:mka5c76f3028cb928e97cce42a012066ced2727d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 23:00:21.993569   62943 cache.go:115] /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 exists
	I0912 23:00:21.993642   62943 cache.go:115] /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0912 23:00:21.993651   62943 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3" took 162.198µs
	I0912 23:00:21.993648   62943 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.1" -> "/home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1" took 220.493µs
	I0912 23:00:21.993662   62943 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0912 23:00:21.993668   62943 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.1 -> /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 succeeded
	I0912 23:00:21.993687   62943 cache.go:87] Successfully saved all images to host disk.
	I0912 23:00:26.265938   61354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.214:22: connect: no route to host
	I0912 23:00:29.337872   61354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.214:22: connect: no route to host
	I0912 23:00:35.417928   61354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.214:22: connect: no route to host
	I0912 23:00:38.489932   61354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.214:22: connect: no route to host
	I0912 23:00:44.569877   61354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.214:22: connect: no route to host
	I0912 23:00:47.641914   61354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.214:22: connect: no route to host
	I0912 23:00:53.721910   61354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.214:22: connect: no route to host
	I0912 23:00:56.793972   61354 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.214:22: connect: no route to host
	I0912 23:00:59.798765   61904 start.go:364] duration metric: took 3m43.915954079s to acquireMachinesLock for "embed-certs-378112"
	I0912 23:00:59.798812   61904 start.go:96] Skipping create...Using existing machine configuration
	I0912 23:00:59.798822   61904 fix.go:54] fixHost starting: 
	I0912 23:00:59.799124   61904 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:00:59.799159   61904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:00:59.814494   61904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41585
	I0912 23:00:59.815035   61904 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:00:59.815500   61904 main.go:141] libmachine: Using API Version  1
	I0912 23:00:59.815519   61904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:00:59.815820   61904 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:00:59.815997   61904 main.go:141] libmachine: (embed-certs-378112) Calling .DriverName
	I0912 23:00:59.816114   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetState
	I0912 23:00:59.817884   61904 fix.go:112] recreateIfNeeded on embed-certs-378112: state=Stopped err=<nil>
	I0912 23:00:59.817912   61904 main.go:141] libmachine: (embed-certs-378112) Calling .DriverName
	W0912 23:00:59.818088   61904 fix.go:138] unexpected machine state, will restart: <nil>
	I0912 23:00:59.820071   61904 out.go:177] * Restarting existing kvm2 VM for "embed-certs-378112" ...
	I0912 23:00:59.821271   61904 main.go:141] libmachine: (embed-certs-378112) Calling .Start
	I0912 23:00:59.821455   61904 main.go:141] libmachine: (embed-certs-378112) Ensuring networks are active...
	I0912 23:00:59.822528   61904 main.go:141] libmachine: (embed-certs-378112) Ensuring network default is active
	I0912 23:00:59.822941   61904 main.go:141] libmachine: (embed-certs-378112) Ensuring network mk-embed-certs-378112 is active
	I0912 23:00:59.823348   61904 main.go:141] libmachine: (embed-certs-378112) Getting domain xml...
	I0912 23:00:59.824031   61904 main.go:141] libmachine: (embed-certs-378112) Creating domain...
	I0912 23:00:59.796296   61354 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0912 23:00:59.796341   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetMachineName
	I0912 23:00:59.796635   61354 buildroot.go:166] provisioning hostname "default-k8s-diff-port-702201"
	I0912 23:00:59.796660   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetMachineName
	I0912 23:00:59.796845   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHHostname
	I0912 23:00:59.798593   61354 machine.go:96] duration metric: took 4m34.624878077s to provisionDockerMachine
	I0912 23:00:59.798633   61354 fix.go:56] duration metric: took 4m34.652510972s for fixHost
	I0912 23:00:59.798640   61354 start.go:83] releasing machines lock for "default-k8s-diff-port-702201", held for 4m34.652554084s
	W0912 23:00:59.798663   61354 start.go:714] error starting host: provision: host is not running
	W0912 23:00:59.798748   61354 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0912 23:00:59.798762   61354 start.go:729] Will try again in 5 seconds ...
	I0912 23:01:01.051149   61904 main.go:141] libmachine: (embed-certs-378112) Waiting to get IP...
	I0912 23:01:01.051945   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:01.052463   61904 main.go:141] libmachine: (embed-certs-378112) DBG | unable to find current IP address of domain embed-certs-378112 in network mk-embed-certs-378112
	I0912 23:01:01.052494   61904 main.go:141] libmachine: (embed-certs-378112) DBG | I0912 23:01:01.052421   63128 retry.go:31] will retry after 247.962572ms: waiting for machine to come up
	I0912 23:01:01.302159   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:01.302677   61904 main.go:141] libmachine: (embed-certs-378112) DBG | unable to find current IP address of domain embed-certs-378112 in network mk-embed-certs-378112
	I0912 23:01:01.302706   61904 main.go:141] libmachine: (embed-certs-378112) DBG | I0912 23:01:01.302624   63128 retry.go:31] will retry after 354.212029ms: waiting for machine to come up
	I0912 23:01:01.658402   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:01.658880   61904 main.go:141] libmachine: (embed-certs-378112) DBG | unable to find current IP address of domain embed-certs-378112 in network mk-embed-certs-378112
	I0912 23:01:01.658923   61904 main.go:141] libmachine: (embed-certs-378112) DBG | I0912 23:01:01.658848   63128 retry.go:31] will retry after 461.984481ms: waiting for machine to come up
	I0912 23:01:02.122592   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:02.122981   61904 main.go:141] libmachine: (embed-certs-378112) DBG | unable to find current IP address of domain embed-certs-378112 in network mk-embed-certs-378112
	I0912 23:01:02.123015   61904 main.go:141] libmachine: (embed-certs-378112) DBG | I0912 23:01:02.122930   63128 retry.go:31] will retry after 404.928951ms: waiting for machine to come up
	I0912 23:01:02.529423   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:02.529906   61904 main.go:141] libmachine: (embed-certs-378112) DBG | unable to find current IP address of domain embed-certs-378112 in network mk-embed-certs-378112
	I0912 23:01:02.529932   61904 main.go:141] libmachine: (embed-certs-378112) DBG | I0912 23:01:02.529856   63128 retry.go:31] will retry after 684.912015ms: waiting for machine to come up
	I0912 23:01:03.216924   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:03.217408   61904 main.go:141] libmachine: (embed-certs-378112) DBG | unable to find current IP address of domain embed-certs-378112 in network mk-embed-certs-378112
	I0912 23:01:03.217433   61904 main.go:141] libmachine: (embed-certs-378112) DBG | I0912 23:01:03.217357   63128 retry.go:31] will retry after 765.507778ms: waiting for machine to come up
	I0912 23:01:03.984272   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:03.984787   61904 main.go:141] libmachine: (embed-certs-378112) DBG | unable to find current IP address of domain embed-certs-378112 in network mk-embed-certs-378112
	I0912 23:01:03.984820   61904 main.go:141] libmachine: (embed-certs-378112) DBG | I0912 23:01:03.984726   63128 retry.go:31] will retry after 1.048709598s: waiting for machine to come up
	I0912 23:01:05.035381   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:05.035885   61904 main.go:141] libmachine: (embed-certs-378112) DBG | unable to find current IP address of domain embed-certs-378112 in network mk-embed-certs-378112
	I0912 23:01:05.035925   61904 main.go:141] libmachine: (embed-certs-378112) DBG | I0912 23:01:05.035809   63128 retry.go:31] will retry after 1.488143245s: waiting for machine to come up
	I0912 23:01:04.800694   61354 start.go:360] acquireMachinesLock for default-k8s-diff-port-702201: {Name:mkbb0a9e58b1349e86a63b6069c42d4248d92c3b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 23:01:06.526483   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:06.526858   61904 main.go:141] libmachine: (embed-certs-378112) DBG | unable to find current IP address of domain embed-certs-378112 in network mk-embed-certs-378112
	I0912 23:01:06.526896   61904 main.go:141] libmachine: (embed-certs-378112) DBG | I0912 23:01:06.526800   63128 retry.go:31] will retry after 1.272485972s: waiting for machine to come up
	I0912 23:01:07.801588   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:07.802071   61904 main.go:141] libmachine: (embed-certs-378112) DBG | unable to find current IP address of domain embed-certs-378112 in network mk-embed-certs-378112
	I0912 23:01:07.802103   61904 main.go:141] libmachine: (embed-certs-378112) DBG | I0912 23:01:07.802022   63128 retry.go:31] will retry after 1.559805672s: waiting for machine to come up
	I0912 23:01:09.363156   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:09.363662   61904 main.go:141] libmachine: (embed-certs-378112) DBG | unable to find current IP address of domain embed-certs-378112 in network mk-embed-certs-378112
	I0912 23:01:09.363683   61904 main.go:141] libmachine: (embed-certs-378112) DBG | I0912 23:01:09.363611   63128 retry.go:31] will retry after 1.893092295s: waiting for machine to come up
	I0912 23:01:11.258694   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:11.259346   61904 main.go:141] libmachine: (embed-certs-378112) DBG | unable to find current IP address of domain embed-certs-378112 in network mk-embed-certs-378112
	I0912 23:01:11.259376   61904 main.go:141] libmachine: (embed-certs-378112) DBG | I0912 23:01:11.259304   63128 retry.go:31] will retry after 3.533141843s: waiting for machine to come up
	I0912 23:01:14.796948   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:14.797444   61904 main.go:141] libmachine: (embed-certs-378112) DBG | unable to find current IP address of domain embed-certs-378112 in network mk-embed-certs-378112
	I0912 23:01:14.797468   61904 main.go:141] libmachine: (embed-certs-378112) DBG | I0912 23:01:14.797389   63128 retry.go:31] will retry after 3.889332888s: waiting for machine to come up
	I0912 23:01:19.958932   62386 start.go:364] duration metric: took 3m0.532494588s to acquireMachinesLock for "old-k8s-version-642238"
	I0912 23:01:19.958994   62386 start.go:96] Skipping create...Using existing machine configuration
	I0912 23:01:19.959005   62386 fix.go:54] fixHost starting: 
	I0912 23:01:19.959383   62386 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:01:19.959418   62386 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:01:19.976721   62386 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46263
	I0912 23:01:19.977134   62386 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:01:19.977648   62386 main.go:141] libmachine: Using API Version  1
	I0912 23:01:19.977673   62386 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:01:19.977988   62386 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:01:19.978166   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .DriverName
	I0912 23:01:19.978325   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetState
	I0912 23:01:19.979909   62386 fix.go:112] recreateIfNeeded on old-k8s-version-642238: state=Stopped err=<nil>
	I0912 23:01:19.979934   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .DriverName
	W0912 23:01:19.980079   62386 fix.go:138] unexpected machine state, will restart: <nil>
	I0912 23:01:19.982289   62386 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-642238" ...
	I0912 23:01:18.690761   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:18.691185   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has current primary IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:18.691206   61904 main.go:141] libmachine: (embed-certs-378112) Found IP for machine: 192.168.72.96
	I0912 23:01:18.691218   61904 main.go:141] libmachine: (embed-certs-378112) Reserving static IP address...
	I0912 23:01:18.691614   61904 main.go:141] libmachine: (embed-certs-378112) Reserved static IP address: 192.168.72.96
	I0912 23:01:18.691642   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "embed-certs-378112", mac: "52:54:00:71:b2:49", ip: "192.168.72.96"} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:18.691654   61904 main.go:141] libmachine: (embed-certs-378112) Waiting for SSH to be available...
	I0912 23:01:18.691678   61904 main.go:141] libmachine: (embed-certs-378112) DBG | skip adding static IP to network mk-embed-certs-378112 - found existing host DHCP lease matching {name: "embed-certs-378112", mac: "52:54:00:71:b2:49", ip: "192.168.72.96"}
	I0912 23:01:18.691690   61904 main.go:141] libmachine: (embed-certs-378112) DBG | Getting to WaitForSSH function...
	I0912 23:01:18.693747   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:18.694054   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:18.694077   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:18.694273   61904 main.go:141] libmachine: (embed-certs-378112) DBG | Using SSH client type: external
	I0912 23:01:18.694300   61904 main.go:141] libmachine: (embed-certs-378112) DBG | Using SSH private key: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/embed-certs-378112/id_rsa (-rw-------)
	I0912 23:01:18.694330   61904 main.go:141] libmachine: (embed-certs-378112) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.96 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19616-5891/.minikube/machines/embed-certs-378112/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0912 23:01:18.694345   61904 main.go:141] libmachine: (embed-certs-378112) DBG | About to run SSH command:
	I0912 23:01:18.694358   61904 main.go:141] libmachine: (embed-certs-378112) DBG | exit 0
	I0912 23:01:18.821647   61904 main.go:141] libmachine: (embed-certs-378112) DBG | SSH cmd err, output: <nil>: 
	I0912 23:01:18.822074   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetConfigRaw
	I0912 23:01:18.822765   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetIP
	I0912 23:01:18.825154   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:18.825481   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:18.825510   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:18.825842   61904 profile.go:143] Saving config to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/embed-certs-378112/config.json ...
	I0912 23:01:18.826026   61904 machine.go:93] provisionDockerMachine start ...
	I0912 23:01:18.826043   61904 main.go:141] libmachine: (embed-certs-378112) Calling .DriverName
	I0912 23:01:18.826248   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHHostname
	I0912 23:01:18.828540   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:18.828878   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:18.828906   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:18.829009   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHPort
	I0912 23:01:18.829224   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHKeyPath
	I0912 23:01:18.829429   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHKeyPath
	I0912 23:01:18.829555   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHUsername
	I0912 23:01:18.829750   61904 main.go:141] libmachine: Using SSH client type: native
	I0912 23:01:18.829926   61904 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.72.96 22 <nil> <nil>}
	I0912 23:01:18.829937   61904 main.go:141] libmachine: About to run SSH command:
	hostname
	I0912 23:01:18.941789   61904 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0912 23:01:18.941824   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetMachineName
	I0912 23:01:18.942076   61904 buildroot.go:166] provisioning hostname "embed-certs-378112"
	I0912 23:01:18.942099   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetMachineName
	I0912 23:01:18.942278   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHHostname
	I0912 23:01:18.944880   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:18.945173   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:18.945221   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:18.945347   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHPort
	I0912 23:01:18.945525   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHKeyPath
	I0912 23:01:18.945733   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHKeyPath
	I0912 23:01:18.945913   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHUsername
	I0912 23:01:18.946125   61904 main.go:141] libmachine: Using SSH client type: native
	I0912 23:01:18.946330   61904 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.72.96 22 <nil> <nil>}
	I0912 23:01:18.946350   61904 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-378112 && echo "embed-certs-378112" | sudo tee /etc/hostname
	I0912 23:01:19.071180   61904 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-378112
	
	I0912 23:01:19.071207   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHHostname
	I0912 23:01:19.074121   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.074553   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:19.074583   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.074803   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHPort
	I0912 23:01:19.075004   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHKeyPath
	I0912 23:01:19.075175   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHKeyPath
	I0912 23:01:19.075319   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHUsername
	I0912 23:01:19.075472   61904 main.go:141] libmachine: Using SSH client type: native
	I0912 23:01:19.075691   61904 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.72.96 22 <nil> <nil>}
	I0912 23:01:19.075710   61904 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-378112' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-378112/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-378112' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0912 23:01:19.198049   61904 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0912 23:01:19.198081   61904 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19616-5891/.minikube CaCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19616-5891/.minikube}
	I0912 23:01:19.198131   61904 buildroot.go:174] setting up certificates
	I0912 23:01:19.198140   61904 provision.go:84] configureAuth start
	I0912 23:01:19.198153   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetMachineName
	I0912 23:01:19.198461   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetIP
	I0912 23:01:19.201194   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.201504   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:19.201532   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.201729   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHHostname
	I0912 23:01:19.204100   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.204538   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:19.204562   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.204706   61904 provision.go:143] copyHostCerts
	I0912 23:01:19.204767   61904 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem, removing ...
	I0912 23:01:19.204782   61904 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem
	I0912 23:01:19.204851   61904 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem (1082 bytes)
	I0912 23:01:19.204951   61904 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem, removing ...
	I0912 23:01:19.204960   61904 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem
	I0912 23:01:19.204985   61904 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem (1123 bytes)
	I0912 23:01:19.205045   61904 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem, removing ...
	I0912 23:01:19.205053   61904 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem
	I0912 23:01:19.205076   61904 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem (1679 bytes)
	I0912 23:01:19.205132   61904 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem org=jenkins.embed-certs-378112 san=[127.0.0.1 192.168.72.96 embed-certs-378112 localhost minikube]
	I0912 23:01:19.311879   61904 provision.go:177] copyRemoteCerts
	I0912 23:01:19.311937   61904 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0912 23:01:19.311962   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHHostname
	I0912 23:01:19.314423   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.314821   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:19.314858   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.315029   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHPort
	I0912 23:01:19.315191   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHKeyPath
	I0912 23:01:19.315357   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHUsername
	I0912 23:01:19.315485   61904 sshutil.go:53] new ssh client: &{IP:192.168.72.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/embed-certs-378112/id_rsa Username:docker}
	I0912 23:01:19.399171   61904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0912 23:01:19.423218   61904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0912 23:01:19.446073   61904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0912 23:01:19.468351   61904 provision.go:87] duration metric: took 270.179029ms to configureAuth
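configureAuth above regenerates and copies the docker-machine style server certificate whose SANs are listed in the san=[...] field a few lines earlier. A standalone sketch of minting a SAN-bearing certificate with Go's crypto/x509; it self-signs for brevity, whereas minikube signs with the ca.pem/ca-key.pem pair shown in the auth options, so treat the details as illustrative:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Key and template carrying the same kind of SANs seen in the log line above.
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-378112"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		DNSNames:     []string{"embed-certs-378112", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.96")},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}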
	I0912 23:01:19.468380   61904 buildroot.go:189] setting minikube options for container-runtime
	I0912 23:01:19.468543   61904 config.go:182] Loaded profile config "embed-certs-378112": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 23:01:19.468609   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHHostname
	I0912 23:01:19.471457   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.471829   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:19.471857   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.472057   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHPort
	I0912 23:01:19.472257   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHKeyPath
	I0912 23:01:19.472438   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHKeyPath
	I0912 23:01:19.472614   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHUsername
	I0912 23:01:19.472756   61904 main.go:141] libmachine: Using SSH client type: native
	I0912 23:01:19.472915   61904 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.72.96 22 <nil> <nil>}
	I0912 23:01:19.472928   61904 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0912 23:01:19.710250   61904 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0912 23:01:19.710278   61904 machine.go:96] duration metric: took 884.238347ms to provisionDockerMachine
	I0912 23:01:19.710298   61904 start.go:293] postStartSetup for "embed-certs-378112" (driver="kvm2")
	I0912 23:01:19.710310   61904 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0912 23:01:19.710324   61904 main.go:141] libmachine: (embed-certs-378112) Calling .DriverName
	I0912 23:01:19.710640   61904 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0912 23:01:19.710668   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHHostname
	I0912 23:01:19.713442   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.713731   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:19.713759   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.713948   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHPort
	I0912 23:01:19.714180   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHKeyPath
	I0912 23:01:19.714347   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHUsername
	I0912 23:01:19.714491   61904 sshutil.go:53] new ssh client: &{IP:192.168.72.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/embed-certs-378112/id_rsa Username:docker}
	I0912 23:01:19.800949   61904 ssh_runner.go:195] Run: cat /etc/os-release
	I0912 23:01:19.805072   61904 info.go:137] Remote host: Buildroot 2023.02.9
	I0912 23:01:19.805103   61904 filesync.go:126] Scanning /home/jenkins/minikube-integration/19616-5891/.minikube/addons for local assets ...
	I0912 23:01:19.805212   61904 filesync.go:126] Scanning /home/jenkins/minikube-integration/19616-5891/.minikube/files for local assets ...
	I0912 23:01:19.805309   61904 filesync.go:149] local asset: /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem -> 130832.pem in /etc/ssl/certs
	I0912 23:01:19.805449   61904 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0912 23:01:19.815070   61904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem --> /etc/ssl/certs/130832.pem (1708 bytes)
	I0912 23:01:19.839585   61904 start.go:296] duration metric: took 129.271232ms for postStartSetup
	I0912 23:01:19.839634   61904 fix.go:56] duration metric: took 20.040811123s for fixHost
	I0912 23:01:19.839656   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHHostname
	I0912 23:01:19.843048   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.843354   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:19.843385   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.843547   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHPort
	I0912 23:01:19.843755   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHKeyPath
	I0912 23:01:19.843933   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHKeyPath
	I0912 23:01:19.844078   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHUsername
	I0912 23:01:19.844257   61904 main.go:141] libmachine: Using SSH client type: native
	I0912 23:01:19.844432   61904 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.72.96 22 <nil> <nil>}
	I0912 23:01:19.844443   61904 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0912 23:01:19.958747   61904 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726182079.929826480
	
	I0912 23:01:19.958771   61904 fix.go:216] guest clock: 1726182079.929826480
	I0912 23:01:19.958779   61904 fix.go:229] Guest: 2024-09-12 23:01:19.92982648 +0000 UTC Remote: 2024-09-12 23:01:19.839638734 +0000 UTC m=+244.095238395 (delta=90.187746ms)
	I0912 23:01:19.958826   61904 fix.go:200] guest clock delta is within tolerance: 90.187746ms
	I0912 23:01:19.958832   61904 start.go:83] releasing machines lock for "embed-certs-378112", held for 20.160038696s
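The fix step just above compares the guest's "date +%s.%N" output with the host clock and accepts the machine when the delta stays inside a tolerance (90.187746ms here). A tiny Go sketch of that comparison; the 2s tolerance used below is an assumption for illustration, not minikube's actual constant:

package main

import (
	"fmt"
	"math"
	"time"
)

// clockWithinTolerance reports the guest/host clock delta and whether it falls
// inside tol, mirroring the "guest clock delta is within tolerance" check above.
func clockWithinTolerance(guest, host time.Time, tol time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	return delta, math.Abs(float64(delta)) <= float64(tol)
}

func main() {
	host := time.Now()
	guest := host.Add(90 * time.Millisecond) // e.g. the ~90ms delta seen in the log
	d, ok := clockWithinTolerance(guest, host, 2*time.Second)
	fmt.Printf("delta=%v within tolerance=%v\n", d, ok)
}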
	I0912 23:01:19.958866   61904 main.go:141] libmachine: (embed-certs-378112) Calling .DriverName
	I0912 23:01:19.959202   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetIP
	I0912 23:01:19.962158   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.962528   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:19.962562   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.962743   61904 main.go:141] libmachine: (embed-certs-378112) Calling .DriverName
	I0912 23:01:19.963246   61904 main.go:141] libmachine: (embed-certs-378112) Calling .DriverName
	I0912 23:01:19.963421   61904 main.go:141] libmachine: (embed-certs-378112) Calling .DriverName
	I0912 23:01:19.963518   61904 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0912 23:01:19.963564   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHHostname
	I0912 23:01:19.963703   61904 ssh_runner.go:195] Run: cat /version.json
	I0912 23:01:19.963766   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHHostname
	I0912 23:01:19.966317   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.966517   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.966692   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:19.966723   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.966921   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHPort
	I0912 23:01:19.966977   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:19.967023   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:19.967100   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHKeyPath
	I0912 23:01:19.967191   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHPort
	I0912 23:01:19.967268   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHUsername
	I0912 23:01:19.967332   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHKeyPath
	I0912 23:01:19.967395   61904 sshutil.go:53] new ssh client: &{IP:192.168.72.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/embed-certs-378112/id_rsa Username:docker}
	I0912 23:01:19.967439   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHUsername
	I0912 23:01:19.967594   61904 sshutil.go:53] new ssh client: &{IP:192.168.72.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/embed-certs-378112/id_rsa Username:docker}
	I0912 23:01:20.054413   61904 ssh_runner.go:195] Run: systemctl --version
	I0912 23:01:20.087300   61904 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0912 23:01:20.235085   61904 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0912 23:01:20.240843   61904 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0912 23:01:20.240922   61904 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0912 23:01:20.256317   61904 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0912 23:01:20.256341   61904 start.go:495] detecting cgroup driver to use...
	I0912 23:01:20.256411   61904 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0912 23:01:20.271684   61904 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0912 23:01:20.285491   61904 docker.go:217] disabling cri-docker service (if available) ...
	I0912 23:01:20.285562   61904 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0912 23:01:20.298889   61904 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0912 23:01:20.314455   61904 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0912 23:01:20.438483   61904 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0912 23:01:20.594684   61904 docker.go:233] disabling docker service ...
	I0912 23:01:20.594761   61904 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0912 23:01:20.609090   61904 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0912 23:01:20.624440   61904 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0912 23:01:20.747699   61904 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0912 23:01:20.899726   61904 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
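Before switching the runtime to CRI-O, the driver silences the cri-docker and docker units with the stop/disable/mask sequence shown above. A compact Go sketch of the same sequence driven through os/exec (unit names taken from the log; errors are only printed, since absent or already-masked units are expected here):

package main

import (
	"fmt"
	"os/exec"
)

// silenceUnit runs the stop/disable/mask sequence for one systemd unit.
func silenceUnit(unit string) {
	cmds := [][]string{
		{"systemctl", "stop", "-f", unit},
		{"systemctl", "disable", unit},
		{"systemctl", "mask", unit},
	}
	for _, c := range cmds {
		out, err := exec.Command("sudo", c...).CombinedOutput()
		fmt.Printf("%v: err=%v out=%s\n", c, err, out)
	}
}

func main() {
	for _, unit := range []string{"cri-docker.socket", "cri-docker.service", "docker.socket", "docker.service"} {
		silenceUnit(unit)
	}
}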
	I0912 23:01:20.914107   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0912 23:01:20.933523   61904 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0912 23:01:20.933599   61904 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:20.946067   61904 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0912 23:01:20.946129   61904 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:20.957575   61904 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:20.968759   61904 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:20.980280   61904 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0912 23:01:20.991281   61904 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:21.002926   61904 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:21.021743   61904 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:21.032256   61904 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0912 23:01:21.041783   61904 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0912 23:01:21.041853   61904 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0912 23:01:21.054605   61904 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0912 23:01:21.064411   61904 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 23:01:21.198195   61904 ssh_runner.go:195] Run: sudo systemctl restart crio
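The sed calls above edit /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image to registry.k8s.io/pause:3.10, force cgroup_manager to cgroupfs, and open net.ipv4.ip_unprivileged_port_start, then restart crio. A rough Go equivalent of the two key rewrites, run against an in-memory string rather than the real config file (sketch only):

package main

import (
	"fmt"
	"regexp"
)

// patchCrioConf applies the same kind of line rewrites the sed commands above
// perform: pin the pause image and force the cgroupfs cgroup manager.
func patchCrioConf(conf string) string {
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	return conf
}

func main() {
	in := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n"
	fmt.Print(patchCrioConf(in))
}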
	I0912 23:01:21.289923   61904 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0912 23:01:21.290018   61904 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0912 23:01:21.294505   61904 start.go:563] Will wait 60s for crictl version
	I0912 23:01:21.294572   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:01:21.297928   61904 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0912 23:01:21.335650   61904 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0912 23:01:21.335734   61904 ssh_runner.go:195] Run: crio --version
	I0912 23:01:21.364876   61904 ssh_runner.go:195] Run: crio --version
	I0912 23:01:21.395463   61904 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0912 23:01:19.983746   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .Start
	I0912 23:01:19.983971   62386 main.go:141] libmachine: (old-k8s-version-642238) Ensuring networks are active...
	I0912 23:01:19.984890   62386 main.go:141] libmachine: (old-k8s-version-642238) Ensuring network default is active
	I0912 23:01:19.985345   62386 main.go:141] libmachine: (old-k8s-version-642238) Ensuring network mk-old-k8s-version-642238 is active
	I0912 23:01:19.985788   62386 main.go:141] libmachine: (old-k8s-version-642238) Getting domain xml...
	I0912 23:01:19.986827   62386 main.go:141] libmachine: (old-k8s-version-642238) Creating domain...
	I0912 23:01:21.258792   62386 main.go:141] libmachine: (old-k8s-version-642238) Waiting to get IP...
	I0912 23:01:21.259838   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:21.260300   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 23:01:21.260434   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 23:01:21.260300   63267 retry.go:31] will retry after 272.429869ms: waiting for machine to come up
	I0912 23:01:21.534713   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:21.535102   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 23:01:21.535131   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 23:01:21.535060   63267 retry.go:31] will retry after 352.031053ms: waiting for machine to come up
	I0912 23:01:21.888724   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:21.889235   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 23:01:21.889260   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 23:01:21.889212   63267 retry.go:31] will retry after 405.51409ms: waiting for machine to come up
	I0912 23:01:22.296746   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:22.297242   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 23:01:22.297286   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 23:01:22.297190   63267 retry.go:31] will retry after 607.76308ms: waiting for machine to come up
	I0912 23:01:22.907030   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:22.907784   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 23:01:22.907824   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 23:01:22.907659   63267 retry.go:31] will retry after 692.773261ms: waiting for machine to come up
	I0912 23:01:23.602242   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:23.602679   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 23:01:23.602701   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 23:01:23.602642   63267 retry.go:31] will retry after 591.018151ms: waiting for machine to come up
	I0912 23:01:24.195571   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:24.196100   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 23:01:24.196130   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 23:01:24.196046   63267 retry.go:31] will retry after 1.185264475s: waiting for machine to come up
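The interleaved old-k8s-version-642238 lines show the retry.go pattern minikube leans on while a VM acquires its DHCP lease: call, fail, sleep a randomised and slowly growing interval, try again. A self-contained Go sketch of that loop; the interval choices below are illustrative, not the library's exact backoff:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff keeps calling fn, sleeping a jittered, growing interval
// between attempts, until it succeeds or attempts run out.
func retryWithBackoff(attempts int, fn func() error) error {
	wait := 250 * time.Millisecond
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		jitter := time.Duration(rand.Int63n(int64(wait) / 2))
		fmt.Printf("will retry after %v: %v\n", wait+jitter, err)
		time.Sleep(wait + jitter)
		wait += wait / 2
	}
	return err
}

func main() {
	calls := 0
	err := retryWithBackoff(10, func() error {
		calls++
		if calls < 4 {
			return errors.New("waiting for machine to come up")
		}
		return nil
	})
	fmt.Println("done:", err)
}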
	I0912 23:01:21.396852   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetIP
	I0912 23:01:21.400018   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:21.400456   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:21.400488   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:21.400730   61904 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0912 23:01:21.404606   61904 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0912 23:01:21.416408   61904 kubeadm.go:883] updating cluster {Name:embed-certs-378112 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.1 ClusterName:embed-certs-378112 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.96 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0912 23:01:21.416529   61904 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0912 23:01:21.416571   61904 ssh_runner.go:195] Run: sudo crictl images --output json
	I0912 23:01:21.449799   61904 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0912 23:01:21.449860   61904 ssh_runner.go:195] Run: which lz4
	I0912 23:01:21.453658   61904 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0912 23:01:21.457641   61904 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0912 23:01:21.457676   61904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0912 23:01:22.735022   61904 crio.go:462] duration metric: took 1.281408113s to copy over tarball
	I0912 23:01:22.735128   61904 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0912 23:01:24.783893   61904 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.048732092s)
	I0912 23:01:24.783935   61904 crio.go:469] duration metric: took 2.048876223s to extract the tarball
	I0912 23:01:24.783945   61904 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0912 23:01:24.820170   61904 ssh_runner.go:195] Run: sudo crictl images --output json
	I0912 23:01:24.866833   61904 crio.go:514] all images are preloaded for cri-o runtime.
	I0912 23:01:24.866861   61904 cache_images.go:84] Images are preloaded, skipping loading
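Both preload checks above (crio.go:510 before the tarball copy and crio.go:514 after it) parse the output of "sudo crictl images --output json" and look for required tags such as registry.k8s.io/kube-apiserver:v1.31.1. A small Go sketch of that lookup; the JSON field names are assumed from crictl's usual output shape, not taken from minikube's source:

package main

import (
	"encoding/json"
	"fmt"
)

// crictlImages models the assumed shape of `crictl images --output json`.
type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasImage reports whether any listed image carries the wanted tag.
func hasImage(raw []byte, want string) (bool, error) {
	var out crictlImages
	if err := json.Unmarshal(raw, &out); err != nil {
		return false, err
	}
	for _, img := range out.Images {
		for _, tag := range img.RepoTags {
			if tag == want {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	raw := []byte(`{"images":[{"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"]}]}`)
	ok, _ := hasImage(raw, "registry.k8s.io/kube-apiserver:v1.31.1")
	fmt.Println("preloaded:", ok)
}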
	I0912 23:01:24.866870   61904 kubeadm.go:934] updating node { 192.168.72.96 8443 v1.31.1 crio true true} ...
	I0912 23:01:24.866990   61904 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-378112 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.96
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-378112 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0912 23:01:24.867073   61904 ssh_runner.go:195] Run: crio config
	I0912 23:01:24.912893   61904 cni.go:84] Creating CNI manager for ""
	I0912 23:01:24.912924   61904 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0912 23:01:24.912940   61904 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0912 23:01:24.912967   61904 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.96 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-378112 NodeName:embed-certs-378112 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.96"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.96 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0912 23:01:24.913155   61904 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.96
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-378112"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.96
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.96"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0912 23:01:24.913230   61904 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0912 23:01:24.922946   61904 binaries.go:44] Found k8s binaries, skipping transfer
	I0912 23:01:24.923013   61904 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0912 23:01:24.932931   61904 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0912 23:01:24.949482   61904 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0912 23:01:24.965877   61904 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
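The kubeadm config printed above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that has just been written to /var/tmp/minikube/kubeadm.yaml.new. A quick way to sanity-check such a stream before handing it to kubeadm, assuming gopkg.in/yaml.v3 is available, is to decode each document and print its apiVersion and kind:

package main

import (
	"fmt"
	"strings"

	"gopkg.in/yaml.v3"
)

func main() {
	// Trimmed two-document stand-in for the stream shown in the log.
	const doc = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: cgroupfs
`
	dec := yaml.NewDecoder(strings.NewReader(doc))
	for {
		var m map[string]interface{}
		if err := dec.Decode(&m); err != nil {
			break // io.EOF ends the stream
		}
		fmt.Printf("%v / %v\n", m["apiVersion"], m["kind"])
	}
}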
	I0912 23:01:24.983125   61904 ssh_runner.go:195] Run: grep 192.168.72.96	control-plane.minikube.internal$ /etc/hosts
	I0912 23:01:24.987056   61904 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.96	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0912 23:01:24.998939   61904 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 23:01:25.113496   61904 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0912 23:01:25.129703   61904 certs.go:68] Setting up /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/embed-certs-378112 for IP: 192.168.72.96
	I0912 23:01:25.129726   61904 certs.go:194] generating shared ca certs ...
	I0912 23:01:25.129741   61904 certs.go:226] acquiring lock for ca certs: {Name:mk5e19f2c2757edc3fcb6b43a39efee0885b349c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 23:01:25.129971   61904 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.key
	I0912 23:01:25.130086   61904 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.key
	I0912 23:01:25.130110   61904 certs.go:256] generating profile certs ...
	I0912 23:01:25.130237   61904 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/embed-certs-378112/client.key
	I0912 23:01:25.130340   61904 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/embed-certs-378112/apiserver.key.dbbe0c1f
	I0912 23:01:25.130407   61904 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/embed-certs-378112/proxy-client.key
	I0912 23:01:25.130579   61904 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083.pem (1338 bytes)
	W0912 23:01:25.130626   61904 certs.go:480] ignoring /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083_empty.pem, impossibly tiny 0 bytes
	I0912 23:01:25.130651   61904 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem (1679 bytes)
	I0912 23:01:25.130703   61904 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem (1082 bytes)
	I0912 23:01:25.130745   61904 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem (1123 bytes)
	I0912 23:01:25.130792   61904 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem (1679 bytes)
	I0912 23:01:25.130860   61904 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem (1708 bytes)
	I0912 23:01:25.131603   61904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0912 23:01:25.176163   61904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0912 23:01:25.220174   61904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0912 23:01:25.265831   61904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0912 23:01:25.296965   61904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/embed-certs-378112/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0912 23:01:25.321038   61904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/embed-certs-378112/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0912 23:01:25.345231   61904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/embed-certs-378112/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0912 23:01:25.369171   61904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/embed-certs-378112/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0912 23:01:25.394204   61904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083.pem --> /usr/share/ca-certificates/13083.pem (1338 bytes)
	I0912 23:01:25.417915   61904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem --> /usr/share/ca-certificates/130832.pem (1708 bytes)
	I0912 23:01:25.442303   61904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0912 23:01:25.465565   61904 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0912 23:01:25.482722   61904 ssh_runner.go:195] Run: openssl version
	I0912 23:01:25.488448   61904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130832.pem && ln -fs /usr/share/ca-certificates/130832.pem /etc/ssl/certs/130832.pem"
	I0912 23:01:25.499394   61904 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130832.pem
	I0912 23:01:25.503818   61904 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 12 21:47 /usr/share/ca-certificates/130832.pem
	I0912 23:01:25.503891   61904 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130832.pem
	I0912 23:01:25.509382   61904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130832.pem /etc/ssl/certs/3ec20f2e.0"
	I0912 23:01:25.519646   61904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0912 23:01:25.530205   61904 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0912 23:01:25.534926   61904 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 12 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0912 23:01:25.534995   61904 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0912 23:01:25.540498   61904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0912 23:01:25.551236   61904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13083.pem && ln -fs /usr/share/ca-certificates/13083.pem /etc/ssl/certs/13083.pem"
	I0912 23:01:25.561851   61904 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13083.pem
	I0912 23:01:25.566492   61904 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 12 21:47 /usr/share/ca-certificates/13083.pem
	I0912 23:01:25.566560   61904 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13083.pem
	I0912 23:01:25.572221   61904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13083.pem /etc/ssl/certs/51391683.0"
	I0912 23:01:25.582775   61904 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0912 23:01:25.587274   61904 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0912 23:01:25.593126   61904 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0912 23:01:25.598929   61904 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0912 23:01:25.604590   61904 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0912 23:01:25.610344   61904 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0912 23:01:25.615931   61904 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
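The openssl x509 -checkend 86400 calls above ask whether each control-plane certificate expires within the next 24 hours. The same check expressed in Go with crypto/x509; the path below is one of the certs from the log and the 24h window mirrors the -checkend argument:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in a PEM file expires
// inside the given window, matching `openssl x509 -checkend`.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}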
	I0912 23:01:25.621575   61904 kubeadm.go:392] StartCluster: {Name:embed-certs-378112 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.1 ClusterName:embed-certs-378112 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.96 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 23:01:25.621708   61904 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0912 23:01:25.621771   61904 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0912 23:01:25.659165   61904 cri.go:89] found id: ""
	I0912 23:01:25.659225   61904 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0912 23:01:25.670718   61904 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0912 23:01:25.670740   61904 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0912 23:01:25.670812   61904 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0912 23:01:25.680672   61904 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0912 23:01:25.681705   61904 kubeconfig.go:125] found "embed-certs-378112" server: "https://192.168.72.96:8443"
	I0912 23:01:25.683693   61904 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0912 23:01:25.693765   61904 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.96
	I0912 23:01:25.693795   61904 kubeadm.go:1160] stopping kube-system containers ...
	I0912 23:01:25.693805   61904 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0912 23:01:25.693874   61904 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0912 23:01:25.728800   61904 cri.go:89] found id: ""
	I0912 23:01:25.728879   61904 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0912 23:01:25.744949   61904 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0912 23:01:25.754735   61904 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0912 23:01:25.754756   61904 kubeadm.go:157] found existing configuration files:
	
	I0912 23:01:25.754820   61904 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0912 23:01:25.763678   61904 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0912 23:01:25.763740   61904 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0912 23:01:25.772744   61904 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0912 23:01:25.383446   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:25.383892   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 23:01:25.383912   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 23:01:25.383847   63267 retry.go:31] will retry after 1.399744787s: waiting for machine to come up
	I0912 23:01:26.785939   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:26.786489   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 23:01:26.786520   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 23:01:26.786425   63267 retry.go:31] will retry after 1.336566382s: waiting for machine to come up
	I0912 23:01:28.124647   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:28.125141   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 23:01:28.125172   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 23:01:28.125087   63267 retry.go:31] will retry after 1.527292388s: waiting for machine to come up
	I0912 23:01:25.782080   61904 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0912 23:01:25.782143   61904 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0912 23:01:25.791585   61904 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0912 23:01:25.801238   61904 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0912 23:01:25.801315   61904 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0912 23:01:25.810819   61904 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0912 23:01:25.819786   61904 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0912 23:01:25.819888   61904 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0912 23:01:25.829135   61904 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0912 23:01:25.838572   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:01:25.944339   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:01:26.566348   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:01:26.771125   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:01:26.859227   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:01:26.946762   61904 api_server.go:52] waiting for apiserver process to appear ...
	I0912 23:01:26.946884   61904 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:27.447964   61904 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:27.947775   61904 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:28.447415   61904 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:28.947184   61904 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:28.963513   61904 api_server.go:72] duration metric: took 2.016750981s to wait for apiserver process to appear ...
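Once the kube-apiserver process exists, the log switches to polling https://192.168.72.96:8443/healthz until it answers 200, riding out the "connection refused", 403 and 500 responses that follow. A rough Go sketch of that polling loop; the InsecureSkipVerify transport is a shortcut for the sketch, where minikube would trust the cluster CA instead:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200
// or the deadline passes, roughly what the api_server.go lines here are doing.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("healthz at %s not ready after %v", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.96:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}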
	I0912 23:01:28.963554   61904 api_server.go:88] waiting for apiserver healthz status ...
	I0912 23:01:28.963577   61904 api_server.go:253] Checking apiserver healthz at https://192.168.72.96:8443/healthz ...
	I0912 23:01:28.964155   61904 api_server.go:269] stopped: https://192.168.72.96:8443/healthz: Get "https://192.168.72.96:8443/healthz": dial tcp 192.168.72.96:8443: connect: connection refused
	I0912 23:01:29.463718   61904 api_server.go:253] Checking apiserver healthz at https://192.168.72.96:8443/healthz ...
	I0912 23:01:31.369513   61904 api_server.go:279] https://192.168.72.96:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0912 23:01:31.369555   61904 api_server.go:103] status: https://192.168.72.96:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0912 23:01:31.369571   61904 api_server.go:253] Checking apiserver healthz at https://192.168.72.96:8443/healthz ...
	I0912 23:01:31.423901   61904 api_server.go:279] https://192.168.72.96:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0912 23:01:31.423936   61904 api_server.go:103] status: https://192.168.72.96:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0912 23:01:31.464148   61904 api_server.go:253] Checking apiserver healthz at https://192.168.72.96:8443/healthz ...
	I0912 23:01:31.469495   61904 api_server.go:279] https://192.168.72.96:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0912 23:01:31.469522   61904 api_server.go:103] status: https://192.168.72.96:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0912 23:01:31.963894   61904 api_server.go:253] Checking apiserver healthz at https://192.168.72.96:8443/healthz ...
	I0912 23:01:31.972640   61904 api_server.go:279] https://192.168.72.96:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0912 23:01:31.972671   61904 api_server.go:103] status: https://192.168.72.96:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0912 23:01:32.463809   61904 api_server.go:253] Checking apiserver healthz at https://192.168.72.96:8443/healthz ...
	I0912 23:01:32.475603   61904 api_server.go:279] https://192.168.72.96:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0912 23:01:32.475640   61904 api_server.go:103] status: https://192.168.72.96:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0912 23:01:32.964250   61904 api_server.go:253] Checking apiserver healthz at https://192.168.72.96:8443/healthz ...
	I0912 23:01:32.968710   61904 api_server.go:279] https://192.168.72.96:8443/healthz returned 200:
	ok
	I0912 23:01:32.975414   61904 api_server.go:141] control plane version: v1.31.1
	I0912 23:01:32.975442   61904 api_server.go:131] duration metric: took 4.011879751s to wait for apiserver health ...
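The progression above is typical of an apiserver coming back up: first connection refused while the container starts, then 403 because the unauthenticated probe hits RBAC before the bootstrap roles exist, then 500 while the rbac/bootstrap-roles and scheduling post-start hooks finish, and finally 200 "ok". minikube simply re-polls /healthz roughly every 500ms until it gets 200. A sketch of that loop, assuming TLS verification is skipped for brevity (the real client trusts the cluster CA):

    // healthwait.go — illustrative poll loop; the endpoint is taken from the log above.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        for {
            resp, err := client.Get("https://192.168.72.96:8443/healthz")
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("healthz:", string(body)) // "ok"
                    return
                }
                // 403 (anonymous user) and 500 (post-start hooks still running) both mean "not ready yet"
                fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }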
	I0912 23:01:32.975451   61904 cni.go:84] Creating CNI manager for ""
	I0912 23:01:32.975456   61904 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0912 23:01:32.977249   61904 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0912 23:01:29.654841   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:29.655236   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 23:01:29.655264   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 23:01:29.655183   63267 retry.go:31] will retry after 2.34568858s: waiting for machine to come up
	I0912 23:01:32.002617   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:32.003211   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 23:01:32.003242   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 23:01:32.003150   63267 retry.go:31] will retry after 2.273120763s: waiting for machine to come up
	I0912 23:01:34.279665   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:34.280098   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | unable to find current IP address of domain old-k8s-version-642238 in network mk-old-k8s-version-642238
	I0912 23:01:34.280122   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | I0912 23:01:34.280064   63267 retry.go:31] will retry after 3.937702941s: waiting for machine to come up
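Interleaved with the embed-certs restart, process 62386 is still waiting for the old-k8s-version-642238 VM to obtain an IP: libmachine re-checks the libvirt DHCP leases for the domain's MAC and, when none is found, sleeps for a growing, jittered interval (retry.go). A generic sketch of that pattern, with lookupIP as a hypothetical stand-in for the lease lookup:

    // ipwait.go — sketch of the "will retry after ..." backoff seen above.
    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // lookupIP is hypothetical; the real code scans libvirt's DHCP leases for the domain's MAC address.
    func lookupIP() (string, error) { return "", errors.New("no lease yet") }

    func main() {
        delay := time.Second
        for attempt := 0; attempt < 10; attempt++ {
            if ip, err := lookupIP(); err == nil {
                fmt.Println("machine IP:", ip)
                return
            }
            // grow the delay and add jitter, roughly the shape of retry.go's intervals
            sleep := delay + time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("will retry after %s: waiting for machine to come up\n", sleep)
            time.Sleep(sleep)
            delay += delay / 2
        }
        fmt.Println("gave up waiting for machine IP")
    }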
	I0912 23:01:32.978610   61904 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0912 23:01:32.994079   61904 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
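"Configuring bridge CNI" amounts to creating /etc/cni/net.d and writing one conflist (the 496-byte 1-k8s.conflist copied above). Its exact contents are not shown in the log; the snippet below writes a generic bridge-plus-portmap conflist of the same shape, purely as an illustration of what such a file looks like:

    // writecni.go — illustrative only; the real 1-k8s.conflist shipped by minikube may differ in detail.
    package main

    import "os"

    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
        },
        {"type": "portmap", "capabilities": {"portMappings": true}}
      ]
    }`

    func main() {
        if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
            panic(err)
        }
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
            panic(err)
        }
    }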
	I0912 23:01:33.042253   61904 system_pods.go:43] waiting for kube-system pods to appear ...
	I0912 23:01:33.052323   61904 system_pods.go:59] 8 kube-system pods found
	I0912 23:01:33.052361   61904 system_pods.go:61] "coredns-7c65d6cfc9-m8t6h" [93c63198-ebd2-4e88-9be8-912425b1eb84] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0912 23:01:33.052369   61904 system_pods.go:61] "etcd-embed-certs-378112" [cc716756-abda-447a-ad36-bfc89c129bdf] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0912 23:01:33.052376   61904 system_pods.go:61] "kube-apiserver-embed-certs-378112" [039a7348-41bf-481f-9218-3ea0c2ff1373] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0912 23:01:33.052387   61904 system_pods.go:61] "kube-controller-manager-embed-certs-378112" [9bcb8af0-6e4b-405a-94a1-5be70d737cfa] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0912 23:01:33.052396   61904 system_pods.go:61] "kube-proxy-fvbbq" [b172754e-bb5a-40ba-a9be-a7632081defc] Running
	I0912 23:01:33.052406   61904 system_pods.go:61] "kube-scheduler-embed-certs-378112" [f7cb022f-6c15-4c70-916f-39313199effe] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0912 23:01:33.052418   61904 system_pods.go:61] "metrics-server-6867b74b74-kvpqz" [04e47cfd-bada-4cbd-8792-db4edebfb282] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0912 23:01:33.052426   61904 system_pods.go:61] "storage-provisioner" [a1840d2a-8e08-4fa2-9ed5-ac96fb0baf4d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0912 23:01:33.052438   61904 system_pods.go:74] duration metric: took 10.162234ms to wait for pod list to return data ...
	I0912 23:01:33.052448   61904 node_conditions.go:102] verifying NodePressure condition ...
	I0912 23:01:33.060217   61904 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0912 23:01:33.060263   61904 node_conditions.go:123] node cpu capacity is 2
	I0912 23:01:33.060284   61904 node_conditions.go:105] duration metric: took 7.831444ms to run NodePressure ...
	I0912 23:01:33.060338   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:01:33.331554   61904 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0912 23:01:33.337181   61904 kubeadm.go:739] kubelet initialised
	I0912 23:01:33.337202   61904 kubeadm.go:740] duration metric: took 5.622367ms waiting for restarted kubelet to initialise ...
	I0912 23:01:33.337209   61904 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 23:01:33.342427   61904 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-m8t6h" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:33.346602   61904 pod_ready.go:98] node "embed-certs-378112" hosting pod "coredns-7c65d6cfc9-m8t6h" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-378112" has status "Ready":"False"
	I0912 23:01:33.346624   61904 pod_ready.go:82] duration metric: took 4.167981ms for pod "coredns-7c65d6cfc9-m8t6h" in "kube-system" namespace to be "Ready" ...
	E0912 23:01:33.346635   61904 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-378112" hosting pod "coredns-7c65d6cfc9-m8t6h" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-378112" has status "Ready":"False"
	I0912 23:01:33.346643   61904 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-378112" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:33.350240   61904 pod_ready.go:98] node "embed-certs-378112" hosting pod "etcd-embed-certs-378112" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-378112" has status "Ready":"False"
	I0912 23:01:33.350258   61904 pod_ready.go:82] duration metric: took 3.605305ms for pod "etcd-embed-certs-378112" in "kube-system" namespace to be "Ready" ...
	E0912 23:01:33.350267   61904 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-378112" hosting pod "etcd-embed-certs-378112" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-378112" has status "Ready":"False"
	I0912 23:01:33.350274   61904 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-378112" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:33.353756   61904 pod_ready.go:98] node "embed-certs-378112" hosting pod "kube-apiserver-embed-certs-378112" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-378112" has status "Ready":"False"
	I0912 23:01:33.353775   61904 pod_ready.go:82] duration metric: took 3.492388ms for pod "kube-apiserver-embed-certs-378112" in "kube-system" namespace to be "Ready" ...
	E0912 23:01:33.353785   61904 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-378112" hosting pod "kube-apiserver-embed-certs-378112" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-378112" has status "Ready":"False"
	I0912 23:01:33.353792   61904 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-378112" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:33.445529   61904 pod_ready.go:98] node "embed-certs-378112" hosting pod "kube-controller-manager-embed-certs-378112" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-378112" has status "Ready":"False"
	I0912 23:01:33.445574   61904 pod_ready.go:82] duration metric: took 91.770466ms for pod "kube-controller-manager-embed-certs-378112" in "kube-system" namespace to be "Ready" ...
	E0912 23:01:33.445588   61904 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-378112" hosting pod "kube-controller-manager-embed-certs-378112" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-378112" has status "Ready":"False"
	I0912 23:01:33.445597   61904 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-fvbbq" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:33.845443   61904 pod_ready.go:98] node "embed-certs-378112" hosting pod "kube-proxy-fvbbq" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-378112" has status "Ready":"False"
	I0912 23:01:33.845470   61904 pod_ready.go:82] duration metric: took 399.864816ms for pod "kube-proxy-fvbbq" in "kube-system" namespace to be "Ready" ...
	E0912 23:01:33.845479   61904 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-378112" hosting pod "kube-proxy-fvbbq" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-378112" has status "Ready":"False"
	I0912 23:01:33.845484   61904 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-378112" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:34.245943   61904 pod_ready.go:98] node "embed-certs-378112" hosting pod "kube-scheduler-embed-certs-378112" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-378112" has status "Ready":"False"
	I0912 23:01:34.245969   61904 pod_ready.go:82] duration metric: took 400.478543ms for pod "kube-scheduler-embed-certs-378112" in "kube-system" namespace to be "Ready" ...
	E0912 23:01:34.245979   61904 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-378112" hosting pod "kube-scheduler-embed-certs-378112" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-378112" has status "Ready":"False"
	I0912 23:01:34.245985   61904 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:34.651801   61904 pod_ready.go:98] node "embed-certs-378112" hosting pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-378112" has status "Ready":"False"
	I0912 23:01:34.651826   61904 pod_ready.go:82] duration metric: took 405.832705ms for pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace to be "Ready" ...
	E0912 23:01:34.651836   61904 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-378112" hosting pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-378112" has status "Ready":"False"
	I0912 23:01:34.651843   61904 pod_ready.go:39] duration metric: took 1.314625851s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
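Every pod check above short-circuits for the same reason: the node itself still reports Ready=False right after the kubelet restart, so pod_ready records each pod as "skipping" instead of blocking on it. The node Ready condition is what gates the whole wait; a small client-go sketch of that check (kubeconfig path and node name taken from the log, everything else illustrative):

    // nodeready.go — illustrative check of the node Ready condition that gates the pod waits above.
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func nodeReady(n *corev1.Node) bool {
        for _, c := range n.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        node, err := cs.CoreV1().Nodes().Get(context.TODO(), "embed-certs-378112", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println("node Ready:", nodeReady(node)) // false while the kubelet is still coming back
    }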
	I0912 23:01:34.651859   61904 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0912 23:01:34.665332   61904 ops.go:34] apiserver oom_adj: -16
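The oom_adj probe is a quick sanity check that the restarted kube-apiserver carries a protective OOM score adjustment (-16 here), so the kernel will prefer to kill almost anything else first. A small sketch of the same read:

    // oomcheck.go — sketch of the /proc/<pid>/oom_adj read shown above (pgrep -n takes the newest match).
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        pid, err := exec.Command("pgrep", "-n", "kube-apiserver").Output()
        if err != nil {
            panic(err)
        }
        adj, err := os.ReadFile("/proc/" + strings.TrimSpace(string(pid)) + "/oom_adj")
        if err != nil {
            panic(err)
        }
        fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(adj)))
    }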
	I0912 23:01:34.665357   61904 kubeadm.go:597] duration metric: took 8.994610882s to restartPrimaryControlPlane
	I0912 23:01:34.665366   61904 kubeadm.go:394] duration metric: took 9.043796768s to StartCluster
	I0912 23:01:34.665381   61904 settings.go:142] acquiring lock: {Name:mk9c957feafb8d7ccd833ad0c106ef81ecfe5ba8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 23:01:34.665454   61904 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19616-5891/kubeconfig
	I0912 23:01:34.667036   61904 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/kubeconfig: {Name:mkffb46c3e9d2b8baebc7237b48bf41bccf1a52c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 23:01:34.667262   61904 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.96 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0912 23:01:34.667363   61904 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0912 23:01:34.667450   61904 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-378112"
	I0912 23:01:34.667468   61904 config.go:182] Loaded profile config "embed-certs-378112": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 23:01:34.667476   61904 addons.go:69] Setting default-storageclass=true in profile "embed-certs-378112"
	I0912 23:01:34.667543   61904 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-378112"
	I0912 23:01:34.667520   61904 addons.go:69] Setting metrics-server=true in profile "embed-certs-378112"
	I0912 23:01:34.667609   61904 addons.go:234] Setting addon metrics-server=true in "embed-certs-378112"
	W0912 23:01:34.667624   61904 addons.go:243] addon metrics-server should already be in state true
	I0912 23:01:34.667661   61904 host.go:66] Checking if "embed-certs-378112" exists ...
	I0912 23:01:34.667490   61904 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-378112"
	W0912 23:01:34.667710   61904 addons.go:243] addon storage-provisioner should already be in state true
	I0912 23:01:34.667778   61904 host.go:66] Checking if "embed-certs-378112" exists ...
	I0912 23:01:34.667994   61904 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:01:34.668049   61904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:01:34.668138   61904 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:01:34.668155   61904 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:01:34.668171   61904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:01:34.668180   61904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:01:34.670091   61904 out.go:177] * Verifying Kubernetes components...
	I0912 23:01:34.671777   61904 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 23:01:34.683876   61904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37413
	I0912 23:01:34.684025   61904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37371
	I0912 23:01:34.684434   61904 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:01:34.684541   61904 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:01:34.684995   61904 main.go:141] libmachine: Using API Version  1
	I0912 23:01:34.685014   61904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:01:34.685118   61904 main.go:141] libmachine: Using API Version  1
	I0912 23:01:34.685140   61904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:01:34.685468   61904 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:01:34.685468   61904 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:01:34.685668   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetState
	I0912 23:01:34.686104   61904 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:01:34.686156   61904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:01:34.688211   61904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39067
	I0912 23:01:34.688607   61904 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:01:34.689047   61904 addons.go:234] Setting addon default-storageclass=true in "embed-certs-378112"
	W0912 23:01:34.689066   61904 addons.go:243] addon default-storageclass should already be in state true
	I0912 23:01:34.689091   61904 host.go:66] Checking if "embed-certs-378112" exists ...
	I0912 23:01:34.689116   61904 main.go:141] libmachine: Using API Version  1
	I0912 23:01:34.689146   61904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:01:34.689478   61904 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:01:34.689501   61904 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:01:34.689511   61904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:01:34.690057   61904 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:01:34.690083   61904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:01:34.702965   61904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40825
	I0912 23:01:34.703535   61904 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:01:34.704131   61904 main.go:141] libmachine: Using API Version  1
	I0912 23:01:34.704151   61904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:01:34.704178   61904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39229
	I0912 23:01:34.704481   61904 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:01:34.704684   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetState
	I0912 23:01:34.704684   61904 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:01:34.705101   61904 main.go:141] libmachine: Using API Version  1
	I0912 23:01:34.705122   61904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:01:34.705413   61904 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:01:34.705561   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetState
	I0912 23:01:34.706872   61904 main.go:141] libmachine: (embed-certs-378112) Calling .DriverName
	I0912 23:01:34.707279   61904 main.go:141] libmachine: (embed-certs-378112) Calling .DriverName
	I0912 23:01:34.708583   61904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36665
	I0912 23:01:34.708752   61904 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 23:01:34.708828   61904 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0912 23:01:34.708966   61904 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:01:34.709420   61904 main.go:141] libmachine: Using API Version  1
	I0912 23:01:34.709442   61904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:01:34.709901   61904 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:01:34.710348   61904 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:01:34.710352   61904 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0912 23:01:34.710368   61904 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0912 23:01:34.710382   61904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:01:34.710397   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHHostname
	I0912 23:01:34.710705   61904 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0912 23:01:34.713777   61904 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0912 23:01:34.713809   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHHostname
	I0912 23:01:34.717857   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:34.718160   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:34.718335   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:34.718358   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:34.718442   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:34.718473   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:34.718651   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHPort
	I0912 23:01:34.718727   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHPort
	I0912 23:01:34.718812   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHKeyPath
	I0912 23:01:34.718866   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHKeyPath
	I0912 23:01:34.718988   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHUsername
	I0912 23:01:34.719039   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHUsername
	I0912 23:01:34.719144   61904 sshutil.go:53] new ssh client: &{IP:192.168.72.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/embed-certs-378112/id_rsa Username:docker}
	I0912 23:01:34.719169   61904 sshutil.go:53] new ssh client: &{IP:192.168.72.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/embed-certs-378112/id_rsa Username:docker}
	I0912 23:01:34.730675   61904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39163
	I0912 23:01:34.731210   61904 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:01:34.731901   61904 main.go:141] libmachine: Using API Version  1
	I0912 23:01:34.731934   61904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:01:34.732317   61904 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:01:34.732493   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetState
	I0912 23:01:34.734338   61904 main.go:141] libmachine: (embed-certs-378112) Calling .DriverName
	I0912 23:01:34.734601   61904 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0912 23:01:34.734615   61904 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0912 23:01:34.734637   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHHostname
	I0912 23:01:34.737958   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:34.738401   61904 main.go:141] libmachine: (embed-certs-378112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:b2:49", ip: ""} in network mk-embed-certs-378112: {Iface:virbr4 ExpiryTime:2024-09-12 23:53:21 +0000 UTC Type:0 Mac:52:54:00:71:b2:49 Iaid: IPaddr:192.168.72.96 Prefix:24 Hostname:embed-certs-378112 Clientid:01:52:54:00:71:b2:49}
	I0912 23:01:34.738429   61904 main.go:141] libmachine: (embed-certs-378112) DBG | domain embed-certs-378112 has defined IP address 192.168.72.96 and MAC address 52:54:00:71:b2:49 in network mk-embed-certs-378112
	I0912 23:01:34.738637   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHPort
	I0912 23:01:34.738823   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHKeyPath
	I0912 23:01:34.739015   61904 main.go:141] libmachine: (embed-certs-378112) Calling .GetSSHUsername
	I0912 23:01:34.739166   61904 sshutil.go:53] new ssh client: &{IP:192.168.72.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/embed-certs-378112/id_rsa Username:docker}
	I0912 23:01:34.873510   61904 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0912 23:01:34.891329   61904 node_ready.go:35] waiting up to 6m0s for node "embed-certs-378112" to be "Ready" ...
	I0912 23:01:34.991135   61904 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0912 23:01:34.991169   61904 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0912 23:01:35.007241   61904 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0912 23:01:35.018684   61904 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0912 23:01:35.018712   61904 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0912 23:01:35.028842   61904 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0912 23:01:35.047693   61904 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0912 23:01:35.047720   61904 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0912 23:01:35.101399   61904 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0912 23:01:36.046822   61904 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.03953394s)
	I0912 23:01:36.046851   61904 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.017977641s)
	I0912 23:01:36.046882   61904 main.go:141] libmachine: Making call to close driver server
	I0912 23:01:36.046889   61904 main.go:141] libmachine: Making call to close driver server
	I0912 23:01:36.046900   61904 main.go:141] libmachine: (embed-certs-378112) Calling .Close
	I0912 23:01:36.046901   61904 main.go:141] libmachine: (embed-certs-378112) Calling .Close
	I0912 23:01:36.047207   61904 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:01:36.047221   61904 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:01:36.047230   61904 main.go:141] libmachine: Making call to close driver server
	I0912 23:01:36.047237   61904 main.go:141] libmachine: (embed-certs-378112) Calling .Close
	I0912 23:01:36.047269   61904 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:01:36.047280   61904 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:01:36.047312   61904 main.go:141] libmachine: Making call to close driver server
	I0912 23:01:36.047378   61904 main.go:141] libmachine: (embed-certs-378112) Calling .Close
	I0912 23:01:36.047577   61904 main.go:141] libmachine: (embed-certs-378112) DBG | Closing plugin on server side
	I0912 23:01:36.047624   61904 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:01:36.047637   61904 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:01:36.047639   61904 main.go:141] libmachine: (embed-certs-378112) DBG | Closing plugin on server side
	I0912 23:01:36.047691   61904 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:01:36.047705   61904 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:01:36.055732   61904 main.go:141] libmachine: Making call to close driver server
	I0912 23:01:36.055751   61904 main.go:141] libmachine: (embed-certs-378112) Calling .Close
	I0912 23:01:36.056018   61904 main.go:141] libmachine: (embed-certs-378112) DBG | Closing plugin on server side
	I0912 23:01:36.056072   61904 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:01:36.056085   61904 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:01:36.062586   61904 main.go:141] libmachine: Making call to close driver server
	I0912 23:01:36.062612   61904 main.go:141] libmachine: (embed-certs-378112) Calling .Close
	I0912 23:01:36.062906   61904 main.go:141] libmachine: (embed-certs-378112) DBG | Closing plugin on server side
	I0912 23:01:36.062920   61904 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:01:36.062936   61904 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:01:36.062955   61904 main.go:141] libmachine: Making call to close driver server
	I0912 23:01:36.062979   61904 main.go:141] libmachine: (embed-certs-378112) Calling .Close
	I0912 23:01:36.063225   61904 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:01:36.063243   61904 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:01:36.063254   61904 addons.go:475] Verifying addon metrics-server=true in "embed-certs-378112"
	I0912 23:01:36.065321   61904 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
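Enabling the three addons reduces to copying their manifests into /etc/kubernetes/addons on the node and applying them with the node-local kubectl and kubeconfig, exactly as the Run: lines above show (metrics-server's four manifests go in a single apply). A trimmed sketch of that step:

    // addons.go — sketch of the addon apply step; binary, kubeconfig and manifest paths mirror the log, error handling trimmed.
    package main

    import "os/exec"

    func kubectlApply(files ...string) error {
        args := []string{"--kubeconfig=/var/lib/minikube/kubeconfig", "apply"}
        for _, f := range files {
            args = append(args, "-f", f)
        }
        return exec.Command("/var/lib/minikube/binaries/v1.31.1/kubectl", args...).Run()
    }

    func main() {
        _ = kubectlApply("/etc/kubernetes/addons/storage-provisioner.yaml")
        _ = kubectlApply("/etc/kubernetes/addons/storageclass.yaml")
        _ = kubectlApply(
            "/etc/kubernetes/addons/metrics-apiservice.yaml",
            "/etc/kubernetes/addons/metrics-server-deployment.yaml",
            "/etc/kubernetes/addons/metrics-server-rbac.yaml",
            "/etc/kubernetes/addons/metrics-server-service.yaml",
        )
    }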
	I0912 23:01:38.221947   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.222408   62386 main.go:141] libmachine: (old-k8s-version-642238) Found IP for machine: 192.168.61.69
	I0912 23:01:38.222437   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has current primary IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.222447   62386 main.go:141] libmachine: (old-k8s-version-642238) Reserving static IP address...
	I0912 23:01:38.222943   62386 main.go:141] libmachine: (old-k8s-version-642238) Reserved static IP address: 192.168.61.69
	I0912 23:01:38.222983   62386 main.go:141] libmachine: (old-k8s-version-642238) Waiting for SSH to be available...
	I0912 23:01:38.223007   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "old-k8s-version-642238", mac: "52:54:00:75:cb:57", ip: "192.168.61.69"} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:38.223057   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | skip adding static IP to network mk-old-k8s-version-642238 - found existing host DHCP lease matching {name: "old-k8s-version-642238", mac: "52:54:00:75:cb:57", ip: "192.168.61.69"}
	I0912 23:01:38.223079   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | Getting to WaitForSSH function...
	I0912 23:01:38.225720   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.226121   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:38.226155   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.226286   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | Using SSH client type: external
	I0912 23:01:38.226308   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | Using SSH private key: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/old-k8s-version-642238/id_rsa (-rw-------)
	I0912 23:01:38.226341   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.69 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19616-5891/.minikube/machines/old-k8s-version-642238/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0912 23:01:38.226357   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | About to run SSH command:
	I0912 23:01:38.226368   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | exit 0
	I0912 23:01:38.357945   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | SSH cmd err, output: <nil>: 
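With an IP reserved, the kvm2 driver probes for SSH by running "exit 0" through the external ssh client with host-key checking disabled (the full option list is in the DBG line above) and retries until it succeeds. A sketch of that probe using the same key path and address:

    // sshwait.go — sketch of the WaitForSSH probe; key path and address are taken from the log above.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        args := []string{
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "ConnectTimeout=10",
            "-o", "IdentitiesOnly=yes",
            "-i", "/home/jenkins/minikube-integration/19616-5891/.minikube/machines/old-k8s-version-642238/id_rsa",
            "docker@192.168.61.69",
            "exit 0",
        }
        for {
            if err := exec.Command("ssh", args...).Run(); err == nil {
                fmt.Println("SSH is available")
                return
            }
            time.Sleep(3 * time.Second)
        }
    }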
	I0912 23:01:38.358320   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetConfigRaw
	I0912 23:01:38.358887   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetIP
	I0912 23:01:38.361728   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.362098   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:38.362133   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.362372   62386 profile.go:143] Saving config to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238/config.json ...
	I0912 23:01:38.362640   62386 machine.go:93] provisionDockerMachine start ...
	I0912 23:01:38.362663   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .DriverName
	I0912 23:01:38.362897   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHHostname
	I0912 23:01:38.365251   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.365627   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:38.365656   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.365798   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHPort
	I0912 23:01:38.365969   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 23:01:38.366123   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 23:01:38.366251   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHUsername
	I0912 23:01:38.366468   62386 main.go:141] libmachine: Using SSH client type: native
	I0912 23:01:38.366691   62386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.61.69 22 <nil> <nil>}
	I0912 23:01:38.366707   62386 main.go:141] libmachine: About to run SSH command:
	hostname
	I0912 23:01:38.477548   62386 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0912 23:01:38.477575   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetMachineName
	I0912 23:01:38.477818   62386 buildroot.go:166] provisioning hostname "old-k8s-version-642238"
	I0912 23:01:38.477843   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetMachineName
	I0912 23:01:38.478029   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHHostname
	I0912 23:01:38.480368   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.480660   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:38.480683   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.480802   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHPort
	I0912 23:01:38.480981   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 23:01:38.481142   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 23:01:38.481287   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHUsername
	I0912 23:01:38.481630   62386 main.go:141] libmachine: Using SSH client type: native
	I0912 23:01:38.481846   62386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.61.69 22 <nil> <nil>}
	I0912 23:01:38.481864   62386 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-642238 && echo "old-k8s-version-642238" | sudo tee /etc/hostname
	I0912 23:01:38.606686   62386 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-642238
	
	I0912 23:01:38.606721   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHHostname
	I0912 23:01:38.609331   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.609682   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:38.609705   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.609867   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHPort
	I0912 23:01:38.610071   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 23:01:38.610297   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 23:01:38.610463   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHUsername
	I0912 23:01:38.610792   62386 main.go:141] libmachine: Using SSH client type: native
	I0912 23:01:38.610974   62386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.61.69 22 <nil> <nil>}
	I0912 23:01:38.610991   62386 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-642238' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-642238/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-642238' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0912 23:01:38.729561   62386 main.go:141] libmachine: SSH cmd err, output: <nil>: 
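Provisioning the hostname is two commands: "sudo hostname <name> && echo <name> | sudo tee /etc/hostname", followed by the shell fragment above, which makes 127.0.1.1 resolve to the new name, rewriting an existing 127.0.1.1 entry if there is one and appending otherwise. A sketch that composes the same fragment for an arbitrary name:

    // hostsfix.go — sketch: build the /etc/hosts patch command shown above for a given hostname.
    package main

    import "fmt"

    func hostsFixCmd(name string) string {
        return fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
      if grep -xq '127.0.1.1\s.*' /etc/hosts; then
        sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
      else
        echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
      fi
    fi`, name)
    }

    func main() {
        fmt.Println(hostsFixCmd("old-k8s-version-642238"))
    }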
	I0912 23:01:38.729588   62386 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19616-5891/.minikube CaCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19616-5891/.minikube}
	I0912 23:01:38.729664   62386 buildroot.go:174] setting up certificates
	I0912 23:01:38.729674   62386 provision.go:84] configureAuth start
	I0912 23:01:38.729686   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetMachineName
	I0912 23:01:38.729945   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetIP
	I0912 23:01:38.732718   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.733269   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:38.733302   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.733481   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHHostname
	I0912 23:01:38.735610   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.735925   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:38.735950   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.736074   62386 provision.go:143] copyHostCerts
	I0912 23:01:38.736129   62386 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem, removing ...
	I0912 23:01:38.736142   62386 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem
	I0912 23:01:38.736197   62386 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem (1082 bytes)
	I0912 23:01:38.736293   62386 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem, removing ...
	I0912 23:01:38.736306   62386 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem
	I0912 23:01:38.736330   62386 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem (1123 bytes)
	I0912 23:01:38.736390   62386 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem, removing ...
	I0912 23:01:38.736397   62386 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem
	I0912 23:01:38.736413   62386 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem (1679 bytes)
	I0912 23:01:38.736460   62386 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-642238 san=[127.0.0.1 192.168.61.69 localhost minikube old-k8s-version-642238]
	I0912 23:01:38.940760   62386 provision.go:177] copyRemoteCerts
	I0912 23:01:38.940819   62386 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0912 23:01:38.940846   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHHostname
	I0912 23:01:38.943954   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.944274   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:38.944304   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:38.944479   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHPort
	I0912 23:01:38.944688   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 23:01:38.944884   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHUsername
	I0912 23:01:38.945023   62386 sshutil.go:53] new ssh client: &{IP:192.168.61.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/old-k8s-version-642238/id_rsa Username:docker}
	I0912 23:01:39.032396   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0912 23:01:39.055559   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0912 23:01:39.081979   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0912 23:01:39.108245   62386 provision.go:87] duration metric: took 378.558125ms to configureAuth
	I0912 23:01:39.108276   62386 buildroot.go:189] setting minikube options for container-runtime
	I0912 23:01:39.108456   62386 config.go:182] Loaded profile config "old-k8s-version-642238": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0912 23:01:39.108515   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHHostname
	I0912 23:01:39.111321   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:39.111737   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:39.111759   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:39.111956   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHPort
	I0912 23:01:39.112175   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 23:01:39.112399   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 23:01:39.112552   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHUsername
	I0912 23:01:39.112721   62386 main.go:141] libmachine: Using SSH client type: native
	I0912 23:01:39.112939   62386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.61.69 22 <nil> <nil>}
	I0912 23:01:39.112955   62386 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0912 23:01:39.582214   62943 start.go:364] duration metric: took 1m17.588760987s to acquireMachinesLock for "no-preload-380092"
	I0912 23:01:39.582282   62943 start.go:96] Skipping create...Using existing machine configuration
	I0912 23:01:39.582290   62943 fix.go:54] fixHost starting: 
	I0912 23:01:39.582684   62943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:01:39.582733   62943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:01:39.598752   62943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39263
	I0912 23:01:39.599113   62943 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:01:39.599558   62943 main.go:141] libmachine: Using API Version  1
	I0912 23:01:39.599578   62943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:01:39.599939   62943 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:01:39.600128   62943 main.go:141] libmachine: (no-preload-380092) Calling .DriverName
	I0912 23:01:39.600299   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetState
	I0912 23:01:39.601919   62943 fix.go:112] recreateIfNeeded on no-preload-380092: state=Stopped err=<nil>
	I0912 23:01:39.601948   62943 main.go:141] libmachine: (no-preload-380092) Calling .DriverName
	W0912 23:01:39.602105   62943 fix.go:138] unexpected machine state, will restart: <nil>
	I0912 23:01:39.604113   62943 out.go:177] * Restarting existing kvm2 VM for "no-preload-380092" ...
	I0912 23:01:36.066914   61904 addons.go:510] duration metric: took 1.399549943s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0912 23:01:36.894531   61904 node_ready.go:53] node "embed-certs-378112" has status "Ready":"False"
	I0912 23:01:38.895084   61904 node_ready.go:53] node "embed-certs-378112" has status "Ready":"False"
	I0912 23:01:39.333662   62386 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0912 23:01:39.333695   62386 machine.go:96] duration metric: took 971.039233ms to provisionDockerMachine
	I0912 23:01:39.333712   62386 start.go:293] postStartSetup for "old-k8s-version-642238" (driver="kvm2")
	I0912 23:01:39.333728   62386 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0912 23:01:39.333755   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .DriverName
	I0912 23:01:39.334078   62386 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0912 23:01:39.334110   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHHostname
	I0912 23:01:39.336759   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:39.337144   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:39.337185   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:39.337326   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHPort
	I0912 23:01:39.337492   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 23:01:39.337649   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHUsername
	I0912 23:01:39.337757   62386 sshutil.go:53] new ssh client: &{IP:192.168.61.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/old-k8s-version-642238/id_rsa Username:docker}
	I0912 23:01:39.424344   62386 ssh_runner.go:195] Run: cat /etc/os-release
	I0912 23:01:39.428560   62386 info.go:137] Remote host: Buildroot 2023.02.9
	I0912 23:01:39.428586   62386 filesync.go:126] Scanning /home/jenkins/minikube-integration/19616-5891/.minikube/addons for local assets ...
	I0912 23:01:39.428651   62386 filesync.go:126] Scanning /home/jenkins/minikube-integration/19616-5891/.minikube/files for local assets ...
	I0912 23:01:39.428720   62386 filesync.go:149] local asset: /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem -> 130832.pem in /etc/ssl/certs
	I0912 23:01:39.428822   62386 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0912 23:01:39.438578   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem --> /etc/ssl/certs/130832.pem (1708 bytes)
	I0912 23:01:39.466955   62386 start.go:296] duration metric: took 133.228748ms for postStartSetup
	I0912 23:01:39.466993   62386 fix.go:56] duration metric: took 19.507989112s for fixHost
	I0912 23:01:39.467011   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHHostname
	I0912 23:01:39.469732   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:39.470141   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:39.470177   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:39.470446   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHPort
	I0912 23:01:39.470662   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 23:01:39.470820   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 23:01:39.470952   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHUsername
	I0912 23:01:39.471079   62386 main.go:141] libmachine: Using SSH client type: native
	I0912 23:01:39.471234   62386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.61.69 22 <nil> <nil>}
	I0912 23:01:39.471243   62386 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0912 23:01:39.582078   62386 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726182099.559242358
	
	I0912 23:01:39.582101   62386 fix.go:216] guest clock: 1726182099.559242358
	I0912 23:01:39.582108   62386 fix.go:229] Guest: 2024-09-12 23:01:39.559242358 +0000 UTC Remote: 2024-09-12 23:01:39.466996536 +0000 UTC m=+200.180679357 (delta=92.245822ms)
	I0912 23:01:39.582148   62386 fix.go:200] guest clock delta is within tolerance: 92.245822ms
	I0912 23:01:39.582153   62386 start.go:83] releasing machines lock for "old-k8s-version-642238", held for 19.623187273s
	I0912 23:01:39.582177   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .DriverName
	I0912 23:01:39.582449   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetIP
	I0912 23:01:39.585170   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:39.585556   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:39.585595   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:39.585770   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .DriverName
	I0912 23:01:39.586282   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .DriverName
	I0912 23:01:39.586471   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .DriverName
	I0912 23:01:39.586548   62386 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0912 23:01:39.586590   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHHostname
	I0912 23:01:39.586706   62386 ssh_runner.go:195] Run: cat /version.json
	I0912 23:01:39.586734   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHHostname
	I0912 23:01:39.589355   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:39.589769   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:39.589802   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:39.589824   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:39.589990   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHPort
	I0912 23:01:39.590163   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 23:01:39.590229   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:39.590258   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:39.590331   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHUsername
	I0912 23:01:39.590413   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHPort
	I0912 23:01:39.590491   62386 sshutil.go:53] new ssh client: &{IP:192.168.61.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/old-k8s-version-642238/id_rsa Username:docker}
	I0912 23:01:39.590525   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHKeyPath
	I0912 23:01:39.590621   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetSSHUsername
	I0912 23:01:39.590717   62386 sshutil.go:53] new ssh client: &{IP:192.168.61.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/old-k8s-version-642238/id_rsa Username:docker}
	I0912 23:01:39.709188   62386 ssh_runner.go:195] Run: systemctl --version
	I0912 23:01:39.714703   62386 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0912 23:01:39.867112   62386 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0912 23:01:39.874818   62386 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0912 23:01:39.874897   62386 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0912 23:01:39.894532   62386 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0912 23:01:39.894558   62386 start.go:495] detecting cgroup driver to use...
	I0912 23:01:39.894611   62386 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0912 23:01:39.911715   62386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0912 23:01:39.927113   62386 docker.go:217] disabling cri-docker service (if available) ...
	I0912 23:01:39.927181   62386 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0912 23:01:39.946720   62386 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0912 23:01:39.966602   62386 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0912 23:01:40.132813   62386 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0912 23:01:40.318613   62386 docker.go:233] disabling docker service ...
	I0912 23:01:40.318764   62386 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0912 23:01:40.337557   62386 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0912 23:01:40.355312   62386 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0912 23:01:40.507081   62386 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0912 23:01:40.623129   62386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0912 23:01:40.637980   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0912 23:01:40.658137   62386 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0912 23:01:40.658197   62386 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:40.672985   62386 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0912 23:01:40.673041   62386 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:40.687684   62386 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:40.699586   62386 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:40.711468   62386 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0912 23:01:40.722380   62386 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0912 23:01:40.733057   62386 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0912 23:01:40.733126   62386 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0912 23:01:40.748577   62386 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0912 23:01:40.758735   62386 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 23:01:40.883686   62386 ssh_runner.go:195] Run: sudo systemctl restart crio
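For readability, here is the CRI-O reconfiguration the test just performed, condensed into the same shell commands the ssh_runner lines above ran (paths and values copied verbatim from the log):

    # point crictl at the CRI-O socket
    printf '%s\n' 'runtime-endpoint: unix:///var/run/crio/crio.sock' | sudo tee /etc/crictl.yaml
    # pin the pause image and switch CRI-O to the cgroupfs cgroup manager
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
    # apply the changes
    sudo systemctl daemon-reload && sudo systemctl restart crio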
	I0912 23:01:40.977996   62386 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0912 23:01:40.978065   62386 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0912 23:01:40.984192   62386 start.go:563] Will wait 60s for crictl version
	I0912 23:01:40.984257   62386 ssh_runner.go:195] Run: which crictl
	I0912 23:01:40.988379   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0912 23:01:41.027758   62386 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0912 23:01:41.027855   62386 ssh_runner.go:195] Run: crio --version
	I0912 23:01:41.057198   62386 ssh_runner.go:195] Run: crio --version
	I0912 23:01:41.091414   62386 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0912 23:01:39.605199   62943 main.go:141] libmachine: (no-preload-380092) Calling .Start
	I0912 23:01:39.605356   62943 main.go:141] libmachine: (no-preload-380092) Ensuring networks are active...
	I0912 23:01:39.606295   62943 main.go:141] libmachine: (no-preload-380092) Ensuring network default is active
	I0912 23:01:39.606540   62943 main.go:141] libmachine: (no-preload-380092) Ensuring network mk-no-preload-380092 is active
	I0912 23:01:39.606902   62943 main.go:141] libmachine: (no-preload-380092) Getting domain xml...
	I0912 23:01:39.607582   62943 main.go:141] libmachine: (no-preload-380092) Creating domain...
	I0912 23:01:40.958156   62943 main.go:141] libmachine: (no-preload-380092) Waiting to get IP...
	I0912 23:01:40.959304   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:40.959775   62943 main.go:141] libmachine: (no-preload-380092) DBG | unable to find current IP address of domain no-preload-380092 in network mk-no-preload-380092
	I0912 23:01:40.959848   62943 main.go:141] libmachine: (no-preload-380092) DBG | I0912 23:01:40.959761   63470 retry.go:31] will retry after 260.507819ms: waiting for machine to come up
	I0912 23:01:41.222360   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:41.222860   62943 main.go:141] libmachine: (no-preload-380092) DBG | unable to find current IP address of domain no-preload-380092 in network mk-no-preload-380092
	I0912 23:01:41.222897   62943 main.go:141] libmachine: (no-preload-380092) DBG | I0912 23:01:41.222793   63470 retry.go:31] will retry after 325.875384ms: waiting for machine to come up
	I0912 23:01:41.550174   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:41.550617   62943 main.go:141] libmachine: (no-preload-380092) DBG | unable to find current IP address of domain no-preload-380092 in network mk-no-preload-380092
	I0912 23:01:41.550642   62943 main.go:141] libmachine: (no-preload-380092) DBG | I0912 23:01:41.550563   63470 retry.go:31] will retry after 466.239328ms: waiting for machine to come up
	I0912 23:01:41.092686   62386 main.go:141] libmachine: (old-k8s-version-642238) Calling .GetIP
	I0912 23:01:41.096196   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:41.096806   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cb:57", ip: ""} in network mk-old-k8s-version-642238: {Iface:virbr3 ExpiryTime:2024-09-13 00:01:30 +0000 UTC Type:0 Mac:52:54:00:75:cb:57 Iaid: IPaddr:192.168.61.69 Prefix:24 Hostname:old-k8s-version-642238 Clientid:01:52:54:00:75:cb:57}
	I0912 23:01:41.096843   62386 main.go:141] libmachine: (old-k8s-version-642238) DBG | domain old-k8s-version-642238 has defined IP address 192.168.61.69 and MAC address 52:54:00:75:cb:57 in network mk-old-k8s-version-642238
	I0912 23:01:41.097167   62386 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0912 23:01:41.101509   62386 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0912 23:01:41.115914   62386 kubeadm.go:883] updating cluster {Name:old-k8s-version-642238 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-642238 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.69 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0912 23:01:41.116230   62386 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0912 23:01:41.116327   62386 ssh_runner.go:195] Run: sudo crictl images --output json
	I0912 23:01:41.164309   62386 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0912 23:01:41.164389   62386 ssh_runner.go:195] Run: which lz4
	I0912 23:01:41.168669   62386 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0912 23:01:41.172973   62386 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0912 23:01:41.173008   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0912 23:01:42.662843   62386 crio.go:462] duration metric: took 1.494204864s to copy over tarball
	I0912 23:01:42.662921   62386 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0912 23:01:40.895957   61904 node_ready.go:53] node "embed-certs-378112" has status "Ready":"False"
	I0912 23:01:41.896265   61904 node_ready.go:49] node "embed-certs-378112" has status "Ready":"True"
	I0912 23:01:41.896293   61904 node_ready.go:38] duration metric: took 7.004932553s for node "embed-certs-378112" to be "Ready" ...
	I0912 23:01:41.896304   61904 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 23:01:41.903665   61904 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-m8t6h" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:41.911837   61904 pod_ready.go:93] pod "coredns-7c65d6cfc9-m8t6h" in "kube-system" namespace has status "Ready":"True"
	I0912 23:01:41.911862   61904 pod_ready.go:82] duration metric: took 8.168974ms for pod "coredns-7c65d6cfc9-m8t6h" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:41.911875   61904 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-378112" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:41.920007   61904 pod_ready.go:93] pod "etcd-embed-certs-378112" in "kube-system" namespace has status "Ready":"True"
	I0912 23:01:41.920032   61904 pod_ready.go:82] duration metric: took 8.150491ms for pod "etcd-embed-certs-378112" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:41.920044   61904 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-378112" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:43.928585   61904 pod_ready.go:103] pod "kube-apiserver-embed-certs-378112" in "kube-system" namespace has status "Ready":"False"
	I0912 23:01:42.018082   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:42.018505   62943 main.go:141] libmachine: (no-preload-380092) DBG | unable to find current IP address of domain no-preload-380092 in network mk-no-preload-380092
	I0912 23:01:42.018534   62943 main.go:141] libmachine: (no-preload-380092) DBG | I0912 23:01:42.018465   63470 retry.go:31] will retry after 538.2428ms: waiting for machine to come up
	I0912 23:01:42.558175   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:42.558612   62943 main.go:141] libmachine: (no-preload-380092) DBG | unable to find current IP address of domain no-preload-380092 in network mk-no-preload-380092
	I0912 23:01:42.558649   62943 main.go:141] libmachine: (no-preload-380092) DBG | I0912 23:01:42.558579   63470 retry.go:31] will retry after 653.024741ms: waiting for machine to come up
	I0912 23:01:43.213349   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:43.213963   62943 main.go:141] libmachine: (no-preload-380092) DBG | unable to find current IP address of domain no-preload-380092 in network mk-no-preload-380092
	I0912 23:01:43.213991   62943 main.go:141] libmachine: (no-preload-380092) DBG | I0912 23:01:43.213926   63470 retry.go:31] will retry after 936.091256ms: waiting for machine to come up
	I0912 23:01:44.152459   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:44.152892   62943 main.go:141] libmachine: (no-preload-380092) DBG | unable to find current IP address of domain no-preload-380092 in network mk-no-preload-380092
	I0912 23:01:44.152931   62943 main.go:141] libmachine: (no-preload-380092) DBG | I0912 23:01:44.152841   63470 retry.go:31] will retry after 947.677491ms: waiting for machine to come up
	I0912 23:01:45.102330   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:45.102777   62943 main.go:141] libmachine: (no-preload-380092) DBG | unable to find current IP address of domain no-preload-380092 in network mk-no-preload-380092
	I0912 23:01:45.102803   62943 main.go:141] libmachine: (no-preload-380092) DBG | I0912 23:01:45.102730   63470 retry.go:31] will retry after 1.076341568s: waiting for machine to come up
	I0912 23:01:46.181138   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:46.181600   62943 main.go:141] libmachine: (no-preload-380092) DBG | unable to find current IP address of domain no-preload-380092 in network mk-no-preload-380092
	I0912 23:01:46.181659   62943 main.go:141] libmachine: (no-preload-380092) DBG | I0912 23:01:46.181529   63470 retry.go:31] will retry after 1.256599307s: waiting for machine to come up
	I0912 23:01:45.728604   62386 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.065648968s)
	I0912 23:01:45.728636   62386 crio.go:469] duration metric: took 3.065759694s to extract the tarball
	I0912 23:01:45.728646   62386 ssh_runner.go:146] rm: /preloaded.tar.lz4
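Restating the preload path above: the host-side tarball of cached images is copied to the guest as /preloaded.tar.lz4, unpacked into /var with extended attributes preserved (so file capabilities survive), and then removed. Condensed from the commands in the log; the final rm invocation is the test's own helper, so the exact form shown here is an assumption:

    # unpack the preloaded image tarball into /var, keeping security.capability xattrs
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    # clean up the tarball afterwards (illustrative; the test uses its rm helper)
    sudo rm -f /preloaded.tar.lz4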
	I0912 23:01:45.770020   62386 ssh_runner.go:195] Run: sudo crictl images --output json
	I0912 23:01:45.803238   62386 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0912 23:01:45.803263   62386 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0912 23:01:45.803356   62386 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0912 23:01:45.803393   62386 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0912 23:01:45.803411   62386 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0912 23:01:45.803433   62386 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0912 23:01:45.803482   62386 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0912 23:01:45.803487   62386 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0912 23:01:45.803358   62386 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 23:01:45.803456   62386 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0912 23:01:45.805495   62386 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0912 23:01:45.805522   62386 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0912 23:01:45.805549   62386 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0912 23:01:45.805538   62386 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0912 23:01:45.805583   62386 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0912 23:01:45.805500   62386 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0912 23:01:45.805498   62386 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 23:01:45.805503   62386 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0912 23:01:46.036001   62386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0912 23:01:46.053248   62386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0912 23:01:46.053339   62386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0912 23:01:46.055973   62386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0912 23:01:46.070206   62386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0912 23:01:46.079999   62386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0912 23:01:46.109937   62386 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0912 23:01:46.109989   62386 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0912 23:01:46.110039   62386 ssh_runner.go:195] Run: which crictl
	I0912 23:01:46.162798   62386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0912 23:01:46.224302   62386 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0912 23:01:46.224345   62386 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0912 23:01:46.224375   62386 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0912 23:01:46.224392   62386 ssh_runner.go:195] Run: which crictl
	I0912 23:01:46.224413   62386 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0912 23:01:46.224418   62386 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0912 23:01:46.224452   62386 ssh_runner.go:195] Run: which crictl
	I0912 23:01:46.224451   62386 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0912 23:01:46.224495   62386 ssh_runner.go:195] Run: which crictl
	I0912 23:01:46.224510   62386 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0912 23:01:46.224529   62386 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0912 23:01:46.224551   62386 ssh_runner.go:195] Run: which crictl
	I0912 23:01:46.243459   62386 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0912 23:01:46.243561   62386 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0912 23:01:46.243584   62386 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0912 23:01:46.243596   62386 ssh_runner.go:195] Run: which crictl
	I0912 23:01:46.243619   62386 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0912 23:01:46.243648   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0912 23:01:46.243658   62386 ssh_runner.go:195] Run: which crictl
	I0912 23:01:46.243619   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0912 23:01:46.243504   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0912 23:01:46.243737   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0912 23:01:46.243786   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0912 23:01:46.347085   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0912 23:01:46.347138   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0912 23:01:46.347184   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0912 23:01:46.354548   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0912 23:01:46.354548   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0912 23:01:46.354623   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0912 23:01:46.354658   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0912 23:01:46.490548   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0912 23:01:46.490655   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0912 23:01:46.490664   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0912 23:01:46.519541   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0912 23:01:46.519572   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0912 23:01:46.519583   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0912 23:01:46.519631   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0912 23:01:46.650941   62386 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0912 23:01:46.651102   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0912 23:01:46.651115   62386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0912 23:01:46.665864   62386 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0912 23:01:46.669346   62386 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0912 23:01:46.669393   62386 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0912 23:01:46.669433   62386 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0912 23:01:46.713909   62386 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0912 23:01:46.713928   62386 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0912 23:01:46.947952   62386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 23:01:47.093308   62386 cache_images.go:92] duration metric: took 1.29002863s to LoadCachedImages
	W0912 23:01:47.093414   62386 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I0912 23:01:47.093432   62386 kubeadm.go:934] updating node { 192.168.61.69 8443 v1.20.0 crio true true} ...
	I0912 23:01:47.093567   62386 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-642238 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.69
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-642238 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0912 23:01:47.093677   62386 ssh_runner.go:195] Run: crio config
	I0912 23:01:47.140625   62386 cni.go:84] Creating CNI manager for ""
	I0912 23:01:47.140651   62386 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0912 23:01:47.140665   62386 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0912 23:01:47.140683   62386 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.69 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-642238 NodeName:old-k8s-version-642238 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.69"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.69 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0912 23:01:47.140848   62386 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.69
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-642238"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.69
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.69"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
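	The four YAML documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are what gets copied to /var/tmp/minikube/kubeadm.yaml.new a few lines below. A hedged way to sanity-check such a file before running init, assuming kubeadm sits next to kubelet under /var/lib/minikube/binaries/v1.20.0 (the "Found k8s binaries" check below suggests it does):

	    # illustrative dry run of the generated config; not part of the test itself
	    sudo /var/lib/minikube/binaries/v1.20.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run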
	
	I0912 23:01:47.140918   62386 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0912 23:01:47.151096   62386 binaries.go:44] Found k8s binaries, skipping transfer
	I0912 23:01:47.151174   62386 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0912 23:01:47.161100   62386 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0912 23:01:47.178267   62386 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0912 23:01:47.196468   62386 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0912 23:01:47.215215   62386 ssh_runner.go:195] Run: grep 192.168.61.69	control-plane.minikube.internal$ /etc/hosts
	I0912 23:01:47.219835   62386 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.69	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0912 23:01:47.234386   62386 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 23:01:47.374152   62386 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0912 23:01:47.394130   62386 certs.go:68] Setting up /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238 for IP: 192.168.61.69
	I0912 23:01:47.394155   62386 certs.go:194] generating shared ca certs ...
	I0912 23:01:47.394174   62386 certs.go:226] acquiring lock for ca certs: {Name:mk5e19f2c2757edc3fcb6b43a39efee0885b349c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 23:01:47.394399   62386 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.key
	I0912 23:01:47.394459   62386 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.key
	I0912 23:01:47.394474   62386 certs.go:256] generating profile certs ...
	I0912 23:01:47.394591   62386 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238/client.key
	I0912 23:01:47.394663   62386 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238/apiserver.key.fcb0a37b
	I0912 23:01:47.394713   62386 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238/proxy-client.key
	I0912 23:01:47.394881   62386 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083.pem (1338 bytes)
	W0912 23:01:47.394922   62386 certs.go:480] ignoring /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083_empty.pem, impossibly tiny 0 bytes
	I0912 23:01:47.394936   62386 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem (1679 bytes)
	I0912 23:01:47.394980   62386 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem (1082 bytes)
	I0912 23:01:47.395016   62386 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem (1123 bytes)
	I0912 23:01:47.395050   62386 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem (1679 bytes)
	I0912 23:01:47.395103   62386 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem (1708 bytes)
	I0912 23:01:47.396058   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0912 23:01:47.436356   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0912 23:01:47.470442   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0912 23:01:47.496440   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0912 23:01:47.522541   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0912 23:01:47.547406   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0912 23:01:47.575687   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0912 23:01:47.602110   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0912 23:01:47.628233   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0912 23:01:47.659161   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083.pem --> /usr/share/ca-certificates/13083.pem (1338 bytes)
	I0912 23:01:47.698813   62386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem --> /usr/share/ca-certificates/130832.pem (1708 bytes)
	I0912 23:01:47.722494   62386 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0912 23:01:47.739479   62386 ssh_runner.go:195] Run: openssl version
	I0912 23:01:47.745476   62386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130832.pem && ln -fs /usr/share/ca-certificates/130832.pem /etc/ssl/certs/130832.pem"
	I0912 23:01:47.756396   62386 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130832.pem
	I0912 23:01:47.760904   62386 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 12 21:47 /usr/share/ca-certificates/130832.pem
	I0912 23:01:47.760983   62386 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130832.pem
	I0912 23:01:47.767122   62386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130832.pem /etc/ssl/certs/3ec20f2e.0"
	I0912 23:01:47.778372   62386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0912 23:01:47.789359   62386 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0912 23:01:47.794138   62386 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 12 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0912 23:01:47.794205   62386 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0912 23:01:47.799780   62386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0912 23:01:47.810735   62386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13083.pem && ln -fs /usr/share/ca-certificates/13083.pem /etc/ssl/certs/13083.pem"
	I0912 23:01:47.821361   62386 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13083.pem
	I0912 23:01:47.825785   62386 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 12 21:47 /usr/share/ca-certificates/13083.pem
	I0912 23:01:47.825848   62386 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13083.pem
	I0912 23:01:47.832591   62386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13083.pem /etc/ssl/certs/51391683.0"
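The cert blocks above are minikube's manual equivalent of update-ca-certificates: each CA is linked into /etc/ssl/certs under its own name and again under its OpenSSL subject hash, so TLS libraries that look up trust anchors by hash can find it. A minimal sketch of that two-step linking for one of the certs above (paths and the resulting hash value are taken from this log):

	# link the CA by name, then by its subject hash (b5213941.0 in this run)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"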
	I0912 23:01:47.844637   62386 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0912 23:01:47.849313   62386 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0912 23:01:47.855337   62386 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0912 23:01:47.861492   62386 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0912 23:01:47.868028   62386 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0912 23:01:47.874215   62386 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0912 23:01:47.880279   62386 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
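Each openssl call above uses -checkend 86400, which exits non-zero when the certificate will expire within the next 86400 seconds (24 hours); minikube keys certificate regeneration off that exit status. A standalone sketch of the same check against one of the files probed above:

	# exit status 0: still valid for at least 24h; non-zero: due for regeneration
	if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
	    echo "cert valid for at least another 24h"
	else
	    echo "cert expires within 24h (or is unreadable) - regenerate"
	fi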
	I0912 23:01:47.886478   62386 kubeadm.go:392] StartCluster: {Name:old-k8s-version-642238 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-642238 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.69 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 23:01:47.886579   62386 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0912 23:01:47.886665   62386 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0912 23:01:47.929887   62386 cri.go:89] found id: ""
	I0912 23:01:47.929965   62386 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0912 23:01:47.940988   62386 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0912 23:01:47.941014   62386 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0912 23:01:47.941071   62386 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0912 23:01:47.951357   62386 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0912 23:01:47.952314   62386 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-642238" does not appear in /home/jenkins/minikube-integration/19616-5891/kubeconfig
	I0912 23:01:47.952929   62386 kubeconfig.go:62] /home/jenkins/minikube-integration/19616-5891/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-642238" cluster setting kubeconfig missing "old-k8s-version-642238" context setting]
	I0912 23:01:47.953869   62386 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/kubeconfig: {Name:mkffb46c3e9d2b8baebc7237b48bf41bccf1a52c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 23:01:47.961244   62386 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0912 23:01:47.973427   62386 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.69
	I0912 23:01:47.973462   62386 kubeadm.go:1160] stopping kube-system containers ...
	I0912 23:01:47.973476   62386 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0912 23:01:47.973530   62386 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0912 23:01:48.008401   62386 cri.go:89] found id: ""
	I0912 23:01:48.008479   62386 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0912 23:01:48.024605   62386 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0912 23:01:48.034256   62386 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0912 23:01:48.034282   62386 kubeadm.go:157] found existing configuration files:
	
	I0912 23:01:48.034341   62386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0912 23:01:48.043468   62386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0912 23:01:48.043533   62386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0912 23:01:48.053241   62386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0912 23:01:48.062653   62386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0912 23:01:48.062728   62386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0912 23:01:48.073213   62386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0912 23:01:48.085060   62386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0912 23:01:48.085136   62386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0912 23:01:48.095722   62386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0912 23:01:48.105099   62386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0912 23:01:48.105169   62386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0912 23:01:48.114362   62386 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0912 23:01:48.123856   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:01:48.250258   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:01:48.824441   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:01:49.045340   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:01:49.151009   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
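The five kubeadm invocations above rebuild the control plane piecewise with kubeadm init phases rather than a full kubeadm init: certificates, kubeconfig files, kubelet bootstrap, the control-plane static-pod manifests, and local etcd, all driven by the same kubeadm.yaml. A condensed sketch of that sequence, assuming the pinned v1.20.0 binaries directory and config path used in this run:

	# run the pinned kubeadm binary against the generated config, one phase at a time
	export PATH=/var/lib/minikube/binaries/v1.20.0:$PATH
	cfg=/var/tmp/minikube/kubeadm.yaml
	sudo kubeadm init phase certs all         --config "$cfg"
	sudo kubeadm init phase kubeconfig all    --config "$cfg"
	sudo kubeadm init phase kubelet-start     --config "$cfg"
	sudo kubeadm init phase control-plane all --config "$cfg"
	sudo kubeadm init phase etcd local        --config "$cfg"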
	I0912 23:01:49.245161   62386 api_server.go:52] waiting for apiserver process to appear ...
	I0912 23:01:49.245239   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:45.927266   61904 pod_ready.go:93] pod "kube-apiserver-embed-certs-378112" in "kube-system" namespace has status "Ready":"True"
	I0912 23:01:45.927293   61904 pod_ready.go:82] duration metric: took 4.007240345s for pod "kube-apiserver-embed-certs-378112" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:45.927307   61904 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-378112" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:46.456083   61904 pod_ready.go:93] pod "kube-controller-manager-embed-certs-378112" in "kube-system" namespace has status "Ready":"True"
	I0912 23:01:46.456111   61904 pod_ready.go:82] duration metric: took 528.7947ms for pod "kube-controller-manager-embed-certs-378112" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:46.456125   61904 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fvbbq" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:46.461632   61904 pod_ready.go:93] pod "kube-proxy-fvbbq" in "kube-system" namespace has status "Ready":"True"
	I0912 23:01:46.461659   61904 pod_ready.go:82] duration metric: took 5.526604ms for pod "kube-proxy-fvbbq" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:46.461673   61904 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-378112" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:46.467128   61904 pod_ready.go:93] pod "kube-scheduler-embed-certs-378112" in "kube-system" namespace has status "Ready":"True"
	I0912 23:01:46.467160   61904 pod_ready.go:82] duration metric: took 5.477201ms for pod "kube-scheduler-embed-certs-378112" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:46.467174   61904 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace to be "Ready" ...
	I0912 23:01:48.474736   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:01:50.474846   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:01:47.439687   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:47.440281   62943 main.go:141] libmachine: (no-preload-380092) DBG | unable to find current IP address of domain no-preload-380092 in network mk-no-preload-380092
	I0912 23:01:47.440312   62943 main.go:141] libmachine: (no-preload-380092) DBG | I0912 23:01:47.440140   63470 retry.go:31] will retry after 1.600662248s: waiting for machine to come up
	I0912 23:01:49.042962   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:49.043536   62943 main.go:141] libmachine: (no-preload-380092) DBG | unable to find current IP address of domain no-preload-380092 in network mk-no-preload-380092
	I0912 23:01:49.043569   62943 main.go:141] libmachine: (no-preload-380092) DBG | I0912 23:01:49.043481   63470 retry.go:31] will retry after 2.53148931s: waiting for machine to come up
	I0912 23:01:51.577526   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:51.578022   62943 main.go:141] libmachine: (no-preload-380092) DBG | unable to find current IP address of domain no-preload-380092 in network mk-no-preload-380092
	I0912 23:01:51.578139   62943 main.go:141] libmachine: (no-preload-380092) DBG | I0912 23:01:51.577965   63470 retry.go:31] will retry after 2.603355474s: waiting for machine to come up
	I0912 23:01:49.745632   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:50.245841   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:50.746368   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:51.245741   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:51.745708   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:52.246143   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:52.745402   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:53.245790   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:53.745965   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:54.246368   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:52.973232   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:01:54.974788   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:01:54.183119   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:54.183702   62943 main.go:141] libmachine: (no-preload-380092) DBG | unable to find current IP address of domain no-preload-380092 in network mk-no-preload-380092
	I0912 23:01:54.183745   62943 main.go:141] libmachine: (no-preload-380092) DBG | I0912 23:01:54.183655   63470 retry.go:31] will retry after 2.867321114s: waiting for machine to come up
	I0912 23:01:58.698415   61354 start.go:364] duration metric: took 53.897667909s to acquireMachinesLock for "default-k8s-diff-port-702201"
	I0912 23:01:58.698489   61354 start.go:96] Skipping create...Using existing machine configuration
	I0912 23:01:58.698499   61354 fix.go:54] fixHost starting: 
	I0912 23:01:58.698908   61354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:01:58.698938   61354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:01:58.716203   61354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42739
	I0912 23:01:58.716658   61354 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:01:58.717117   61354 main.go:141] libmachine: Using API Version  1
	I0912 23:01:58.717141   61354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:01:58.717489   61354 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:01:58.717717   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .DriverName
	I0912 23:01:58.717873   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetState
	I0912 23:01:58.719787   61354 fix.go:112] recreateIfNeeded on default-k8s-diff-port-702201: state=Stopped err=<nil>
	I0912 23:01:58.719810   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .DriverName
	W0912 23:01:58.719957   61354 fix.go:138] unexpected machine state, will restart: <nil>
	I0912 23:01:58.723531   61354 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-702201" ...
	I0912 23:01:54.745915   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:55.245740   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:55.745435   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:56.245679   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:56.745309   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:57.246032   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:57.745362   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:58.245409   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:58.745470   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:59.245307   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:57.052229   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:57.052788   62943 main.go:141] libmachine: (no-preload-380092) Found IP for machine: 192.168.50.253
	I0912 23:01:57.052816   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has current primary IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:57.052822   62943 main.go:141] libmachine: (no-preload-380092) Reserving static IP address...
	I0912 23:01:57.053251   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "no-preload-380092", mac: "52:54:00:d6:80:d3", ip: "192.168.50.253"} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:01:57.053275   62943 main.go:141] libmachine: (no-preload-380092) Reserved static IP address: 192.168.50.253
	I0912 23:01:57.053285   62943 main.go:141] libmachine: (no-preload-380092) DBG | skip adding static IP to network mk-no-preload-380092 - found existing host DHCP lease matching {name: "no-preload-380092", mac: "52:54:00:d6:80:d3", ip: "192.168.50.253"}
	I0912 23:01:57.053299   62943 main.go:141] libmachine: (no-preload-380092) DBG | Getting to WaitForSSH function...
	I0912 23:01:57.053330   62943 main.go:141] libmachine: (no-preload-380092) Waiting for SSH to be available...
	I0912 23:01:57.055927   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:57.056326   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:01:57.056407   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:57.056569   62943 main.go:141] libmachine: (no-preload-380092) DBG | Using SSH client type: external
	I0912 23:01:57.056583   62943 main.go:141] libmachine: (no-preload-380092) DBG | Using SSH private key: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/no-preload-380092/id_rsa (-rw-------)
	I0912 23:01:57.056610   62943 main.go:141] libmachine: (no-preload-380092) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.253 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19616-5891/.minikube/machines/no-preload-380092/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0912 23:01:57.056622   62943 main.go:141] libmachine: (no-preload-380092) DBG | About to run SSH command:
	I0912 23:01:57.056631   62943 main.go:141] libmachine: (no-preload-380092) DBG | exit 0
	I0912 23:01:57.181479   62943 main.go:141] libmachine: (no-preload-380092) DBG | SSH cmd err, output: <nil>: 
	I0912 23:01:57.181842   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetConfigRaw
	I0912 23:01:57.182453   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetIP
	I0912 23:01:57.185257   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:57.185670   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:01:57.185709   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:57.185982   62943 profile.go:143] Saving config to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/no-preload-380092/config.json ...
	I0912 23:01:57.186232   62943 machine.go:93] provisionDockerMachine start ...
	I0912 23:01:57.186254   62943 main.go:141] libmachine: (no-preload-380092) Calling .DriverName
	I0912 23:01:57.186468   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHHostname
	I0912 23:01:57.188948   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:57.189336   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:01:57.189385   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:57.189533   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHPort
	I0912 23:01:57.189705   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHKeyPath
	I0912 23:01:57.189834   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHKeyPath
	I0912 23:01:57.189954   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHUsername
	I0912 23:01:57.190111   62943 main.go:141] libmachine: Using SSH client type: native
	I0912 23:01:57.190349   62943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.50.253 22 <nil> <nil>}
	I0912 23:01:57.190367   62943 main.go:141] libmachine: About to run SSH command:
	hostname
	I0912 23:01:57.293765   62943 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0912 23:01:57.293791   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetMachineName
	I0912 23:01:57.294045   62943 buildroot.go:166] provisioning hostname "no-preload-380092"
	I0912 23:01:57.294078   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetMachineName
	I0912 23:01:57.294327   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHHostname
	I0912 23:01:57.297031   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:57.297414   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:01:57.297437   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:57.297661   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHPort
	I0912 23:01:57.297840   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHKeyPath
	I0912 23:01:57.298018   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHKeyPath
	I0912 23:01:57.298210   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHUsername
	I0912 23:01:57.298412   62943 main.go:141] libmachine: Using SSH client type: native
	I0912 23:01:57.298635   62943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.50.253 22 <nil> <nil>}
	I0912 23:01:57.298655   62943 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-380092 && echo "no-preload-380092" | sudo tee /etc/hostname
	I0912 23:01:57.421188   62943 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-380092
	
	I0912 23:01:57.421215   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHHostname
	I0912 23:01:57.424496   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:57.424928   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:01:57.424965   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:57.425156   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHPort
	I0912 23:01:57.425396   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHKeyPath
	I0912 23:01:57.425591   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHKeyPath
	I0912 23:01:57.425761   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHUsername
	I0912 23:01:57.425948   62943 main.go:141] libmachine: Using SSH client type: native
	I0912 23:01:57.426157   62943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.50.253 22 <nil> <nil>}
	I0912 23:01:57.426183   62943 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-380092' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-380092/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-380092' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0912 23:01:57.537580   62943 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0912 23:01:57.537607   62943 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19616-5891/.minikube CaCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19616-5891/.minikube}
	I0912 23:01:57.537674   62943 buildroot.go:174] setting up certificates
	I0912 23:01:57.537683   62943 provision.go:84] configureAuth start
	I0912 23:01:57.537694   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetMachineName
	I0912 23:01:57.537951   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetIP
	I0912 23:01:57.540791   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:57.541288   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:01:57.541315   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:57.541519   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHHostname
	I0912 23:01:57.544027   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:57.544410   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:01:57.544430   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:57.544605   62943 provision.go:143] copyHostCerts
	I0912 23:01:57.544677   62943 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem, removing ...
	I0912 23:01:57.544694   62943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem
	I0912 23:01:57.544757   62943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem (1679 bytes)
	I0912 23:01:57.544880   62943 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem, removing ...
	I0912 23:01:57.544892   62943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem
	I0912 23:01:57.544919   62943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem (1082 bytes)
	I0912 23:01:57.545011   62943 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem, removing ...
	I0912 23:01:57.545020   62943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem
	I0912 23:01:57.545048   62943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem (1123 bytes)
	I0912 23:01:57.545127   62943 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem org=jenkins.no-preload-380092 san=[127.0.0.1 192.168.50.253 localhost minikube no-preload-380092]
	I0912 23:01:58.077226   62943 provision.go:177] copyRemoteCerts
	I0912 23:01:58.077299   62943 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0912 23:01:58.077350   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHHostname
	I0912 23:01:58.080045   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:58.080404   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:01:58.080433   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:58.080691   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHPort
	I0912 23:01:58.080930   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHKeyPath
	I0912 23:01:58.081101   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHUsername
	I0912 23:01:58.081281   62943 sshutil.go:53] new ssh client: &{IP:192.168.50.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/no-preload-380092/id_rsa Username:docker}
	I0912 23:01:58.164075   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0912 23:01:58.188273   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0912 23:01:58.211076   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0912 23:01:58.233745   62943 provision.go:87] duration metric: took 695.915392ms to configureAuth
	I0912 23:01:58.233788   62943 buildroot.go:189] setting minikube options for container-runtime
	I0912 23:01:58.233964   62943 config.go:182] Loaded profile config "no-preload-380092": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 23:01:58.234061   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHHostname
	I0912 23:01:58.236576   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:58.236915   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:01:58.236948   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:58.237165   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHPort
	I0912 23:01:58.237453   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHKeyPath
	I0912 23:01:58.237666   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHKeyPath
	I0912 23:01:58.237848   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHUsername
	I0912 23:01:58.238014   62943 main.go:141] libmachine: Using SSH client type: native
	I0912 23:01:58.238172   62943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.50.253 22 <nil> <nil>}
	I0912 23:01:58.238187   62943 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0912 23:01:58.461160   62943 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0912 23:01:58.461185   62943 machine.go:96] duration metric: took 1.274940476s to provisionDockerMachine
	I0912 23:01:58.461196   62943 start.go:293] postStartSetup for "no-preload-380092" (driver="kvm2")
	I0912 23:01:58.461206   62943 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0912 23:01:58.461220   62943 main.go:141] libmachine: (no-preload-380092) Calling .DriverName
	I0912 23:01:58.461531   62943 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0912 23:01:58.461560   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHHostname
	I0912 23:01:58.464374   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:58.464862   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:01:58.464892   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:58.465044   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHPort
	I0912 23:01:58.465280   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHKeyPath
	I0912 23:01:58.465462   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHUsername
	I0912 23:01:58.465639   62943 sshutil.go:53] new ssh client: &{IP:192.168.50.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/no-preload-380092/id_rsa Username:docker}
	I0912 23:01:58.553080   62943 ssh_runner.go:195] Run: cat /etc/os-release
	I0912 23:01:58.557294   62943 info.go:137] Remote host: Buildroot 2023.02.9
	I0912 23:01:58.557319   62943 filesync.go:126] Scanning /home/jenkins/minikube-integration/19616-5891/.minikube/addons for local assets ...
	I0912 23:01:58.557395   62943 filesync.go:126] Scanning /home/jenkins/minikube-integration/19616-5891/.minikube/files for local assets ...
	I0912 23:01:58.557494   62943 filesync.go:149] local asset: /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem -> 130832.pem in /etc/ssl/certs
	I0912 23:01:58.557647   62943 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0912 23:01:58.566823   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem --> /etc/ssl/certs/130832.pem (1708 bytes)
	I0912 23:01:58.590357   62943 start.go:296] duration metric: took 129.147272ms for postStartSetup
	I0912 23:01:58.590401   62943 fix.go:56] duration metric: took 19.008109979s for fixHost
	I0912 23:01:58.590425   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHHostname
	I0912 23:01:58.593131   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:58.593490   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:01:58.593519   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:58.593693   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHPort
	I0912 23:01:58.593894   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHKeyPath
	I0912 23:01:58.594075   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHKeyPath
	I0912 23:01:58.594242   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHUsername
	I0912 23:01:58.594415   62943 main.go:141] libmachine: Using SSH client type: native
	I0912 23:01:58.594612   62943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.50.253 22 <nil> <nil>}
	I0912 23:01:58.594625   62943 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0912 23:01:58.698233   62943 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726182118.655051061
	
	I0912 23:01:58.698261   62943 fix.go:216] guest clock: 1726182118.655051061
	I0912 23:01:58.698271   62943 fix.go:229] Guest: 2024-09-12 23:01:58.655051061 +0000 UTC Remote: 2024-09-12 23:01:58.590406505 +0000 UTC m=+96.733899188 (delta=64.644556ms)
	I0912 23:01:58.698327   62943 fix.go:200] guest clock delta is within tolerance: 64.644556ms
	I0912 23:01:58.698333   62943 start.go:83] releasing machines lock for "no-preload-380092", held for 19.116080043s
	I0912 23:01:58.698358   62943 main.go:141] libmachine: (no-preload-380092) Calling .DriverName
	I0912 23:01:58.698635   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetIP
	I0912 23:01:58.701676   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:58.702052   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:01:58.702088   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:58.702329   62943 main.go:141] libmachine: (no-preload-380092) Calling .DriverName
	I0912 23:01:58.702865   62943 main.go:141] libmachine: (no-preload-380092) Calling .DriverName
	I0912 23:01:58.703120   62943 main.go:141] libmachine: (no-preload-380092) Calling .DriverName
	I0912 23:01:58.703279   62943 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0912 23:01:58.703337   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHHostname
	I0912 23:01:58.703392   62943 ssh_runner.go:195] Run: cat /version.json
	I0912 23:01:58.703419   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHHostname
	I0912 23:01:58.706149   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:58.706381   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:58.706704   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:01:58.706773   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:58.706785   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHPort
	I0912 23:01:58.706804   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:01:58.706831   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:01:58.706976   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHKeyPath
	I0912 23:01:58.707009   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHPort
	I0912 23:01:58.707142   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHUsername
	I0912 23:01:58.707308   62943 sshutil.go:53] new ssh client: &{IP:192.168.50.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/no-preload-380092/id_rsa Username:docker}
	I0912 23:01:58.707323   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHKeyPath
	I0912 23:01:58.707505   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHUsername
	I0912 23:01:58.707644   62943 sshutil.go:53] new ssh client: &{IP:192.168.50.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/no-preload-380092/id_rsa Username:docker}
	I0912 23:01:58.822704   62943 ssh_runner.go:195] Run: systemctl --version
	I0912 23:01:58.828592   62943 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0912 23:01:58.970413   62943 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0912 23:01:58.976303   62943 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0912 23:01:58.976384   62943 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0912 23:01:58.991593   62943 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0912 23:01:58.991628   62943 start.go:495] detecting cgroup driver to use...
	I0912 23:01:58.991695   62943 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0912 23:01:59.007839   62943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0912 23:01:59.021107   62943 docker.go:217] disabling cri-docker service (if available) ...
	I0912 23:01:59.021176   62943 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0912 23:01:59.038570   62943 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0912 23:01:59.055392   62943 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0912 23:01:59.183649   62943 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0912 23:01:59.364825   62943 docker.go:233] disabling docker service ...
	I0912 23:01:59.364889   62943 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0912 23:01:59.382320   62943 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0912 23:01:59.397405   62943 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0912 23:01:59.528989   62943 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0912 23:01:59.653994   62943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0912 23:01:59.671437   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0912 23:01:59.693024   62943 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0912 23:01:59.693088   62943 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:59.704385   62943 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0912 23:01:59.704451   62943 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:59.715304   62943 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:59.726058   62943 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:59.736746   62943 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0912 23:01:59.749178   62943 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:59.761776   62943 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:59.779863   62943 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:01:59.790713   62943 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0912 23:01:59.801023   62943 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0912 23:01:59.801093   62943 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0912 23:01:59.815237   62943 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0912 23:01:59.825967   62943 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 23:01:59.952175   62943 ssh_runner.go:195] Run: sudo systemctl restart crio
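The sed calls above edit /etc/crio/crio.conf.d/02-crio.conf in place so CRI-O uses the expected pause image, the cgroupfs cgroup manager, a pod-scoped conmon cgroup, and the unprivileged-port sysctl, and then the runtime is restarted. A sketch of the end-state those edits converge on, written out directly for clarity (illustrative only; in this run sed modifies an existing drop-in rather than replacing the whole file):

	# write the CRI-O drop-in with the keys the sed edits above set, then restart
	sudo tee /etc/crio/crio.conf.d/02-crio.conf >/dev/null <<-'EOF'
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
	EOF
	sudo systemctl daemon-reload && sudo systemctl restart crio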
	I0912 23:02:00.050201   62943 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0912 23:02:00.050334   62943 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0912 23:02:00.055275   62943 start.go:563] Will wait 60s for crictl version
	I0912 23:02:00.055338   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:02:00.060075   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0912 23:02:00.100842   62943 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0912 23:02:00.100932   62943 ssh_runner.go:195] Run: crio --version
	I0912 23:02:00.127399   62943 ssh_runner.go:195] Run: crio --version
	I0912 23:02:00.161143   62943 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0912 23:01:57.474156   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:01:59.474331   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:00.162519   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetIP
	I0912 23:02:00.165323   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:02:00.165776   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:02:00.165806   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:02:00.166046   62943 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0912 23:02:00.170494   62943 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0912 23:02:00.186142   62943 kubeadm.go:883] updating cluster {Name:no-preload-380092 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-380092 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.253 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0912 23:02:00.186296   62943 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0912 23:02:00.186348   62943 ssh_runner.go:195] Run: sudo crictl images --output json
	I0912 23:02:00.221527   62943 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0912 23:02:00.221550   62943 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0912 23:02:00.221607   62943 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 23:02:00.221619   62943 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0912 23:02:00.221679   62943 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0912 23:02:00.221679   62943 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0912 23:02:00.221699   62943 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0912 23:02:00.221661   62943 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0912 23:02:00.221763   62943 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0912 23:02:00.221763   62943 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0912 23:02:00.223203   62943 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0912 23:02:00.223215   62943 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 23:02:00.223269   62943 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0912 23:02:00.223278   62943 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0912 23:02:00.223286   62943 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0912 23:02:00.223208   62943 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0912 23:02:00.223363   62943 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0912 23:02:00.223381   62943 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0912 23:02:00.451698   62943 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I0912 23:02:00.459278   62943 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I0912 23:02:00.459739   62943 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0912 23:02:00.463935   62943 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I0912 23:02:00.464136   62943 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I0912 23:02:00.468507   62943 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I0912 23:02:00.503388   62943 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0912 23:02:00.536792   62943 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I0912 23:02:00.536840   62943 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I0912 23:02:00.536897   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:02:00.599938   62943 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I0912 23:02:00.599985   62943 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I0912 23:02:00.600030   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:02:00.683783   62943 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0912 23:02:00.683826   62943 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0912 23:02:00.683852   62943 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I0912 23:02:00.683872   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:02:00.683883   62943 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I0912 23:02:00.683908   62943 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I0912 23:02:00.683939   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:02:00.683950   62943 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I0912 23:02:00.683886   62943 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I0912 23:02:00.683984   62943 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0912 23:02:00.684075   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:02:00.684008   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:02:00.736368   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0912 23:02:00.736438   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0912 23:02:00.736522   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0912 23:02:00.736549   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0912 23:02:00.736597   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0912 23:02:00.736620   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0912 23:02:00.864642   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0912 23:02:00.864677   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0912 23:02:00.864802   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0912 23:02:00.864856   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0912 23:02:00.869964   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0912 23:02:00.869998   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0912 23:02:00.996762   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0912 23:02:00.999239   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0912 23:02:00.999239   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0912 23:02:01.000760   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0912 23:02:01.000846   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0912 23:02:01.000895   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0912 23:02:01.101860   62943 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I0912 23:02:01.102057   62943 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0912 23:02:01.132743   62943 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0912 23:02:01.132926   62943 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0912 23:02:01.134809   62943 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I0912 23:02:01.134911   62943 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0912 23:02:01.135089   62943 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I0912 23:02:01.135167   62943 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I0912 23:02:01.143459   62943 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0912 23:02:01.143487   62943 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I0912 23:02:01.143503   62943 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0912 23:02:01.143510   62943 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0912 23:02:01.143549   62943 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0912 23:02:01.143584   62943 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0912 23:02:01.143584   62943 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I0912 23:02:01.147907   62943 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I0912 23:02:01.147935   62943 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I0912 23:02:01.148079   62943 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I0912 23:02:01.312549   62943 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
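The 62943 lines above are the no-preload image path: every required image is inspected with podman, anything that does not match the expected digest is flagged "needs transfer", its stale tag is removed with crictl, and the image is reloaded from the tarballs minikube keeps under /var/lib/minikube/images. A sketch of that flow for a single image, built only from commands and paths that appear in this log and meant as an illustration rather than a reproduction step:

    # if the image is missing or wrong, drop the stale tag and load the cached tarball
    sudo podman image inspect --format '{{.Id}}' registry.k8s.io/kube-scheduler:v1.31.1 \
      || { sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1; \
           sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1; }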
	I0912 23:01:58.724795   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .Start
	I0912 23:01:58.724966   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Ensuring networks are active...
	I0912 23:01:58.725864   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Ensuring network default is active
	I0912 23:01:58.726231   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Ensuring network mk-default-k8s-diff-port-702201 is active
	I0912 23:01:58.726766   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Getting domain xml...
	I0912 23:01:58.727695   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Creating domain...
	I0912 23:02:00.060410   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting to get IP...
	I0912 23:02:00.061559   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:00.062006   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | unable to find current IP address of domain default-k8s-diff-port-702201 in network mk-default-k8s-diff-port-702201
	I0912 23:02:00.062101   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | I0912 23:02:00.061997   63646 retry.go:31] will retry after 232.302394ms: waiting for machine to come up
	I0912 23:02:00.295568   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:00.296234   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | unable to find current IP address of domain default-k8s-diff-port-702201 in network mk-default-k8s-diff-port-702201
	I0912 23:02:00.296288   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | I0912 23:02:00.296094   63646 retry.go:31] will retry after 304.721087ms: waiting for machine to come up
	I0912 23:02:00.602956   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:00.603436   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | unable to find current IP address of domain default-k8s-diff-port-702201 in network mk-default-k8s-diff-port-702201
	I0912 23:02:00.603464   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | I0912 23:02:00.603396   63646 retry.go:31] will retry after 370.621505ms: waiting for machine to come up
	I0912 23:02:00.975924   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:00.976418   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | unable to find current IP address of domain default-k8s-diff-port-702201 in network mk-default-k8s-diff-port-702201
	I0912 23:02:00.976452   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | I0912 23:02:00.976376   63646 retry.go:31] will retry after 454.623859ms: waiting for machine to come up
	I0912 23:02:01.433257   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:01.434024   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | unable to find current IP address of domain default-k8s-diff-port-702201 in network mk-default-k8s-diff-port-702201
	I0912 23:02:01.434056   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | I0912 23:02:01.433971   63646 retry.go:31] will retry after 726.658127ms: waiting for machine to come up
	I0912 23:02:02.162016   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:02.162562   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | unable to find current IP address of domain default-k8s-diff-port-702201 in network mk-default-k8s-diff-port-702201
	I0912 23:02:02.162592   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | I0912 23:02:02.162501   63646 retry.go:31] will retry after 756.903624ms: waiting for machine to come up
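The 61354 lines above are libmachine waiting for the restarted default-k8s-diff-port-702201 domain to obtain a DHCP lease on the mk-default-k8s-diff-port-702201 libvirt network, retrying with a growing backoff. The same information can be read straight from libvirt on the host; this assumes virsh is available and is purely illustrative:

    # list DHCP leases on the network the retry loop above is polling;
    # the loop ends once a lease for MAC 52:54:00:b4:fd:fb shows up
    virsh net-dhcp-leases mk-default-k8s-diff-port-702201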
	I0912 23:01:59.746112   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:00.246227   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:00.745742   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:01.245741   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:01.746355   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:02.245345   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:02.745752   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:03.246089   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:03.745811   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:04.245382   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
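The 62386 lines above poll roughly twice a second with pgrep until a kube-apiserver process whose command line mentions minikube appears on that node. The equivalent one-off check, illustrative only:

    # -f matches against the full command line, -x requires the pattern to match that line exactly,
    # -n picks the newest match; exit 0 means the apiserver process exists, exit 1 means keep waiting
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' && echo up || echo "not yet"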
	I0912 23:02:01.474545   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:03.975249   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
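Process 61904 above is looping on the Ready condition of the metrics-server pod, which stays "False" for this entire run. The condition it watches can also be read directly with kubectl; the pod name is taken from this log, while the kubeconfig context belongs to whichever profile 61904 is driving and is not shown in this excerpt:

    # print the Ready condition the wait loop above keeps reporting as "False"
    kubectl -n kube-system get pod metrics-server-6867b74b74-kvpqz \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'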
	I0912 23:02:03.307790   62943 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (2.164213632s)
	I0912 23:02:03.307822   62943 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I0912 23:02:03.307845   62943 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I0912 23:02:03.307869   62943 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3: (2.164220532s)
	I0912 23:02:03.307903   62943 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I0912 23:02:03.307906   62943 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I0912 23:02:03.307944   62943 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0: (2.164339277s)
	I0912 23:02:03.307963   62943 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0912 23:02:03.307999   62943 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.995423487s)
	I0912 23:02:03.308043   62943 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0912 23:02:03.308076   62943 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 23:02:03.308128   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:02:03.312883   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 23:02:05.481118   62943 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (2.173175236s)
	I0912 23:02:05.481159   62943 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I0912 23:02:05.481192   62943 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0912 23:02:05.481239   62943 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.168321222s)
	I0912 23:02:05.481245   62943 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0912 23:02:05.481303   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 23:02:05.516667   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 23:02:02.921557   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:02.922010   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | unable to find current IP address of domain default-k8s-diff-port-702201 in network mk-default-k8s-diff-port-702201
	I0912 23:02:02.922036   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | I0912 23:02:02.921968   63646 retry.go:31] will retry after 850.274218ms: waiting for machine to come up
	I0912 23:02:03.774125   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:03.774603   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | unable to find current IP address of domain default-k8s-diff-port-702201 in network mk-default-k8s-diff-port-702201
	I0912 23:02:03.774637   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | I0912 23:02:03.774549   63646 retry.go:31] will retry after 1.117484339s: waiting for machine to come up
	I0912 23:02:04.893960   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:04.894645   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | unable to find current IP address of domain default-k8s-diff-port-702201 in network mk-default-k8s-diff-port-702201
	I0912 23:02:04.894671   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | I0912 23:02:04.894572   63646 retry.go:31] will retry after 1.705444912s: waiting for machine to come up
	I0912 23:02:06.602765   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:06.603347   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | unable to find current IP address of domain default-k8s-diff-port-702201 in network mk-default-k8s-diff-port-702201
	I0912 23:02:06.603371   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | I0912 23:02:06.603270   63646 retry.go:31] will retry after 2.06008552s: waiting for machine to come up
	I0912 23:02:04.745649   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:05.245909   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:05.745777   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:06.245432   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:06.745472   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:07.245763   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:07.745416   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:08.245886   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:08.745493   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:09.246056   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:06.474009   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:08.474804   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:07.476441   62943 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (1.995147485s)
	I0912 23:02:07.476474   62943 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I0912 23:02:07.476497   62943 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0912 23:02:07.476545   62943 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0912 23:02:07.476556   62943 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.959857575s)
	I0912 23:02:07.476602   62943 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0912 23:02:07.476685   62943 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0912 23:02:09.332759   62943 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.856180957s)
	I0912 23:02:09.332804   62943 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I0912 23:02:09.332853   62943 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I0912 23:02:09.332762   62943 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.856053866s)
	I0912 23:02:09.332909   62943 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I0912 23:02:09.332947   62943 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0912 23:02:11.397888   62943 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.064939833s)
	I0912 23:02:11.397926   62943 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I0912 23:02:11.397954   62943 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0912 23:02:11.397992   62943 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0912 23:02:08.665520   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:08.666071   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | unable to find current IP address of domain default-k8s-diff-port-702201 in network mk-default-k8s-diff-port-702201
	I0912 23:02:08.666102   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | I0912 23:02:08.666014   63646 retry.go:31] will retry after 2.158544571s: waiting for machine to come up
	I0912 23:02:10.826850   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:10.827354   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | unable to find current IP address of domain default-k8s-diff-port-702201 in network mk-default-k8s-diff-port-702201
	I0912 23:02:10.827382   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | I0912 23:02:10.827290   63646 retry.go:31] will retry after 3.518596305s: waiting for machine to come up
	I0912 23:02:09.746171   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:10.246283   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:10.745675   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:11.245560   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:11.745384   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:12.245631   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:12.745749   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:13.245487   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:13.745849   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:14.245391   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:10.975044   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:13.473831   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:15.474321   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:14.664970   62943 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.266950326s)
	I0912 23:02:14.665018   62943 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0912 23:02:14.665063   62943 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0912 23:02:14.665138   62943 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0912 23:02:15.516503   62943 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19616-5891/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0912 23:02:15.516549   62943 cache_images.go:123] Successfully loaded all cached images
	I0912 23:02:15.516556   62943 cache_images.go:92] duration metric: took 15.294994067s to LoadCachedImages
	I0912 23:02:15.516574   62943 kubeadm.go:934] updating node { 192.168.50.253 8443 v1.31.1 crio true true} ...
	I0912 23:02:15.516716   62943 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-380092 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.253
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-380092 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0912 23:02:15.516811   62943 ssh_runner.go:195] Run: crio config
	I0912 23:02:15.570588   62943 cni.go:84] Creating CNI manager for ""
	I0912 23:02:15.570610   62943 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0912 23:02:15.570621   62943 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0912 23:02:15.570649   62943 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.253 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-380092 NodeName:no-preload-380092 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.253"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.253 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0912 23:02:15.570809   62943 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.253
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-380092"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.253
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.253"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
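The YAML above is the single file minikube hands to kubeadm: InitConfiguration and ClusterConfiguration for the control plane plus KubeletConfiguration and KubeProxyConfiguration, separated by "---". It is staged as /var/tmp/minikube/kubeadm.yaml.new and copied into place further down. One way to sanity-check such a file on the node, assuming this kubeadm release ships the config validate subcommand; not part of the test run:

    # ask kubeadm itself whether the generated config parses and validates
    sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" \
      kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new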
	I0912 23:02:15.570887   62943 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0912 23:02:15.581208   62943 binaries.go:44] Found k8s binaries, skipping transfer
	I0912 23:02:15.581272   62943 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0912 23:02:15.590463   62943 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0912 23:02:15.606240   62943 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0912 23:02:15.621579   62943 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0912 23:02:15.639566   62943 ssh_runner.go:195] Run: grep 192.168.50.253	control-plane.minikube.internal$ /etc/hosts
	I0912 23:02:15.643207   62943 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.253	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0912 23:02:15.654813   62943 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 23:02:15.767367   62943 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0912 23:02:15.784468   62943 certs.go:68] Setting up /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/no-preload-380092 for IP: 192.168.50.253
	I0912 23:02:15.784500   62943 certs.go:194] generating shared ca certs ...
	I0912 23:02:15.784523   62943 certs.go:226] acquiring lock for ca certs: {Name:mk5e19f2c2757edc3fcb6b43a39efee0885b349c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 23:02:15.784717   62943 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.key
	I0912 23:02:15.784811   62943 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.key
	I0912 23:02:15.784828   62943 certs.go:256] generating profile certs ...
	I0912 23:02:15.784946   62943 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/no-preload-380092/client.key
	I0912 23:02:15.785034   62943 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/no-preload-380092/apiserver.key.718f72e7
	I0912 23:02:15.785092   62943 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/no-preload-380092/proxy-client.key
	I0912 23:02:15.785295   62943 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083.pem (1338 bytes)
	W0912 23:02:15.785345   62943 certs.go:480] ignoring /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083_empty.pem, impossibly tiny 0 bytes
	I0912 23:02:15.785362   62943 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem (1679 bytes)
	I0912 23:02:15.785407   62943 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem (1082 bytes)
	I0912 23:02:15.785446   62943 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem (1123 bytes)
	I0912 23:02:15.785485   62943 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem (1679 bytes)
	I0912 23:02:15.785553   62943 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem (1708 bytes)
	I0912 23:02:15.786473   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0912 23:02:15.832614   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0912 23:02:15.867891   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0912 23:02:15.899262   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0912 23:02:15.930427   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/no-preload-380092/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0912 23:02:15.970193   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/no-preload-380092/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0912 23:02:15.995317   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/no-preload-380092/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0912 23:02:16.019282   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/no-preload-380092/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0912 23:02:16.042121   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem --> /usr/share/ca-certificates/130832.pem (1708 bytes)
	I0912 23:02:16.065744   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0912 23:02:16.088894   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083.pem --> /usr/share/ca-certificates/13083.pem (1338 bytes)
	I0912 23:02:16.111041   62943 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0912 23:02:16.127119   62943 ssh_runner.go:195] Run: openssl version
	I0912 23:02:16.132754   62943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130832.pem && ln -fs /usr/share/ca-certificates/130832.pem /etc/ssl/certs/130832.pem"
	I0912 23:02:16.142933   62943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130832.pem
	I0912 23:02:16.147311   62943 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 12 21:47 /usr/share/ca-certificates/130832.pem
	I0912 23:02:16.147367   62943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130832.pem
	I0912 23:02:16.152734   62943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130832.pem /etc/ssl/certs/3ec20f2e.0"
	I0912 23:02:16.163131   62943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0912 23:02:16.173390   62943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0912 23:02:16.177785   62943 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 12 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0912 23:02:16.177842   62943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0912 23:02:16.183047   62943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0912 23:02:16.192890   62943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13083.pem && ln -fs /usr/share/ca-certificates/13083.pem /etc/ssl/certs/13083.pem"
	I0912 23:02:16.202818   62943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13083.pem
	I0912 23:02:16.206815   62943 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 12 21:47 /usr/share/ca-certificates/13083.pem
	I0912 23:02:16.206871   62943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13083.pem
	I0912 23:02:16.212049   62943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13083.pem /etc/ssl/certs/51391683.0"
	I0912 23:02:16.222224   62943 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0912 23:02:16.226504   62943 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0912 23:02:16.232090   62943 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0912 23:02:16.237380   62943 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0912 23:02:16.243024   62943 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0912 23:02:16.248333   62943 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0912 23:02:16.258745   62943 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
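Each openssl call above answers one question about an existing certificate: will it still be valid 86400 seconds (24 hours) from now? The checkend option exits 0 when the certificate clears that window and non-zero when it is about to expire, which is the signal that it needs to be regenerated. Illustrative form of the same check:

    # exit 0: good for at least another 24h; non-zero: expiring within the window
    sudo openssl x509 -noout -checkend 86400 \
      -in /var/lib/minikube/certs/apiserver-kubelet-client.crt && echo ok || echo "renew needed"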
	I0912 23:02:16.274068   62943 kubeadm.go:392] StartCluster: {Name:no-preload-380092 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-380092 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.253 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 23:02:16.274168   62943 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0912 23:02:16.274216   62943 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0912 23:02:16.323688   62943 cri.go:89] found id: ""
	I0912 23:02:16.323751   62943 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0912 23:02:16.335130   62943 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0912 23:02:16.335152   62943 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0912 23:02:16.335192   62943 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0912 23:02:16.346285   62943 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0912 23:02:16.347271   62943 kubeconfig.go:125] found "no-preload-380092" server: "https://192.168.50.253:8443"
	I0912 23:02:16.349217   62943 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0912 23:02:16.360266   62943 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.253
	I0912 23:02:16.360308   62943 kubeadm.go:1160] stopping kube-system containers ...
	I0912 23:02:16.360319   62943 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0912 23:02:16.360361   62943 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0912 23:02:16.398876   62943 cri.go:89] found id: ""
	I0912 23:02:16.398942   62943 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0912 23:02:16.418893   62943 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0912 23:02:16.430531   62943 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0912 23:02:16.430558   62943 kubeadm.go:157] found existing configuration files:
	
	I0912 23:02:16.430602   62943 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0912 23:02:16.441036   62943 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0912 23:02:16.441093   62943 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0912 23:02:16.452768   62943 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0912 23:02:16.463317   62943 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0912 23:02:16.463394   62943 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0912 23:02:16.473412   62943 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0912 23:02:16.482470   62943 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0912 23:02:16.482530   62943 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0912 23:02:16.494488   62943 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0912 23:02:16.503873   62943 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0912 23:02:16.503955   62943 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0912 23:02:16.513052   62943 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0912 23:02:16.522738   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:02:16.630286   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
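Because the ls -la check earlier in this burst found admin.conf, kubelet.conf, controller-manager.conf and scheduler.conf all missing, the restart path replays individual kubeadm init phases instead of a full init: "certs all" regenerates any missing certificates and "kubeconfig all" rewrites the four kubeconfig files, both driven by the kubeadm.yaml staged above. A follow-up check one could add by hand, illustrative only:

    # after the two phases above, the kubeconfigs that were missing should exist again
    sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf \
      /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf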
	I0912 23:02:14.347758   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:14.348342   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | unable to find current IP address of domain default-k8s-diff-port-702201 in network mk-default-k8s-diff-port-702201
	I0912 23:02:14.348365   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | I0912 23:02:14.348276   63646 retry.go:31] will retry after 2.993143621s: waiting for machine to come up
	I0912 23:02:14.745599   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:15.245719   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:15.745787   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:16.245959   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:16.746271   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:17.245414   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:17.745343   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:18.246080   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:18.746025   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:19.245751   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:17.343758   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.344408   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Found IP for machine: 192.168.39.214
	I0912 23:02:17.344443   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has current primary IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.344453   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Reserving static IP address...
	I0912 23:02:17.344817   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Reserved static IP address: 192.168.39.214
	I0912 23:02:17.344848   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-702201", mac: "52:54:00:b4:fd:fb", ip: "192.168.39.214"} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:02:17.344857   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Waiting for SSH to be available...
	I0912 23:02:17.344886   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | skip adding static IP to network mk-default-k8s-diff-port-702201 - found existing host DHCP lease matching {name: "default-k8s-diff-port-702201", mac: "52:54:00:b4:fd:fb", ip: "192.168.39.214"}
	I0912 23:02:17.344903   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | Getting to WaitForSSH function...
	I0912 23:02:17.347627   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.348094   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:02:17.348128   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.348236   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | Using SSH client type: external
	I0912 23:02:17.348296   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | Using SSH private key: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/default-k8s-diff-port-702201/id_rsa (-rw-------)
	I0912 23:02:17.348330   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.214 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19616-5891/.minikube/machines/default-k8s-diff-port-702201/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0912 23:02:17.348353   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | About to run SSH command:
	I0912 23:02:17.348363   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | exit 0
	I0912 23:02:17.474375   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | SSH cmd err, output: <nil>: 
	I0912 23:02:17.474757   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetConfigRaw
	I0912 23:02:17.475391   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetIP
	I0912 23:02:17.478041   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.478557   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:02:17.478590   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.478791   61354 profile.go:143] Saving config to /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/default-k8s-diff-port-702201/config.json ...
	I0912 23:02:17.479064   61354 machine.go:93] provisionDockerMachine start ...
	I0912 23:02:17.479087   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .DriverName
	I0912 23:02:17.479317   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHHostname
	I0912 23:02:17.482167   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.482584   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:02:17.482616   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.482805   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHPort
	I0912 23:02:17.482996   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHKeyPath
	I0912 23:02:17.483163   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHKeyPath
	I0912 23:02:17.483287   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHUsername
	I0912 23:02:17.483443   61354 main.go:141] libmachine: Using SSH client type: native
	I0912 23:02:17.483653   61354 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0912 23:02:17.483669   61354 main.go:141] libmachine: About to run SSH command:
	hostname
	I0912 23:02:17.590238   61354 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0912 23:02:17.590267   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetMachineName
	I0912 23:02:17.590549   61354 buildroot.go:166] provisioning hostname "default-k8s-diff-port-702201"
	I0912 23:02:17.590588   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetMachineName
	I0912 23:02:17.590766   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHHostname
	I0912 23:02:17.593804   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.594267   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:02:17.594320   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.594542   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHPort
	I0912 23:02:17.594761   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHKeyPath
	I0912 23:02:17.594956   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHKeyPath
	I0912 23:02:17.595111   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHUsername
	I0912 23:02:17.595333   61354 main.go:141] libmachine: Using SSH client type: native
	I0912 23:02:17.595575   61354 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0912 23:02:17.595591   61354 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-702201 && echo "default-k8s-diff-port-702201" | sudo tee /etc/hostname
	I0912 23:02:17.720928   61354 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-702201
	
	I0912 23:02:17.720961   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHHostname
	I0912 23:02:17.724174   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.724499   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:02:17.724522   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.724682   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHPort
	I0912 23:02:17.724847   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHKeyPath
	I0912 23:02:17.725026   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHKeyPath
	I0912 23:02:17.725199   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHUsername
	I0912 23:02:17.725350   61354 main.go:141] libmachine: Using SSH client type: native
	I0912 23:02:17.725528   61354 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0912 23:02:17.725550   61354 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-702201' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-702201/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-702201' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0912 23:02:17.842216   61354 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0912 23:02:17.842250   61354 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19616-5891/.minikube CaCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19616-5891/.minikube}
	I0912 23:02:17.842274   61354 buildroot.go:174] setting up certificates
	I0912 23:02:17.842289   61354 provision.go:84] configureAuth start
	I0912 23:02:17.842306   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetMachineName
	I0912 23:02:17.842597   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetIP
	I0912 23:02:17.845935   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.846372   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:02:17.846401   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.846546   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHHostname
	I0912 23:02:17.849376   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.849937   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:02:17.849971   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.850152   61354 provision.go:143] copyHostCerts
	I0912 23:02:17.850232   61354 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem, removing ...
	I0912 23:02:17.850253   61354 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem
	I0912 23:02:17.850356   61354 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/key.pem (1679 bytes)
	I0912 23:02:17.850448   61354 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem, removing ...
	I0912 23:02:17.850457   61354 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem
	I0912 23:02:17.850477   61354 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/ca.pem (1082 bytes)
	I0912 23:02:17.850529   61354 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem, removing ...
	I0912 23:02:17.850537   61354 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem
	I0912 23:02:17.850555   61354 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19616-5891/.minikube/cert.pem (1123 bytes)
	I0912 23:02:17.850601   61354 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-702201 san=[127.0.0.1 192.168.39.214 default-k8s-diff-port-702201 localhost minikube]
	I0912 23:02:17.911340   61354 provision.go:177] copyRemoteCerts
	I0912 23:02:17.911392   61354 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0912 23:02:17.911413   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHHostname
	I0912 23:02:17.914514   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.914937   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:02:17.914969   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:17.915250   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHPort
	I0912 23:02:17.915449   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHKeyPath
	I0912 23:02:17.915648   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHUsername
	I0912 23:02:17.915800   61354 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/default-k8s-diff-port-702201/id_rsa Username:docker}
	I0912 23:02:18.003351   61354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0912 23:02:18.032117   61354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0912 23:02:18.057665   61354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0912 23:02:18.084003   61354 provision.go:87] duration metric: took 241.697336ms to configureAuth
	I0912 23:02:18.084043   61354 buildroot.go:189] setting minikube options for container-runtime
	I0912 23:02:18.084256   61354 config.go:182] Loaded profile config "default-k8s-diff-port-702201": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 23:02:18.084379   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHHostname
	I0912 23:02:18.087408   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:18.087786   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:02:18.087813   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:18.088070   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHPort
	I0912 23:02:18.088263   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHKeyPath
	I0912 23:02:18.088441   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHKeyPath
	I0912 23:02:18.088576   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHUsername
	I0912 23:02:18.088706   61354 main.go:141] libmachine: Using SSH client type: native
	I0912 23:02:18.088874   61354 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0912 23:02:18.088893   61354 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0912 23:02:18.308716   61354 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0912 23:02:18.308743   61354 machine.go:96] duration metric: took 829.664034ms to provisionDockerMachine
	I0912 23:02:18.308753   61354 start.go:293] postStartSetup for "default-k8s-diff-port-702201" (driver="kvm2")
	I0912 23:02:18.308765   61354 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0912 23:02:18.308780   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .DriverName
	I0912 23:02:18.309119   61354 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0912 23:02:18.309156   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHHostname
	I0912 23:02:18.311782   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:18.312112   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:02:18.312138   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:18.312258   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHPort
	I0912 23:02:18.312429   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHKeyPath
	I0912 23:02:18.312562   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHUsername
	I0912 23:02:18.312686   61354 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/default-k8s-diff-port-702201/id_rsa Username:docker}
	I0912 23:02:18.400164   61354 ssh_runner.go:195] Run: cat /etc/os-release
	I0912 23:02:18.404437   61354 info.go:137] Remote host: Buildroot 2023.02.9
	I0912 23:02:18.404465   61354 filesync.go:126] Scanning /home/jenkins/minikube-integration/19616-5891/.minikube/addons for local assets ...
	I0912 23:02:18.404539   61354 filesync.go:126] Scanning /home/jenkins/minikube-integration/19616-5891/.minikube/files for local assets ...
	I0912 23:02:18.404634   61354 filesync.go:149] local asset: /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem -> 130832.pem in /etc/ssl/certs
	I0912 23:02:18.404748   61354 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0912 23:02:18.414148   61354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem --> /etc/ssl/certs/130832.pem (1708 bytes)
	I0912 23:02:18.438745   61354 start.go:296] duration metric: took 129.977307ms for postStartSetup
	I0912 23:02:18.438815   61354 fix.go:56] duration metric: took 19.740295621s for fixHost
	I0912 23:02:18.438839   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHHostname
	I0912 23:02:18.441655   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:18.442034   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:02:18.442063   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:18.442229   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHPort
	I0912 23:02:18.442424   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHKeyPath
	I0912 23:02:18.442637   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHKeyPath
	I0912 23:02:18.442782   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHUsername
	I0912 23:02:18.442983   61354 main.go:141] libmachine: Using SSH client type: native
	I0912 23:02:18.443140   61354 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0912 23:02:18.443150   61354 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0912 23:02:18.550399   61354 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726182138.510495585
	
	I0912 23:02:18.550429   61354 fix.go:216] guest clock: 1726182138.510495585
	I0912 23:02:18.550460   61354 fix.go:229] Guest: 2024-09-12 23:02:18.510495585 +0000 UTC Remote: 2024-09-12 23:02:18.438824041 +0000 UTC m=+356.198385709 (delta=71.671544ms)
	I0912 23:02:18.550493   61354 fix.go:200] guest clock delta is within tolerance: 71.671544ms
	I0912 23:02:18.550501   61354 start.go:83] releasing machines lock for "default-k8s-diff-port-702201", held for 19.852037366s
	I0912 23:02:18.550549   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .DriverName
	I0912 23:02:18.550842   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetIP
	I0912 23:02:18.553957   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:18.554416   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:02:18.554450   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:18.554624   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .DriverName
	I0912 23:02:18.555224   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .DriverName
	I0912 23:02:18.555446   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .DriverName
	I0912 23:02:18.555554   61354 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0912 23:02:18.555597   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHHostname
	I0912 23:02:18.555718   61354 ssh_runner.go:195] Run: cat /version.json
	I0912 23:02:18.555753   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHHostname
	I0912 23:02:18.558797   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:18.558822   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:18.559205   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:02:18.559236   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:18.559283   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:02:18.559300   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:18.559532   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHPort
	I0912 23:02:18.559538   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHPort
	I0912 23:02:18.559735   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHKeyPath
	I0912 23:02:18.559736   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHKeyPath
	I0912 23:02:18.559921   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHUsername
	I0912 23:02:18.560042   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHUsername
	I0912 23:02:18.560109   61354 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/default-k8s-diff-port-702201/id_rsa Username:docker}
	I0912 23:02:18.560199   61354 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/default-k8s-diff-port-702201/id_rsa Username:docker}
	I0912 23:02:18.672716   61354 ssh_runner.go:195] Run: systemctl --version
	I0912 23:02:18.681305   61354 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0912 23:02:18.833032   61354 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0912 23:02:18.838723   61354 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0912 23:02:18.838800   61354 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0912 23:02:18.854769   61354 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0912 23:02:18.854796   61354 start.go:495] detecting cgroup driver to use...
	I0912 23:02:18.854867   61354 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0912 23:02:18.872157   61354 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0912 23:02:18.887144   61354 docker.go:217] disabling cri-docker service (if available) ...
	I0912 23:02:18.887199   61354 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0912 23:02:18.901811   61354 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0912 23:02:18.920495   61354 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0912 23:02:19.060252   61354 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0912 23:02:19.211418   61354 docker.go:233] disabling docker service ...
	I0912 23:02:19.211492   61354 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0912 23:02:19.226829   61354 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0912 23:02:19.240390   61354 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0912 23:02:19.398676   61354 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0912 23:02:19.539078   61354 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0912 23:02:19.552847   61354 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0912 23:02:19.574121   61354 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0912 23:02:19.574198   61354 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:02:19.585231   61354 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0912 23:02:19.585298   61354 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:02:19.596560   61354 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:02:19.606732   61354 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:02:19.620125   61354 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0912 23:02:19.635153   61354 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:02:19.648779   61354 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:02:19.666387   61354 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 23:02:19.680339   61354 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0912 23:02:19.693115   61354 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0912 23:02:19.693193   61354 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0912 23:02:19.710075   61354 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0912 23:02:19.722305   61354 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 23:02:19.855658   61354 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0912 23:02:19.958871   61354 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0912 23:02:19.958934   61354 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0912 23:02:19.964103   61354 start.go:563] Will wait 60s for crictl version
	I0912 23:02:19.964174   61354 ssh_runner.go:195] Run: which crictl
	I0912 23:02:19.968265   61354 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0912 23:02:20.006530   61354 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0912 23:02:20.006608   61354 ssh_runner.go:195] Run: crio --version
	I0912 23:02:20.034570   61354 ssh_runner.go:195] Run: crio --version
	I0912 23:02:20.065312   61354 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0912 23:02:17.474542   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:19.975107   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:17.616860   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:02:17.845456   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:02:17.916359   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:02:18.000828   62943 api_server.go:52] waiting for apiserver process to appear ...
	I0912 23:02:18.000924   62943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:18.501381   62943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:19.001136   62943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:19.017346   62943 api_server.go:72] duration metric: took 1.016512434s to wait for apiserver process to appear ...
	I0912 23:02:19.017382   62943 api_server.go:88] waiting for apiserver healthz status ...
	I0912 23:02:19.017453   62943 api_server.go:253] Checking apiserver healthz at https://192.168.50.253:8443/healthz ...
	I0912 23:02:20.066529   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetIP
	I0912 23:02:20.069310   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:20.069719   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:02:20.069748   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:02:20.070001   61354 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0912 23:02:20.074059   61354 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0912 23:02:20.085892   61354 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-702201 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-702201 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.214 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0912 23:02:20.086016   61354 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0912 23:02:20.086054   61354 ssh_runner.go:195] Run: sudo crictl images --output json
	I0912 23:02:20.130495   61354 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0912 23:02:20.130570   61354 ssh_runner.go:195] Run: which lz4
	I0912 23:02:20.134677   61354 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0912 23:02:20.138918   61354 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0912 23:02:20.138956   61354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0912 23:02:21.380259   61354 crio.go:462] duration metric: took 1.245620408s to copy over tarball
	I0912 23:02:21.380357   61354 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0912 23:02:19.745707   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:20.246273   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:20.746109   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:21.246160   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:21.745863   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:22.245390   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:22.745716   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:23.245475   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:23.746069   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:24.245487   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:22.474250   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:24.974136   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:24.018305   62943 api_server.go:269] stopped: https://192.168.50.253:8443/healthz: Get "https://192.168.50.253:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 23:02:24.018354   62943 api_server.go:253] Checking apiserver healthz at https://192.168.50.253:8443/healthz ...
	I0912 23:02:23.453059   61354 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.072658804s)
	I0912 23:02:23.453094   61354 crio.go:469] duration metric: took 2.072807363s to extract the tarball
	I0912 23:02:23.453102   61354 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0912 23:02:23.492566   61354 ssh_runner.go:195] Run: sudo crictl images --output json
	I0912 23:02:23.535129   61354 crio.go:514] all images are preloaded for cri-o runtime.
	I0912 23:02:23.535152   61354 cache_images.go:84] Images are preloaded, skipping loading
	I0912 23:02:23.535160   61354 kubeadm.go:934] updating node { 192.168.39.214 8444 v1.31.1 crio true true} ...
	I0912 23:02:23.535251   61354 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-702201 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.214
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-702201 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0912 23:02:23.535311   61354 ssh_runner.go:195] Run: crio config
	I0912 23:02:23.586110   61354 cni.go:84] Creating CNI manager for ""
	I0912 23:02:23.586128   61354 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0912 23:02:23.586137   61354 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0912 23:02:23.586156   61354 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.214 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-702201 NodeName:default-k8s-diff-port-702201 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.214"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.214 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0912 23:02:23.586280   61354 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.214
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-702201"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.214
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.214"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0912 23:02:23.586337   61354 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0912 23:02:23.595675   61354 binaries.go:44] Found k8s binaries, skipping transfer
	I0912 23:02:23.595744   61354 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0912 23:02:23.605126   61354 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0912 23:02:23.621542   61354 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0912 23:02:23.637919   61354 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0912 23:02:23.654869   61354 ssh_runner.go:195] Run: grep 192.168.39.214	control-plane.minikube.internal$ /etc/hosts
	I0912 23:02:23.658860   61354 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.214	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0912 23:02:23.670648   61354 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 23:02:23.787949   61354 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0912 23:02:23.804668   61354 certs.go:68] Setting up /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/default-k8s-diff-port-702201 for IP: 192.168.39.214
	I0912 23:02:23.804697   61354 certs.go:194] generating shared ca certs ...
	I0912 23:02:23.804718   61354 certs.go:226] acquiring lock for ca certs: {Name:mk5e19f2c2757edc3fcb6b43a39efee0885b349c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 23:02:23.804937   61354 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.key
	I0912 23:02:23.804998   61354 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.key
	I0912 23:02:23.805012   61354 certs.go:256] generating profile certs ...
	I0912 23:02:23.805110   61354 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/default-k8s-diff-port-702201/client.key
	I0912 23:02:23.805184   61354 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/default-k8s-diff-port-702201/apiserver.key.9ca3177b
	I0912 23:02:23.805231   61354 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/default-k8s-diff-port-702201/proxy-client.key
	I0912 23:02:23.805379   61354 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083.pem (1338 bytes)
	W0912 23:02:23.805411   61354 certs.go:480] ignoring /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083_empty.pem, impossibly tiny 0 bytes
	I0912 23:02:23.805420   61354 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca-key.pem (1679 bytes)
	I0912 23:02:23.805449   61354 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/ca.pem (1082 bytes)
	I0912 23:02:23.805480   61354 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/cert.pem (1123 bytes)
	I0912 23:02:23.805519   61354 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/certs/key.pem (1679 bytes)
	I0912 23:02:23.805574   61354 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem (1708 bytes)
	I0912 23:02:23.806196   61354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0912 23:02:23.834789   61354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0912 23:02:23.863030   61354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0912 23:02:23.890538   61354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0912 23:02:23.923946   61354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/default-k8s-diff-port-702201/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0912 23:02:23.952990   61354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/default-k8s-diff-port-702201/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0912 23:02:23.984025   61354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/default-k8s-diff-port-702201/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0912 23:02:24.013727   61354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/default-k8s-diff-port-702201/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0912 23:02:24.038060   61354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/ssl/certs/130832.pem --> /usr/share/ca-certificates/130832.pem (1708 bytes)
	I0912 23:02:24.061285   61354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0912 23:02:24.085128   61354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-5891/.minikube/certs/13083.pem --> /usr/share/ca-certificates/13083.pem (1338 bytes)
	I0912 23:02:24.110174   61354 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0912 23:02:24.127185   61354 ssh_runner.go:195] Run: openssl version
	I0912 23:02:24.133215   61354 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0912 23:02:24.144390   61354 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0912 23:02:24.149357   61354 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 12 21:30 /usr/share/ca-certificates/minikubeCA.pem
	I0912 23:02:24.149432   61354 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0912 23:02:24.155228   61354 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0912 23:02:24.167254   61354 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13083.pem && ln -fs /usr/share/ca-certificates/13083.pem /etc/ssl/certs/13083.pem"
	I0912 23:02:24.178264   61354 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13083.pem
	I0912 23:02:24.183163   61354 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 12 21:47 /usr/share/ca-certificates/13083.pem
	I0912 23:02:24.183216   61354 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13083.pem
	I0912 23:02:24.188891   61354 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13083.pem /etc/ssl/certs/51391683.0"
	I0912 23:02:24.199682   61354 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130832.pem && ln -fs /usr/share/ca-certificates/130832.pem /etc/ssl/certs/130832.pem"
	I0912 23:02:24.210810   61354 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130832.pem
	I0912 23:02:24.215244   61354 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 12 21:47 /usr/share/ca-certificates/130832.pem
	I0912 23:02:24.215321   61354 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130832.pem
	I0912 23:02:24.221160   61354 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130832.pem /etc/ssl/certs/3ec20f2e.0"
	I0912 23:02:24.232246   61354 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0912 23:02:24.236796   61354 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0912 23:02:24.243930   61354 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0912 23:02:24.250402   61354 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0912 23:02:24.256470   61354 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0912 23:02:24.262495   61354 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0912 23:02:24.268433   61354 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0912 23:02:24.274410   61354 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-702201 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-702201 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.214 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 23:02:24.274499   61354 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0912 23:02:24.274574   61354 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0912 23:02:24.315011   61354 cri.go:89] found id: ""
	I0912 23:02:24.315073   61354 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0912 23:02:24.325319   61354 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0912 23:02:24.325341   61354 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0912 23:02:24.325384   61354 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0912 23:02:24.335529   61354 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0912 23:02:24.336936   61354 kubeconfig.go:125] found "default-k8s-diff-port-702201" server: "https://192.168.39.214:8444"
	I0912 23:02:24.340116   61354 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0912 23:02:24.350831   61354 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.214
	I0912 23:02:24.350869   61354 kubeadm.go:1160] stopping kube-system containers ...
	I0912 23:02:24.350883   61354 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0912 23:02:24.350974   61354 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0912 23:02:24.393329   61354 cri.go:89] found id: ""
	I0912 23:02:24.393405   61354 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0912 23:02:24.410979   61354 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0912 23:02:24.423185   61354 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0912 23:02:24.423201   61354 kubeadm.go:157] found existing configuration files:
	
	I0912 23:02:24.423243   61354 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0912 23:02:24.434365   61354 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0912 23:02:24.434424   61354 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0912 23:02:24.444193   61354 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0912 23:02:24.453990   61354 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0912 23:02:24.454047   61354 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0912 23:02:24.464493   61354 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0912 23:02:24.475213   61354 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0912 23:02:24.475290   61354 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0912 23:02:24.484665   61354 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0912 23:02:24.493882   61354 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0912 23:02:24.493943   61354 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0912 23:02:24.503337   61354 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0912 23:02:24.513303   61354 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:02:24.620334   61354 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:02:25.379199   61354 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:02:25.605374   61354 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:02:25.689838   61354 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:02:25.787873   61354 api_server.go:52] waiting for apiserver process to appear ...
	I0912 23:02:25.787952   61354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:26.288869   61354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:26.788863   61354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:24.746085   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:25.245836   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:25.745805   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:26.246312   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:26.745772   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:27.245309   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:27.745530   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:28.245792   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:28.745917   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:29.245542   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:27.474741   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:29.974093   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:29.019453   62943 api_server.go:269] stopped: https://192.168.50.253:8443/healthz: Get "https://192.168.50.253:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 23:02:29.019501   62943 api_server.go:253] Checking apiserver healthz at https://192.168.50.253:8443/healthz ...
	I0912 23:02:27.288650   61354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:27.788577   61354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:27.803146   61354 api_server.go:72] duration metric: took 2.015269708s to wait for apiserver process to appear ...
	I0912 23:02:27.803175   61354 api_server.go:88] waiting for apiserver healthz status ...
	I0912 23:02:27.803196   61354 api_server.go:253] Checking apiserver healthz at https://192.168.39.214:8444/healthz ...
	I0912 23:02:27.803838   61354 api_server.go:269] stopped: https://192.168.39.214:8444/healthz: Get "https://192.168.39.214:8444/healthz": dial tcp 192.168.39.214:8444: connect: connection refused
	I0912 23:02:28.304001   61354 api_server.go:253] Checking apiserver healthz at https://192.168.39.214:8444/healthz ...
	I0912 23:02:30.918251   61354 api_server.go:279] https://192.168.39.214:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0912 23:02:30.918285   61354 api_server.go:103] status: https://192.168.39.214:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0912 23:02:30.918300   61354 api_server.go:253] Checking apiserver healthz at https://192.168.39.214:8444/healthz ...
	I0912 23:02:30.985245   61354 api_server.go:279] https://192.168.39.214:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0912 23:02:30.985276   61354 api_server.go:103] status: https://192.168.39.214:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0912 23:02:31.303790   61354 api_server.go:253] Checking apiserver healthz at https://192.168.39.214:8444/healthz ...
	I0912 23:02:31.309221   61354 api_server.go:279] https://192.168.39.214:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0912 23:02:31.309255   61354 api_server.go:103] status: https://192.168.39.214:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0912 23:02:31.803907   61354 api_server.go:253] Checking apiserver healthz at https://192.168.39.214:8444/healthz ...
	I0912 23:02:31.808683   61354 api_server.go:279] https://192.168.39.214:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0912 23:02:31.808708   61354 api_server.go:103] status: https://192.168.39.214:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0912 23:02:32.303720   61354 api_server.go:253] Checking apiserver healthz at https://192.168.39.214:8444/healthz ...
	I0912 23:02:32.309378   61354 api_server.go:279] https://192.168.39.214:8444/healthz returned 200:
	ok
	I0912 23:02:32.318177   61354 api_server.go:141] control plane version: v1.31.1
	I0912 23:02:32.318207   61354 api_server.go:131] duration metric: took 4.515025163s to wait for apiserver health ...
	I0912 23:02:32.318217   61354 cni.go:84] Creating CNI manager for ""
	I0912 23:02:32.318225   61354 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0912 23:02:32.319660   61354 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0912 23:02:29.746186   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:30.245501   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:30.745636   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:31.245440   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:31.745457   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:32.246318   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:32.745369   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:33.246152   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:33.746183   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:34.245452   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:31.974622   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:34.473549   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:34.019784   62943 api_server.go:269] stopped: https://192.168.50.253:8443/healthz: Get "https://192.168.50.253:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 23:02:34.019838   62943 api_server.go:253] Checking apiserver healthz at https://192.168.50.253:8443/healthz ...
	I0912 23:02:32.320695   61354 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0912 23:02:32.338749   61354 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0912 23:02:32.369921   61354 system_pods.go:43] waiting for kube-system pods to appear ...
	I0912 23:02:32.385934   61354 system_pods.go:59] 8 kube-system pods found
	I0912 23:02:32.385966   61354 system_pods.go:61] "coredns-7c65d6cfc9-ffms7" [d341bfb6-115b-4a9b-8ee5-ac0f6e0cf97a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0912 23:02:32.385986   61354 system_pods.go:61] "etcd-default-k8s-diff-port-702201" [c0c55fa9-3c65-4299-a1bb-59a55585a525] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0912 23:02:32.385996   61354 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-702201" [bf79734c-4cbc-4924-9358-f0196b357303] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0912 23:02:32.386007   61354 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-702201" [92a6ae59-ae75-4c08-a7dc-a77841be564b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0912 23:02:32.386019   61354 system_pods.go:61] "kube-proxy-x8hg2" [ef603b08-213d-4edb-85e6-e8b91f8fbbba] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0912 23:02:32.386027   61354 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-702201" [10021400-9446-46f6-aff0-e3eb3c0be96a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0912 23:02:32.386041   61354 system_pods.go:61] "metrics-server-6867b74b74-q5vlk" [d6719976-8c0c-444f-a1ea-dd3bdb0d5707] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0912 23:02:32.386051   61354 system_pods.go:61] "storage-provisioner" [6fdb298d-7e96-4cbb-b755-d866514e44b9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0912 23:02:32.386063   61354 system_pods.go:74] duration metric: took 16.120876ms to wait for pod list to return data ...
	I0912 23:02:32.386074   61354 node_conditions.go:102] verifying NodePressure condition ...
	I0912 23:02:32.391917   61354 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0912 23:02:32.391949   61354 node_conditions.go:123] node cpu capacity is 2
	I0912 23:02:32.391961   61354 node_conditions.go:105] duration metric: took 5.88075ms to run NodePressure ...
	I0912 23:02:32.391981   61354 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:02:32.671906   61354 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0912 23:02:32.677468   61354 kubeadm.go:739] kubelet initialised
	I0912 23:02:32.677494   61354 kubeadm.go:740] duration metric: took 5.561384ms waiting for restarted kubelet to initialise ...
	I0912 23:02:32.677503   61354 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 23:02:32.682823   61354 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-ffms7" in "kube-system" namespace to be "Ready" ...
	I0912 23:02:34.689536   61354 pod_ready.go:103] pod "coredns-7c65d6cfc9-ffms7" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:36.689748   61354 pod_ready.go:103] pod "coredns-7c65d6cfc9-ffms7" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:34.746241   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:35.246108   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:35.746087   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:36.245732   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:36.745659   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:37.245760   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:37.746137   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:38.245355   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:38.745905   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:39.246196   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:36.976523   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:39.473513   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:39.020907   62943 api_server.go:269] stopped: https://192.168.50.253:8443/healthz: Get "https://192.168.50.253:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 23:02:39.020949   62943 api_server.go:253] Checking apiserver healthz at https://192.168.50.253:8443/healthz ...
	I0912 23:02:39.398775   62943 api_server.go:269] stopped: https://192.168.50.253:8443/healthz: Get "https://192.168.50.253:8443/healthz": read tcp 192.168.50.1:34338->192.168.50.253:8443: read: connection reset by peer
	I0912 23:02:39.518000   62943 api_server.go:253] Checking apiserver healthz at https://192.168.50.253:8443/healthz ...
	I0912 23:02:39.518572   62943 api_server.go:269] stopped: https://192.168.50.253:8443/healthz: Get "https://192.168.50.253:8443/healthz": dial tcp 192.168.50.253:8443: connect: connection refused
	I0912 23:02:40.018526   62943 api_server.go:253] Checking apiserver healthz at https://192.168.50.253:8443/healthz ...
	I0912 23:02:40.019085   62943 api_server.go:269] stopped: https://192.168.50.253:8443/healthz: Get "https://192.168.50.253:8443/healthz": dial tcp 192.168.50.253:8443: connect: connection refused
	I0912 23:02:40.518456   62943 api_server.go:253] Checking apiserver healthz at https://192.168.50.253:8443/healthz ...
	I0912 23:02:37.692070   61354 pod_ready.go:93] pod "coredns-7c65d6cfc9-ffms7" in "kube-system" namespace has status "Ready":"True"
	I0912 23:02:37.692105   61354 pod_ready.go:82] duration metric: took 5.009256797s for pod "coredns-7c65d6cfc9-ffms7" in "kube-system" namespace to be "Ready" ...
	I0912 23:02:37.692119   61354 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-702201" in "kube-system" namespace to be "Ready" ...
	I0912 23:02:39.703004   61354 pod_ready.go:93] pod "etcd-default-k8s-diff-port-702201" in "kube-system" namespace has status "Ready":"True"
	I0912 23:02:39.703029   61354 pod_ready.go:82] duration metric: took 2.010902876s for pod "etcd-default-k8s-diff-port-702201" in "kube-system" namespace to be "Ready" ...
	I0912 23:02:39.703038   61354 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-702201" in "kube-system" namespace to be "Ready" ...
	I0912 23:02:41.709956   61354 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-702201" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:39.745643   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:40.245485   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:40.745582   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:41.245599   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:41.746339   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:42.246155   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:42.746334   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:43.245368   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:43.745371   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:44.246050   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:41.473779   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:43.475011   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:45.519472   62943 api_server.go:269] stopped: https://192.168.50.253:8443/healthz: Get "https://192.168.50.253:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 23:02:45.519513   62943 api_server.go:253] Checking apiserver healthz at https://192.168.50.253:8443/healthz ...
	I0912 23:02:44.210871   61354 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-702201" in "kube-system" namespace has status "Ready":"True"
	I0912 23:02:44.210896   61354 pod_ready.go:82] duration metric: took 4.507851295s for pod "kube-apiserver-default-k8s-diff-port-702201" in "kube-system" namespace to be "Ready" ...
	I0912 23:02:44.210905   61354 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-702201" in "kube-system" namespace to be "Ready" ...
	I0912 23:02:44.216677   61354 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-702201" in "kube-system" namespace has status "Ready":"True"
	I0912 23:02:44.216698   61354 pod_ready.go:82] duration metric: took 5.785493ms for pod "kube-controller-manager-default-k8s-diff-port-702201" in "kube-system" namespace to be "Ready" ...
	I0912 23:02:44.216708   61354 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-x8hg2" in "kube-system" namespace to be "Ready" ...
	I0912 23:02:44.220720   61354 pod_ready.go:93] pod "kube-proxy-x8hg2" in "kube-system" namespace has status "Ready":"True"
	I0912 23:02:44.220744   61354 pod_ready.go:82] duration metric: took 4.031371ms for pod "kube-proxy-x8hg2" in "kube-system" namespace to be "Ready" ...
	I0912 23:02:44.220753   61354 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-702201" in "kube-system" namespace to be "Ready" ...
	I0912 23:02:45.727199   61354 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-702201" in "kube-system" namespace has status "Ready":"True"
	I0912 23:02:45.727226   61354 pod_ready.go:82] duration metric: took 1.506465715s for pod "kube-scheduler-default-k8s-diff-port-702201" in "kube-system" namespace to be "Ready" ...
	I0912 23:02:45.727238   61354 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace to be "Ready" ...
	I0912 23:02:44.746354   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:45.245964   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:45.745631   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:46.246314   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:46.745483   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:47.245554   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:47.746311   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:48.246160   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:48.745999   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:49.246000   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:02:49.246093   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:02:49.286022   62386 cri.go:89] found id: ""
	I0912 23:02:49.286052   62386 logs.go:276] 0 containers: []
	W0912 23:02:49.286063   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:02:49.286070   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:02:49.286121   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:02:49.320469   62386 cri.go:89] found id: ""
	I0912 23:02:49.320508   62386 logs.go:276] 0 containers: []
	W0912 23:02:49.320527   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:02:49.320535   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:02:49.320635   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:02:45.973431   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:47.973882   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:49.974075   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:50.520522   62943 api_server.go:269] stopped: https://192.168.50.253:8443/healthz: Get "https://192.168.50.253:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 23:02:50.520570   62943 api_server.go:253] Checking apiserver healthz at https://192.168.50.253:8443/healthz ...
	I0912 23:02:47.732861   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:49.735642   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:52.232946   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:49.355651   62386 cri.go:89] found id: ""
	I0912 23:02:49.355682   62386 logs.go:276] 0 containers: []
	W0912 23:02:49.355694   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:02:49.355702   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:02:49.355757   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:02:49.387928   62386 cri.go:89] found id: ""
	I0912 23:02:49.387956   62386 logs.go:276] 0 containers: []
	W0912 23:02:49.387966   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:02:49.387980   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:02:49.388042   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:02:49.421154   62386 cri.go:89] found id: ""
	I0912 23:02:49.421184   62386 logs.go:276] 0 containers: []
	W0912 23:02:49.421192   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:02:49.421198   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:02:49.421258   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:02:49.460122   62386 cri.go:89] found id: ""
	I0912 23:02:49.460147   62386 logs.go:276] 0 containers: []
	W0912 23:02:49.460154   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:02:49.460159   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:02:49.460204   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:02:49.493113   62386 cri.go:89] found id: ""
	I0912 23:02:49.493136   62386 logs.go:276] 0 containers: []
	W0912 23:02:49.493144   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:02:49.493150   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:02:49.493196   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:02:49.525750   62386 cri.go:89] found id: ""
	I0912 23:02:49.525773   62386 logs.go:276] 0 containers: []
	W0912 23:02:49.525780   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:02:49.525790   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:02:49.525800   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:02:49.578720   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:02:49.578757   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:02:49.591483   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:02:49.591510   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:02:49.711769   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:02:49.711836   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:02:49.711854   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:02:49.792569   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:02:49.792620   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:02:52.333723   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:52.346359   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:02:52.346428   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:02:52.379990   62386 cri.go:89] found id: ""
	I0912 23:02:52.380017   62386 logs.go:276] 0 containers: []
	W0912 23:02:52.380025   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:02:52.380032   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:02:52.380089   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:02:52.413963   62386 cri.go:89] found id: ""
	I0912 23:02:52.413994   62386 logs.go:276] 0 containers: []
	W0912 23:02:52.414002   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:02:52.414007   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:02:52.414064   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:02:52.463982   62386 cri.go:89] found id: ""
	I0912 23:02:52.464012   62386 logs.go:276] 0 containers: []
	W0912 23:02:52.464024   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:02:52.464031   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:02:52.464119   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:02:52.497797   62386 cri.go:89] found id: ""
	I0912 23:02:52.497830   62386 logs.go:276] 0 containers: []
	W0912 23:02:52.497840   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:02:52.497848   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:02:52.497914   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:02:52.531946   62386 cri.go:89] found id: ""
	I0912 23:02:52.531974   62386 logs.go:276] 0 containers: []
	W0912 23:02:52.531982   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:02:52.531987   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:02:52.532036   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:02:52.563802   62386 cri.go:89] found id: ""
	I0912 23:02:52.563837   62386 logs.go:276] 0 containers: []
	W0912 23:02:52.563846   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:02:52.563859   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:02:52.563914   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:02:52.597408   62386 cri.go:89] found id: ""
	I0912 23:02:52.597437   62386 logs.go:276] 0 containers: []
	W0912 23:02:52.597447   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:02:52.597457   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:02:52.597529   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:02:52.634991   62386 cri.go:89] found id: ""
	I0912 23:02:52.635026   62386 logs.go:276] 0 containers: []
	W0912 23:02:52.635037   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:02:52.635049   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:02:52.635061   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:02:52.711072   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:02:52.711112   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:02:52.755335   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:02:52.755359   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:02:52.806660   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:02:52.806694   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:02:52.819718   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:02:52.819751   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:02:52.897247   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:02:52.474466   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:54.974351   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:55.520831   62943 api_server.go:269] stopped: https://192.168.50.253:8443/healthz: Get "https://192.168.50.253:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0912 23:02:55.520879   62943 api_server.go:253] Checking apiserver healthz at https://192.168.50.253:8443/healthz ...
	I0912 23:02:54.233244   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:56.234057   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:55.398028   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:55.411839   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:02:55.411920   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:02:55.446367   62386 cri.go:89] found id: ""
	I0912 23:02:55.446402   62386 logs.go:276] 0 containers: []
	W0912 23:02:55.446414   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:02:55.446421   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:02:55.446489   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:02:55.481672   62386 cri.go:89] found id: ""
	I0912 23:02:55.481696   62386 logs.go:276] 0 containers: []
	W0912 23:02:55.481704   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:02:55.481709   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:02:55.481766   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:02:55.517577   62386 cri.go:89] found id: ""
	I0912 23:02:55.517628   62386 logs.go:276] 0 containers: []
	W0912 23:02:55.517640   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:02:55.517651   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:02:55.517724   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:02:55.553526   62386 cri.go:89] found id: ""
	I0912 23:02:55.553554   62386 logs.go:276] 0 containers: []
	W0912 23:02:55.553565   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:02:55.553572   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:02:55.553659   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:02:55.585628   62386 cri.go:89] found id: ""
	I0912 23:02:55.585658   62386 logs.go:276] 0 containers: []
	W0912 23:02:55.585666   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:02:55.585673   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:02:55.585729   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:02:55.619504   62386 cri.go:89] found id: ""
	I0912 23:02:55.619529   62386 logs.go:276] 0 containers: []
	W0912 23:02:55.619537   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:02:55.619543   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:02:55.619612   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:02:55.652478   62386 cri.go:89] found id: ""
	I0912 23:02:55.652505   62386 logs.go:276] 0 containers: []
	W0912 23:02:55.652513   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:02:55.652519   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:02:55.652571   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:02:55.685336   62386 cri.go:89] found id: ""
	I0912 23:02:55.685367   62386 logs.go:276] 0 containers: []
	W0912 23:02:55.685378   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:02:55.685389   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:02:55.685405   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:02:55.766786   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:02:55.766820   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:02:55.805897   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:02:55.805921   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:02:55.858536   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:02:55.858578   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:02:55.872300   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:02:55.872330   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:02:55.940023   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:02:58.440335   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:02:58.454063   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:02:58.454146   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:02:58.495390   62386 cri.go:89] found id: ""
	I0912 23:02:58.495418   62386 logs.go:276] 0 containers: []
	W0912 23:02:58.495429   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:02:58.495436   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:02:58.495491   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:02:58.533323   62386 cri.go:89] found id: ""
	I0912 23:02:58.533361   62386 logs.go:276] 0 containers: []
	W0912 23:02:58.533369   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:02:58.533374   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:02:58.533426   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:02:58.570749   62386 cri.go:89] found id: ""
	I0912 23:02:58.570772   62386 logs.go:276] 0 containers: []
	W0912 23:02:58.570779   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:02:58.570785   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:02:58.570838   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:02:58.602812   62386 cri.go:89] found id: ""
	I0912 23:02:58.602841   62386 logs.go:276] 0 containers: []
	W0912 23:02:58.602852   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:02:58.602861   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:02:58.602920   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:02:58.641837   62386 cri.go:89] found id: ""
	I0912 23:02:58.641868   62386 logs.go:276] 0 containers: []
	W0912 23:02:58.641875   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:02:58.641881   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:02:58.641951   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:02:58.679411   62386 cri.go:89] found id: ""
	I0912 23:02:58.679437   62386 logs.go:276] 0 containers: []
	W0912 23:02:58.679444   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:02:58.679449   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:02:58.679495   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:02:58.715666   62386 cri.go:89] found id: ""
	I0912 23:02:58.715693   62386 logs.go:276] 0 containers: []
	W0912 23:02:58.715701   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:02:58.715707   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:02:58.715765   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:02:58.750345   62386 cri.go:89] found id: ""
	I0912 23:02:58.750367   62386 logs.go:276] 0 containers: []
	W0912 23:02:58.750375   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:02:58.750383   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:02:58.750395   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:02:58.803683   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:02:58.803722   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:02:58.819479   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:02:58.819512   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:02:58.939708   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:02:58.939733   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:02:58.939752   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:02:59.031209   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:02:59.031241   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:02:58.535050   62943 api_server.go:279] https://192.168.50.253:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0912 23:02:58.535080   62943 api_server.go:103] status: https://192.168.50.253:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0912 23:02:58.535094   62943 api_server.go:253] Checking apiserver healthz at https://192.168.50.253:8443/healthz ...
	I0912 23:02:58.552759   62943 api_server.go:279] https://192.168.50.253:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0912 23:02:58.552792   62943 api_server.go:103] status: https://192.168.50.253:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0912 23:02:59.018401   62943 api_server.go:253] Checking apiserver healthz at https://192.168.50.253:8443/healthz ...
	I0912 23:02:59.026830   62943 api_server.go:279] https://192.168.50.253:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0912 23:02:59.026861   62943 api_server.go:103] status: https://192.168.50.253:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0912 23:02:59.518413   62943 api_server.go:253] Checking apiserver healthz at https://192.168.50.253:8443/healthz ...
	I0912 23:02:59.523435   62943 api_server.go:279] https://192.168.50.253:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0912 23:02:59.523469   62943 api_server.go:103] status: https://192.168.50.253:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0912 23:03:00.018452   62943 api_server.go:253] Checking apiserver healthz at https://192.168.50.253:8443/healthz ...
	I0912 23:03:00.023786   62943 api_server.go:279] https://192.168.50.253:8443/healthz returned 200:
	ok
	I0912 23:03:00.033543   62943 api_server.go:141] control plane version: v1.31.1
	I0912 23:03:00.033575   62943 api_server.go:131] duration metric: took 41.016185943s to wait for apiserver health ...
	I0912 23:03:00.033585   62943 cni.go:84] Creating CNI manager for ""
	I0912 23:03:00.033595   62943 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0912 23:03:00.035383   62943 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0912 23:02:56.975435   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:02:59.473968   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:00.036655   62943 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0912 23:03:00.051876   62943 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0912 23:03:00.082432   62943 system_pods.go:43] waiting for kube-system pods to appear ...
	I0912 23:03:00.101427   62943 system_pods.go:59] 8 kube-system pods found
	I0912 23:03:00.101465   62943 system_pods.go:61] "coredns-7c65d6cfc9-twck7" [2fb00aff-8a30-4634-a804-1419eabfe727] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0912 23:03:00.101477   62943 system_pods.go:61] "etcd-no-preload-380092" [69b6be54-dd29-47c7-b990-a64335dd6d7b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0912 23:03:00.101488   62943 system_pods.go:61] "kube-apiserver-no-preload-380092" [10ff70db-3c74-42ad-841d-d2241de4b98e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0912 23:03:00.101498   62943 system_pods.go:61] "kube-controller-manager-no-preload-380092" [6e91c5b2-36fc-404e-9f09-c1bc9da46774] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0912 23:03:00.101512   62943 system_pods.go:61] "kube-proxy-z4rcx" [d17caa2e-d0fe-45e8-a96c-d1cc1b55e665] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0912 23:03:00.101518   62943 system_pods.go:61] "kube-scheduler-no-preload-380092" [5c634cac-6b28-4757-ba85-891c4c2fa34e] Running
	I0912 23:03:00.101526   62943 system_pods.go:61] "metrics-server-6867b74b74-4v7f5" [10c8c536-9ca6-4e75-96f2-7324f3d3d379] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0912 23:03:00.101537   62943 system_pods.go:61] "storage-provisioner" [f173a1f6-3772-4f08-8e40-2215cc9d2878] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0912 23:03:00.101554   62943 system_pods.go:74] duration metric: took 19.092541ms to wait for pod list to return data ...
	I0912 23:03:00.101566   62943 node_conditions.go:102] verifying NodePressure condition ...
	I0912 23:03:00.105149   62943 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0912 23:03:00.105183   62943 node_conditions.go:123] node cpu capacity is 2
	I0912 23:03:00.105197   62943 node_conditions.go:105] duration metric: took 3.62458ms to run NodePressure ...
	I0912 23:03:00.105218   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 23:03:00.583613   62943 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0912 23:03:00.588976   62943 kubeadm.go:739] kubelet initialised
	I0912 23:03:00.589000   62943 kubeadm.go:740] duration metric: took 5.359605ms waiting for restarted kubelet to initialise ...
	I0912 23:03:00.589010   62943 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 23:03:00.598717   62943 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-twck7" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:00.619126   62943 pod_ready.go:98] node "no-preload-380092" hosting pod "coredns-7c65d6cfc9-twck7" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-380092" has status "Ready":"False"
	I0912 23:03:00.619153   62943 pod_ready.go:82] duration metric: took 20.405609ms for pod "coredns-7c65d6cfc9-twck7" in "kube-system" namespace to be "Ready" ...
	E0912 23:03:00.619162   62943 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-380092" hosting pod "coredns-7c65d6cfc9-twck7" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-380092" has status "Ready":"False"
	I0912 23:03:00.619169   62943 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-380092" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:00.628727   62943 pod_ready.go:98] node "no-preload-380092" hosting pod "etcd-no-preload-380092" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-380092" has status "Ready":"False"
	I0912 23:03:00.628766   62943 pod_ready.go:82] duration metric: took 9.588722ms for pod "etcd-no-preload-380092" in "kube-system" namespace to be "Ready" ...
	E0912 23:03:00.628778   62943 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-380092" hosting pod "etcd-no-preload-380092" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-380092" has status "Ready":"False"
	I0912 23:03:00.628786   62943 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-380092" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:00.638502   62943 pod_ready.go:98] node "no-preload-380092" hosting pod "kube-apiserver-no-preload-380092" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-380092" has status "Ready":"False"
	I0912 23:03:00.638531   62943 pod_ready.go:82] duration metric: took 9.737333ms for pod "kube-apiserver-no-preload-380092" in "kube-system" namespace to be "Ready" ...
	E0912 23:03:00.638545   62943 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-380092" hosting pod "kube-apiserver-no-preload-380092" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-380092" has status "Ready":"False"
	I0912 23:03:00.638554   62943 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-380092" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:00.644886   62943 pod_ready.go:98] node "no-preload-380092" hosting pod "kube-controller-manager-no-preload-380092" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-380092" has status "Ready":"False"
	I0912 23:03:00.644917   62943 pod_ready.go:82] duration metric: took 6.353295ms for pod "kube-controller-manager-no-preload-380092" in "kube-system" namespace to be "Ready" ...
	E0912 23:03:00.644928   62943 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-380092" hosting pod "kube-controller-manager-no-preload-380092" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-380092" has status "Ready":"False"
	I0912 23:03:00.644936   62943 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-z4rcx" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:00.987565   62943 pod_ready.go:98] node "no-preload-380092" hosting pod "kube-proxy-z4rcx" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-380092" has status "Ready":"False"
	I0912 23:03:00.987592   62943 pod_ready.go:82] duration metric: took 342.646574ms for pod "kube-proxy-z4rcx" in "kube-system" namespace to be "Ready" ...
	E0912 23:03:00.987605   62943 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-380092" hosting pod "kube-proxy-z4rcx" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-380092" has status "Ready":"False"
	I0912 23:03:00.987614   62943 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-380092" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:01.386942   62943 pod_ready.go:98] node "no-preload-380092" hosting pod "kube-scheduler-no-preload-380092" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-380092" has status "Ready":"False"
	I0912 23:03:01.386970   62943 pod_ready.go:82] duration metric: took 399.349066ms for pod "kube-scheduler-no-preload-380092" in "kube-system" namespace to be "Ready" ...
	E0912 23:03:01.386983   62943 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-380092" hosting pod "kube-scheduler-no-preload-380092" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-380092" has status "Ready":"False"
	I0912 23:03:01.386991   62943 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:01.787866   62943 pod_ready.go:98] node "no-preload-380092" hosting pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-380092" has status "Ready":"False"
	I0912 23:03:01.787897   62943 pod_ready.go:82] duration metric: took 400.896489ms for pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace to be "Ready" ...
	E0912 23:03:01.787906   62943 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-380092" hosting pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-380092" has status "Ready":"False"
	I0912 23:03:01.787913   62943 pod_ready.go:39] duration metric: took 1.198893167s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 23:03:01.787929   62943 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0912 23:03:01.803486   62943 ops.go:34] apiserver oom_adj: -16
	I0912 23:03:01.803507   62943 kubeadm.go:597] duration metric: took 45.468348317s to restartPrimaryControlPlane
	I0912 23:03:01.803518   62943 kubeadm.go:394] duration metric: took 45.529458545s to StartCluster
	I0912 23:03:01.803533   62943 settings.go:142] acquiring lock: {Name:mk9c957feafb8d7ccd833ad0c106ef81ecfe5ba8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 23:03:01.803615   62943 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19616-5891/kubeconfig
	I0912 23:03:01.806430   62943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/kubeconfig: {Name:mkffb46c3e9d2b8baebc7237b48bf41bccf1a52c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 23:03:01.806730   62943 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.253 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0912 23:03:01.806804   62943 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0912 23:03:01.806874   62943 addons.go:69] Setting storage-provisioner=true in profile "no-preload-380092"
	I0912 23:03:01.806898   62943 addons.go:69] Setting default-storageclass=true in profile "no-preload-380092"
	I0912 23:03:01.806914   62943 addons.go:69] Setting metrics-server=true in profile "no-preload-380092"
	I0912 23:03:01.806932   62943 addons.go:234] Setting addon metrics-server=true in "no-preload-380092"
	I0912 23:03:01.806937   62943 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-380092"
	W0912 23:03:01.806944   62943 addons.go:243] addon metrics-server should already be in state true
	I0912 23:03:01.806948   62943 config.go:182] Loaded profile config "no-preload-380092": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 23:03:01.806978   62943 host.go:66] Checking if "no-preload-380092" exists ...
	I0912 23:03:01.806909   62943 addons.go:234] Setting addon storage-provisioner=true in "no-preload-380092"
	W0912 23:03:01.806995   62943 addons.go:243] addon storage-provisioner should already be in state true
	I0912 23:03:01.807018   62943 host.go:66] Checking if "no-preload-380092" exists ...
	I0912 23:03:01.807284   62943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:03:01.807301   62943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:03:01.807309   62943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:03:01.807349   62943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:03:01.807363   62943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:03:01.807373   62943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:03:01.809540   62943 out.go:177] * Verifying Kubernetes components...
	I0912 23:03:01.810843   62943 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 23:03:01.824985   62943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32987
	I0912 23:03:01.825219   62943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45739
	I0912 23:03:01.825700   62943 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:03:01.826207   62943 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:03:01.826562   62943 main.go:141] libmachine: Using API Version  1
	I0912 23:03:01.826586   62943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:03:01.826737   62943 main.go:141] libmachine: Using API Version  1
	I0912 23:03:01.826759   62943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:03:01.826970   62943 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:03:01.827047   62943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35143
	I0912 23:03:01.827219   62943 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:03:01.827623   62943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:03:01.827668   62943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:03:01.827724   62943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:03:01.827752   62943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:03:01.827946   62943 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:03:01.828629   62943 main.go:141] libmachine: Using API Version  1
	I0912 23:03:01.828652   62943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:03:01.829143   62943 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:03:01.829336   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetState
	I0912 23:03:01.833298   62943 addons.go:234] Setting addon default-storageclass=true in "no-preload-380092"
	W0912 23:03:01.833320   62943 addons.go:243] addon default-storageclass should already be in state true
	I0912 23:03:01.833348   62943 host.go:66] Checking if "no-preload-380092" exists ...
	I0912 23:03:01.833737   62943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:03:01.833768   62943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:03:01.847465   62943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40485
	I0912 23:03:01.848132   62943 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:03:01.848218   62943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46487
	I0912 23:03:01.848635   62943 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:03:01.849006   62943 main.go:141] libmachine: Using API Version  1
	I0912 23:03:01.849024   62943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:03:01.849185   62943 main.go:141] libmachine: Using API Version  1
	I0912 23:03:01.849197   62943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:03:01.849589   62943 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:03:01.849756   62943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41723
	I0912 23:03:01.849909   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetState
	I0912 23:03:01.850287   62943 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:03:01.850375   62943 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:03:01.850446   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetState
	I0912 23:03:01.851043   62943 main.go:141] libmachine: Using API Version  1
	I0912 23:03:01.851061   62943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:03:01.851397   62943 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:03:01.851935   62943 main.go:141] libmachine: (no-preload-380092) Calling .DriverName
	I0912 23:03:01.852036   62943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:03:01.852082   62943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:03:01.852907   62943 main.go:141] libmachine: (no-preload-380092) Calling .DriverName
	I0912 23:03:01.854324   62943 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0912 23:03:01.855272   62943 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 23:03:01.856071   62943 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0912 23:03:01.856092   62943 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0912 23:03:01.856115   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHHostname
	I0912 23:03:01.857163   62943 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0912 23:03:01.857184   62943 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0912 23:03:01.857206   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHHostname
	I0912 23:03:01.861326   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:03:01.861344   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:03:01.861874   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:03:01.861894   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:03:01.862197   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHPort
	I0912 23:03:01.862292   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:03:01.862588   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:03:01.862627   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHKeyPath
	I0912 23:03:01.862668   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHPort
	I0912 23:03:01.862751   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHUsername
	I0912 23:03:01.862900   62943 sshutil.go:53] new ssh client: &{IP:192.168.50.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/no-preload-380092/id_rsa Username:docker}
	I0912 23:03:01.862917   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHKeyPath
	I0912 23:03:01.863057   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHUsername
	I0912 23:03:01.863161   62943 sshutil.go:53] new ssh client: &{IP:192.168.50.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/no-preload-380092/id_rsa Username:docker}
	I0912 23:03:01.872673   62943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42483
	I0912 23:03:01.873156   62943 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:03:01.873848   62943 main.go:141] libmachine: Using API Version  1
	I0912 23:03:01.873924   62943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:03:01.874438   62943 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:03:01.874719   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetState
	I0912 23:03:01.876928   62943 main.go:141] libmachine: (no-preload-380092) Calling .DriverName
	I0912 23:03:01.877226   62943 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0912 23:03:01.877252   62943 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0912 23:03:01.877268   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHHostname
	I0912 23:03:01.880966   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:03:01.881372   62943 main.go:141] libmachine: (no-preload-380092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:80:d3", ip: ""} in network mk-no-preload-380092: {Iface:virbr2 ExpiryTime:2024-09-13 00:01:50 +0000 UTC Type:0 Mac:52:54:00:d6:80:d3 Iaid: IPaddr:192.168.50.253 Prefix:24 Hostname:no-preload-380092 Clientid:01:52:54:00:d6:80:d3}
	I0912 23:03:01.881399   62943 main.go:141] libmachine: (no-preload-380092) DBG | domain no-preload-380092 has defined IP address 192.168.50.253 and MAC address 52:54:00:d6:80:d3 in network mk-no-preload-380092
	I0912 23:03:01.881915   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHPort
	I0912 23:03:01.885353   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHKeyPath
	I0912 23:03:01.885585   62943 main.go:141] libmachine: (no-preload-380092) Calling .GetSSHUsername
	I0912 23:03:01.885765   62943 sshutil.go:53] new ssh client: &{IP:192.168.50.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/no-preload-380092/id_rsa Username:docker}
	I0912 23:02:58.234446   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:00.235816   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:02.035632   62943 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0912 23:03:02.065690   62943 node_ready.go:35] waiting up to 6m0s for node "no-preload-380092" to be "Ready" ...
	I0912 23:03:02.132250   62943 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0912 23:03:02.148150   62943 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0912 23:03:02.270629   62943 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0912 23:03:02.270652   62943 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0912 23:03:02.346093   62943 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0912 23:03:02.346119   62943 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0912 23:03:02.371110   62943 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0912 23:03:02.371133   62943 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0912 23:03:02.415856   62943 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0912 23:03:03.287692   62943 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.13950787s)
	I0912 23:03:03.287695   62943 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.155412179s)
	I0912 23:03:03.287752   62943 main.go:141] libmachine: Making call to close driver server
	I0912 23:03:03.287756   62943 main.go:141] libmachine: Making call to close driver server
	I0912 23:03:03.287764   62943 main.go:141] libmachine: (no-preload-380092) Calling .Close
	I0912 23:03:03.287769   62943 main.go:141] libmachine: (no-preload-380092) Calling .Close
	I0912 23:03:03.288100   62943 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:03:03.288115   62943 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:03:03.288124   62943 main.go:141] libmachine: Making call to close driver server
	I0912 23:03:03.288130   62943 main.go:141] libmachine: (no-preload-380092) Calling .Close
	I0912 23:03:03.288252   62943 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:03:03.288270   62943 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:03:03.288293   62943 main.go:141] libmachine: (no-preload-380092) DBG | Closing plugin on server side
	I0912 23:03:03.288297   62943 main.go:141] libmachine: Making call to close driver server
	I0912 23:03:03.288454   62943 main.go:141] libmachine: (no-preload-380092) Calling .Close
	I0912 23:03:03.288321   62943 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:03:03.288507   62943 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:03:03.288346   62943 main.go:141] libmachine: (no-preload-380092) DBG | Closing plugin on server side
	I0912 23:03:03.288671   62943 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:03:03.288682   62943 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:03:03.294958   62943 main.go:141] libmachine: Making call to close driver server
	I0912 23:03:03.294982   62943 main.go:141] libmachine: (no-preload-380092) Calling .Close
	I0912 23:03:03.295233   62943 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:03:03.295252   62943 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:03:03.295254   62943 main.go:141] libmachine: (no-preload-380092) DBG | Closing plugin on server side
	I0912 23:03:03.492450   62943 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.076542284s)
	I0912 23:03:03.492503   62943 main.go:141] libmachine: Making call to close driver server
	I0912 23:03:03.492516   62943 main.go:141] libmachine: (no-preload-380092) Calling .Close
	I0912 23:03:03.492830   62943 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:03:03.492855   62943 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:03:03.492866   62943 main.go:141] libmachine: Making call to close driver server
	I0912 23:03:03.492885   62943 main.go:141] libmachine: (no-preload-380092) Calling .Close
	I0912 23:03:03.493108   62943 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:03:03.493121   62943 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:03:03.493132   62943 addons.go:475] Verifying addon metrics-server=true in "no-preload-380092"
	I0912 23:03:03.495865   62943 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0912 23:03:01.578409   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:01.591929   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:01.592004   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:01.626295   62386 cri.go:89] found id: ""
	I0912 23:03:01.626327   62386 logs.go:276] 0 containers: []
	W0912 23:03:01.626339   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:01.626346   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:01.626406   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:01.660489   62386 cri.go:89] found id: ""
	I0912 23:03:01.660520   62386 logs.go:276] 0 containers: []
	W0912 23:03:01.660543   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:01.660563   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:01.660618   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:01.694378   62386 cri.go:89] found id: ""
	I0912 23:03:01.694401   62386 logs.go:276] 0 containers: []
	W0912 23:03:01.694408   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:01.694414   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:01.694467   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:01.733170   62386 cri.go:89] found id: ""
	I0912 23:03:01.733202   62386 logs.go:276] 0 containers: []
	W0912 23:03:01.733211   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:01.733237   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:01.733307   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:01.766419   62386 cri.go:89] found id: ""
	I0912 23:03:01.766449   62386 logs.go:276] 0 containers: []
	W0912 23:03:01.766457   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:01.766467   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:01.766530   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:01.802964   62386 cri.go:89] found id: ""
	I0912 23:03:01.802988   62386 logs.go:276] 0 containers: []
	W0912 23:03:01.802995   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:01.803001   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:01.803047   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:01.846231   62386 cri.go:89] found id: ""
	I0912 23:03:01.846257   62386 logs.go:276] 0 containers: []
	W0912 23:03:01.846268   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:01.846276   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:01.846340   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:01.889353   62386 cri.go:89] found id: ""
	I0912 23:03:01.889379   62386 logs.go:276] 0 containers: []
	W0912 23:03:01.889387   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:01.889396   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:01.889407   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:01.904850   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:01.904876   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:01.986288   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:01.986311   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:01.986328   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:02.070616   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:02.070646   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:02.111931   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:02.111959   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:01.474395   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:03.974266   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:03.497285   62943 addons.go:510] duration metric: took 1.690482366s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0912 23:03:04.069715   62943 node_ready.go:53] node "no-preload-380092" has status "Ready":"False"
	I0912 23:03:06.070086   62943 node_ready.go:53] node "no-preload-380092" has status "Ready":"False"
	I0912 23:03:02.734363   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:04.735355   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:07.235634   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:04.676429   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:04.689177   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:04.689240   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:04.721393   62386 cri.go:89] found id: ""
	I0912 23:03:04.721420   62386 logs.go:276] 0 containers: []
	W0912 23:03:04.721431   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:04.721437   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:04.721494   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:04.754239   62386 cri.go:89] found id: ""
	I0912 23:03:04.754270   62386 logs.go:276] 0 containers: []
	W0912 23:03:04.754281   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:04.754288   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:04.754340   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:04.787546   62386 cri.go:89] found id: ""
	I0912 23:03:04.787576   62386 logs.go:276] 0 containers: []
	W0912 23:03:04.787590   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:04.787597   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:04.787657   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:04.821051   62386 cri.go:89] found id: ""
	I0912 23:03:04.821141   62386 logs.go:276] 0 containers: []
	W0912 23:03:04.821151   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:04.821157   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:04.821210   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:04.853893   62386 cri.go:89] found id: ""
	I0912 23:03:04.853918   62386 logs.go:276] 0 containers: []
	W0912 23:03:04.853928   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:04.853935   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:04.854013   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:04.887798   62386 cri.go:89] found id: ""
	I0912 23:03:04.887832   62386 logs.go:276] 0 containers: []
	W0912 23:03:04.887843   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:04.887850   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:04.887911   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:04.921562   62386 cri.go:89] found id: ""
	I0912 23:03:04.921587   62386 logs.go:276] 0 containers: []
	W0912 23:03:04.921595   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:04.921600   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:04.921667   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:04.956794   62386 cri.go:89] found id: ""
	I0912 23:03:04.956828   62386 logs.go:276] 0 containers: []
	W0912 23:03:04.956836   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:04.956845   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:04.956856   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:04.993926   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:04.993956   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:05.045381   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:05.045425   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:05.058626   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:05.058665   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:05.128158   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:05.128187   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:05.128205   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:07.707336   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:07.720573   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:07.720646   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:07.756694   62386 cri.go:89] found id: ""
	I0912 23:03:07.756716   62386 logs.go:276] 0 containers: []
	W0912 23:03:07.756724   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:07.756730   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:07.756777   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:07.789255   62386 cri.go:89] found id: ""
	I0912 23:03:07.789286   62386 logs.go:276] 0 containers: []
	W0912 23:03:07.789295   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:07.789318   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:07.789405   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:07.822472   62386 cri.go:89] found id: ""
	I0912 23:03:07.822510   62386 logs.go:276] 0 containers: []
	W0912 23:03:07.822525   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:07.822534   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:07.822594   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:07.859070   62386 cri.go:89] found id: ""
	I0912 23:03:07.859102   62386 logs.go:276] 0 containers: []
	W0912 23:03:07.859114   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:07.859122   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:07.859190   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:07.895128   62386 cri.go:89] found id: ""
	I0912 23:03:07.895155   62386 logs.go:276] 0 containers: []
	W0912 23:03:07.895163   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:07.895169   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:07.895225   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:07.927397   62386 cri.go:89] found id: ""
	I0912 23:03:07.927425   62386 logs.go:276] 0 containers: []
	W0912 23:03:07.927435   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:07.927442   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:07.927506   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:07.965500   62386 cri.go:89] found id: ""
	I0912 23:03:07.965534   62386 logs.go:276] 0 containers: []
	W0912 23:03:07.965546   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:07.965555   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:07.965635   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:08.002921   62386 cri.go:89] found id: ""
	I0912 23:03:08.002952   62386 logs.go:276] 0 containers: []
	W0912 23:03:08.002964   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:08.002974   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:08.002989   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:08.054610   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:08.054646   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:08.071096   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:08.071127   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:08.145573   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:08.145603   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:08.145641   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:08.232606   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:08.232639   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:05.974395   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:08.473180   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:10.474725   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:08.076176   62943 node_ready.go:53] node "no-preload-380092" has status "Ready":"False"
	I0912 23:03:09.570274   62943 node_ready.go:49] node "no-preload-380092" has status "Ready":"True"
	I0912 23:03:09.570298   62943 node_ready.go:38] duration metric: took 7.504574956s for node "no-preload-380092" to be "Ready" ...
	I0912 23:03:09.570308   62943 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 23:03:09.576111   62943 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-twck7" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:09.581239   62943 pod_ready.go:93] pod "coredns-7c65d6cfc9-twck7" in "kube-system" namespace has status "Ready":"True"
	I0912 23:03:09.581261   62943 pod_ready.go:82] duration metric: took 5.122813ms for pod "coredns-7c65d6cfc9-twck7" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:09.581277   62943 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-380092" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:09.585918   62943 pod_ready.go:93] pod "etcd-no-preload-380092" in "kube-system" namespace has status "Ready":"True"
	I0912 23:03:09.585942   62943 pod_ready.go:82] duration metric: took 4.657444ms for pod "etcd-no-preload-380092" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:09.585951   62943 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-380092" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:09.591114   62943 pod_ready.go:93] pod "kube-apiserver-no-preload-380092" in "kube-system" namespace has status "Ready":"True"
	I0912 23:03:09.591136   62943 pod_ready.go:82] duration metric: took 5.179585ms for pod "kube-apiserver-no-preload-380092" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:09.591145   62943 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-380092" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:11.598000   62943 pod_ready.go:103] pod "kube-controller-manager-no-preload-380092" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:09.734628   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:12.233572   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:10.770737   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:10.783728   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:10.783803   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:10.818792   62386 cri.go:89] found id: ""
	I0912 23:03:10.818827   62386 logs.go:276] 0 containers: []
	W0912 23:03:10.818839   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:10.818847   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:10.818913   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:10.851711   62386 cri.go:89] found id: ""
	I0912 23:03:10.851738   62386 logs.go:276] 0 containers: []
	W0912 23:03:10.851750   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:10.851757   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:10.851817   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:10.886935   62386 cri.go:89] found id: ""
	I0912 23:03:10.886963   62386 logs.go:276] 0 containers: []
	W0912 23:03:10.886973   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:10.886979   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:10.887033   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:10.923175   62386 cri.go:89] found id: ""
	I0912 23:03:10.923201   62386 logs.go:276] 0 containers: []
	W0912 23:03:10.923208   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:10.923214   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:10.923261   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:10.959865   62386 cri.go:89] found id: ""
	I0912 23:03:10.959890   62386 logs.go:276] 0 containers: []
	W0912 23:03:10.959897   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:10.959902   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:10.959952   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:10.995049   62386 cri.go:89] found id: ""
	I0912 23:03:10.995079   62386 logs.go:276] 0 containers: []
	W0912 23:03:10.995090   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:10.995097   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:10.995156   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:11.030132   62386 cri.go:89] found id: ""
	I0912 23:03:11.030157   62386 logs.go:276] 0 containers: []
	W0912 23:03:11.030166   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:11.030173   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:11.030242   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:11.062899   62386 cri.go:89] found id: ""
	I0912 23:03:11.062928   62386 logs.go:276] 0 containers: []
	W0912 23:03:11.062936   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:11.062945   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:11.062956   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:11.116511   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:11.116546   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:11.131472   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:11.131504   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:11.202744   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:11.202765   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:11.202781   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:11.293973   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:11.294011   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:13.833125   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:13.846624   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:13.846737   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:13.881744   62386 cri.go:89] found id: ""
	I0912 23:03:13.881784   62386 logs.go:276] 0 containers: []
	W0912 23:03:13.881794   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:13.881802   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:13.881861   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:13.921678   62386 cri.go:89] found id: ""
	I0912 23:03:13.921703   62386 logs.go:276] 0 containers: []
	W0912 23:03:13.921713   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:13.921719   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:13.921778   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:13.960039   62386 cri.go:89] found id: ""
	I0912 23:03:13.960067   62386 logs.go:276] 0 containers: []
	W0912 23:03:13.960077   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:13.960084   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:13.960150   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:14.001255   62386 cri.go:89] found id: ""
	I0912 23:03:14.001281   62386 logs.go:276] 0 containers: []
	W0912 23:03:14.001293   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:14.001318   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:14.001374   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:14.037212   62386 cri.go:89] found id: ""
	I0912 23:03:14.037241   62386 logs.go:276] 0 containers: []
	W0912 23:03:14.037252   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:14.037259   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:14.037319   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:14.071538   62386 cri.go:89] found id: ""
	I0912 23:03:14.071574   62386 logs.go:276] 0 containers: []
	W0912 23:03:14.071582   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:14.071588   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:14.071639   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:14.105561   62386 cri.go:89] found id: ""
	I0912 23:03:14.105590   62386 logs.go:276] 0 containers: []
	W0912 23:03:14.105598   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:14.105604   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:14.105682   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:14.139407   62386 cri.go:89] found id: ""
	I0912 23:03:14.139432   62386 logs.go:276] 0 containers: []
	W0912 23:03:14.139440   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:14.139449   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:14.139463   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:14.195367   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:14.195402   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:14.208632   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:14.208656   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:14.283274   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:14.283292   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:14.283306   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:12.973716   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:15.473265   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:12.097813   62943 pod_ready.go:93] pod "kube-controller-manager-no-preload-380092" in "kube-system" namespace has status "Ready":"True"
	I0912 23:03:12.097844   62943 pod_ready.go:82] duration metric: took 2.506691651s for pod "kube-controller-manager-no-preload-380092" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:12.097858   62943 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-z4rcx" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:12.102303   62943 pod_ready.go:93] pod "kube-proxy-z4rcx" in "kube-system" namespace has status "Ready":"True"
	I0912 23:03:12.102332   62943 pod_ready.go:82] duration metric: took 4.465993ms for pod "kube-proxy-z4rcx" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:12.102344   62943 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-380092" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:12.370318   62943 pod_ready.go:93] pod "kube-scheduler-no-preload-380092" in "kube-system" namespace has status "Ready":"True"
	I0912 23:03:12.370342   62943 pod_ready.go:82] duration metric: took 267.990034ms for pod "kube-scheduler-no-preload-380092" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:12.370351   62943 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace to be "Ready" ...
	I0912 23:03:14.377234   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:16.378403   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:14.234341   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:16.733799   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:14.361800   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:14.361839   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:16.900725   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:16.913987   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:16.914047   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:16.950481   62386 cri.go:89] found id: ""
	I0912 23:03:16.950505   62386 logs.go:276] 0 containers: []
	W0912 23:03:16.950513   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:16.950518   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:16.950574   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:16.985928   62386 cri.go:89] found id: ""
	I0912 23:03:16.985955   62386 logs.go:276] 0 containers: []
	W0912 23:03:16.985964   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:16.985969   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:16.986019   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:17.022383   62386 cri.go:89] found id: ""
	I0912 23:03:17.022408   62386 logs.go:276] 0 containers: []
	W0912 23:03:17.022419   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:17.022425   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:17.022483   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:17.060621   62386 cri.go:89] found id: ""
	I0912 23:03:17.060646   62386 logs.go:276] 0 containers: []
	W0912 23:03:17.060655   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:17.060661   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:17.060714   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:17.093465   62386 cri.go:89] found id: ""
	I0912 23:03:17.093496   62386 logs.go:276] 0 containers: []
	W0912 23:03:17.093507   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:17.093513   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:17.093562   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:17.127750   62386 cri.go:89] found id: ""
	I0912 23:03:17.127780   62386 logs.go:276] 0 containers: []
	W0912 23:03:17.127790   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:17.127796   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:17.127850   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:17.167000   62386 cri.go:89] found id: ""
	I0912 23:03:17.167033   62386 logs.go:276] 0 containers: []
	W0912 23:03:17.167042   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:17.167051   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:17.167114   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:17.201116   62386 cri.go:89] found id: ""
	I0912 23:03:17.201140   62386 logs.go:276] 0 containers: []
	W0912 23:03:17.201149   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:17.201160   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:17.201175   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:17.279890   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:17.279917   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:17.279930   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:17.362638   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:17.362682   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:17.402507   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:17.402538   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:17.456039   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:17.456072   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:17.473792   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:19.973369   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:18.877668   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:20.879319   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:19.233574   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:21.233847   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:19.970539   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:19.984338   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:19.984442   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:20.019006   62386 cri.go:89] found id: ""
	I0912 23:03:20.019036   62386 logs.go:276] 0 containers: []
	W0912 23:03:20.019047   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:20.019055   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:20.019115   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:20.051600   62386 cri.go:89] found id: ""
	I0912 23:03:20.051626   62386 logs.go:276] 0 containers: []
	W0912 23:03:20.051634   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:20.051640   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:20.051691   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:20.085770   62386 cri.go:89] found id: ""
	I0912 23:03:20.085792   62386 logs.go:276] 0 containers: []
	W0912 23:03:20.085799   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:20.085804   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:20.085852   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:20.118453   62386 cri.go:89] found id: ""
	I0912 23:03:20.118482   62386 logs.go:276] 0 containers: []
	W0912 23:03:20.118493   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:20.118501   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:20.118570   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:20.149794   62386 cri.go:89] found id: ""
	I0912 23:03:20.149824   62386 logs.go:276] 0 containers: []
	W0912 23:03:20.149835   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:20.149842   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:20.149889   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:20.187189   62386 cri.go:89] found id: ""
	I0912 23:03:20.187222   62386 logs.go:276] 0 containers: []
	W0912 23:03:20.187233   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:20.187239   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:20.187308   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:20.225488   62386 cri.go:89] found id: ""
	I0912 23:03:20.225517   62386 logs.go:276] 0 containers: []
	W0912 23:03:20.225525   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:20.225531   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:20.225593   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:20.263430   62386 cri.go:89] found id: ""
	I0912 23:03:20.263599   62386 logs.go:276] 0 containers: []
	W0912 23:03:20.263618   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:20.263633   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:20.263651   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:20.317633   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:20.317669   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:20.331121   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:20.331146   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:20.409078   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:20.409102   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:20.409114   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:20.485192   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:20.485226   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:23.024366   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:23.036837   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:23.036919   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:23.072034   62386 cri.go:89] found id: ""
	I0912 23:03:23.072068   62386 logs.go:276] 0 containers: []
	W0912 23:03:23.072080   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:23.072087   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:23.072151   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:23.105917   62386 cri.go:89] found id: ""
	I0912 23:03:23.105942   62386 logs.go:276] 0 containers: []
	W0912 23:03:23.105950   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:23.105956   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:23.106001   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:23.138601   62386 cri.go:89] found id: ""
	I0912 23:03:23.138631   62386 logs.go:276] 0 containers: []
	W0912 23:03:23.138643   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:23.138650   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:23.138700   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:23.173543   62386 cri.go:89] found id: ""
	I0912 23:03:23.173584   62386 logs.go:276] 0 containers: []
	W0912 23:03:23.173596   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:23.173606   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:23.173686   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:23.206143   62386 cri.go:89] found id: ""
	I0912 23:03:23.206171   62386 logs.go:276] 0 containers: []
	W0912 23:03:23.206182   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:23.206189   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:23.206258   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:23.241893   62386 cri.go:89] found id: ""
	I0912 23:03:23.241914   62386 logs.go:276] 0 containers: []
	W0912 23:03:23.241921   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:23.241927   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:23.241985   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:23.276885   62386 cri.go:89] found id: ""
	I0912 23:03:23.276937   62386 logs.go:276] 0 containers: []
	W0912 23:03:23.276946   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:23.276953   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:23.277004   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:23.311719   62386 cri.go:89] found id: ""
	I0912 23:03:23.311744   62386 logs.go:276] 0 containers: []
	W0912 23:03:23.311752   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:23.311759   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:23.311772   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:23.351581   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:23.351614   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:23.406831   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:23.406868   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:23.420716   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:23.420748   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:23.491298   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:23.491332   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:23.491347   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:22.474320   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:24.974016   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:23.377977   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:25.876937   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:23.235471   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:25.733684   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:26.075754   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:26.088671   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:26.088746   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:26.123263   62386 cri.go:89] found id: ""
	I0912 23:03:26.123289   62386 logs.go:276] 0 containers: []
	W0912 23:03:26.123298   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:26.123320   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:26.123380   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:26.156957   62386 cri.go:89] found id: ""
	I0912 23:03:26.156986   62386 logs.go:276] 0 containers: []
	W0912 23:03:26.156997   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:26.157004   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:26.157063   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:26.191697   62386 cri.go:89] found id: ""
	I0912 23:03:26.191749   62386 logs.go:276] 0 containers: []
	W0912 23:03:26.191774   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:26.191782   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:26.191841   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:26.223915   62386 cri.go:89] found id: ""
	I0912 23:03:26.223938   62386 logs.go:276] 0 containers: []
	W0912 23:03:26.223945   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:26.223951   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:26.224011   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:26.256467   62386 cri.go:89] found id: ""
	I0912 23:03:26.256494   62386 logs.go:276] 0 containers: []
	W0912 23:03:26.256505   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:26.256511   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:26.256587   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:26.288778   62386 cri.go:89] found id: ""
	I0912 23:03:26.288803   62386 logs.go:276] 0 containers: []
	W0912 23:03:26.288811   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:26.288816   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:26.288889   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:26.325717   62386 cri.go:89] found id: ""
	I0912 23:03:26.325745   62386 logs.go:276] 0 containers: []
	W0912 23:03:26.325755   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:26.325762   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:26.325829   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:26.359729   62386 cri.go:89] found id: ""
	I0912 23:03:26.359758   62386 logs.go:276] 0 containers: []
	W0912 23:03:26.359767   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:26.359780   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:26.359799   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:26.416414   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:26.416455   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:26.430440   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:26.430478   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:26.506980   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:26.507012   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:26.507043   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:26.583797   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:26.583846   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:29.122222   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:29.135287   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:29.135367   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:29.169020   62386 cri.go:89] found id: ""
	I0912 23:03:29.169043   62386 logs.go:276] 0 containers: []
	W0912 23:03:29.169051   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:29.169061   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:29.169114   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:29.201789   62386 cri.go:89] found id: ""
	I0912 23:03:29.201816   62386 logs.go:276] 0 containers: []
	W0912 23:03:29.201825   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:29.201831   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:29.201886   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:29.237011   62386 cri.go:89] found id: ""
	I0912 23:03:29.237031   62386 logs.go:276] 0 containers: []
	W0912 23:03:29.237038   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:29.237044   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:29.237100   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:29.275292   62386 cri.go:89] found id: ""
	I0912 23:03:29.275315   62386 logs.go:276] 0 containers: []
	W0912 23:03:29.275322   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:29.275328   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:29.275391   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:29.311927   62386 cri.go:89] found id: ""
	I0912 23:03:29.311954   62386 logs.go:276] 0 containers: []
	W0912 23:03:29.311961   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:29.311967   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:29.312020   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:26.974332   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:29.473816   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:27.877800   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:30.378675   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:27.735811   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:30.233647   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:32.233706   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:29.351411   62386 cri.go:89] found id: ""
	I0912 23:03:29.351441   62386 logs.go:276] 0 containers: []
	W0912 23:03:29.351452   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:29.351460   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:29.351520   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:29.386655   62386 cri.go:89] found id: ""
	I0912 23:03:29.386683   62386 logs.go:276] 0 containers: []
	W0912 23:03:29.386693   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:29.386700   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:29.386753   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:29.419722   62386 cri.go:89] found id: ""
	I0912 23:03:29.419752   62386 logs.go:276] 0 containers: []
	W0912 23:03:29.419762   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:29.419775   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:29.419789   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:29.474358   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:29.474396   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:29.488410   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:29.488437   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:29.554675   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:29.554701   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:29.554715   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:29.630647   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:29.630681   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:32.167614   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:32.180592   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:32.180669   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:32.213596   62386 cri.go:89] found id: ""
	I0912 23:03:32.213643   62386 logs.go:276] 0 containers: []
	W0912 23:03:32.213655   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:32.213663   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:32.213723   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:32.246790   62386 cri.go:89] found id: ""
	I0912 23:03:32.246824   62386 logs.go:276] 0 containers: []
	W0912 23:03:32.246836   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:32.246846   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:32.246910   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:32.289423   62386 cri.go:89] found id: ""
	I0912 23:03:32.289446   62386 logs.go:276] 0 containers: []
	W0912 23:03:32.289454   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:32.289459   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:32.289515   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:32.321515   62386 cri.go:89] found id: ""
	I0912 23:03:32.321542   62386 logs.go:276] 0 containers: []
	W0912 23:03:32.321555   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:32.321561   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:32.321637   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:32.354633   62386 cri.go:89] found id: ""
	I0912 23:03:32.354660   62386 logs.go:276] 0 containers: []
	W0912 23:03:32.354670   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:32.354675   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:32.354734   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:32.389692   62386 cri.go:89] found id: ""
	I0912 23:03:32.389717   62386 logs.go:276] 0 containers: []
	W0912 23:03:32.389725   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:32.389730   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:32.389782   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:32.423086   62386 cri.go:89] found id: ""
	I0912 23:03:32.423109   62386 logs.go:276] 0 containers: []
	W0912 23:03:32.423115   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:32.423121   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:32.423167   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:32.456145   62386 cri.go:89] found id: ""
	I0912 23:03:32.456173   62386 logs.go:276] 0 containers: []
	W0912 23:03:32.456184   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:32.456194   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:32.456213   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:32.468329   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:32.468354   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:32.535454   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:32.535480   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:32.535495   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:32.615219   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:32.615256   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:32.655380   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:32.655407   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:31.473904   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:33.474104   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:32.876734   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:34.876831   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:36.877698   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:34.732792   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:36.733997   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:35.209155   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:35.223993   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:35.224074   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:35.260226   62386 cri.go:89] found id: ""
	I0912 23:03:35.260257   62386 logs.go:276] 0 containers: []
	W0912 23:03:35.260268   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:35.260275   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:35.260346   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:35.295762   62386 cri.go:89] found id: ""
	I0912 23:03:35.295790   62386 logs.go:276] 0 containers: []
	W0912 23:03:35.295801   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:35.295808   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:35.295873   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:35.329749   62386 cri.go:89] found id: ""
	I0912 23:03:35.329778   62386 logs.go:276] 0 containers: []
	W0912 23:03:35.329789   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:35.329796   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:35.329855   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:35.363051   62386 cri.go:89] found id: ""
	I0912 23:03:35.363082   62386 logs.go:276] 0 containers: []
	W0912 23:03:35.363091   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:35.363098   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:35.363156   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:35.399777   62386 cri.go:89] found id: ""
	I0912 23:03:35.399805   62386 logs.go:276] 0 containers: []
	W0912 23:03:35.399816   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:35.399823   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:35.399882   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:35.436380   62386 cri.go:89] found id: ""
	I0912 23:03:35.436409   62386 logs.go:276] 0 containers: []
	W0912 23:03:35.436419   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:35.436427   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:35.436489   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:35.474014   62386 cri.go:89] found id: ""
	I0912 23:03:35.474040   62386 logs.go:276] 0 containers: []
	W0912 23:03:35.474050   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:35.474057   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:35.474115   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:35.514579   62386 cri.go:89] found id: ""
	I0912 23:03:35.514606   62386 logs.go:276] 0 containers: []
	W0912 23:03:35.514615   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:35.514625   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:35.514636   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:35.566626   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:35.566665   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:35.581394   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:35.581421   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:35.653434   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:35.653465   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:35.653477   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:35.732486   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:35.732525   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
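The cycle above is minikube's control-plane diagnostic pass: it looks for each expected component container in CRI-O, finds none, and then collects kubelet, dmesg, describe-nodes, CRI-O, and container-status output. The same checks can be replayed by hand on the node; a minimal sketch, assuming `minikube ssh` reaches the node and with `<profile>` standing in for the profile under test (placeholder, not taken from this log):

    # Open a shell on the node; <profile> is a placeholder for the profile name.
    minikube ssh -p <profile>

    # Same checks the log runs: does CRI-O know about any control-plane container?
    sudo crictl ps -a --quiet --name=kube-apiserver
    sudo crictl ps -a

    # Service logs that the gathering step collects.
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400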
	I0912 23:03:38.268409   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:38.281766   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:38.281833   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:38.315951   62386 cri.go:89] found id: ""
	I0912 23:03:38.315977   62386 logs.go:276] 0 containers: []
	W0912 23:03:38.315987   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:38.315994   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:38.316053   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:38.355249   62386 cri.go:89] found id: ""
	I0912 23:03:38.355279   62386 logs.go:276] 0 containers: []
	W0912 23:03:38.355289   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:38.355296   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:38.355365   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:38.392754   62386 cri.go:89] found id: ""
	I0912 23:03:38.392777   62386 logs.go:276] 0 containers: []
	W0912 23:03:38.392784   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:38.392790   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:38.392836   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:38.427406   62386 cri.go:89] found id: ""
	I0912 23:03:38.427434   62386 logs.go:276] 0 containers: []
	W0912 23:03:38.427442   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:38.427447   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:38.427497   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:38.473523   62386 cri.go:89] found id: ""
	I0912 23:03:38.473551   62386 logs.go:276] 0 containers: []
	W0912 23:03:38.473567   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:38.473575   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:38.473660   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:38.507184   62386 cri.go:89] found id: ""
	I0912 23:03:38.507217   62386 logs.go:276] 0 containers: []
	W0912 23:03:38.507228   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:38.507235   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:38.507297   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:38.541325   62386 cri.go:89] found id: ""
	I0912 23:03:38.541357   62386 logs.go:276] 0 containers: []
	W0912 23:03:38.541367   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:38.541374   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:38.541435   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:38.576839   62386 cri.go:89] found id: ""
	I0912 23:03:38.576866   62386 logs.go:276] 0 containers: []
	W0912 23:03:38.576877   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:38.576889   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:38.576906   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:38.613107   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:38.613138   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:38.667256   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:38.667300   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:38.681179   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:38.681210   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:38.750560   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:38.750584   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:38.750600   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:35.974072   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:37.974920   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:40.473150   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:39.376361   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:41.378062   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:38.734402   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:41.233881   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:41.327862   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:41.340904   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:41.340967   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:41.379282   62386 cri.go:89] found id: ""
	I0912 23:03:41.379301   62386 logs.go:276] 0 containers: []
	W0912 23:03:41.379309   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:41.379316   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:41.379366   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:41.412915   62386 cri.go:89] found id: ""
	I0912 23:03:41.412940   62386 logs.go:276] 0 containers: []
	W0912 23:03:41.412947   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:41.412954   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:41.413003   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:41.446824   62386 cri.go:89] found id: ""
	I0912 23:03:41.446851   62386 logs.go:276] 0 containers: []
	W0912 23:03:41.446861   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:41.446868   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:41.446929   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:41.483157   62386 cri.go:89] found id: ""
	I0912 23:03:41.483186   62386 logs.go:276] 0 containers: []
	W0912 23:03:41.483194   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:41.483200   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:41.483258   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:41.517751   62386 cri.go:89] found id: ""
	I0912 23:03:41.517783   62386 logs.go:276] 0 containers: []
	W0912 23:03:41.517794   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:41.517801   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:41.517865   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:41.551665   62386 cri.go:89] found id: ""
	I0912 23:03:41.551692   62386 logs.go:276] 0 containers: []
	W0912 23:03:41.551700   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:41.551706   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:41.551756   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:41.586401   62386 cri.go:89] found id: ""
	I0912 23:03:41.586437   62386 logs.go:276] 0 containers: []
	W0912 23:03:41.586447   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:41.586455   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:41.586518   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:41.621764   62386 cri.go:89] found id: ""
	I0912 23:03:41.621788   62386 logs.go:276] 0 containers: []
	W0912 23:03:41.621796   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:41.621806   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:41.621821   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:41.703663   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:41.703708   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:41.741813   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:41.741838   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:41.794237   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:41.794276   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:41.807194   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:41.807219   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:41.874328   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
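Each "describe nodes" attempt fails with a refused connection to localhost:8443, consistent with the empty crictl listings: no kube-apiserver container exists, so nothing is listening on the API port. A quick way to separate "apiserver not running" from "wrong kubeconfig" is to probe the port directly from inside the node; a minimal sketch, assuming the in-VM kubeconfig path shown in the log:

    # A refused connection here means nothing is listening on 8443 at all.
    curl -k https://localhost:8443/healthz

    # The same kubectl binary the log uses, pointed at the same kubeconfig.
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl get nodes --kubeconfig=/var/lib/minikube/kubeconfig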
	I0912 23:03:42.973710   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:44.973792   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:43.877009   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:46.376468   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:43.234202   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:45.733192   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:44.374745   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:44.389334   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:44.389414   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:44.427163   62386 cri.go:89] found id: ""
	I0912 23:03:44.427193   62386 logs.go:276] 0 containers: []
	W0912 23:03:44.427204   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:44.427214   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:44.427261   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:44.461483   62386 cri.go:89] found id: ""
	I0912 23:03:44.461516   62386 logs.go:276] 0 containers: []
	W0912 23:03:44.461526   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:44.461539   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:44.461603   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:44.499529   62386 cri.go:89] found id: ""
	I0912 23:03:44.499557   62386 logs.go:276] 0 containers: []
	W0912 23:03:44.499569   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:44.499576   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:44.499640   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:44.536827   62386 cri.go:89] found id: ""
	I0912 23:03:44.536859   62386 logs.go:276] 0 containers: []
	W0912 23:03:44.536871   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:44.536878   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:44.536927   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:44.574764   62386 cri.go:89] found id: ""
	I0912 23:03:44.574794   62386 logs.go:276] 0 containers: []
	W0912 23:03:44.574802   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:44.574808   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:44.574866   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:44.612491   62386 cri.go:89] found id: ""
	I0912 23:03:44.612524   62386 logs.go:276] 0 containers: []
	W0912 23:03:44.612537   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:44.612545   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:44.612618   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:44.651419   62386 cri.go:89] found id: ""
	I0912 23:03:44.651449   62386 logs.go:276] 0 containers: []
	W0912 23:03:44.651459   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:44.651466   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:44.651516   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:44.686635   62386 cri.go:89] found id: ""
	I0912 23:03:44.686665   62386 logs.go:276] 0 containers: []
	W0912 23:03:44.686674   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:44.686681   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:44.686693   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:44.738906   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:44.738938   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:44.752485   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:44.752512   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:44.831175   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:44.831205   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:44.831222   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:44.917405   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:44.917442   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:47.466262   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:47.479701   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:47.479758   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:47.514737   62386 cri.go:89] found id: ""
	I0912 23:03:47.514763   62386 logs.go:276] 0 containers: []
	W0912 23:03:47.514770   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:47.514776   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:47.514828   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:47.551163   62386 cri.go:89] found id: ""
	I0912 23:03:47.551195   62386 logs.go:276] 0 containers: []
	W0912 23:03:47.551207   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:47.551215   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:47.551276   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:47.585189   62386 cri.go:89] found id: ""
	I0912 23:03:47.585213   62386 logs.go:276] 0 containers: []
	W0912 23:03:47.585221   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:47.585226   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:47.585284   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:47.619831   62386 cri.go:89] found id: ""
	I0912 23:03:47.619855   62386 logs.go:276] 0 containers: []
	W0912 23:03:47.619863   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:47.619869   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:47.619914   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:47.652364   62386 cri.go:89] found id: ""
	I0912 23:03:47.652398   62386 logs.go:276] 0 containers: []
	W0912 23:03:47.652409   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:47.652417   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:47.652478   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:47.686796   62386 cri.go:89] found id: ""
	I0912 23:03:47.686828   62386 logs.go:276] 0 containers: []
	W0912 23:03:47.686837   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:47.686844   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:47.686902   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:47.718735   62386 cri.go:89] found id: ""
	I0912 23:03:47.718758   62386 logs.go:276] 0 containers: []
	W0912 23:03:47.718768   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:47.718776   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:47.718838   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:47.751880   62386 cri.go:89] found id: ""
	I0912 23:03:47.751917   62386 logs.go:276] 0 containers: []
	W0912 23:03:47.751929   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:47.751940   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:47.751972   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:47.821972   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:47.821995   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:47.822011   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:47.914569   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:47.914606   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:47.952931   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:47.952959   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:48.006294   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:48.006336   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:47.472805   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:49.474941   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:48.377557   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:50.877244   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:47.734734   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:50.233681   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
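The interleaved pod_ready lines come from the other test profiles (processes 61904, 62943, 61354) polling their metrics-server pods, which never report Ready. The same condition can be inspected manually; a minimal sketch, using a pod name taken from the log above and `<profile>` as a placeholder for the kubectl context:

    # Read the Ready condition the poller is waiting on.
    kubectl --context <profile> -n kube-system get pod metrics-server-6867b74b74-q5vlk \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'

    # Or block until it turns Ready (times out if it never does).
    kubectl --context <profile> -n kube-system wait --for=condition=Ready \
      pod/metrics-server-6867b74b74-q5vlk --timeout=120s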
	I0912 23:03:50.521664   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:50.535244   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:50.535319   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:50.572459   62386 cri.go:89] found id: ""
	I0912 23:03:50.572489   62386 logs.go:276] 0 containers: []
	W0912 23:03:50.572497   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:50.572504   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:50.572560   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:50.613752   62386 cri.go:89] found id: ""
	I0912 23:03:50.613784   62386 logs.go:276] 0 containers: []
	W0912 23:03:50.613793   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:50.613800   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:50.613859   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:50.669798   62386 cri.go:89] found id: ""
	I0912 23:03:50.669829   62386 logs.go:276] 0 containers: []
	W0912 23:03:50.669840   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:50.669845   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:50.669970   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:50.703629   62386 cri.go:89] found id: ""
	I0912 23:03:50.703669   62386 logs.go:276] 0 containers: []
	W0912 23:03:50.703682   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:50.703691   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:50.703752   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:50.743683   62386 cri.go:89] found id: ""
	I0912 23:03:50.743710   62386 logs.go:276] 0 containers: []
	W0912 23:03:50.743720   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:50.743728   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:50.743784   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:50.776387   62386 cri.go:89] found id: ""
	I0912 23:03:50.776416   62386 logs.go:276] 0 containers: []
	W0912 23:03:50.776428   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:50.776437   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:50.776494   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:50.810778   62386 cri.go:89] found id: ""
	I0912 23:03:50.810805   62386 logs.go:276] 0 containers: []
	W0912 23:03:50.810817   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:50.810825   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:50.810892   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:50.842488   62386 cri.go:89] found id: ""
	I0912 23:03:50.842510   62386 logs.go:276] 0 containers: []
	W0912 23:03:50.842518   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:50.842526   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:50.842542   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:50.895086   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:50.895124   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:50.908540   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:50.908586   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:50.976108   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:50.976138   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:50.976153   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:51.052291   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:51.052327   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:53.594005   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:53.606622   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:53.606706   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:53.641109   62386 cri.go:89] found id: ""
	I0912 23:03:53.641140   62386 logs.go:276] 0 containers: []
	W0912 23:03:53.641151   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:53.641159   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:53.641214   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:53.673336   62386 cri.go:89] found id: ""
	I0912 23:03:53.673358   62386 logs.go:276] 0 containers: []
	W0912 23:03:53.673366   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:53.673371   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:53.673417   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:53.707931   62386 cri.go:89] found id: ""
	I0912 23:03:53.707965   62386 logs.go:276] 0 containers: []
	W0912 23:03:53.707975   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:53.707982   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:53.708032   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:53.741801   62386 cri.go:89] found id: ""
	I0912 23:03:53.741832   62386 logs.go:276] 0 containers: []
	W0912 23:03:53.741840   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:53.741847   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:53.741898   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:53.775491   62386 cri.go:89] found id: ""
	I0912 23:03:53.775517   62386 logs.go:276] 0 containers: []
	W0912 23:03:53.775526   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:53.775533   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:53.775596   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:53.811802   62386 cri.go:89] found id: ""
	I0912 23:03:53.811832   62386 logs.go:276] 0 containers: []
	W0912 23:03:53.811843   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:53.811851   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:53.811916   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:53.844901   62386 cri.go:89] found id: ""
	I0912 23:03:53.844926   62386 logs.go:276] 0 containers: []
	W0912 23:03:53.844934   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:53.844939   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:53.844989   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:53.878342   62386 cri.go:89] found id: ""
	I0912 23:03:53.878363   62386 logs.go:276] 0 containers: []
	W0912 23:03:53.878370   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:53.878377   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:53.878387   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:53.935010   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:53.935053   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:53.948443   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:53.948474   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:54.020155   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:54.020178   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:54.020192   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:54.097113   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:54.097154   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:51.974178   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:54.473802   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:53.376802   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:55.377267   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:52.733232   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:54.734448   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:56.734623   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:56.633694   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:56.651731   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:56.651791   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:56.698155   62386 cri.go:89] found id: ""
	I0912 23:03:56.698184   62386 logs.go:276] 0 containers: []
	W0912 23:03:56.698194   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:56.698202   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:56.698263   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:56.730291   62386 cri.go:89] found id: ""
	I0912 23:03:56.730322   62386 logs.go:276] 0 containers: []
	W0912 23:03:56.730332   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:56.730340   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:56.730434   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:56.763099   62386 cri.go:89] found id: ""
	I0912 23:03:56.763123   62386 logs.go:276] 0 containers: []
	W0912 23:03:56.763133   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:56.763140   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:56.763201   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:56.796744   62386 cri.go:89] found id: ""
	I0912 23:03:56.796770   62386 logs.go:276] 0 containers: []
	W0912 23:03:56.796780   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:56.796787   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:56.796846   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:56.831809   62386 cri.go:89] found id: ""
	I0912 23:03:56.831839   62386 logs.go:276] 0 containers: []
	W0912 23:03:56.831851   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:56.831858   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:56.831927   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:56.867213   62386 cri.go:89] found id: ""
	I0912 23:03:56.867239   62386 logs.go:276] 0 containers: []
	W0912 23:03:56.867246   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:56.867252   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:56.867332   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:56.907242   62386 cri.go:89] found id: ""
	I0912 23:03:56.907270   62386 logs.go:276] 0 containers: []
	W0912 23:03:56.907279   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:56.907286   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:56.907399   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:56.941841   62386 cri.go:89] found id: ""
	I0912 23:03:56.941871   62386 logs.go:276] 0 containers: []
	W0912 23:03:56.941879   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:56.941888   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:56.941899   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:03:56.955468   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:03:56.955498   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:03:57.025069   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:03:57.025089   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:03:57.025101   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:03:57.109543   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:03:57.109579   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:03:57.150908   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:03:57.150932   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:03:56.473964   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:58.974245   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:57.377540   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:59.878300   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:59.233419   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:01.733916   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:03:59.700564   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:03:59.713097   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:03:59.713175   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:03:59.746662   62386 cri.go:89] found id: ""
	I0912 23:03:59.746684   62386 logs.go:276] 0 containers: []
	W0912 23:03:59.746694   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:03:59.746702   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:03:59.746760   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:03:59.780100   62386 cri.go:89] found id: ""
	I0912 23:03:59.780127   62386 logs.go:276] 0 containers: []
	W0912 23:03:59.780137   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:03:59.780144   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:03:59.780205   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:03:59.814073   62386 cri.go:89] found id: ""
	I0912 23:03:59.814103   62386 logs.go:276] 0 containers: []
	W0912 23:03:59.814115   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:03:59.814122   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:03:59.814170   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:03:59.849832   62386 cri.go:89] found id: ""
	I0912 23:03:59.849860   62386 logs.go:276] 0 containers: []
	W0912 23:03:59.849873   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:03:59.849881   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:03:59.849937   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:03:59.884644   62386 cri.go:89] found id: ""
	I0912 23:03:59.884674   62386 logs.go:276] 0 containers: []
	W0912 23:03:59.884685   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:03:59.884692   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:03:59.884757   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:03:59.922575   62386 cri.go:89] found id: ""
	I0912 23:03:59.922601   62386 logs.go:276] 0 containers: []
	W0912 23:03:59.922609   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:03:59.922615   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:03:59.922671   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:03:59.959405   62386 cri.go:89] found id: ""
	I0912 23:03:59.959454   62386 logs.go:276] 0 containers: []
	W0912 23:03:59.959467   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:03:59.959503   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:03:59.959572   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:03:59.992850   62386 cri.go:89] found id: ""
	I0912 23:03:59.992882   62386 logs.go:276] 0 containers: []
	W0912 23:03:59.992891   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:03:59.992898   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:03:59.992910   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:00.007112   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:00.007147   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:00.077737   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:00.077762   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:00.077777   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:00.156823   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:00.156860   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:00.194294   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:00.194388   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:02.746340   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:02.759723   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:02.759780   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:02.795753   62386 cri.go:89] found id: ""
	I0912 23:04:02.795778   62386 logs.go:276] 0 containers: []
	W0912 23:04:02.795787   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:02.795794   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:02.795849   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:02.830757   62386 cri.go:89] found id: ""
	I0912 23:04:02.830781   62386 logs.go:276] 0 containers: []
	W0912 23:04:02.830790   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:02.830797   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:02.830859   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:02.866266   62386 cri.go:89] found id: ""
	I0912 23:04:02.866301   62386 logs.go:276] 0 containers: []
	W0912 23:04:02.866312   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:02.866319   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:02.866373   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:02.900332   62386 cri.go:89] found id: ""
	I0912 23:04:02.900359   62386 logs.go:276] 0 containers: []
	W0912 23:04:02.900370   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:02.900377   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:02.900436   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:02.937687   62386 cri.go:89] found id: ""
	I0912 23:04:02.937718   62386 logs.go:276] 0 containers: []
	W0912 23:04:02.937729   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:02.937736   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:02.937806   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:02.972960   62386 cri.go:89] found id: ""
	I0912 23:04:02.972988   62386 logs.go:276] 0 containers: []
	W0912 23:04:02.972998   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:02.973006   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:02.973067   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:03.006621   62386 cri.go:89] found id: ""
	I0912 23:04:03.006649   62386 logs.go:276] 0 containers: []
	W0912 23:04:03.006658   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:03.006663   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:03.006711   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:03.042450   62386 cri.go:89] found id: ""
	I0912 23:04:03.042475   62386 logs.go:276] 0 containers: []
	W0912 23:04:03.042484   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:03.042501   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:03.042514   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:03.082657   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:03.082688   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:03.136570   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:03.136605   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:03.150359   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:03.150388   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:03.217419   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:03.217440   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:03.217452   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:01.473231   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:03.474382   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:05.475943   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:02.376721   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:04.376797   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:06.377573   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:03.734198   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:06.234489   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:05.795553   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:05.808126   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:05.808197   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:05.841031   62386 cri.go:89] found id: ""
	I0912 23:04:05.841059   62386 logs.go:276] 0 containers: []
	W0912 23:04:05.841071   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:05.841078   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:05.841137   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:05.875865   62386 cri.go:89] found id: ""
	I0912 23:04:05.875891   62386 logs.go:276] 0 containers: []
	W0912 23:04:05.875903   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:05.875910   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:05.875971   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:05.911317   62386 cri.go:89] found id: ""
	I0912 23:04:05.911340   62386 logs.go:276] 0 containers: []
	W0912 23:04:05.911361   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:05.911372   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:05.911433   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:05.946603   62386 cri.go:89] found id: ""
	I0912 23:04:05.946634   62386 logs.go:276] 0 containers: []
	W0912 23:04:05.946645   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:05.946652   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:05.946707   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:05.982041   62386 cri.go:89] found id: ""
	I0912 23:04:05.982077   62386 logs.go:276] 0 containers: []
	W0912 23:04:05.982089   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:05.982099   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:05.982196   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:06.015777   62386 cri.go:89] found id: ""
	I0912 23:04:06.015808   62386 logs.go:276] 0 containers: []
	W0912 23:04:06.015816   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:06.015822   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:06.015870   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:06.047613   62386 cri.go:89] found id: ""
	I0912 23:04:06.047642   62386 logs.go:276] 0 containers: []
	W0912 23:04:06.047650   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:06.047656   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:06.047711   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:06.082817   62386 cri.go:89] found id: ""
	I0912 23:04:06.082855   62386 logs.go:276] 0 containers: []
	W0912 23:04:06.082863   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:06.082874   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:06.082889   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:06.148350   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:06.148370   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:06.148382   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:06.227819   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:06.227861   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:06.267783   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:06.267811   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:06.319531   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:06.319567   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:08.833715   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:08.846391   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:08.846457   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:08.882798   62386 cri.go:89] found id: ""
	I0912 23:04:08.882827   62386 logs.go:276] 0 containers: []
	W0912 23:04:08.882834   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:08.882839   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:08.882885   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:08.919637   62386 cri.go:89] found id: ""
	I0912 23:04:08.919660   62386 logs.go:276] 0 containers: []
	W0912 23:04:08.919669   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:08.919677   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:08.919737   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:08.957181   62386 cri.go:89] found id: ""
	I0912 23:04:08.957226   62386 logs.go:276] 0 containers: []
	W0912 23:04:08.957235   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:08.957241   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:08.957300   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:08.994391   62386 cri.go:89] found id: ""
	I0912 23:04:08.994425   62386 logs.go:276] 0 containers: []
	W0912 23:04:08.994435   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:08.994450   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:08.994517   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:09.026229   62386 cri.go:89] found id: ""
	I0912 23:04:09.026253   62386 logs.go:276] 0 containers: []
	W0912 23:04:09.026261   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:09.026270   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:09.026331   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:09.063522   62386 cri.go:89] found id: ""
	I0912 23:04:09.063552   62386 logs.go:276] 0 containers: []
	W0912 23:04:09.063562   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:09.063570   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:09.063633   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:09.095532   62386 cri.go:89] found id: ""
	I0912 23:04:09.095561   62386 logs.go:276] 0 containers: []
	W0912 23:04:09.095571   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:09.095578   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:09.095638   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:09.129364   62386 cri.go:89] found id: ""
	I0912 23:04:09.129396   62386 logs.go:276] 0 containers: []
	W0912 23:04:09.129405   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:09.129416   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:09.129430   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:09.210628   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:09.210663   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:09.249058   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:09.249086   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:09.301317   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:09.301346   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:09.314691   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:09.314720   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 23:04:07.974160   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:10.473970   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:08.877389   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:11.376421   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:08.733271   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:10.737700   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	W0912 23:04:09.379506   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:11.879682   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:11.892758   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:11.892816   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:11.929514   62386 cri.go:89] found id: ""
	I0912 23:04:11.929560   62386 logs.go:276] 0 containers: []
	W0912 23:04:11.929572   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:11.929580   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:11.929663   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:11.972066   62386 cri.go:89] found id: ""
	I0912 23:04:11.972091   62386 logs.go:276] 0 containers: []
	W0912 23:04:11.972099   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:11.972104   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:11.972153   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:12.005454   62386 cri.go:89] found id: ""
	I0912 23:04:12.005483   62386 logs.go:276] 0 containers: []
	W0912 23:04:12.005493   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:12.005500   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:12.005573   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:12.042189   62386 cri.go:89] found id: ""
	I0912 23:04:12.042221   62386 logs.go:276] 0 containers: []
	W0912 23:04:12.042232   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:12.042239   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:12.042292   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:12.077239   62386 cri.go:89] found id: ""
	I0912 23:04:12.077268   62386 logs.go:276] 0 containers: []
	W0912 23:04:12.077276   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:12.077282   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:12.077340   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:12.112573   62386 cri.go:89] found id: ""
	I0912 23:04:12.112602   62386 logs.go:276] 0 containers: []
	W0912 23:04:12.112610   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:12.112616   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:12.112661   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:12.147124   62386 cri.go:89] found id: ""
	I0912 23:04:12.147149   62386 logs.go:276] 0 containers: []
	W0912 23:04:12.147157   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:12.147163   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:12.147224   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:12.182051   62386 cri.go:89] found id: ""
	I0912 23:04:12.182074   62386 logs.go:276] 0 containers: []
	W0912 23:04:12.182082   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:12.182090   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:12.182103   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:12.238070   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:12.238103   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:12.250913   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:12.250937   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:12.315420   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:12.315448   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:12.315465   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:12.397338   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:12.397379   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:12.974531   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:15.479539   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:13.377855   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:15.379901   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:13.233099   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:15.234506   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:14.936982   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:14.949955   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:14.950019   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:14.993284   62386 cri.go:89] found id: ""
	I0912 23:04:14.993317   62386 logs.go:276] 0 containers: []
	W0912 23:04:14.993327   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:14.993356   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:14.993421   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:15.028310   62386 cri.go:89] found id: ""
	I0912 23:04:15.028338   62386 logs.go:276] 0 containers: []
	W0912 23:04:15.028347   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:15.028352   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:15.028424   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:15.064436   62386 cri.go:89] found id: ""
	I0912 23:04:15.064472   62386 logs.go:276] 0 containers: []
	W0912 23:04:15.064482   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:15.064490   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:15.064552   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:15.101547   62386 cri.go:89] found id: ""
	I0912 23:04:15.101578   62386 logs.go:276] 0 containers: []
	W0912 23:04:15.101587   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:15.101595   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:15.101672   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:15.137534   62386 cri.go:89] found id: ""
	I0912 23:04:15.137559   62386 logs.go:276] 0 containers: []
	W0912 23:04:15.137567   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:15.137575   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:15.137670   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:15.172549   62386 cri.go:89] found id: ""
	I0912 23:04:15.172581   62386 logs.go:276] 0 containers: []
	W0912 23:04:15.172593   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:15.172601   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:15.172661   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:15.207894   62386 cri.go:89] found id: ""
	I0912 23:04:15.207921   62386 logs.go:276] 0 containers: []
	W0912 23:04:15.207931   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:15.207939   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:15.207998   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:15.243684   62386 cri.go:89] found id: ""
	I0912 23:04:15.243713   62386 logs.go:276] 0 containers: []
	W0912 23:04:15.243724   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:15.243733   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:15.243744   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:15.297907   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:15.297948   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:15.312119   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:15.312151   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:15.375781   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:15.375815   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:15.375830   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:15.455792   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:15.455853   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:17.996749   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:18.009868   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:18.009927   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:18.048233   62386 cri.go:89] found id: ""
	I0912 23:04:18.048262   62386 logs.go:276] 0 containers: []
	W0912 23:04:18.048273   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:18.048280   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:18.048340   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:18.082525   62386 cri.go:89] found id: ""
	I0912 23:04:18.082554   62386 logs.go:276] 0 containers: []
	W0912 23:04:18.082565   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:18.082572   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:18.082634   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:18.117691   62386 cri.go:89] found id: ""
	I0912 23:04:18.117721   62386 logs.go:276] 0 containers: []
	W0912 23:04:18.117731   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:18.117738   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:18.117799   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:18.151975   62386 cri.go:89] found id: ""
	I0912 23:04:18.152004   62386 logs.go:276] 0 containers: []
	W0912 23:04:18.152013   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:18.152019   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:18.152073   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:18.187028   62386 cri.go:89] found id: ""
	I0912 23:04:18.187058   62386 logs.go:276] 0 containers: []
	W0912 23:04:18.187069   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:18.187075   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:18.187127   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:18.221292   62386 cri.go:89] found id: ""
	I0912 23:04:18.221324   62386 logs.go:276] 0 containers: []
	W0912 23:04:18.221331   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:18.221337   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:18.221383   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:18.255445   62386 cri.go:89] found id: ""
	I0912 23:04:18.255471   62386 logs.go:276] 0 containers: []
	W0912 23:04:18.255479   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:18.255484   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:18.255533   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:18.289977   62386 cri.go:89] found id: ""
	I0912 23:04:18.290008   62386 logs.go:276] 0 containers: []
	W0912 23:04:18.290019   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:18.290030   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:18.290045   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:18.303351   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:18.303380   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:18.371085   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:18.371114   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:18.371128   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:18.448748   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:18.448791   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:18.490580   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:18.490605   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:17.973604   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:20.473541   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:17.878221   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:20.377651   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:17.733784   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:19.734292   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:22.232832   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:21.043479   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:21.056774   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:21.056834   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:21.089410   62386 cri.go:89] found id: ""
	I0912 23:04:21.089435   62386 logs.go:276] 0 containers: []
	W0912 23:04:21.089449   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:21.089460   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:21.089534   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:21.122922   62386 cri.go:89] found id: ""
	I0912 23:04:21.122954   62386 logs.go:276] 0 containers: []
	W0912 23:04:21.122964   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:21.122971   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:21.123025   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:21.157877   62386 cri.go:89] found id: ""
	I0912 23:04:21.157900   62386 logs.go:276] 0 containers: []
	W0912 23:04:21.157908   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:21.157914   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:21.157959   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:21.190953   62386 cri.go:89] found id: ""
	I0912 23:04:21.190983   62386 logs.go:276] 0 containers: []
	W0912 23:04:21.190994   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:21.191001   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:21.191050   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:21.225211   62386 cri.go:89] found id: ""
	I0912 23:04:21.225241   62386 logs.go:276] 0 containers: []
	W0912 23:04:21.225253   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:21.225260   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:21.225325   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:21.262459   62386 cri.go:89] found id: ""
	I0912 23:04:21.262486   62386 logs.go:276] 0 containers: []
	W0912 23:04:21.262497   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:21.262504   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:21.262578   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:21.296646   62386 cri.go:89] found id: ""
	I0912 23:04:21.296672   62386 logs.go:276] 0 containers: []
	W0912 23:04:21.296682   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:21.296687   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:21.296734   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:21.329911   62386 cri.go:89] found id: ""
	I0912 23:04:21.329933   62386 logs.go:276] 0 containers: []
	W0912 23:04:21.329939   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:21.329947   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:21.329958   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:21.371014   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:21.371043   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:21.419638   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:21.419671   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:21.433502   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:21.433533   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:21.502764   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:21.502787   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:21.502800   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:24.079800   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:24.094021   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:24.094099   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:24.128807   62386 cri.go:89] found id: ""
	I0912 23:04:24.128832   62386 logs.go:276] 0 containers: []
	W0912 23:04:24.128844   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:24.128851   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:24.128915   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:24.166381   62386 cri.go:89] found id: ""
	I0912 23:04:24.166409   62386 logs.go:276] 0 containers: []
	W0912 23:04:24.166416   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:24.166425   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:24.166481   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:24.202656   62386 cri.go:89] found id: ""
	I0912 23:04:24.202684   62386 logs.go:276] 0 containers: []
	W0912 23:04:24.202692   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:24.202699   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:24.202755   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:24.241177   62386 cri.go:89] found id: ""
	I0912 23:04:24.241204   62386 logs.go:276] 0 containers: []
	W0912 23:04:24.241212   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:24.241218   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:24.241274   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:24.278768   62386 cri.go:89] found id: ""
	I0912 23:04:24.278796   62386 logs.go:276] 0 containers: []
	W0912 23:04:24.278806   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:24.278813   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:24.278881   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:24.314429   62386 cri.go:89] found id: ""
	I0912 23:04:24.314456   62386 logs.go:276] 0 containers: []
	W0912 23:04:24.314466   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:24.314474   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:24.314540   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:22.972334   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:24.974435   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:22.877248   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:25.376758   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:24.233814   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:26.733537   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:24.352300   62386 cri.go:89] found id: ""
	I0912 23:04:24.352344   62386 logs.go:276] 0 containers: []
	W0912 23:04:24.352352   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:24.352357   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:24.352415   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:24.387465   62386 cri.go:89] found id: ""
	I0912 23:04:24.387496   62386 logs.go:276] 0 containers: []
	W0912 23:04:24.387503   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:24.387513   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:24.387526   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:24.437029   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:24.437061   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:24.450519   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:24.450555   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:24.516538   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:24.516566   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:24.516583   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:24.594321   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:24.594358   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:27.129976   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:27.142237   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:27.142293   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:27.173687   62386 cri.go:89] found id: ""
	I0912 23:04:27.173709   62386 logs.go:276] 0 containers: []
	W0912 23:04:27.173716   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:27.173721   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:27.173778   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:27.206078   62386 cri.go:89] found id: ""
	I0912 23:04:27.206099   62386 logs.go:276] 0 containers: []
	W0912 23:04:27.206107   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:27.206112   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:27.206156   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:27.238770   62386 cri.go:89] found id: ""
	I0912 23:04:27.238795   62386 logs.go:276] 0 containers: []
	W0912 23:04:27.238803   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:27.238808   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:27.238855   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:27.271230   62386 cri.go:89] found id: ""
	I0912 23:04:27.271262   62386 logs.go:276] 0 containers: []
	W0912 23:04:27.271273   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:27.271281   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:27.271351   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:27.304232   62386 cri.go:89] found id: ""
	I0912 23:04:27.304261   62386 logs.go:276] 0 containers: []
	W0912 23:04:27.304271   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:27.304278   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:27.304345   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:27.337542   62386 cri.go:89] found id: ""
	I0912 23:04:27.337571   62386 logs.go:276] 0 containers: []
	W0912 23:04:27.337586   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:27.337595   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:27.337668   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:27.369971   62386 cri.go:89] found id: ""
	I0912 23:04:27.369997   62386 logs.go:276] 0 containers: []
	W0912 23:04:27.370005   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:27.370012   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:27.370072   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:27.406844   62386 cri.go:89] found id: ""
	I0912 23:04:27.406868   62386 logs.go:276] 0 containers: []
	W0912 23:04:27.406875   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:27.406883   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:27.406894   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:27.493489   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:27.493524   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:27.530448   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:27.530481   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:27.585706   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:27.585744   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:27.599144   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:27.599177   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:27.672585   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:27.473942   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:29.474058   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:27.376867   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:29.377474   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:31.877233   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:29.234068   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:31.733528   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:30.173309   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:30.187957   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:30.188037   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:30.226373   62386 cri.go:89] found id: ""
	I0912 23:04:30.226400   62386 logs.go:276] 0 containers: []
	W0912 23:04:30.226407   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:30.226412   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:30.226469   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:30.257956   62386 cri.go:89] found id: ""
	I0912 23:04:30.257988   62386 logs.go:276] 0 containers: []
	W0912 23:04:30.257997   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:30.258002   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:30.258053   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:30.291091   62386 cri.go:89] found id: ""
	I0912 23:04:30.291119   62386 logs.go:276] 0 containers: []
	W0912 23:04:30.291127   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:30.291132   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:30.291181   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:30.323564   62386 cri.go:89] found id: ""
	I0912 23:04:30.323589   62386 logs.go:276] 0 containers: []
	W0912 23:04:30.323597   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:30.323603   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:30.323652   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:30.361971   62386 cri.go:89] found id: ""
	I0912 23:04:30.361996   62386 logs.go:276] 0 containers: []
	W0912 23:04:30.362005   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:30.362014   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:30.362081   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:30.396952   62386 cri.go:89] found id: ""
	I0912 23:04:30.396986   62386 logs.go:276] 0 containers: []
	W0912 23:04:30.396996   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:30.397001   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:30.397052   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:30.453785   62386 cri.go:89] found id: ""
	I0912 23:04:30.453812   62386 logs.go:276] 0 containers: []
	W0912 23:04:30.453820   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:30.453825   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:30.453870   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:30.494072   62386 cri.go:89] found id: ""
	I0912 23:04:30.494099   62386 logs.go:276] 0 containers: []
	W0912 23:04:30.494108   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:30.494115   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:30.494133   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:30.543153   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:30.543187   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:30.556204   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:30.556242   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:30.630856   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:30.630885   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:30.630902   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:30.710205   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:30.710239   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:33.248218   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:33.261421   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:33.261504   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:33.295691   62386 cri.go:89] found id: ""
	I0912 23:04:33.295718   62386 logs.go:276] 0 containers: []
	W0912 23:04:33.295729   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:33.295736   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:33.295796   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:33.328578   62386 cri.go:89] found id: ""
	I0912 23:04:33.328607   62386 logs.go:276] 0 containers: []
	W0912 23:04:33.328618   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:33.328626   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:33.328743   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:33.367991   62386 cri.go:89] found id: ""
	I0912 23:04:33.368018   62386 logs.go:276] 0 containers: []
	W0912 23:04:33.368034   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:33.368041   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:33.368101   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:33.402537   62386 cri.go:89] found id: ""
	I0912 23:04:33.402566   62386 logs.go:276] 0 containers: []
	W0912 23:04:33.402578   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:33.402588   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:33.402649   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:33.437175   62386 cri.go:89] found id: ""
	I0912 23:04:33.437199   62386 logs.go:276] 0 containers: []
	W0912 23:04:33.437206   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:33.437216   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:33.437275   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:33.475108   62386 cri.go:89] found id: ""
	I0912 23:04:33.475134   62386 logs.go:276] 0 containers: []
	W0912 23:04:33.475144   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:33.475151   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:33.475202   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:33.508612   62386 cri.go:89] found id: ""
	I0912 23:04:33.508649   62386 logs.go:276] 0 containers: []
	W0912 23:04:33.508659   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:33.508664   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:33.508713   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:33.543351   62386 cri.go:89] found id: ""
	I0912 23:04:33.543380   62386 logs.go:276] 0 containers: []
	W0912 23:04:33.543387   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:33.543395   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:33.543406   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:33.595649   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:33.595688   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:33.609181   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:33.609210   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:33.686761   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:33.686782   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:33.686796   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:33.767443   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:33.767478   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:31.474444   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:33.474510   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:34.376900   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:36.377015   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:33.734282   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:36.233730   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:36.310374   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:36.324182   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:36.324260   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:36.359642   62386 cri.go:89] found id: ""
	I0912 23:04:36.359670   62386 logs.go:276] 0 containers: []
	W0912 23:04:36.359677   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:36.359684   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:36.359744   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:36.392841   62386 cri.go:89] found id: ""
	I0912 23:04:36.392865   62386 logs.go:276] 0 containers: []
	W0912 23:04:36.392874   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:36.392887   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:36.392951   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:36.430323   62386 cri.go:89] found id: ""
	I0912 23:04:36.430354   62386 logs.go:276] 0 containers: []
	W0912 23:04:36.430365   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:36.430373   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:36.430436   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:36.466712   62386 cri.go:89] found id: ""
	I0912 23:04:36.466737   62386 logs.go:276] 0 containers: []
	W0912 23:04:36.466745   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:36.466750   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:36.466808   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:36.502506   62386 cri.go:89] found id: ""
	I0912 23:04:36.502537   62386 logs.go:276] 0 containers: []
	W0912 23:04:36.502548   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:36.502555   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:36.502624   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:36.536530   62386 cri.go:89] found id: ""
	I0912 23:04:36.536559   62386 logs.go:276] 0 containers: []
	W0912 23:04:36.536569   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:36.536577   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:36.536648   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:36.570519   62386 cri.go:89] found id: ""
	I0912 23:04:36.570555   62386 logs.go:276] 0 containers: []
	W0912 23:04:36.570565   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:36.570573   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:36.570631   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:36.606107   62386 cri.go:89] found id: ""
	I0912 23:04:36.606136   62386 logs.go:276] 0 containers: []
	W0912 23:04:36.606146   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:36.606157   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:36.606171   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:36.643105   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:36.643138   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:36.690911   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:36.690944   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:36.703970   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:36.703998   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:36.776158   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:36.776183   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:36.776199   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:35.973095   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:37.974153   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:40.473010   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:38.377221   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:40.877439   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:38.732826   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:40.734523   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:39.362032   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:39.375991   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:39.376090   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:39.412497   62386 cri.go:89] found id: ""
	I0912 23:04:39.412521   62386 logs.go:276] 0 containers: []
	W0912 23:04:39.412528   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:39.412534   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:39.412595   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:39.447783   62386 cri.go:89] found id: ""
	I0912 23:04:39.447807   62386 logs.go:276] 0 containers: []
	W0912 23:04:39.447815   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:39.447820   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:39.447886   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:39.483099   62386 cri.go:89] found id: ""
	I0912 23:04:39.483128   62386 logs.go:276] 0 containers: []
	W0912 23:04:39.483135   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:39.483143   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:39.483193   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:39.514898   62386 cri.go:89] found id: ""
	I0912 23:04:39.514932   62386 logs.go:276] 0 containers: []
	W0912 23:04:39.514941   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:39.514952   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:39.515033   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:39.546882   62386 cri.go:89] found id: ""
	I0912 23:04:39.546910   62386 logs.go:276] 0 containers: []
	W0912 23:04:39.546920   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:39.546927   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:39.546990   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:39.577899   62386 cri.go:89] found id: ""
	I0912 23:04:39.577929   62386 logs.go:276] 0 containers: []
	W0912 23:04:39.577939   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:39.577947   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:39.578006   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:39.613419   62386 cri.go:89] found id: ""
	I0912 23:04:39.613446   62386 logs.go:276] 0 containers: []
	W0912 23:04:39.613455   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:39.613461   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:39.613510   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:39.647661   62386 cri.go:89] found id: ""
	I0912 23:04:39.647694   62386 logs.go:276] 0 containers: []
	W0912 23:04:39.647708   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:39.647719   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:39.647733   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:39.696155   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:39.696190   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:39.709312   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:39.709342   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:39.778941   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:39.778968   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:39.778985   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:39.855991   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:39.856028   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:42.395179   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:42.408317   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:42.408449   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:42.441443   62386 cri.go:89] found id: ""
	I0912 23:04:42.441472   62386 logs.go:276] 0 containers: []
	W0912 23:04:42.441482   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:42.441489   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:42.441550   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:42.480655   62386 cri.go:89] found id: ""
	I0912 23:04:42.480678   62386 logs.go:276] 0 containers: []
	W0912 23:04:42.480685   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:42.480690   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:42.480734   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:42.513323   62386 cri.go:89] found id: ""
	I0912 23:04:42.513346   62386 logs.go:276] 0 containers: []
	W0912 23:04:42.513353   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:42.513359   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:42.513405   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:42.545696   62386 cri.go:89] found id: ""
	I0912 23:04:42.545715   62386 logs.go:276] 0 containers: []
	W0912 23:04:42.545723   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:42.545728   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:42.545775   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:42.584950   62386 cri.go:89] found id: ""
	I0912 23:04:42.584981   62386 logs.go:276] 0 containers: []
	W0912 23:04:42.584992   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:42.584999   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:42.585057   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:42.618434   62386 cri.go:89] found id: ""
	I0912 23:04:42.618468   62386 logs.go:276] 0 containers: []
	W0912 23:04:42.618481   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:42.618489   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:42.618557   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:42.665017   62386 cri.go:89] found id: ""
	I0912 23:04:42.665045   62386 logs.go:276] 0 containers: []
	W0912 23:04:42.665056   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:42.665064   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:42.665125   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:42.724365   62386 cri.go:89] found id: ""
	I0912 23:04:42.724389   62386 logs.go:276] 0 containers: []
	W0912 23:04:42.724399   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:42.724409   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:42.724422   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:42.762643   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:42.762671   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:42.815374   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:42.815417   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:42.829340   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:42.829376   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:42.901659   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:42.901690   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:42.901706   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:42.475194   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:44.973902   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:43.376849   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:45.378144   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:42.734908   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:45.234296   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:45.490536   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:45.504127   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:45.504191   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:45.537415   62386 cri.go:89] found id: ""
	I0912 23:04:45.537447   62386 logs.go:276] 0 containers: []
	W0912 23:04:45.537457   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:45.537464   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:45.537527   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:45.571342   62386 cri.go:89] found id: ""
	I0912 23:04:45.571384   62386 logs.go:276] 0 containers: []
	W0912 23:04:45.571404   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:45.571412   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:45.571471   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:45.608965   62386 cri.go:89] found id: ""
	I0912 23:04:45.608989   62386 logs.go:276] 0 containers: []
	W0912 23:04:45.608997   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:45.609002   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:45.609052   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:45.644770   62386 cri.go:89] found id: ""
	I0912 23:04:45.644798   62386 logs.go:276] 0 containers: []
	W0912 23:04:45.644806   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:45.644812   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:45.644859   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:45.678422   62386 cri.go:89] found id: ""
	I0912 23:04:45.678448   62386 logs.go:276] 0 containers: []
	W0912 23:04:45.678456   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:45.678462   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:45.678508   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:45.713808   62386 cri.go:89] found id: ""
	I0912 23:04:45.713831   62386 logs.go:276] 0 containers: []
	W0912 23:04:45.713838   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:45.713844   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:45.713891   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:45.747056   62386 cri.go:89] found id: ""
	I0912 23:04:45.747084   62386 logs.go:276] 0 containers: []
	W0912 23:04:45.747092   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:45.747097   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:45.747149   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:45.779787   62386 cri.go:89] found id: ""
	I0912 23:04:45.779809   62386 logs.go:276] 0 containers: []
	W0912 23:04:45.779817   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:45.779824   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:45.779835   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:45.833204   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:45.833239   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:45.846131   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:45.846159   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:45.923415   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:45.923435   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:45.923446   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:46.003597   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:46.003637   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:48.545043   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:48.560025   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:48.560085   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:48.599916   62386 cri.go:89] found id: ""
	I0912 23:04:48.599950   62386 logs.go:276] 0 containers: []
	W0912 23:04:48.599961   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:48.599969   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:48.600027   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:48.648909   62386 cri.go:89] found id: ""
	I0912 23:04:48.648938   62386 logs.go:276] 0 containers: []
	W0912 23:04:48.648946   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:48.648952   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:48.649010   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:48.693019   62386 cri.go:89] found id: ""
	I0912 23:04:48.693046   62386 logs.go:276] 0 containers: []
	W0912 23:04:48.693062   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:48.693081   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:48.693141   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:48.725778   62386 cri.go:89] found id: ""
	I0912 23:04:48.725811   62386 logs.go:276] 0 containers: []
	W0912 23:04:48.725822   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:48.725830   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:48.725891   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:48.760270   62386 cri.go:89] found id: ""
	I0912 23:04:48.760299   62386 logs.go:276] 0 containers: []
	W0912 23:04:48.760311   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:48.760318   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:48.760379   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:48.797235   62386 cri.go:89] found id: ""
	I0912 23:04:48.797264   62386 logs.go:276] 0 containers: []
	W0912 23:04:48.797275   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:48.797282   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:48.797348   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:48.834039   62386 cri.go:89] found id: ""
	I0912 23:04:48.834081   62386 logs.go:276] 0 containers: []
	W0912 23:04:48.834093   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:48.834100   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:48.834162   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:48.866681   62386 cri.go:89] found id: ""
	I0912 23:04:48.866704   62386 logs.go:276] 0 containers: []
	W0912 23:04:48.866712   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:48.866720   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:48.866731   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:48.917954   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:48.917999   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:48.931554   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:48.931582   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:49.008086   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:49.008115   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:49.008132   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:49.088699   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:49.088736   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:46.974115   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:49.475562   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:47.876644   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:49.877976   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:47.733587   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:50.232852   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:51.628564   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:51.643343   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:51.643445   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:51.680788   62386 cri.go:89] found id: ""
	I0912 23:04:51.680811   62386 logs.go:276] 0 containers: []
	W0912 23:04:51.680818   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:51.680824   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:51.680873   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:51.719793   62386 cri.go:89] found id: ""
	I0912 23:04:51.719822   62386 logs.go:276] 0 containers: []
	W0912 23:04:51.719835   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:51.719843   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:51.719909   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:51.756766   62386 cri.go:89] found id: ""
	I0912 23:04:51.756795   62386 logs.go:276] 0 containers: []
	W0912 23:04:51.756802   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:51.756808   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:51.756857   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:51.797758   62386 cri.go:89] found id: ""
	I0912 23:04:51.797781   62386 logs.go:276] 0 containers: []
	W0912 23:04:51.797789   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:51.797794   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:51.797844   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:51.830790   62386 cri.go:89] found id: ""
	I0912 23:04:51.830820   62386 logs.go:276] 0 containers: []
	W0912 23:04:51.830830   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:51.830837   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:51.830899   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:51.866782   62386 cri.go:89] found id: ""
	I0912 23:04:51.866806   62386 logs.go:276] 0 containers: []
	W0912 23:04:51.866813   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:51.866819   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:51.866874   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:51.902223   62386 cri.go:89] found id: ""
	I0912 23:04:51.902248   62386 logs.go:276] 0 containers: []
	W0912 23:04:51.902276   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:51.902284   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:51.902345   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:51.937029   62386 cri.go:89] found id: ""
	I0912 23:04:51.937057   62386 logs.go:276] 0 containers: []
	W0912 23:04:51.937064   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:51.937073   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:51.937084   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:51.987691   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:51.987727   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:52.001042   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:52.001067   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:52.076285   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:52.076305   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:52.076316   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:52.156087   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:52.156127   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:51.973991   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:53.974657   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:52.377379   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:54.877566   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:56.878413   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:52.734348   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:55.233890   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:54.692355   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:54.705180   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:54.705258   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:54.736125   62386 cri.go:89] found id: ""
	I0912 23:04:54.736150   62386 logs.go:276] 0 containers: []
	W0912 23:04:54.736158   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:54.736164   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:54.736216   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:54.768743   62386 cri.go:89] found id: ""
	I0912 23:04:54.768769   62386 logs.go:276] 0 containers: []
	W0912 23:04:54.768776   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:54.768781   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:54.768827   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:54.802867   62386 cri.go:89] found id: ""
	I0912 23:04:54.802894   62386 logs.go:276] 0 containers: []
	W0912 23:04:54.802902   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:54.802908   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:54.802959   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:54.836774   62386 cri.go:89] found id: ""
	I0912 23:04:54.836800   62386 logs.go:276] 0 containers: []
	W0912 23:04:54.836808   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:54.836813   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:54.836870   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:54.870694   62386 cri.go:89] found id: ""
	I0912 23:04:54.870716   62386 logs.go:276] 0 containers: []
	W0912 23:04:54.870724   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:54.870730   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:54.870785   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:54.903969   62386 cri.go:89] found id: ""
	I0912 23:04:54.904002   62386 logs.go:276] 0 containers: []
	W0912 23:04:54.904012   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:54.904020   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:54.904070   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:54.937720   62386 cri.go:89] found id: ""
	I0912 23:04:54.937744   62386 logs.go:276] 0 containers: []
	W0912 23:04:54.937751   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:54.937756   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:54.937802   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:54.971370   62386 cri.go:89] found id: ""
	I0912 23:04:54.971397   62386 logs.go:276] 0 containers: []
	W0912 23:04:54.971413   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:54.971427   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:54.971441   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:55.021066   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:55.021101   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:55.034026   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:55.034056   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:55.116939   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:55.116966   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:55.116983   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:55.196410   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:55.196445   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:57.733985   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:04:57.747006   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:04:57.747068   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:04:57.784442   62386 cri.go:89] found id: ""
	I0912 23:04:57.784473   62386 logs.go:276] 0 containers: []
	W0912 23:04:57.784486   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:04:57.784500   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:04:57.784571   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:04:57.818314   62386 cri.go:89] found id: ""
	I0912 23:04:57.818341   62386 logs.go:276] 0 containers: []
	W0912 23:04:57.818352   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:04:57.818359   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:04:57.818420   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:04:57.852881   62386 cri.go:89] found id: ""
	I0912 23:04:57.852914   62386 logs.go:276] 0 containers: []
	W0912 23:04:57.852925   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:04:57.852932   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:04:57.852993   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:04:57.894454   62386 cri.go:89] found id: ""
	I0912 23:04:57.894479   62386 logs.go:276] 0 containers: []
	W0912 23:04:57.894487   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:04:57.894493   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:04:57.894540   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:04:57.930013   62386 cri.go:89] found id: ""
	I0912 23:04:57.930041   62386 logs.go:276] 0 containers: []
	W0912 23:04:57.930051   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:04:57.930059   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:04:57.930120   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:04:57.970535   62386 cri.go:89] found id: ""
	I0912 23:04:57.970697   62386 logs.go:276] 0 containers: []
	W0912 23:04:57.970751   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:04:57.970763   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:04:57.970829   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:04:58.008102   62386 cri.go:89] found id: ""
	I0912 23:04:58.008132   62386 logs.go:276] 0 containers: []
	W0912 23:04:58.008145   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:04:58.008151   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:04:58.008232   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:04:58.043507   62386 cri.go:89] found id: ""
	I0912 23:04:58.043541   62386 logs.go:276] 0 containers: []
	W0912 23:04:58.043552   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:04:58.043563   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:04:58.043577   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:04:58.127231   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:04:58.127291   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:04:58.164444   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:04:58.164476   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:04:58.212622   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:04:58.212658   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:04:58.227517   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:04:58.227546   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:04:58.291876   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:04:56.474801   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:58.973083   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:59.378702   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:01.876871   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:04:57.735810   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:00.234854   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:00.792084   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:00.804976   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:00.805046   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:00.837560   62386 cri.go:89] found id: ""
	I0912 23:05:00.837596   62386 logs.go:276] 0 containers: []
	W0912 23:05:00.837606   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:00.837629   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:00.837692   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:00.871503   62386 cri.go:89] found id: ""
	I0912 23:05:00.871526   62386 logs.go:276] 0 containers: []
	W0912 23:05:00.871534   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:00.871539   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:00.871594   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:00.909215   62386 cri.go:89] found id: ""
	I0912 23:05:00.909245   62386 logs.go:276] 0 containers: []
	W0912 23:05:00.909256   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:00.909263   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:00.909337   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:00.947935   62386 cri.go:89] found id: ""
	I0912 23:05:00.947961   62386 logs.go:276] 0 containers: []
	W0912 23:05:00.947972   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:00.947979   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:00.948043   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:00.989659   62386 cri.go:89] found id: ""
	I0912 23:05:00.989694   62386 logs.go:276] 0 containers: []
	W0912 23:05:00.989707   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:00.989717   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:00.989780   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:01.027073   62386 cri.go:89] found id: ""
	I0912 23:05:01.027103   62386 logs.go:276] 0 containers: []
	W0912 23:05:01.027114   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:01.027129   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:01.027187   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:01.063620   62386 cri.go:89] found id: ""
	I0912 23:05:01.063649   62386 logs.go:276] 0 containers: []
	W0912 23:05:01.063672   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:01.063681   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:01.063751   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:01.102398   62386 cri.go:89] found id: ""
	I0912 23:05:01.102428   62386 logs.go:276] 0 containers: []
	W0912 23:05:01.102438   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:01.102449   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:01.102463   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:01.115558   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:01.115585   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:01.190303   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:01.190324   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:01.190337   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:01.272564   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:01.272611   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:01.311954   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:01.311981   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:03.864507   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:03.878613   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:03.878713   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:03.911466   62386 cri.go:89] found id: ""
	I0912 23:05:03.911495   62386 logs.go:276] 0 containers: []
	W0912 23:05:03.911504   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:03.911513   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:03.911592   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:03.945150   62386 cri.go:89] found id: ""
	I0912 23:05:03.945175   62386 logs.go:276] 0 containers: []
	W0912 23:05:03.945188   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:03.945196   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:03.945256   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:03.984952   62386 cri.go:89] found id: ""
	I0912 23:05:03.984984   62386 logs.go:276] 0 containers: []
	W0912 23:05:03.984994   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:03.985001   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:03.985067   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:04.030708   62386 cri.go:89] found id: ""
	I0912 23:05:04.030732   62386 logs.go:276] 0 containers: []
	W0912 23:05:04.030740   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:04.030746   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:04.030798   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:04.072189   62386 cri.go:89] found id: ""
	I0912 23:05:04.072213   62386 logs.go:276] 0 containers: []
	W0912 23:05:04.072221   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:04.072227   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:04.072273   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:04.105068   62386 cri.go:89] found id: ""
	I0912 23:05:04.105100   62386 logs.go:276] 0 containers: []
	W0912 23:05:04.105108   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:04.105114   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:04.105175   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:04.139063   62386 cri.go:89] found id: ""
	I0912 23:05:04.139094   62386 logs.go:276] 0 containers: []
	W0912 23:05:04.139102   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:04.139109   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:04.139172   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:04.175559   62386 cri.go:89] found id: ""
	I0912 23:05:04.175589   62386 logs.go:276] 0 containers: []
	W0912 23:05:04.175599   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:04.175610   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:04.175626   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:04.252495   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:04.252541   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:04.292236   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:04.292263   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:00.974816   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:03.473566   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:05.474006   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:04.377506   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:06.378058   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:02.733379   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:04.734050   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:07.234892   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:04.347335   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:04.347377   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:04.360641   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:04.360678   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:04.431032   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:06.931904   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:06.946367   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:06.946445   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:06.985760   62386 cri.go:89] found id: ""
	I0912 23:05:06.985788   62386 logs.go:276] 0 containers: []
	W0912 23:05:06.985796   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:06.985802   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:06.985852   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:07.020076   62386 cri.go:89] found id: ""
	I0912 23:05:07.020106   62386 logs.go:276] 0 containers: []
	W0912 23:05:07.020115   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:07.020120   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:07.020165   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:07.056374   62386 cri.go:89] found id: ""
	I0912 23:05:07.056408   62386 logs.go:276] 0 containers: []
	W0912 23:05:07.056417   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:07.056423   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:07.056479   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:07.091022   62386 cri.go:89] found id: ""
	I0912 23:05:07.091049   62386 logs.go:276] 0 containers: []
	W0912 23:05:07.091059   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:07.091067   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:07.091133   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:07.131604   62386 cri.go:89] found id: ""
	I0912 23:05:07.131631   62386 logs.go:276] 0 containers: []
	W0912 23:05:07.131641   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:07.131648   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:07.131708   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:07.164548   62386 cri.go:89] found id: ""
	I0912 23:05:07.164575   62386 logs.go:276] 0 containers: []
	W0912 23:05:07.164586   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:07.164593   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:07.164655   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:07.199147   62386 cri.go:89] found id: ""
	I0912 23:05:07.199169   62386 logs.go:276] 0 containers: []
	W0912 23:05:07.199176   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:07.199182   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:07.199245   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:07.231727   62386 cri.go:89] found id: ""
	I0912 23:05:07.231762   62386 logs.go:276] 0 containers: []
	W0912 23:05:07.231773   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:07.231788   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:07.231802   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:07.285773   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:07.285809   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:07.299926   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:07.299958   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:07.378838   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:07.378862   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:07.378876   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:07.459903   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:07.459939   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:07.475025   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:09.973692   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:08.877117   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:11.377274   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:09.732632   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:11.734119   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:09.999598   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:10.012258   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:10.012328   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:10.047975   62386 cri.go:89] found id: ""
	I0912 23:05:10.048002   62386 logs.go:276] 0 containers: []
	W0912 23:05:10.048011   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:10.048018   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:10.048074   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:10.081827   62386 cri.go:89] found id: ""
	I0912 23:05:10.081856   62386 logs.go:276] 0 containers: []
	W0912 23:05:10.081866   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:10.081872   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:10.081942   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:10.115594   62386 cri.go:89] found id: ""
	I0912 23:05:10.115625   62386 logs.go:276] 0 containers: []
	W0912 23:05:10.115635   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:10.115642   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:10.115692   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:10.147412   62386 cri.go:89] found id: ""
	I0912 23:05:10.147442   62386 logs.go:276] 0 containers: []
	W0912 23:05:10.147452   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:10.147460   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:10.147516   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:10.181118   62386 cri.go:89] found id: ""
	I0912 23:05:10.181147   62386 logs.go:276] 0 containers: []
	W0912 23:05:10.181157   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:10.181164   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:10.181228   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:10.214240   62386 cri.go:89] found id: ""
	I0912 23:05:10.214267   62386 logs.go:276] 0 containers: []
	W0912 23:05:10.214277   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:10.214284   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:10.214352   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:10.248497   62386 cri.go:89] found id: ""
	I0912 23:05:10.248522   62386 logs.go:276] 0 containers: []
	W0912 23:05:10.248530   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:10.248543   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:10.248610   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:10.280864   62386 cri.go:89] found id: ""
	I0912 23:05:10.280892   62386 logs.go:276] 0 containers: []
	W0912 23:05:10.280902   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:10.280913   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:10.280927   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:10.318517   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:10.318542   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:10.370087   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:10.370123   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:10.385213   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:10.385247   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:10.448226   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:10.448246   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:10.448257   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:13.027828   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:13.040546   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:13.040620   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:13.073501   62386 cri.go:89] found id: ""
	I0912 23:05:13.073525   62386 logs.go:276] 0 containers: []
	W0912 23:05:13.073533   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:13.073538   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:13.073584   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:13.105790   62386 cri.go:89] found id: ""
	I0912 23:05:13.105819   62386 logs.go:276] 0 containers: []
	W0912 23:05:13.105830   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:13.105836   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:13.105898   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:13.139307   62386 cri.go:89] found id: ""
	I0912 23:05:13.139331   62386 logs.go:276] 0 containers: []
	W0912 23:05:13.139338   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:13.139344   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:13.139403   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:13.171019   62386 cri.go:89] found id: ""
	I0912 23:05:13.171044   62386 logs.go:276] 0 containers: []
	W0912 23:05:13.171053   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:13.171060   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:13.171119   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:13.202372   62386 cri.go:89] found id: ""
	I0912 23:05:13.202412   62386 logs.go:276] 0 containers: []
	W0912 23:05:13.202423   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:13.202431   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:13.202481   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:13.234046   62386 cri.go:89] found id: ""
	I0912 23:05:13.234069   62386 logs.go:276] 0 containers: []
	W0912 23:05:13.234076   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:13.234083   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:13.234138   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:13.265577   62386 cri.go:89] found id: ""
	I0912 23:05:13.265604   62386 logs.go:276] 0 containers: []
	W0912 23:05:13.265632   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:13.265641   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:13.265696   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:13.303462   62386 cri.go:89] found id: ""
	I0912 23:05:13.303489   62386 logs.go:276] 0 containers: []
	W0912 23:05:13.303499   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:13.303521   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:13.303536   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:13.378844   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:13.378867   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:13.378883   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:13.464768   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:13.464806   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:13.502736   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:13.502764   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:13.553473   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:13.553503   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:12.473027   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:14.973842   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:13.876334   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:15.877134   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:14.234722   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:16.734222   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:16.067463   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:16.081169   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:16.081269   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:16.115663   62386 cri.go:89] found id: ""
	I0912 23:05:16.115688   62386 logs.go:276] 0 containers: []
	W0912 23:05:16.115696   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:16.115705   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:16.115761   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:16.153429   62386 cri.go:89] found id: ""
	I0912 23:05:16.153460   62386 logs.go:276] 0 containers: []
	W0912 23:05:16.153469   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:16.153476   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:16.153535   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:16.187935   62386 cri.go:89] found id: ""
	I0912 23:05:16.187957   62386 logs.go:276] 0 containers: []
	W0912 23:05:16.187965   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:16.187971   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:16.188029   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:16.221249   62386 cri.go:89] found id: ""
	I0912 23:05:16.221273   62386 logs.go:276] 0 containers: []
	W0912 23:05:16.221281   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:16.221287   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:16.221336   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:16.256441   62386 cri.go:89] found id: ""
	I0912 23:05:16.256466   62386 logs.go:276] 0 containers: []
	W0912 23:05:16.256474   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:16.256479   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:16.256546   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:16.290930   62386 cri.go:89] found id: ""
	I0912 23:05:16.290963   62386 logs.go:276] 0 containers: []
	W0912 23:05:16.290976   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:16.290985   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:16.291039   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:16.326665   62386 cri.go:89] found id: ""
	I0912 23:05:16.326689   62386 logs.go:276] 0 containers: []
	W0912 23:05:16.326697   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:16.326702   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:16.326749   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:16.365418   62386 cri.go:89] found id: ""
	I0912 23:05:16.365441   62386 logs.go:276] 0 containers: []
	W0912 23:05:16.365448   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:16.365458   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:16.365469   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:16.420003   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:16.420039   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:16.434561   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:16.434595   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:16.505201   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:16.505224   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:16.505295   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:16.584877   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:16.584914   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:19.121479   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:19.134519   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:19.134586   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:19.170401   62386 cri.go:89] found id: ""
	I0912 23:05:19.170433   62386 logs.go:276] 0 containers: []
	W0912 23:05:19.170444   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:19.170455   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:19.170530   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:19.204750   62386 cri.go:89] found id: ""
	I0912 23:05:19.204779   62386 logs.go:276] 0 containers: []
	W0912 23:05:19.204790   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:19.204797   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:19.204862   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:19.243938   62386 cri.go:89] found id: ""
	I0912 23:05:19.243966   62386 logs.go:276] 0 containers: []
	W0912 23:05:19.243975   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:19.243983   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:19.244041   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:19.284424   62386 cri.go:89] found id: ""
	I0912 23:05:19.284453   62386 logs.go:276] 0 containers: []
	W0912 23:05:19.284463   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:19.284469   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:19.284535   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:19.318962   62386 cri.go:89] found id: ""
	I0912 23:05:19.318990   62386 logs.go:276] 0 containers: []
	W0912 23:05:19.319000   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:19.319011   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:19.319068   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:17.474175   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:19.474829   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:18.376670   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:20.876863   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:19.234144   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:21.734549   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:19.356456   62386 cri.go:89] found id: ""
	I0912 23:05:19.356487   62386 logs.go:276] 0 containers: []
	W0912 23:05:19.356498   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:19.356505   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:19.356587   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:19.390344   62386 cri.go:89] found id: ""
	I0912 23:05:19.390369   62386 logs.go:276] 0 containers: []
	W0912 23:05:19.390377   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:19.390382   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:19.390429   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:19.425481   62386 cri.go:89] found id: ""
	I0912 23:05:19.425507   62386 logs.go:276] 0 containers: []
	W0912 23:05:19.425528   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:19.425536   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:19.425553   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:19.482051   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:19.482081   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:19.495732   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:19.495758   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:19.565385   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:19.565411   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:19.565428   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:19.640053   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:19.640084   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:22.179292   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:22.191905   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:22.191979   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:22.231402   62386 cri.go:89] found id: ""
	I0912 23:05:22.231429   62386 logs.go:276] 0 containers: []
	W0912 23:05:22.231439   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:22.231446   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:22.231501   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:22.265310   62386 cri.go:89] found id: ""
	I0912 23:05:22.265343   62386 logs.go:276] 0 containers: []
	W0912 23:05:22.265351   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:22.265356   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:22.265425   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:22.297487   62386 cri.go:89] found id: ""
	I0912 23:05:22.297516   62386 logs.go:276] 0 containers: []
	W0912 23:05:22.297532   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:22.297540   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:22.297598   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:22.335344   62386 cri.go:89] found id: ""
	I0912 23:05:22.335374   62386 logs.go:276] 0 containers: []
	W0912 23:05:22.335384   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:22.335391   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:22.335449   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:22.376379   62386 cri.go:89] found id: ""
	I0912 23:05:22.376404   62386 logs.go:276] 0 containers: []
	W0912 23:05:22.376413   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:22.376421   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:22.376484   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:22.416121   62386 cri.go:89] found id: ""
	I0912 23:05:22.416147   62386 logs.go:276] 0 containers: []
	W0912 23:05:22.416154   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:22.416160   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:22.416217   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:22.475037   62386 cri.go:89] found id: ""
	I0912 23:05:22.475114   62386 logs.go:276] 0 containers: []
	W0912 23:05:22.475127   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:22.475143   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:22.475207   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:22.509756   62386 cri.go:89] found id: ""
	I0912 23:05:22.509784   62386 logs.go:276] 0 containers: []
	W0912 23:05:22.509794   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:22.509804   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:22.509823   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:22.559071   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:22.559112   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:22.571951   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:22.571980   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:22.643017   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:22.643034   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:22.643045   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:22.728074   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:22.728113   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:21.475126   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:23.975217   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:22.876979   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:24.877525   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:26.879248   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:24.235855   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:26.734384   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:25.268293   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:25.281825   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:25.281906   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:25.315282   62386 cri.go:89] found id: ""
	I0912 23:05:25.315318   62386 logs.go:276] 0 containers: []
	W0912 23:05:25.315328   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:25.315336   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:25.315385   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:25.348647   62386 cri.go:89] found id: ""
	I0912 23:05:25.348679   62386 logs.go:276] 0 containers: []
	W0912 23:05:25.348690   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:25.348697   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:25.348758   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:25.382266   62386 cri.go:89] found id: ""
	I0912 23:05:25.382294   62386 logs.go:276] 0 containers: []
	W0912 23:05:25.382304   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:25.382311   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:25.382378   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:25.420016   62386 cri.go:89] found id: ""
	I0912 23:05:25.420044   62386 logs.go:276] 0 containers: []
	W0912 23:05:25.420056   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:25.420063   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:25.420126   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:25.456435   62386 cri.go:89] found id: ""
	I0912 23:05:25.456457   62386 logs.go:276] 0 containers: []
	W0912 23:05:25.456465   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:25.456470   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:25.456539   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:25.491658   62386 cri.go:89] found id: ""
	I0912 23:05:25.491715   62386 logs.go:276] 0 containers: []
	W0912 23:05:25.491729   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:25.491737   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:25.491790   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:25.526948   62386 cri.go:89] found id: ""
	I0912 23:05:25.526980   62386 logs.go:276] 0 containers: []
	W0912 23:05:25.526991   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:25.526998   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:25.527064   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:25.560291   62386 cri.go:89] found id: ""
	I0912 23:05:25.560323   62386 logs.go:276] 0 containers: []
	W0912 23:05:25.560345   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:25.560357   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:25.560372   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:25.612232   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:25.612276   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:25.626991   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:25.627028   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:25.695005   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:25.695038   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:25.695055   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:25.784310   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:25.784345   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:28.331410   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:28.343903   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:28.343967   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:28.380946   62386 cri.go:89] found id: ""
	I0912 23:05:28.380973   62386 logs.go:276] 0 containers: []
	W0912 23:05:28.380979   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:28.380985   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:28.381039   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:28.415013   62386 cri.go:89] found id: ""
	I0912 23:05:28.415042   62386 logs.go:276] 0 containers: []
	W0912 23:05:28.415052   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:28.415059   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:28.415120   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:28.451060   62386 cri.go:89] found id: ""
	I0912 23:05:28.451093   62386 logs.go:276] 0 containers: []
	W0912 23:05:28.451105   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:28.451113   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:28.451171   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:28.485664   62386 cri.go:89] found id: ""
	I0912 23:05:28.485693   62386 logs.go:276] 0 containers: []
	W0912 23:05:28.485704   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:28.485712   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:28.485774   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:28.520307   62386 cri.go:89] found id: ""
	I0912 23:05:28.520338   62386 logs.go:276] 0 containers: []
	W0912 23:05:28.520349   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:28.520359   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:28.520417   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:28.553111   62386 cri.go:89] found id: ""
	I0912 23:05:28.553139   62386 logs.go:276] 0 containers: []
	W0912 23:05:28.553147   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:28.553152   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:28.553208   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:28.586778   62386 cri.go:89] found id: ""
	I0912 23:05:28.586808   62386 logs.go:276] 0 containers: []
	W0912 23:05:28.586816   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:28.586822   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:28.586874   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:28.620760   62386 cri.go:89] found id: ""
	I0912 23:05:28.620784   62386 logs.go:276] 0 containers: []
	W0912 23:05:28.620791   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:28.620799   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:28.620811   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:28.701431   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:28.701481   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:28.741398   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:28.741431   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:28.793431   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:28.793469   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:28.809572   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:28.809600   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:28.894914   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:26.473222   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:28.474342   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:29.377090   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:31.378238   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:29.234479   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:31.734265   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:31.395663   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:31.408079   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:31.408160   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:31.445176   62386 cri.go:89] found id: ""
	I0912 23:05:31.445207   62386 logs.go:276] 0 containers: []
	W0912 23:05:31.445215   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:31.445221   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:31.445280   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:31.483446   62386 cri.go:89] found id: ""
	I0912 23:05:31.483472   62386 logs.go:276] 0 containers: []
	W0912 23:05:31.483480   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:31.483486   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:31.483544   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:31.519958   62386 cri.go:89] found id: ""
	I0912 23:05:31.519989   62386 logs.go:276] 0 containers: []
	W0912 23:05:31.519997   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:31.520003   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:31.520057   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:31.556719   62386 cri.go:89] found id: ""
	I0912 23:05:31.556748   62386 logs.go:276] 0 containers: []
	W0912 23:05:31.556759   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:31.556771   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:31.556832   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:31.596465   62386 cri.go:89] found id: ""
	I0912 23:05:31.596491   62386 logs.go:276] 0 containers: []
	W0912 23:05:31.596502   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:31.596508   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:31.596572   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:31.634562   62386 cri.go:89] found id: ""
	I0912 23:05:31.634592   62386 logs.go:276] 0 containers: []
	W0912 23:05:31.634601   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:31.634607   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:31.634665   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:31.669305   62386 cri.go:89] found id: ""
	I0912 23:05:31.669337   62386 logs.go:276] 0 containers: []
	W0912 23:05:31.669348   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:31.669356   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:31.669422   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:31.703081   62386 cri.go:89] found id: ""
	I0912 23:05:31.703111   62386 logs.go:276] 0 containers: []
	W0912 23:05:31.703121   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:31.703133   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:31.703148   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:31.742613   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:31.742635   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:31.797827   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:31.797872   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:31.811970   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:31.811999   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:31.888872   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:31.888896   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:31.888910   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:30.974024   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:32.974606   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:35.473280   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:33.876698   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:35.877749   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:33.734760   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:36.233363   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:34.469724   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:34.483511   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:34.483579   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:34.516198   62386 cri.go:89] found id: ""
	I0912 23:05:34.516222   62386 logs.go:276] 0 containers: []
	W0912 23:05:34.516229   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:34.516235   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:34.516301   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:34.550166   62386 cri.go:89] found id: ""
	I0912 23:05:34.550199   62386 logs.go:276] 0 containers: []
	W0912 23:05:34.550210   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:34.550218   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:34.550274   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:34.593361   62386 cri.go:89] found id: ""
	I0912 23:05:34.593401   62386 logs.go:276] 0 containers: []
	W0912 23:05:34.593412   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:34.593420   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:34.593483   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:34.639593   62386 cri.go:89] found id: ""
	I0912 23:05:34.639633   62386 logs.go:276] 0 containers: []
	W0912 23:05:34.639653   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:34.639661   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:34.639729   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:34.690382   62386 cri.go:89] found id: ""
	I0912 23:05:34.690410   62386 logs.go:276] 0 containers: []
	W0912 23:05:34.690417   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:34.690423   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:34.690483   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:34.727943   62386 cri.go:89] found id: ""
	I0912 23:05:34.727970   62386 logs.go:276] 0 containers: []
	W0912 23:05:34.727978   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:34.727983   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:34.728051   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:34.765558   62386 cri.go:89] found id: ""
	I0912 23:05:34.765586   62386 logs.go:276] 0 containers: []
	W0912 23:05:34.765593   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:34.765598   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:34.765663   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:34.801455   62386 cri.go:89] found id: ""
	I0912 23:05:34.801484   62386 logs.go:276] 0 containers: []
	W0912 23:05:34.801492   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:34.801500   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:34.801511   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:34.880260   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:34.880295   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:34.922827   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:34.922855   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:34.974609   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:34.974639   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:34.987945   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:34.987972   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:35.062008   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:37.562965   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:37.575149   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:37.575226   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:37.611980   62386 cri.go:89] found id: ""
	I0912 23:05:37.612014   62386 logs.go:276] 0 containers: []
	W0912 23:05:37.612026   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:37.612035   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:37.612102   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:37.645664   62386 cri.go:89] found id: ""
	I0912 23:05:37.645693   62386 logs.go:276] 0 containers: []
	W0912 23:05:37.645703   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:37.645711   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:37.645771   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:37.685333   62386 cri.go:89] found id: ""
	I0912 23:05:37.685356   62386 logs.go:276] 0 containers: []
	W0912 23:05:37.685364   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:37.685369   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:37.685428   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:37.719017   62386 cri.go:89] found id: ""
	I0912 23:05:37.719052   62386 logs.go:276] 0 containers: []
	W0912 23:05:37.719063   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:37.719071   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:37.719133   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:37.751534   62386 cri.go:89] found id: ""
	I0912 23:05:37.751569   62386 logs.go:276] 0 containers: []
	W0912 23:05:37.751579   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:37.751588   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:37.751647   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:37.785583   62386 cri.go:89] found id: ""
	I0912 23:05:37.785608   62386 logs.go:276] 0 containers: []
	W0912 23:05:37.785635   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:37.785642   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:37.785702   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:37.818396   62386 cri.go:89] found id: ""
	I0912 23:05:37.818428   62386 logs.go:276] 0 containers: []
	W0912 23:05:37.818438   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:37.818445   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:37.818504   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:37.853767   62386 cri.go:89] found id: ""
	I0912 23:05:37.853798   62386 logs.go:276] 0 containers: []
	W0912 23:05:37.853806   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:37.853814   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:37.853830   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:37.926273   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:37.926300   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:37.926315   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:38.014243   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:38.014279   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:38.052431   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:38.052455   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:38.103154   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:38.103188   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:37.972774   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:39.973976   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:37.878631   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:40.378366   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:38.234131   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:40.733727   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:40.617399   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:40.629412   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:40.629483   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:40.666668   62386 cri.go:89] found id: ""
	I0912 23:05:40.666693   62386 logs.go:276] 0 containers: []
	W0912 23:05:40.666700   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:40.666706   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:40.666751   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:40.697548   62386 cri.go:89] found id: ""
	I0912 23:05:40.697573   62386 logs.go:276] 0 containers: []
	W0912 23:05:40.697580   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:40.697585   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:40.697659   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:40.729426   62386 cri.go:89] found id: ""
	I0912 23:05:40.729450   62386 logs.go:276] 0 containers: []
	W0912 23:05:40.729458   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:40.729468   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:40.729517   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:40.766769   62386 cri.go:89] found id: ""
	I0912 23:05:40.766793   62386 logs.go:276] 0 containers: []
	W0912 23:05:40.766800   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:40.766804   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:40.766860   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:40.801523   62386 cri.go:89] found id: ""
	I0912 23:05:40.801550   62386 logs.go:276] 0 containers: []
	W0912 23:05:40.801557   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:40.801563   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:40.801641   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:40.839943   62386 cri.go:89] found id: ""
	I0912 23:05:40.839975   62386 logs.go:276] 0 containers: []
	W0912 23:05:40.839987   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:40.839993   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:40.840055   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:40.873231   62386 cri.go:89] found id: ""
	I0912 23:05:40.873260   62386 logs.go:276] 0 containers: []
	W0912 23:05:40.873268   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:40.873276   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:40.873325   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:40.920007   62386 cri.go:89] found id: ""
	I0912 23:05:40.920040   62386 logs.go:276] 0 containers: []
	W0912 23:05:40.920049   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:40.920057   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:40.920069   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:40.972684   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:40.972716   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:40.986768   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:40.986802   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:41.052454   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:41.052479   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:41.052494   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:41.133810   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:41.133850   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:43.672432   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:43.684493   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:43.684552   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:43.718130   62386 cri.go:89] found id: ""
	I0912 23:05:43.718155   62386 logs.go:276] 0 containers: []
	W0912 23:05:43.718163   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:43.718169   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:43.718228   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:43.751866   62386 cri.go:89] found id: ""
	I0912 23:05:43.751895   62386 logs.go:276] 0 containers: []
	W0912 23:05:43.751905   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:43.751912   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:43.751974   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:43.785544   62386 cri.go:89] found id: ""
	I0912 23:05:43.785571   62386 logs.go:276] 0 containers: []
	W0912 23:05:43.785583   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:43.785589   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:43.785664   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:43.820588   62386 cri.go:89] found id: ""
	I0912 23:05:43.820616   62386 logs.go:276] 0 containers: []
	W0912 23:05:43.820624   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:43.820630   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:43.820677   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:43.853567   62386 cri.go:89] found id: ""
	I0912 23:05:43.853600   62386 logs.go:276] 0 containers: []
	W0912 23:05:43.853631   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:43.853640   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:43.853696   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:43.888646   62386 cri.go:89] found id: ""
	I0912 23:05:43.888671   62386 logs.go:276] 0 containers: []
	W0912 23:05:43.888679   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:43.888684   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:43.888731   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:43.922563   62386 cri.go:89] found id: ""
	I0912 23:05:43.922596   62386 logs.go:276] 0 containers: []
	W0912 23:05:43.922607   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:43.922614   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:43.922667   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:43.956786   62386 cri.go:89] found id: ""
	I0912 23:05:43.956817   62386 logs.go:276] 0 containers: []
	W0912 23:05:43.956825   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:43.956834   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:43.956845   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:44.035351   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:44.035388   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:44.073301   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:44.073338   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:44.124754   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:44.124788   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:44.138899   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:44.138924   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:44.208682   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:42.474139   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:44.974214   61904 pod_ready.go:103] pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:42.876306   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:44.877310   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:46.878568   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:43.233358   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:45.233823   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:47.234529   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:46.709822   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:46.722782   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:46.722905   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:46.767512   62386 cri.go:89] found id: ""
	I0912 23:05:46.767537   62386 logs.go:276] 0 containers: []
	W0912 23:05:46.767545   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:05:46.767551   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:46.767603   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:46.812486   62386 cri.go:89] found id: ""
	I0912 23:05:46.812523   62386 logs.go:276] 0 containers: []
	W0912 23:05:46.812533   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:05:46.812541   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:46.812602   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:46.855093   62386 cri.go:89] found id: ""
	I0912 23:05:46.855125   62386 logs.go:276] 0 containers: []
	W0912 23:05:46.855134   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:05:46.855141   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:46.855214   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:46.899067   62386 cri.go:89] found id: ""
	I0912 23:05:46.899101   62386 logs.go:276] 0 containers: []
	W0912 23:05:46.899113   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:05:46.899121   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:46.899184   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:46.939775   62386 cri.go:89] found id: ""
	I0912 23:05:46.939802   62386 logs.go:276] 0 containers: []
	W0912 23:05:46.939810   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:05:46.939816   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:46.939863   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:46.975288   62386 cri.go:89] found id: ""
	I0912 23:05:46.975319   62386 logs.go:276] 0 containers: []
	W0912 23:05:46.975329   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:05:46.975343   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:46.975426   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:47.012985   62386 cri.go:89] found id: ""
	I0912 23:05:47.013018   62386 logs.go:276] 0 containers: []
	W0912 23:05:47.013030   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:47.013038   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:05:47.013104   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:05:47.052124   62386 cri.go:89] found id: ""
	I0912 23:05:47.052154   62386 logs.go:276] 0 containers: []
	W0912 23:05:47.052164   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:05:47.052175   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:47.052189   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:47.108769   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:47.108811   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:47.124503   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:47.124530   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:05:47.195340   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:05:47.195362   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:47.195380   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:47.297155   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:05:47.297204   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:46.473252   61904 pod_ready.go:82] duration metric: took 4m0.006064954s for pod "metrics-server-6867b74b74-kvpqz" in "kube-system" namespace to be "Ready" ...
	E0912 23:05:46.473275   61904 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0912 23:05:46.473282   61904 pod_ready.go:39] duration metric: took 4m4.576962836s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 23:05:46.473309   61904 api_server.go:52] waiting for apiserver process to appear ...
	I0912 23:05:46.473336   61904 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:46.473378   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:46.513731   61904 cri.go:89] found id: "115e1e7911747aa5930ece5298af89f0209bf07e0336cd598b8901e2a2af2e09"
	I0912 23:05:46.513759   61904 cri.go:89] found id: ""
	I0912 23:05:46.513768   61904 logs.go:276] 1 containers: [115e1e7911747aa5930ece5298af89f0209bf07e0336cd598b8901e2a2af2e09]
	I0912 23:05:46.513827   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:46.519031   61904 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:46.519099   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:46.560521   61904 cri.go:89] found id: "e099ac110cb9ec18f36a0fbbdb769c4bb0c2642bf081442d3054c280c663730f"
	I0912 23:05:46.560548   61904 cri.go:89] found id: ""
	I0912 23:05:46.560560   61904 logs.go:276] 1 containers: [e099ac110cb9ec18f36a0fbbdb769c4bb0c2642bf081442d3054c280c663730f]
	I0912 23:05:46.560623   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:46.564340   61904 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:46.564399   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:46.598825   61904 cri.go:89] found id: "7841230606dafeab68b501c35a871c84f440d0d9f4cde95a302770671152f168"
	I0912 23:05:46.598848   61904 cri.go:89] found id: ""
	I0912 23:05:46.598857   61904 logs.go:276] 1 containers: [7841230606dafeab68b501c35a871c84f440d0d9f4cde95a302770671152f168]
	I0912 23:05:46.598909   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:46.602944   61904 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:46.603005   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:46.640315   61904 cri.go:89] found id: "dc8c605cca940c298fda4fb0d3a0e55234ab799e5f4d684817c1c33aa6752880"
	I0912 23:05:46.640335   61904 cri.go:89] found id: ""
	I0912 23:05:46.640343   61904 logs.go:276] 1 containers: [dc8c605cca940c298fda4fb0d3a0e55234ab799e5f4d684817c1c33aa6752880]
	I0912 23:05:46.640395   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:46.644061   61904 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:46.644119   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:46.681114   61904 cri.go:89] found id: "0b058233860f229334437280ee5ccf5d203eaf352bae41a6af7d4602b55a3d64"
	I0912 23:05:46.681143   61904 cri.go:89] found id: ""
	I0912 23:05:46.681153   61904 logs.go:276] 1 containers: [0b058233860f229334437280ee5ccf5d203eaf352bae41a6af7d4602b55a3d64]
	I0912 23:05:46.681214   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:46.685151   61904 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:46.685223   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:46.723129   61904 cri.go:89] found id: "54dd60703518d12cf3cb62102bf8bec31d1e6b0f218f4c26391b234d58111e31"
	I0912 23:05:46.723150   61904 cri.go:89] found id: ""
	I0912 23:05:46.723160   61904 logs.go:276] 1 containers: [54dd60703518d12cf3cb62102bf8bec31d1e6b0f218f4c26391b234d58111e31]
	I0912 23:05:46.723208   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:46.727959   61904 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:46.728021   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:46.770194   61904 cri.go:89] found id: ""
	I0912 23:05:46.770219   61904 logs.go:276] 0 containers: []
	W0912 23:05:46.770229   61904 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:46.770236   61904 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0912 23:05:46.770296   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0912 23:05:46.819004   61904 cri.go:89] found id: "0e48efc9ba5a488b4984057a9785fbda6fcefcea047948ac3c83f09afd28efdb"
	I0912 23:05:46.819031   61904 cri.go:89] found id: "fdb0e5ac691d286cca5fe46e3435ece952c4b9cdc7df3481fec68adf1403689f"
	I0912 23:05:46.819037   61904 cri.go:89] found id: ""
	I0912 23:05:46.819045   61904 logs.go:276] 2 containers: [0e48efc9ba5a488b4984057a9785fbda6fcefcea047948ac3c83f09afd28efdb fdb0e5ac691d286cca5fe46e3435ece952c4b9cdc7df3481fec68adf1403689f]
	I0912 23:05:46.819105   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:46.824442   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:46.829336   61904 logs.go:123] Gathering logs for coredns [7841230606dafeab68b501c35a871c84f440d0d9f4cde95a302770671152f168] ...
	I0912 23:05:46.829367   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7841230606dafeab68b501c35a871c84f440d0d9f4cde95a302770671152f168"
	I0912 23:05:46.876170   61904 logs.go:123] Gathering logs for kube-controller-manager [54dd60703518d12cf3cb62102bf8bec31d1e6b0f218f4c26391b234d58111e31] ...
	I0912 23:05:46.876205   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54dd60703518d12cf3cb62102bf8bec31d1e6b0f218f4c26391b234d58111e31"
	I0912 23:05:46.944290   61904 logs.go:123] Gathering logs for storage-provisioner [0e48efc9ba5a488b4984057a9785fbda6fcefcea047948ac3c83f09afd28efdb] ...
	I0912 23:05:46.944336   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e48efc9ba5a488b4984057a9785fbda6fcefcea047948ac3c83f09afd28efdb"
	I0912 23:05:46.991117   61904 logs.go:123] Gathering logs for container status ...
	I0912 23:05:46.991154   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:47.041776   61904 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:47.041805   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:47.125682   61904 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:47.125720   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:47.141463   61904 logs.go:123] Gathering logs for etcd [e099ac110cb9ec18f36a0fbbdb769c4bb0c2642bf081442d3054c280c663730f] ...
	I0912 23:05:47.141505   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e099ac110cb9ec18f36a0fbbdb769c4bb0c2642bf081442d3054c280c663730f"
	I0912 23:05:47.193432   61904 logs.go:123] Gathering logs for kube-scheduler [dc8c605cca940c298fda4fb0d3a0e55234ab799e5f4d684817c1c33aa6752880] ...
	I0912 23:05:47.193477   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dc8c605cca940c298fda4fb0d3a0e55234ab799e5f4d684817c1c33aa6752880"
	I0912 23:05:47.238975   61904 logs.go:123] Gathering logs for kube-proxy [0b058233860f229334437280ee5ccf5d203eaf352bae41a6af7d4602b55a3d64] ...
	I0912 23:05:47.239000   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b058233860f229334437280ee5ccf5d203eaf352bae41a6af7d4602b55a3d64"
	I0912 23:05:47.282299   61904 logs.go:123] Gathering logs for storage-provisioner [fdb0e5ac691d286cca5fe46e3435ece952c4b9cdc7df3481fec68adf1403689f] ...
	I0912 23:05:47.282340   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fdb0e5ac691d286cca5fe46e3435ece952c4b9cdc7df3481fec68adf1403689f"
	I0912 23:05:47.322575   61904 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:47.322605   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:47.783079   61904 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:47.783116   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 23:05:47.909961   61904 logs.go:123] Gathering logs for kube-apiserver [115e1e7911747aa5930ece5298af89f0209bf07e0336cd598b8901e2a2af2e09] ...
	I0912 23:05:47.909994   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 115e1e7911747aa5930ece5298af89f0209bf07e0336cd598b8901e2a2af2e09"
	I0912 23:05:50.466816   61904 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:50.483164   61904 api_server.go:72] duration metric: took 4m15.815867821s to wait for apiserver process to appear ...
	I0912 23:05:50.483189   61904 api_server.go:88] waiting for apiserver healthz status ...
	I0912 23:05:50.483219   61904 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:50.483265   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:50.521905   61904 cri.go:89] found id: "115e1e7911747aa5930ece5298af89f0209bf07e0336cd598b8901e2a2af2e09"
	I0912 23:05:50.521932   61904 cri.go:89] found id: ""
	I0912 23:05:50.521942   61904 logs.go:276] 1 containers: [115e1e7911747aa5930ece5298af89f0209bf07e0336cd598b8901e2a2af2e09]
	I0912 23:05:50.522001   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:50.526289   61904 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:50.526355   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:50.565340   61904 cri.go:89] found id: "e099ac110cb9ec18f36a0fbbdb769c4bb0c2642bf081442d3054c280c663730f"
	I0912 23:05:50.565367   61904 cri.go:89] found id: ""
	I0912 23:05:50.565376   61904 logs.go:276] 1 containers: [e099ac110cb9ec18f36a0fbbdb769c4bb0c2642bf081442d3054c280c663730f]
	I0912 23:05:50.565434   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:50.569231   61904 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:50.569310   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:50.607696   61904 cri.go:89] found id: "7841230606dafeab68b501c35a871c84f440d0d9f4cde95a302770671152f168"
	I0912 23:05:50.607721   61904 cri.go:89] found id: ""
	I0912 23:05:50.607729   61904 logs.go:276] 1 containers: [7841230606dafeab68b501c35a871c84f440d0d9f4cde95a302770671152f168]
	I0912 23:05:50.607771   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:50.611696   61904 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:50.611753   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:50.647554   61904 cri.go:89] found id: "dc8c605cca940c298fda4fb0d3a0e55234ab799e5f4d684817c1c33aa6752880"
	I0912 23:05:50.647580   61904 cri.go:89] found id: ""
	I0912 23:05:50.647590   61904 logs.go:276] 1 containers: [dc8c605cca940c298fda4fb0d3a0e55234ab799e5f4d684817c1c33aa6752880]
	I0912 23:05:50.647649   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:50.652065   61904 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:50.652128   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:50.691276   61904 cri.go:89] found id: "0b058233860f229334437280ee5ccf5d203eaf352bae41a6af7d4602b55a3d64"
	I0912 23:05:50.691300   61904 cri.go:89] found id: ""
	I0912 23:05:50.691307   61904 logs.go:276] 1 containers: [0b058233860f229334437280ee5ccf5d203eaf352bae41a6af7d4602b55a3d64]
	I0912 23:05:50.691348   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:50.696475   61904 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:50.696537   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:50.732677   61904 cri.go:89] found id: "54dd60703518d12cf3cb62102bf8bec31d1e6b0f218f4c26391b234d58111e31"
	I0912 23:05:50.732704   61904 cri.go:89] found id: ""
	I0912 23:05:50.732714   61904 logs.go:276] 1 containers: [54dd60703518d12cf3cb62102bf8bec31d1e6b0f218f4c26391b234d58111e31]
	I0912 23:05:50.732771   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:50.737450   61904 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:50.737503   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:50.770732   61904 cri.go:89] found id: ""
	I0912 23:05:50.770762   61904 logs.go:276] 0 containers: []
	W0912 23:05:50.770773   61904 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:50.770781   61904 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0912 23:05:50.770830   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0912 23:05:49.376457   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:51.378141   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:49.732832   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:51.734674   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:49.841253   62386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:05:49.854221   62386 kubeadm.go:597] duration metric: took 4m1.913192999s to restartPrimaryControlPlane
	W0912 23:05:49.854297   62386 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0912 23:05:49.854335   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0912 23:05:51.221029   62386 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.366663525s)
	I0912 23:05:51.221129   62386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 23:05:51.238493   62386 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0912 23:05:51.250943   62386 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0912 23:05:51.264325   62386 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0912 23:05:51.264348   62386 kubeadm.go:157] found existing configuration files:
	
	I0912 23:05:51.264393   62386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0912 23:05:51.273514   62386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0912 23:05:51.273570   62386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0912 23:05:51.282967   62386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0912 23:05:51.291978   62386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0912 23:05:51.292037   62386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0912 23:05:51.301520   62386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0912 23:05:51.310439   62386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0912 23:05:51.310530   62386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0912 23:05:51.319803   62386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0912 23:05:51.328881   62386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0912 23:05:51.328956   62386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0912 23:05:51.337946   62386 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0912 23:05:51.565945   62386 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0912 23:05:50.804311   61904 cri.go:89] found id: "0e48efc9ba5a488b4984057a9785fbda6fcefcea047948ac3c83f09afd28efdb"
	I0912 23:05:50.804337   61904 cri.go:89] found id: "fdb0e5ac691d286cca5fe46e3435ece952c4b9cdc7df3481fec68adf1403689f"
	I0912 23:05:50.804342   61904 cri.go:89] found id: ""
	I0912 23:05:50.804349   61904 logs.go:276] 2 containers: [0e48efc9ba5a488b4984057a9785fbda6fcefcea047948ac3c83f09afd28efdb fdb0e5ac691d286cca5fe46e3435ece952c4b9cdc7df3481fec68adf1403689f]
	I0912 23:05:50.804396   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:50.808236   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:50.812298   61904 logs.go:123] Gathering logs for storage-provisioner [fdb0e5ac691d286cca5fe46e3435ece952c4b9cdc7df3481fec68adf1403689f] ...
	I0912 23:05:50.812316   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fdb0e5ac691d286cca5fe46e3435ece952c4b9cdc7df3481fec68adf1403689f"
	I0912 23:05:50.846429   61904 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:50.846457   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:50.917118   61904 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:50.917152   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:50.931954   61904 logs.go:123] Gathering logs for kube-apiserver [115e1e7911747aa5930ece5298af89f0209bf07e0336cd598b8901e2a2af2e09] ...
	I0912 23:05:50.931992   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 115e1e7911747aa5930ece5298af89f0209bf07e0336cd598b8901e2a2af2e09"
	I0912 23:05:50.979688   61904 logs.go:123] Gathering logs for etcd [e099ac110cb9ec18f36a0fbbdb769c4bb0c2642bf081442d3054c280c663730f] ...
	I0912 23:05:50.979727   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e099ac110cb9ec18f36a0fbbdb769c4bb0c2642bf081442d3054c280c663730f"
	I0912 23:05:51.026392   61904 logs.go:123] Gathering logs for coredns [7841230606dafeab68b501c35a871c84f440d0d9f4cde95a302770671152f168] ...
	I0912 23:05:51.026421   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7841230606dafeab68b501c35a871c84f440d0d9f4cde95a302770671152f168"
	I0912 23:05:51.063302   61904 logs.go:123] Gathering logs for storage-provisioner [0e48efc9ba5a488b4984057a9785fbda6fcefcea047948ac3c83f09afd28efdb] ...
	I0912 23:05:51.063330   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e48efc9ba5a488b4984057a9785fbda6fcefcea047948ac3c83f09afd28efdb"
	I0912 23:05:51.096593   61904 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:51.096626   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 23:05:51.198824   61904 logs.go:123] Gathering logs for kube-scheduler [dc8c605cca940c298fda4fb0d3a0e55234ab799e5f4d684817c1c33aa6752880] ...
	I0912 23:05:51.198856   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dc8c605cca940c298fda4fb0d3a0e55234ab799e5f4d684817c1c33aa6752880"
	I0912 23:05:51.244247   61904 logs.go:123] Gathering logs for kube-proxy [0b058233860f229334437280ee5ccf5d203eaf352bae41a6af7d4602b55a3d64] ...
	I0912 23:05:51.244271   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b058233860f229334437280ee5ccf5d203eaf352bae41a6af7d4602b55a3d64"
	I0912 23:05:51.284694   61904 logs.go:123] Gathering logs for kube-controller-manager [54dd60703518d12cf3cb62102bf8bec31d1e6b0f218f4c26391b234d58111e31] ...
	I0912 23:05:51.284717   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54dd60703518d12cf3cb62102bf8bec31d1e6b0f218f4c26391b234d58111e31"
	I0912 23:05:51.340541   61904 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:51.340569   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:51.754823   61904 logs.go:123] Gathering logs for container status ...
	I0912 23:05:51.754864   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:54.294987   61904 api_server.go:253] Checking apiserver healthz at https://192.168.72.96:8443/healthz ...
	I0912 23:05:54.300314   61904 api_server.go:279] https://192.168.72.96:8443/healthz returned 200:
	ok
	I0912 23:05:54.301385   61904 api_server.go:141] control plane version: v1.31.1
	I0912 23:05:54.301413   61904 api_server.go:131] duration metric: took 3.818216539s to wait for apiserver health ...
	I0912 23:05:54.301421   61904 system_pods.go:43] waiting for kube-system pods to appear ...
	I0912 23:05:54.301441   61904 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:05:54.301491   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:05:54.342980   61904 cri.go:89] found id: "115e1e7911747aa5930ece5298af89f0209bf07e0336cd598b8901e2a2af2e09"
	I0912 23:05:54.343001   61904 cri.go:89] found id: ""
	I0912 23:05:54.343010   61904 logs.go:276] 1 containers: [115e1e7911747aa5930ece5298af89f0209bf07e0336cd598b8901e2a2af2e09]
	I0912 23:05:54.343063   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:54.347269   61904 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:05:54.347352   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:05:54.386656   61904 cri.go:89] found id: "e099ac110cb9ec18f36a0fbbdb769c4bb0c2642bf081442d3054c280c663730f"
	I0912 23:05:54.386674   61904 cri.go:89] found id: ""
	I0912 23:05:54.386681   61904 logs.go:276] 1 containers: [e099ac110cb9ec18f36a0fbbdb769c4bb0c2642bf081442d3054c280c663730f]
	I0912 23:05:54.386755   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:54.390707   61904 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:05:54.390769   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:05:54.433746   61904 cri.go:89] found id: "7841230606dafeab68b501c35a871c84f440d0d9f4cde95a302770671152f168"
	I0912 23:05:54.433773   61904 cri.go:89] found id: ""
	I0912 23:05:54.433782   61904 logs.go:276] 1 containers: [7841230606dafeab68b501c35a871c84f440d0d9f4cde95a302770671152f168]
	I0912 23:05:54.433844   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:54.438175   61904 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:05:54.438231   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:05:54.475067   61904 cri.go:89] found id: "dc8c605cca940c298fda4fb0d3a0e55234ab799e5f4d684817c1c33aa6752880"
	I0912 23:05:54.475095   61904 cri.go:89] found id: ""
	I0912 23:05:54.475105   61904 logs.go:276] 1 containers: [dc8c605cca940c298fda4fb0d3a0e55234ab799e5f4d684817c1c33aa6752880]
	I0912 23:05:54.475178   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:54.479308   61904 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:05:54.479367   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:05:54.524489   61904 cri.go:89] found id: "0b058233860f229334437280ee5ccf5d203eaf352bae41a6af7d4602b55a3d64"
	I0912 23:05:54.524513   61904 cri.go:89] found id: ""
	I0912 23:05:54.524521   61904 logs.go:276] 1 containers: [0b058233860f229334437280ee5ccf5d203eaf352bae41a6af7d4602b55a3d64]
	I0912 23:05:54.524583   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:54.528854   61904 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:05:54.528925   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:05:54.569776   61904 cri.go:89] found id: "54dd60703518d12cf3cb62102bf8bec31d1e6b0f218f4c26391b234d58111e31"
	I0912 23:05:54.569801   61904 cri.go:89] found id: ""
	I0912 23:05:54.569811   61904 logs.go:276] 1 containers: [54dd60703518d12cf3cb62102bf8bec31d1e6b0f218f4c26391b234d58111e31]
	I0912 23:05:54.569865   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:54.574000   61904 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:05:54.574070   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:05:54.613184   61904 cri.go:89] found id: ""
	I0912 23:05:54.613212   61904 logs.go:276] 0 containers: []
	W0912 23:05:54.613222   61904 logs.go:278] No container was found matching "kindnet"
	I0912 23:05:54.613229   61904 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0912 23:05:54.613292   61904 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0912 23:05:54.648971   61904 cri.go:89] found id: "0e48efc9ba5a488b4984057a9785fbda6fcefcea047948ac3c83f09afd28efdb"
	I0912 23:05:54.648992   61904 cri.go:89] found id: "fdb0e5ac691d286cca5fe46e3435ece952c4b9cdc7df3481fec68adf1403689f"
	I0912 23:05:54.648997   61904 cri.go:89] found id: ""
	I0912 23:05:54.649006   61904 logs.go:276] 2 containers: [0e48efc9ba5a488b4984057a9785fbda6fcefcea047948ac3c83f09afd28efdb fdb0e5ac691d286cca5fe46e3435ece952c4b9cdc7df3481fec68adf1403689f]
	I0912 23:05:54.649062   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:54.653671   61904 ssh_runner.go:195] Run: which crictl
	I0912 23:05:54.657535   61904 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:05:54.657557   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 23:05:54.781055   61904 logs.go:123] Gathering logs for kube-controller-manager [54dd60703518d12cf3cb62102bf8bec31d1e6b0f218f4c26391b234d58111e31] ...
	I0912 23:05:54.781094   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54dd60703518d12cf3cb62102bf8bec31d1e6b0f218f4c26391b234d58111e31"
	I0912 23:05:54.832441   61904 logs.go:123] Gathering logs for container status ...
	I0912 23:05:54.832477   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:05:54.887662   61904 logs.go:123] Gathering logs for kubelet ...
	I0912 23:05:54.887695   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:05:54.958381   61904 logs.go:123] Gathering logs for dmesg ...
	I0912 23:05:54.958417   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:05:54.973583   61904 logs.go:123] Gathering logs for coredns [7841230606dafeab68b501c35a871c84f440d0d9f4cde95a302770671152f168] ...
	I0912 23:05:54.973609   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7841230606dafeab68b501c35a871c84f440d0d9f4cde95a302770671152f168"
	I0912 23:05:55.022192   61904 logs.go:123] Gathering logs for kube-scheduler [dc8c605cca940c298fda4fb0d3a0e55234ab799e5f4d684817c1c33aa6752880] ...
	I0912 23:05:55.022217   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dc8c605cca940c298fda4fb0d3a0e55234ab799e5f4d684817c1c33aa6752880"
	I0912 23:05:55.059878   61904 logs.go:123] Gathering logs for kube-proxy [0b058233860f229334437280ee5ccf5d203eaf352bae41a6af7d4602b55a3d64] ...
	I0912 23:05:55.059910   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b058233860f229334437280ee5ccf5d203eaf352bae41a6af7d4602b55a3d64"
	I0912 23:05:55.104371   61904 logs.go:123] Gathering logs for storage-provisioner [0e48efc9ba5a488b4984057a9785fbda6fcefcea047948ac3c83f09afd28efdb] ...
	I0912 23:05:55.104399   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e48efc9ba5a488b4984057a9785fbda6fcefcea047948ac3c83f09afd28efdb"
	I0912 23:05:55.139625   61904 logs.go:123] Gathering logs for storage-provisioner [fdb0e5ac691d286cca5fe46e3435ece952c4b9cdc7df3481fec68adf1403689f] ...
	I0912 23:05:55.139656   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fdb0e5ac691d286cca5fe46e3435ece952c4b9cdc7df3481fec68adf1403689f"
	I0912 23:05:55.172414   61904 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:05:55.172442   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:05:55.528482   61904 logs.go:123] Gathering logs for kube-apiserver [115e1e7911747aa5930ece5298af89f0209bf07e0336cd598b8901e2a2af2e09] ...
	I0912 23:05:55.528522   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 115e1e7911747aa5930ece5298af89f0209bf07e0336cd598b8901e2a2af2e09"
	I0912 23:05:55.572399   61904 logs.go:123] Gathering logs for etcd [e099ac110cb9ec18f36a0fbbdb769c4bb0c2642bf081442d3054c280c663730f] ...
	I0912 23:05:55.572433   61904 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e099ac110cb9ec18f36a0fbbdb769c4bb0c2642bf081442d3054c280c663730f"
	I0912 23:05:53.876844   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:55.878108   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:54.235375   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:56.733525   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:58.125405   61904 system_pods.go:59] 8 kube-system pods found
	I0912 23:05:58.125436   61904 system_pods.go:61] "coredns-7c65d6cfc9-m8t6h" [93c63198-ebd2-4e88-9be8-912425b1eb84] Running
	I0912 23:05:58.125441   61904 system_pods.go:61] "etcd-embed-certs-378112" [cc716756-abda-447a-ad36-bfc89c129bdf] Running
	I0912 23:05:58.125445   61904 system_pods.go:61] "kube-apiserver-embed-certs-378112" [039a7348-41bf-481f-9218-3ea0c2ff1373] Running
	I0912 23:05:58.125449   61904 system_pods.go:61] "kube-controller-manager-embed-certs-378112" [9bcb8af0-6e4b-405a-94a1-5be70d737cfa] Running
	I0912 23:05:58.125452   61904 system_pods.go:61] "kube-proxy-fvbbq" [b172754e-bb5a-40ba-a9be-a7632081defc] Running
	I0912 23:05:58.125455   61904 system_pods.go:61] "kube-scheduler-embed-certs-378112" [f7cb022f-6c15-4c70-916f-39313199effe] Running
	I0912 23:05:58.125461   61904 system_pods.go:61] "metrics-server-6867b74b74-kvpqz" [04e47cfd-bada-4cbd-8792-db4edebfb282] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0912 23:05:58.125465   61904 system_pods.go:61] "storage-provisioner" [a1840d2a-8e08-4fa2-9ed5-ac96fb0baf4d] Running
	I0912 23:05:58.125472   61904 system_pods.go:74] duration metric: took 3.824046737s to wait for pod list to return data ...
	I0912 23:05:58.125478   61904 default_sa.go:34] waiting for default service account to be created ...
	I0912 23:05:58.128039   61904 default_sa.go:45] found service account: "default"
	I0912 23:05:58.128060   61904 default_sa.go:55] duration metric: took 2.576708ms for default service account to be created ...
	I0912 23:05:58.128067   61904 system_pods.go:116] waiting for k8s-apps to be running ...
	I0912 23:05:58.132607   61904 system_pods.go:86] 8 kube-system pods found
	I0912 23:05:58.132629   61904 system_pods.go:89] "coredns-7c65d6cfc9-m8t6h" [93c63198-ebd2-4e88-9be8-912425b1eb84] Running
	I0912 23:05:58.132634   61904 system_pods.go:89] "etcd-embed-certs-378112" [cc716756-abda-447a-ad36-bfc89c129bdf] Running
	I0912 23:05:58.132638   61904 system_pods.go:89] "kube-apiserver-embed-certs-378112" [039a7348-41bf-481f-9218-3ea0c2ff1373] Running
	I0912 23:05:58.132642   61904 system_pods.go:89] "kube-controller-manager-embed-certs-378112" [9bcb8af0-6e4b-405a-94a1-5be70d737cfa] Running
	I0912 23:05:58.132647   61904 system_pods.go:89] "kube-proxy-fvbbq" [b172754e-bb5a-40ba-a9be-a7632081defc] Running
	I0912 23:05:58.132652   61904 system_pods.go:89] "kube-scheduler-embed-certs-378112" [f7cb022f-6c15-4c70-916f-39313199effe] Running
	I0912 23:05:58.132661   61904 system_pods.go:89] "metrics-server-6867b74b74-kvpqz" [04e47cfd-bada-4cbd-8792-db4edebfb282] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0912 23:05:58.132671   61904 system_pods.go:89] "storage-provisioner" [a1840d2a-8e08-4fa2-9ed5-ac96fb0baf4d] Running
	I0912 23:05:58.132682   61904 system_pods.go:126] duration metric: took 4.609196ms to wait for k8s-apps to be running ...
	I0912 23:05:58.132694   61904 system_svc.go:44] waiting for kubelet service to be running ....
	I0912 23:05:58.132739   61904 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 23:05:58.149020   61904 system_svc.go:56] duration metric: took 16.317773ms WaitForService to wait for kubelet
	I0912 23:05:58.149048   61904 kubeadm.go:582] duration metric: took 4m23.481755577s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 23:05:58.149073   61904 node_conditions.go:102] verifying NodePressure condition ...
	I0912 23:05:58.152519   61904 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0912 23:05:58.152547   61904 node_conditions.go:123] node cpu capacity is 2
	I0912 23:05:58.152559   61904 node_conditions.go:105] duration metric: took 3.480407ms to run NodePressure ...
	I0912 23:05:58.152570   61904 start.go:241] waiting for startup goroutines ...
	I0912 23:05:58.152576   61904 start.go:246] waiting for cluster config update ...
	I0912 23:05:58.152587   61904 start.go:255] writing updated cluster config ...
	I0912 23:05:58.152833   61904 ssh_runner.go:195] Run: rm -f paused
	I0912 23:05:58.203069   61904 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0912 23:05:58.204904   61904 out.go:177] * Done! kubectl is now configured to use "embed-certs-378112" cluster and "default" namespace by default
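The embed-certs run above finishes its start-up verification by listing the eight kube-system pods, confirming the "default" service account, checking that the kubelet systemd unit is active, and verifying node conditions before printing "Done!". A minimal client-go sketch of that kind of "k8s-apps running" check (a hypothetical helper written for illustration, not minikube's actual system_pods code) could look like this:

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// kubeSystemPodsRunning lists the kube-system pods and reports any that are
// not yet Running or Succeeded, similar in spirit to the system_pods wait above.
func kubeSystemPodsRunning(ctx context.Context, cs kubernetes.Interface) ([]string, error) {
	pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
	if err != nil {
		return nil, err
	}
	var notRunning []string
	for _, p := range pods.Items {
		if p.Status.Phase != corev1.PodRunning && p.Status.Phase != corev1.PodSucceeded {
			notRunning = append(notRunning, fmt.Sprintf("%s (%s)", p.Name, p.Status.Phase))
		}
	}
	return notRunning, nil
}

func main() {
	// Load the default kubeconfig (~/.kube/config); minikube uses its own paths.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	pending, err := kubeSystemPodsRunning(context.Background(), cs)
	if err != nil {
		panic(err)
	}
	fmt.Println("pods not yet running:", pending)
}
```

Note that in the log above minikube still proceeds while metrics-server is Pending; the sketch only illustrates the listing-and-checking shape of the step, not minikube's exact pass/fail policy.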
	I0912 23:05:58.376646   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:00.377105   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:05:58.733992   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:01.233920   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:02.877229   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:04.877926   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:03.733400   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:05.733949   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:07.377308   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:09.877459   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:08.234361   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:10.732480   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:12.376661   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:14.877753   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:16.877980   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:12.733231   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:14.734774   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:17.233456   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:19.376959   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:21.878279   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:19.234570   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:21.733406   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:24.376731   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:26.377122   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:23.733543   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:25.734296   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:28.877696   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:31.376778   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:28.232623   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:30.233670   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:32.234123   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:33.377208   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:35.877039   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:34.234158   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:36.234309   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:37.877566   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:40.376636   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:38.733567   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:40.734256   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:42.377148   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:44.377925   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:46.877563   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:42.734926   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:45.233731   61354 pod_ready.go:103] pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:45.727482   61354 pod_ready.go:82] duration metric: took 4m0.000232225s for pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace to be "Ready" ...
	E0912 23:06:45.727510   61354 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-q5vlk" in "kube-system" namespace to be "Ready" (will not retry!)
	I0912 23:06:45.727526   61354 pod_ready.go:39] duration metric: took 4m13.050011701s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 23:06:45.727553   61354 kubeadm.go:597] duration metric: took 4m21.402206535s to restartPrimaryControlPlane
	W0912 23:06:45.727638   61354 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0912 23:06:45.727686   61354 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
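In the lines above, process 61354 spends the full 4m0s WaitExtra budget polling "metrics-server-6867b74b74-q5vlk" for a Ready condition, gives up, and falls back to a full `kubeadm reset`. A readiness poll with a hard deadline of that shape can be sketched with client-go's wait helpers (hypothetical code, not minikube's pod_ready implementation):

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls the pod's Ready condition every two seconds until it is
// True or the timeout expires, mirroring the 4m0s WaitExtra loop in the log.
func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // transient API errors: keep polling
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	err = waitPodReady(context.Background(), cs, "kube-system", "metrics-server-6867b74b74-q5vlk", 4*time.Minute)
	// A timeout here corresponds to the "will not retry" error logged above.
	fmt.Println("wait result:", err)
}
```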
	I0912 23:06:49.376346   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:51.376720   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:53.877426   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:56.377076   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:06:58.876146   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:07:00.876887   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:07:02.877032   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:07:04.877344   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:07:07.376495   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:07:09.377212   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:07:11.878788   62943 pod_ready.go:103] pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace has status "Ready":"False"
	I0912 23:07:11.920816   61354 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.193093675s)
	I0912 23:07:11.920900   61354 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 23:07:11.939101   61354 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0912 23:07:11.950330   61354 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0912 23:07:11.960727   61354 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0912 23:07:11.960753   61354 kubeadm.go:157] found existing configuration files:
	
	I0912 23:07:11.960802   61354 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0912 23:07:11.970932   61354 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0912 23:07:11.970988   61354 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0912 23:07:11.981111   61354 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0912 23:07:11.990384   61354 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0912 23:07:11.990455   61354 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0912 23:07:12.000218   61354 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0912 23:07:12.009191   61354 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0912 23:07:12.009266   61354 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0912 23:07:12.019270   61354 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0912 23:07:12.028102   61354 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0912 23:07:12.028165   61354 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
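The config check above greps each file under /etc/kubernetes for the expected endpoint "https://control-plane.minikube.internal:8444" and removes any file that does not reference it; here every grep fails only because the files no longer exist after the reset. The same keep-only-matching-configs step, done locally instead of over minikube's ssh_runner, can be sketched as a small helper (names are illustrative):

```go
package main

import (
	"errors"
	"fmt"
	"os"
	"strings"
)

// removeStaleKubeconfig deletes path when the file exists but does not mention
// endpoint, mirroring the grep-then-"rm -f" pattern in the log. A missing file
// is treated as already clean.
func removeStaleKubeconfig(path, endpoint string) error {
	data, err := os.ReadFile(path)
	if errors.Is(err, os.ErrNotExist) {
		return nil
	}
	if err != nil {
		return err
	}
	if strings.Contains(string(data), endpoint) {
		return nil // config already targets the expected endpoint
	}
	return os.Remove(path)
}

func main() {
	endpoint := "https://control-plane.minikube.internal:8444"
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		if err := removeStaleKubeconfig(f, endpoint); err != nil {
			fmt.Fprintln(os.Stderr, "cleanup failed for", f, ":", err)
		}
	}
}
```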
	I0912 23:07:12.037512   61354 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0912 23:07:12.083528   61354 kubeadm.go:310] W0912 23:07:12.055244    2491 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0912 23:07:12.084358   61354 kubeadm.go:310] W0912 23:07:12.056267    2491 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0912 23:07:12.190683   61354 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0912 23:07:12.377757   62943 pod_ready.go:82] duration metric: took 4m0.007392806s for pod "metrics-server-6867b74b74-4v7f5" in "kube-system" namespace to be "Ready" ...
	E0912 23:07:12.377785   62943 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0912 23:07:12.377794   62943 pod_ready.go:39] duration metric: took 4m2.807476708s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 23:07:12.377812   62943 api_server.go:52] waiting for apiserver process to appear ...
	I0912 23:07:12.377843   62943 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:07:12.377898   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:07:12.431934   62943 cri.go:89] found id: "3c73944a510413add414948e1c8fd8d71ef732d91b7ddc96646b7ba376f81416"
	I0912 23:07:12.431964   62943 cri.go:89] found id: "00f124dff0f77cf23377830c9dc5e2a8676d1a40ebf70730705d537ba7a4b8d3"
	I0912 23:07:12.431969   62943 cri.go:89] found id: ""
	I0912 23:07:12.431977   62943 logs.go:276] 2 containers: [3c73944a510413add414948e1c8fd8d71ef732d91b7ddc96646b7ba376f81416 00f124dff0f77cf23377830c9dc5e2a8676d1a40ebf70730705d537ba7a4b8d3]
	I0912 23:07:12.432043   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:12.436742   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:12.440569   62943 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:07:12.440626   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:07:12.476994   62943 cri.go:89] found id: "35282e97473f2f870f5366d221b9a40e0a456b6fd44fffe321c9fe160448cc29"
	I0912 23:07:12.477016   62943 cri.go:89] found id: ""
	I0912 23:07:12.477024   62943 logs.go:276] 1 containers: [35282e97473f2f870f5366d221b9a40e0a456b6fd44fffe321c9fe160448cc29]
	I0912 23:07:12.477076   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:12.481585   62943 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:07:12.481661   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:07:12.524772   62943 cri.go:89] found id: "e59d289c9afefef6dd746277d09550e5c7d708d7c1ba9f74b2dd91300f807189"
	I0912 23:07:12.524797   62943 cri.go:89] found id: ""
	I0912 23:07:12.524808   62943 logs.go:276] 1 containers: [e59d289c9afefef6dd746277d09550e5c7d708d7c1ba9f74b2dd91300f807189]
	I0912 23:07:12.524860   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:12.529988   62943 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:07:12.530052   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:07:12.573298   62943 cri.go:89] found id: "3187fdef2bd318bb335be822a4e75cfd0ac02a49607d8e398ab18102a83437ec"
	I0912 23:07:12.573329   62943 cri.go:89] found id: ""
	I0912 23:07:12.573340   62943 logs.go:276] 1 containers: [3187fdef2bd318bb335be822a4e75cfd0ac02a49607d8e398ab18102a83437ec]
	I0912 23:07:12.573400   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:12.579767   62943 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:07:12.579844   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:07:12.624696   62943 cri.go:89] found id: "4c4807559910184befc0b6a9ccb40b1cb9d4997f6b1405b76ba67049ce730f37"
	I0912 23:07:12.624723   62943 cri.go:89] found id: ""
	I0912 23:07:12.624733   62943 logs.go:276] 1 containers: [4c4807559910184befc0b6a9ccb40b1cb9d4997f6b1405b76ba67049ce730f37]
	I0912 23:07:12.624790   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:12.632367   62943 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:07:12.632430   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:07:12.667385   62943 cri.go:89] found id: "eb473fa0b2d91e95a8716ca3d1bad44b431a3dfbfae11984657c16c7392b58f0"
	I0912 23:07:12.667411   62943 cri.go:89] found id: "635fd2c2a6dd2558d88447c857c1c59b9cf6883a458ab9f62fbcd48fc94048f7"
	I0912 23:07:12.667415   62943 cri.go:89] found id: ""
	I0912 23:07:12.667422   62943 logs.go:276] 2 containers: [eb473fa0b2d91e95a8716ca3d1bad44b431a3dfbfae11984657c16c7392b58f0 635fd2c2a6dd2558d88447c857c1c59b9cf6883a458ab9f62fbcd48fc94048f7]
	I0912 23:07:12.667474   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:12.671688   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:12.675901   62943 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:07:12.675964   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:07:12.712909   62943 cri.go:89] found id: ""
	I0912 23:07:12.712944   62943 logs.go:276] 0 containers: []
	W0912 23:07:12.712955   62943 logs.go:278] No container was found matching "kindnet"
	I0912 23:07:12.712962   62943 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0912 23:07:12.713023   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0912 23:07:12.755865   62943 cri.go:89] found id: "3d117ed77ba5f8ccc22fe496a5f719999fd9e893623ba84686634f7b92c09713"
	I0912 23:07:12.755888   62943 cri.go:89] found id: "d40483dfc659456939d7965c1ecf1113c1c38e26fe985bf19f26a0123206ea3a"
	I0912 23:07:12.755894   62943 cri.go:89] found id: ""
	I0912 23:07:12.755903   62943 logs.go:276] 2 containers: [3d117ed77ba5f8ccc22fe496a5f719999fd9e893623ba84686634f7b92c09713 d40483dfc659456939d7965c1ecf1113c1c38e26fe985bf19f26a0123206ea3a]
	I0912 23:07:12.755958   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:12.760095   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:12.763682   62943 logs.go:123] Gathering logs for kube-apiserver [00f124dff0f77cf23377830c9dc5e2a8676d1a40ebf70730705d537ba7a4b8d3] ...
	I0912 23:07:12.763706   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 00f124dff0f77cf23377830c9dc5e2a8676d1a40ebf70730705d537ba7a4b8d3"
	I0912 23:07:12.811915   62943 logs.go:123] Gathering logs for kube-proxy [4c4807559910184befc0b6a9ccb40b1cb9d4997f6b1405b76ba67049ce730f37] ...
	I0912 23:07:12.811949   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c4807559910184befc0b6a9ccb40b1cb9d4997f6b1405b76ba67049ce730f37"
	I0912 23:07:12.846546   62943 logs.go:123] Gathering logs for kube-controller-manager [eb473fa0b2d91e95a8716ca3d1bad44b431a3dfbfae11984657c16c7392b58f0] ...
	I0912 23:07:12.846582   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb473fa0b2d91e95a8716ca3d1bad44b431a3dfbfae11984657c16c7392b58f0"
	I0912 23:07:12.904475   62943 logs.go:123] Gathering logs for kubelet ...
	I0912 23:07:12.904518   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:07:12.984863   62943 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:07:12.984898   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 23:07:13.116848   62943 logs.go:123] Gathering logs for etcd [35282e97473f2f870f5366d221b9a40e0a456b6fd44fffe321c9fe160448cc29] ...
	I0912 23:07:13.116879   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35282e97473f2f870f5366d221b9a40e0a456b6fd44fffe321c9fe160448cc29"
	I0912 23:07:13.165949   62943 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:07:13.165978   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:07:13.704372   62943 logs.go:123] Gathering logs for container status ...
	I0912 23:07:13.704424   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:07:13.757082   62943 logs.go:123] Gathering logs for kube-apiserver [3c73944a510413add414948e1c8fd8d71ef732d91b7ddc96646b7ba376f81416] ...
	I0912 23:07:13.757123   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c73944a510413add414948e1c8fd8d71ef732d91b7ddc96646b7ba376f81416"
	I0912 23:07:13.802951   62943 logs.go:123] Gathering logs for storage-provisioner [3d117ed77ba5f8ccc22fe496a5f719999fd9e893623ba84686634f7b92c09713] ...
	I0912 23:07:13.802988   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d117ed77ba5f8ccc22fe496a5f719999fd9e893623ba84686634f7b92c09713"
	I0912 23:07:13.838952   62943 logs.go:123] Gathering logs for dmesg ...
	I0912 23:07:13.838989   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:07:13.852983   62943 logs.go:123] Gathering logs for coredns [e59d289c9afefef6dd746277d09550e5c7d708d7c1ba9f74b2dd91300f807189] ...
	I0912 23:07:13.853015   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e59d289c9afefef6dd746277d09550e5c7d708d7c1ba9f74b2dd91300f807189"
	I0912 23:07:13.898651   62943 logs.go:123] Gathering logs for kube-scheduler [3187fdef2bd318bb335be822a4e75cfd0ac02a49607d8e398ab18102a83437ec] ...
	I0912 23:07:13.898679   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3187fdef2bd318bb335be822a4e75cfd0ac02a49607d8e398ab18102a83437ec"
	I0912 23:07:13.943800   62943 logs.go:123] Gathering logs for kube-controller-manager [635fd2c2a6dd2558d88447c857c1c59b9cf6883a458ab9f62fbcd48fc94048f7] ...
	I0912 23:07:13.943838   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 635fd2c2a6dd2558d88447c857c1c59b9cf6883a458ab9f62fbcd48fc94048f7"
	I0912 23:07:13.984960   62943 logs.go:123] Gathering logs for storage-provisioner [d40483dfc659456939d7965c1ecf1113c1c38e26fe985bf19f26a0123206ea3a] ...
	I0912 23:07:13.984996   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d40483dfc659456939d7965c1ecf1113c1c38e26fe985bf19f26a0123206ea3a"
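Each log-gathering round above follows the same two-step pattern: resolve container IDs with `crictl ps -a --quiet --name=<component>`, then dump each container with `crictl logs --tail 400 <id>`. A stand-alone sketch of that pattern using os/exec, run directly on the node rather than through minikube's ssh_runner, might be:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs returns all CRI container IDs whose name matches component,
// using the same crictl flags that appear in the log above.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

// tailLogs prints the last 400 log lines of one container, as in
// "crictl logs --tail 400 <id>".
func tailLogs(id string) error {
	out, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
	fmt.Printf("==> %s\n%s\n", id, out)
	return err
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "storage-provisioner"}
	for _, component := range components {
		ids, err := containerIDs(component)
		if err != nil {
			fmt.Println("listing", component, "failed:", err)
			continue
		}
		for _, id := range ids {
			if err := tailLogs(id); err != nil {
				fmt.Println("logs for", id, "failed:", err)
			}
		}
	}
}
```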
	I0912 23:07:16.526061   62943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:07:16.547018   62943 api_server.go:72] duration metric: took 4m14.74025779s to wait for apiserver process to appear ...
	I0912 23:07:16.547046   62943 api_server.go:88] waiting for apiserver healthz status ...
	I0912 23:07:16.547085   62943 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:07:16.547134   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:07:16.589088   62943 cri.go:89] found id: "3c73944a510413add414948e1c8fd8d71ef732d91b7ddc96646b7ba376f81416"
	I0912 23:07:16.589124   62943 cri.go:89] found id: "00f124dff0f77cf23377830c9dc5e2a8676d1a40ebf70730705d537ba7a4b8d3"
	I0912 23:07:16.589130   62943 cri.go:89] found id: ""
	I0912 23:07:16.589138   62943 logs.go:276] 2 containers: [3c73944a510413add414948e1c8fd8d71ef732d91b7ddc96646b7ba376f81416 00f124dff0f77cf23377830c9dc5e2a8676d1a40ebf70730705d537ba7a4b8d3]
	I0912 23:07:16.589199   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:16.593386   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:16.597107   62943 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:07:16.597166   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:07:16.644456   62943 cri.go:89] found id: "35282e97473f2f870f5366d221b9a40e0a456b6fd44fffe321c9fe160448cc29"
	I0912 23:07:16.644482   62943 cri.go:89] found id: ""
	I0912 23:07:16.644491   62943 logs.go:276] 1 containers: [35282e97473f2f870f5366d221b9a40e0a456b6fd44fffe321c9fe160448cc29]
	I0912 23:07:16.644544   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:16.648617   62943 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:07:16.648693   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:07:16.688003   62943 cri.go:89] found id: "e59d289c9afefef6dd746277d09550e5c7d708d7c1ba9f74b2dd91300f807189"
	I0912 23:07:16.688027   62943 cri.go:89] found id: ""
	I0912 23:07:16.688037   62943 logs.go:276] 1 containers: [e59d289c9afefef6dd746277d09550e5c7d708d7c1ba9f74b2dd91300f807189]
	I0912 23:07:16.688093   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:16.692761   62943 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:07:16.692832   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:07:16.733490   62943 cri.go:89] found id: "3187fdef2bd318bb335be822a4e75cfd0ac02a49607d8e398ab18102a83437ec"
	I0912 23:07:16.733522   62943 cri.go:89] found id: ""
	I0912 23:07:16.733533   62943 logs.go:276] 1 containers: [3187fdef2bd318bb335be822a4e75cfd0ac02a49607d8e398ab18102a83437ec]
	I0912 23:07:16.733596   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:16.738566   62943 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:07:16.738641   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:07:16.785654   62943 cri.go:89] found id: "4c4807559910184befc0b6a9ccb40b1cb9d4997f6b1405b76ba67049ce730f37"
	I0912 23:07:16.785683   62943 cri.go:89] found id: ""
	I0912 23:07:16.785693   62943 logs.go:276] 1 containers: [4c4807559910184befc0b6a9ccb40b1cb9d4997f6b1405b76ba67049ce730f37]
	I0912 23:07:16.785753   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:16.791205   62943 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:07:16.791290   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:07:16.830707   62943 cri.go:89] found id: "eb473fa0b2d91e95a8716ca3d1bad44b431a3dfbfae11984657c16c7392b58f0"
	I0912 23:07:16.830739   62943 cri.go:89] found id: "635fd2c2a6dd2558d88447c857c1c59b9cf6883a458ab9f62fbcd48fc94048f7"
	I0912 23:07:16.830746   62943 cri.go:89] found id: ""
	I0912 23:07:16.830756   62943 logs.go:276] 2 containers: [eb473fa0b2d91e95a8716ca3d1bad44b431a3dfbfae11984657c16c7392b58f0 635fd2c2a6dd2558d88447c857c1c59b9cf6883a458ab9f62fbcd48fc94048f7]
	I0912 23:07:16.830819   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:16.835378   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:16.840600   62943 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:07:16.840670   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:07:20.225940   61354 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0912 23:07:20.226007   61354 kubeadm.go:310] [preflight] Running pre-flight checks
	I0912 23:07:20.226107   61354 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0912 23:07:20.226261   61354 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0912 23:07:20.226412   61354 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0912 23:07:20.226506   61354 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0912 23:07:20.228109   61354 out.go:235]   - Generating certificates and keys ...
	I0912 23:07:20.228211   61354 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0912 23:07:20.228297   61354 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0912 23:07:20.228412   61354 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0912 23:07:20.228493   61354 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0912 23:07:20.228621   61354 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0912 23:07:20.228699   61354 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0912 23:07:20.228788   61354 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0912 23:07:20.228875   61354 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0912 23:07:20.228987   61354 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0912 23:07:20.229123   61354 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0912 23:07:20.229177   61354 kubeadm.go:310] [certs] Using the existing "sa" key
	I0912 23:07:20.229273   61354 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0912 23:07:20.229365   61354 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0912 23:07:20.229454   61354 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0912 23:07:20.229533   61354 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0912 23:07:20.229644   61354 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0912 23:07:20.229723   61354 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0912 23:07:20.229833   61354 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0912 23:07:20.229922   61354 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0912 23:07:20.231172   61354 out.go:235]   - Booting up control plane ...
	I0912 23:07:20.231276   61354 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0912 23:07:20.231371   61354 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0912 23:07:20.231457   61354 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0912 23:07:20.231596   61354 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0912 23:07:20.231706   61354 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0912 23:07:20.231772   61354 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0912 23:07:20.231943   61354 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0912 23:07:20.232041   61354 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0912 23:07:20.232091   61354 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.452461ms
	I0912 23:07:20.232151   61354 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0912 23:07:20.232202   61354 kubeadm.go:310] [api-check] The API server is healthy after 5.00140085s
	I0912 23:07:20.232302   61354 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0912 23:07:20.232437   61354 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0912 23:07:20.232508   61354 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0912 23:07:20.232685   61354 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-702201 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0912 23:07:20.232764   61354 kubeadm.go:310] [bootstrap-token] Using token: uufjzd.0ysmpgh1j6e2l8hs
	I0912 23:07:20.234000   61354 out.go:235]   - Configuring RBAC rules ...
	I0912 23:07:20.234123   61354 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0912 23:07:20.234230   61354 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0912 23:07:20.234438   61354 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0912 23:07:20.234584   61354 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0912 23:07:20.234714   61354 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0912 23:07:20.234818   61354 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0912 23:07:20.234946   61354 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0912 23:07:20.235008   61354 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0912 23:07:20.235081   61354 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0912 23:07:20.235089   61354 kubeadm.go:310] 
	I0912 23:07:20.235152   61354 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0912 23:07:20.235163   61354 kubeadm.go:310] 
	I0912 23:07:20.235231   61354 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0912 23:07:20.235237   61354 kubeadm.go:310] 
	I0912 23:07:20.235258   61354 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0912 23:07:20.235346   61354 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0912 23:07:20.235424   61354 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0912 23:07:20.235433   61354 kubeadm.go:310] 
	I0912 23:07:20.235512   61354 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0912 23:07:20.235523   61354 kubeadm.go:310] 
	I0912 23:07:20.235587   61354 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0912 23:07:20.235596   61354 kubeadm.go:310] 
	I0912 23:07:20.235683   61354 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0912 23:07:20.235781   61354 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0912 23:07:20.235848   61354 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0912 23:07:20.235855   61354 kubeadm.go:310] 
	I0912 23:07:20.235924   61354 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0912 23:07:20.235988   61354 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0912 23:07:20.235994   61354 kubeadm.go:310] 
	I0912 23:07:20.236075   61354 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token uufjzd.0ysmpgh1j6e2l8hs \
	I0912 23:07:20.236168   61354 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e9285e6e7599a58febe9d174fa57ffa69a9b4bf818d01b703e61fc8c784ff29f \
	I0912 23:07:20.236188   61354 kubeadm.go:310] 	--control-plane 
	I0912 23:07:20.236195   61354 kubeadm.go:310] 
	I0912 23:07:20.236267   61354 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0912 23:07:20.236274   61354 kubeadm.go:310] 
	I0912 23:07:20.236345   61354 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token uufjzd.0ysmpgh1j6e2l8hs \
	I0912 23:07:20.236447   61354 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e9285e6e7599a58febe9d174fa57ffa69a9b4bf818d01b703e61fc8c784ff29f 
	I0912 23:07:20.236458   61354 cni.go:84] Creating CNI manager for ""
	I0912 23:07:20.236465   61354 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0912 23:07:20.237667   61354 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
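After the successful `kubeadm init`, the run above selects the bridge CNI for the kvm2 + crio combination and, a few lines further down, writes a 496-byte conflist to /etc/cni/net.d/1-k8s.conflist. The exact file contents are not shown in the log; a representative bridge CNI configuration of that kind, written from Go, could look like the following (the JSON values, including the subnet, are placeholders for illustration, not the file minikube actually ships):

```go
package main

import (
	"os"
	"path/filepath"
)

// A representative bridge CNI conflist; plugin options and the pod subnet are
// assumptions, not necessarily what minikube writes.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
`

func main() {
	dir := "/etc/cni/net.d" // "sudo mkdir -p /etc/cni/net.d" in the log
	if err := os.MkdirAll(dir, 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile(filepath.Join(dir, "1-k8s.conflist"), []byte(bridgeConflist), 0o644); err != nil {
		panic(err)
	}
}
```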
	I0912 23:07:16.892881   62943 cri.go:89] found id: ""
	I0912 23:07:16.892908   62943 logs.go:276] 0 containers: []
	W0912 23:07:16.892918   62943 logs.go:278] No container was found matching "kindnet"
	I0912 23:07:16.892926   62943 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0912 23:07:16.892986   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0912 23:07:16.938816   62943 cri.go:89] found id: "3d117ed77ba5f8ccc22fe496a5f719999fd9e893623ba84686634f7b92c09713"
	I0912 23:07:16.938856   62943 cri.go:89] found id: "d40483dfc659456939d7965c1ecf1113c1c38e26fe985bf19f26a0123206ea3a"
	I0912 23:07:16.938861   62943 cri.go:89] found id: ""
	I0912 23:07:16.938868   62943 logs.go:276] 2 containers: [3d117ed77ba5f8ccc22fe496a5f719999fd9e893623ba84686634f7b92c09713 d40483dfc659456939d7965c1ecf1113c1c38e26fe985bf19f26a0123206ea3a]
	I0912 23:07:16.938924   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:16.944985   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:16.950257   62943 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:07:16.950290   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 23:07:17.071942   62943 logs.go:123] Gathering logs for kube-apiserver [00f124dff0f77cf23377830c9dc5e2a8676d1a40ebf70730705d537ba7a4b8d3] ...
	I0912 23:07:17.071999   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 00f124dff0f77cf23377830c9dc5e2a8676d1a40ebf70730705d537ba7a4b8d3"
	I0912 23:07:17.120765   62943 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:07:17.120797   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:07:17.636341   62943 logs.go:123] Gathering logs for kubelet ...
	I0912 23:07:17.636387   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:07:17.714095   62943 logs.go:123] Gathering logs for kube-apiserver [3c73944a510413add414948e1c8fd8d71ef732d91b7ddc96646b7ba376f81416] ...
	I0912 23:07:17.714133   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c73944a510413add414948e1c8fd8d71ef732d91b7ddc96646b7ba376f81416"
	I0912 23:07:17.765583   62943 logs.go:123] Gathering logs for etcd [35282e97473f2f870f5366d221b9a40e0a456b6fd44fffe321c9fe160448cc29] ...
	I0912 23:07:17.765637   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35282e97473f2f870f5366d221b9a40e0a456b6fd44fffe321c9fe160448cc29"
	I0912 23:07:17.809278   62943 logs.go:123] Gathering logs for kube-proxy [4c4807559910184befc0b6a9ccb40b1cb9d4997f6b1405b76ba67049ce730f37] ...
	I0912 23:07:17.809309   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c4807559910184befc0b6a9ccb40b1cb9d4997f6b1405b76ba67049ce730f37"
	I0912 23:07:17.845960   62943 logs.go:123] Gathering logs for dmesg ...
	I0912 23:07:17.845984   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:07:17.860171   62943 logs.go:123] Gathering logs for kube-controller-manager [eb473fa0b2d91e95a8716ca3d1bad44b431a3dfbfae11984657c16c7392b58f0] ...
	I0912 23:07:17.860201   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb473fa0b2d91e95a8716ca3d1bad44b431a3dfbfae11984657c16c7392b58f0"
	I0912 23:07:17.926666   62943 logs.go:123] Gathering logs for kube-controller-manager [635fd2c2a6dd2558d88447c857c1c59b9cf6883a458ab9f62fbcd48fc94048f7] ...
	I0912 23:07:17.926711   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 635fd2c2a6dd2558d88447c857c1c59b9cf6883a458ab9f62fbcd48fc94048f7"
	I0912 23:07:17.976830   62943 logs.go:123] Gathering logs for storage-provisioner [3d117ed77ba5f8ccc22fe496a5f719999fd9e893623ba84686634f7b92c09713] ...
	I0912 23:07:17.976862   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d117ed77ba5f8ccc22fe496a5f719999fd9e893623ba84686634f7b92c09713"
	I0912 23:07:18.029551   62943 logs.go:123] Gathering logs for storage-provisioner [d40483dfc659456939d7965c1ecf1113c1c38e26fe985bf19f26a0123206ea3a] ...
	I0912 23:07:18.029590   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d40483dfc659456939d7965c1ecf1113c1c38e26fe985bf19f26a0123206ea3a"
	I0912 23:07:18.089974   62943 logs.go:123] Gathering logs for container status ...
	I0912 23:07:18.090007   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:07:18.151149   62943 logs.go:123] Gathering logs for coredns [e59d289c9afefef6dd746277d09550e5c7d708d7c1ba9f74b2dd91300f807189] ...
	I0912 23:07:18.151175   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e59d289c9afefef6dd746277d09550e5c7d708d7c1ba9f74b2dd91300f807189"
	I0912 23:07:18.191616   62943 logs.go:123] Gathering logs for kube-scheduler [3187fdef2bd318bb335be822a4e75cfd0ac02a49607d8e398ab18102a83437ec] ...
	I0912 23:07:18.191645   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3187fdef2bd318bb335be822a4e75cfd0ac02a49607d8e398ab18102a83437ec"
	I0912 23:07:20.735505   62943 api_server.go:253] Checking apiserver healthz at https://192.168.50.253:8443/healthz ...
	I0912 23:07:20.740261   62943 api_server.go:279] https://192.168.50.253:8443/healthz returned 200:
	ok
	I0912 23:07:20.741163   62943 api_server.go:141] control plane version: v1.31.1
	I0912 23:07:20.741184   62943 api_server.go:131] duration metric: took 4.194131154s to wait for apiserver health ...
	I0912 23:07:20.741193   62943 system_pods.go:43] waiting for kube-system pods to appear ...
	I0912 23:07:20.741219   62943 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:07:20.741275   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:07:20.778572   62943 cri.go:89] found id: "3c73944a510413add414948e1c8fd8d71ef732d91b7ddc96646b7ba376f81416"
	I0912 23:07:20.778596   62943 cri.go:89] found id: "00f124dff0f77cf23377830c9dc5e2a8676d1a40ebf70730705d537ba7a4b8d3"
	I0912 23:07:20.778600   62943 cri.go:89] found id: ""
	I0912 23:07:20.778613   62943 logs.go:276] 2 containers: [3c73944a510413add414948e1c8fd8d71ef732d91b7ddc96646b7ba376f81416 00f124dff0f77cf23377830c9dc5e2a8676d1a40ebf70730705d537ba7a4b8d3]
	I0912 23:07:20.778656   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:20.782575   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:20.786177   62943 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:07:20.786235   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:07:20.822848   62943 cri.go:89] found id: "35282e97473f2f870f5366d221b9a40e0a456b6fd44fffe321c9fe160448cc29"
	I0912 23:07:20.822869   62943 cri.go:89] found id: ""
	I0912 23:07:20.822877   62943 logs.go:276] 1 containers: [35282e97473f2f870f5366d221b9a40e0a456b6fd44fffe321c9fe160448cc29]
	I0912 23:07:20.822930   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:20.827081   62943 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:07:20.827150   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:07:20.862327   62943 cri.go:89] found id: "e59d289c9afefef6dd746277d09550e5c7d708d7c1ba9f74b2dd91300f807189"
	I0912 23:07:20.862358   62943 cri.go:89] found id: ""
	I0912 23:07:20.862369   62943 logs.go:276] 1 containers: [e59d289c9afefef6dd746277d09550e5c7d708d7c1ba9f74b2dd91300f807189]
	I0912 23:07:20.862437   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:20.866899   62943 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:07:20.866974   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:07:20.903397   62943 cri.go:89] found id: "3187fdef2bd318bb335be822a4e75cfd0ac02a49607d8e398ab18102a83437ec"
	I0912 23:07:20.903423   62943 cri.go:89] found id: ""
	I0912 23:07:20.903433   62943 logs.go:276] 1 containers: [3187fdef2bd318bb335be822a4e75cfd0ac02a49607d8e398ab18102a83437ec]
	I0912 23:07:20.903497   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:20.908223   62943 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:07:20.908322   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:07:20.961886   62943 cri.go:89] found id: "4c4807559910184befc0b6a9ccb40b1cb9d4997f6b1405b76ba67049ce730f37"
	I0912 23:07:20.961912   62943 cri.go:89] found id: ""
	I0912 23:07:20.961923   62943 logs.go:276] 1 containers: [4c4807559910184befc0b6a9ccb40b1cb9d4997f6b1405b76ba67049ce730f37]
	I0912 23:07:20.961983   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:20.965943   62943 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:07:20.966005   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:07:21.003792   62943 cri.go:89] found id: "eb473fa0b2d91e95a8716ca3d1bad44b431a3dfbfae11984657c16c7392b58f0"
	I0912 23:07:21.003818   62943 cri.go:89] found id: "635fd2c2a6dd2558d88447c857c1c59b9cf6883a458ab9f62fbcd48fc94048f7"
	I0912 23:07:21.003825   62943 cri.go:89] found id: ""
	I0912 23:07:21.003835   62943 logs.go:276] 2 containers: [eb473fa0b2d91e95a8716ca3d1bad44b431a3dfbfae11984657c16c7392b58f0 635fd2c2a6dd2558d88447c857c1c59b9cf6883a458ab9f62fbcd48fc94048f7]
	I0912 23:07:21.003892   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:21.008651   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:21.012614   62943 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:07:21.012675   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:07:21.051013   62943 cri.go:89] found id: ""
	I0912 23:07:21.051044   62943 logs.go:276] 0 containers: []
	W0912 23:07:21.051055   62943 logs.go:278] No container was found matching "kindnet"
	I0912 23:07:21.051063   62943 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0912 23:07:21.051121   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0912 23:07:21.091038   62943 cri.go:89] found id: "3d117ed77ba5f8ccc22fe496a5f719999fd9e893623ba84686634f7b92c09713"
	I0912 23:07:21.091060   62943 cri.go:89] found id: "d40483dfc659456939d7965c1ecf1113c1c38e26fe985bf19f26a0123206ea3a"
	I0912 23:07:21.091065   62943 cri.go:89] found id: ""
	I0912 23:07:21.091072   62943 logs.go:276] 2 containers: [3d117ed77ba5f8ccc22fe496a5f719999fd9e893623ba84686634f7b92c09713 d40483dfc659456939d7965c1ecf1113c1c38e26fe985bf19f26a0123206ea3a]
	I0912 23:07:21.091126   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:21.095923   62943 ssh_runner.go:195] Run: which crictl
	I0912 23:07:21.100100   62943 logs.go:123] Gathering logs for dmesg ...
	I0912 23:07:21.100125   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:07:21.113873   62943 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:07:21.113906   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 23:07:21.215199   62943 logs.go:123] Gathering logs for kube-apiserver [3c73944a510413add414948e1c8fd8d71ef732d91b7ddc96646b7ba376f81416] ...
	I0912 23:07:21.215228   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c73944a510413add414948e1c8fd8d71ef732d91b7ddc96646b7ba376f81416"
	I0912 23:07:21.266873   62943 logs.go:123] Gathering logs for kube-apiserver [00f124dff0f77cf23377830c9dc5e2a8676d1a40ebf70730705d537ba7a4b8d3] ...
	I0912 23:07:21.266903   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 00f124dff0f77cf23377830c9dc5e2a8676d1a40ebf70730705d537ba7a4b8d3"
	I0912 23:07:21.307509   62943 logs.go:123] Gathering logs for storage-provisioner [3d117ed77ba5f8ccc22fe496a5f719999fd9e893623ba84686634f7b92c09713] ...
	I0912 23:07:21.307537   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d117ed77ba5f8ccc22fe496a5f719999fd9e893623ba84686634f7b92c09713"
	I0912 23:07:21.349480   62943 logs.go:123] Gathering logs for kubelet ...
	I0912 23:07:21.349505   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:07:21.428721   62943 logs.go:123] Gathering logs for kube-scheduler [3187fdef2bd318bb335be822a4e75cfd0ac02a49607d8e398ab18102a83437ec] ...
	I0912 23:07:21.428754   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3187fdef2bd318bb335be822a4e75cfd0ac02a49607d8e398ab18102a83437ec"
	I0912 23:07:21.469645   62943 logs.go:123] Gathering logs for kube-proxy [4c4807559910184befc0b6a9ccb40b1cb9d4997f6b1405b76ba67049ce730f37] ...
	I0912 23:07:21.469677   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c4807559910184befc0b6a9ccb40b1cb9d4997f6b1405b76ba67049ce730f37"
	I0912 23:07:21.517502   62943 logs.go:123] Gathering logs for kube-controller-manager [eb473fa0b2d91e95a8716ca3d1bad44b431a3dfbfae11984657c16c7392b58f0] ...
	I0912 23:07:21.517529   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb473fa0b2d91e95a8716ca3d1bad44b431a3dfbfae11984657c16c7392b58f0"
	I0912 23:07:21.582523   62943 logs.go:123] Gathering logs for coredns [e59d289c9afefef6dd746277d09550e5c7d708d7c1ba9f74b2dd91300f807189] ...
	I0912 23:07:21.582556   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e59d289c9afefef6dd746277d09550e5c7d708d7c1ba9f74b2dd91300f807189"
	I0912 23:07:21.623846   62943 logs.go:123] Gathering logs for storage-provisioner [d40483dfc659456939d7965c1ecf1113c1c38e26fe985bf19f26a0123206ea3a] ...
	I0912 23:07:21.623885   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d40483dfc659456939d7965c1ecf1113c1c38e26fe985bf19f26a0123206ea3a"
	I0912 23:07:21.670643   62943 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:07:21.670675   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:07:20.238639   61354 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0912 23:07:20.248752   61354 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0912 23:07:20.269785   61354 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0912 23:07:20.269853   61354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 23:07:20.269874   61354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-702201 minikube.k8s.io/updated_at=2024_09_12T23_07_20_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f6bc674a17941874d4e5b792b09c1791d30622b8 minikube.k8s.io/name=default-k8s-diff-port-702201 minikube.k8s.io/primary=true
	I0912 23:07:20.296361   61354 ops.go:34] apiserver oom_adj: -16
	I0912 23:07:20.492168   61354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 23:07:20.992549   61354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 23:07:21.492765   61354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 23:07:21.992850   61354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 23:07:22.492720   61354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 23:07:22.993154   61354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 23:07:23.493116   61354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 23:07:23.992629   61354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 23:07:24.077486   61354 kubeadm.go:1113] duration metric: took 3.807690368s to wait for elevateKubeSystemPrivileges
	I0912 23:07:24.077525   61354 kubeadm.go:394] duration metric: took 4m59.803121736s to StartCluster
	I0912 23:07:24.077547   61354 settings.go:142] acquiring lock: {Name:mk9c957feafb8d7ccd833ad0c106ef81ecfe5ba8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 23:07:24.077652   61354 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19616-5891/kubeconfig
	I0912 23:07:24.080127   61354 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5891/kubeconfig: {Name:mkffb46c3e9d2b8baebc7237b48bf41bccf1a52c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 23:07:24.080453   61354 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.214 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0912 23:07:24.080486   61354 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0912 23:07:24.080582   61354 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-702201"
	I0912 23:07:24.080556   61354 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-702201"
	I0912 23:07:24.080594   61354 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-702201"
	I0912 23:07:24.080627   61354 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-702201"
	I0912 23:07:24.080650   61354 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-702201"
	W0912 23:07:24.080659   61354 addons.go:243] addon metrics-server should already be in state true
	I0912 23:07:24.080664   61354 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-702201"
	I0912 23:07:24.080691   61354 host.go:66] Checking if "default-k8s-diff-port-702201" exists ...
	I0912 23:07:24.080668   61354 config.go:182] Loaded profile config "default-k8s-diff-port-702201": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	W0912 23:07:24.080691   61354 addons.go:243] addon storage-provisioner should already be in state true
	I0912 23:07:24.080830   61354 host.go:66] Checking if "default-k8s-diff-port-702201" exists ...
	I0912 23:07:24.081061   61354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:07:24.081060   61354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:07:24.081101   61354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:07:24.081144   61354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:07:24.081188   61354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:07:24.081214   61354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:07:24.081973   61354 out.go:177] * Verifying Kubernetes components...
	I0912 23:07:24.083133   61354 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 23:07:24.097005   61354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46703
	I0912 23:07:24.097025   61354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36033
	I0912 23:07:24.097096   61354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41949
	I0912 23:07:24.097438   61354 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:07:24.097464   61354 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:07:24.097525   61354 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:07:24.097994   61354 main.go:141] libmachine: Using API Version  1
	I0912 23:07:24.098015   61354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:07:24.098141   61354 main.go:141] libmachine: Using API Version  1
	I0912 23:07:24.098165   61354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:07:24.098290   61354 main.go:141] libmachine: Using API Version  1
	I0912 23:07:24.098309   61354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:07:24.098399   61354 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:07:24.098545   61354 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:07:24.098726   61354 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:07:24.098731   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetState
	I0912 23:07:24.098994   61354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:07:24.099040   61354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:07:24.099251   61354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:07:24.099283   61354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:07:24.102412   61354 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-702201"
	W0912 23:07:24.102432   61354 addons.go:243] addon default-storageclass should already be in state true
	I0912 23:07:24.102459   61354 host.go:66] Checking if "default-k8s-diff-port-702201" exists ...
	I0912 23:07:24.102797   61354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:07:24.102835   61354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:07:24.117429   61354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46351
	I0912 23:07:24.117980   61354 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:07:24.118513   61354 main.go:141] libmachine: Using API Version  1
	I0912 23:07:24.118533   61354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:07:24.119059   61354 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:07:24.119577   61354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35337
	I0912 23:07:24.119621   61354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 23:07:24.119656   61354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 23:07:24.119717   61354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41229
	I0912 23:07:24.120047   61354 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:07:24.120129   61354 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:07:24.120532   61354 main.go:141] libmachine: Using API Version  1
	I0912 23:07:24.120553   61354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:07:24.120810   61354 main.go:141] libmachine: Using API Version  1
	I0912 23:07:24.120834   61354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:07:24.121017   61354 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:07:24.121201   61354 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:07:24.121216   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetState
	I0912 23:07:24.121347   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetState
	I0912 23:07:24.123069   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .DriverName
	I0912 23:07:24.123254   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .DriverName
	I0912 23:07:24.125055   61354 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 23:07:24.125065   61354 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0912 23:07:22.059555   62943 logs.go:123] Gathering logs for container status ...
	I0912 23:07:22.059602   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:07:22.104001   62943 logs.go:123] Gathering logs for etcd [35282e97473f2f870f5366d221b9a40e0a456b6fd44fffe321c9fe160448cc29] ...
	I0912 23:07:22.104039   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35282e97473f2f870f5366d221b9a40e0a456b6fd44fffe321c9fe160448cc29"
	I0912 23:07:22.146304   62943 logs.go:123] Gathering logs for kube-controller-manager [635fd2c2a6dd2558d88447c857c1c59b9cf6883a458ab9f62fbcd48fc94048f7] ...
	I0912 23:07:22.146342   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 635fd2c2a6dd2558d88447c857c1c59b9cf6883a458ab9f62fbcd48fc94048f7"
	I0912 23:07:24.689925   62943 system_pods.go:59] 8 kube-system pods found
	I0912 23:07:24.689959   62943 system_pods.go:61] "coredns-7c65d6cfc9-twck7" [2fb00aff-8a30-4634-a804-1419eabfe727] Running
	I0912 23:07:24.689967   62943 system_pods.go:61] "etcd-no-preload-380092" [69b6be54-dd29-47c7-b990-a64335dd6d7b] Running
	I0912 23:07:24.689974   62943 system_pods.go:61] "kube-apiserver-no-preload-380092" [10ff70db-3c74-42ad-841d-d2241de4b98e] Running
	I0912 23:07:24.689980   62943 system_pods.go:61] "kube-controller-manager-no-preload-380092" [6e91c5b2-36fc-404e-9f09-c1bc9da46774] Running
	I0912 23:07:24.689987   62943 system_pods.go:61] "kube-proxy-z4rcx" [d17caa2e-d0fe-45e8-a96c-d1cc1b55e665] Running
	I0912 23:07:24.689992   62943 system_pods.go:61] "kube-scheduler-no-preload-380092" [5c634cac-6b28-4757-ba85-891c4c2fa34e] Running
	I0912 23:07:24.690002   62943 system_pods.go:61] "metrics-server-6867b74b74-4v7f5" [10c8c536-9ca6-4e75-96f2-7324f3d3d379] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0912 23:07:24.690009   62943 system_pods.go:61] "storage-provisioner" [f173a1f6-3772-4f08-8e40-2215cc9d2878] Running
	I0912 23:07:24.690020   62943 system_pods.go:74] duration metric: took 3.948819191s to wait for pod list to return data ...
	I0912 23:07:24.690031   62943 default_sa.go:34] waiting for default service account to be created ...
	I0912 23:07:24.692936   62943 default_sa.go:45] found service account: "default"
	I0912 23:07:24.692964   62943 default_sa.go:55] duration metric: took 2.925808ms for default service account to be created ...
	I0912 23:07:24.692975   62943 system_pods.go:116] waiting for k8s-apps to be running ...
	I0912 23:07:24.699123   62943 system_pods.go:86] 8 kube-system pods found
	I0912 23:07:24.699155   62943 system_pods.go:89] "coredns-7c65d6cfc9-twck7" [2fb00aff-8a30-4634-a804-1419eabfe727] Running
	I0912 23:07:24.699164   62943 system_pods.go:89] "etcd-no-preload-380092" [69b6be54-dd29-47c7-b990-a64335dd6d7b] Running
	I0912 23:07:24.699170   62943 system_pods.go:89] "kube-apiserver-no-preload-380092" [10ff70db-3c74-42ad-841d-d2241de4b98e] Running
	I0912 23:07:24.699176   62943 system_pods.go:89] "kube-controller-manager-no-preload-380092" [6e91c5b2-36fc-404e-9f09-c1bc9da46774] Running
	I0912 23:07:24.699182   62943 system_pods.go:89] "kube-proxy-z4rcx" [d17caa2e-d0fe-45e8-a96c-d1cc1b55e665] Running
	I0912 23:07:24.699187   62943 system_pods.go:89] "kube-scheduler-no-preload-380092" [5c634cac-6b28-4757-ba85-891c4c2fa34e] Running
	I0912 23:07:24.699197   62943 system_pods.go:89] "metrics-server-6867b74b74-4v7f5" [10c8c536-9ca6-4e75-96f2-7324f3d3d379] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0912 23:07:24.699206   62943 system_pods.go:89] "storage-provisioner" [f173a1f6-3772-4f08-8e40-2215cc9d2878] Running
	I0912 23:07:24.699220   62943 system_pods.go:126] duration metric: took 6.23727ms to wait for k8s-apps to be running ...
	I0912 23:07:24.699232   62943 system_svc.go:44] waiting for kubelet service to be running ....
	I0912 23:07:24.699281   62943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 23:07:24.716425   62943 system_svc.go:56] duration metric: took 17.184595ms WaitForService to wait for kubelet
	I0912 23:07:24.716456   62943 kubeadm.go:582] duration metric: took 4m22.909700986s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 23:07:24.716480   62943 node_conditions.go:102] verifying NodePressure condition ...
	I0912 23:07:24.719606   62943 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0912 23:07:24.719632   62943 node_conditions.go:123] node cpu capacity is 2
	I0912 23:07:24.719645   62943 node_conditions.go:105] duration metric: took 3.158655ms to run NodePressure ...
	I0912 23:07:24.719660   62943 start.go:241] waiting for startup goroutines ...
	I0912 23:07:24.719669   62943 start.go:246] waiting for cluster config update ...
	I0912 23:07:24.719683   62943 start.go:255] writing updated cluster config ...
	I0912 23:07:24.719959   62943 ssh_runner.go:195] Run: rm -f paused
	I0912 23:07:24.782144   62943 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0912 23:07:24.783614   62943 out.go:177] * Done! kubectl is now configured to use "no-preload-380092" cluster and "default" namespace by default
	I0912 23:07:24.126360   61354 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0912 23:07:24.126378   61354 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0912 23:07:24.126401   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHHostname
	I0912 23:07:24.126445   61354 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0912 23:07:24.126458   61354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0912 23:07:24.126472   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHHostname
	I0912 23:07:24.130177   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:07:24.130678   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:07:24.130719   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:07:24.130730   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:07:24.130919   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:07:24.130949   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:07:24.131134   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHPort
	I0912 23:07:24.131203   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHPort
	I0912 23:07:24.131447   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHKeyPath
	I0912 23:07:24.131494   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHKeyPath
	I0912 23:07:24.131659   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHUsername
	I0912 23:07:24.131677   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHUsername
	I0912 23:07:24.131817   61354 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/default-k8s-diff-port-702201/id_rsa Username:docker}
	I0912 23:07:24.131857   61354 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/default-k8s-diff-port-702201/id_rsa Username:docker}
	I0912 23:07:24.139030   61354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35007
	I0912 23:07:24.139501   61354 main.go:141] libmachine: () Calling .GetVersion
	I0912 23:07:24.139949   61354 main.go:141] libmachine: Using API Version  1
	I0912 23:07:24.139973   61354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 23:07:24.140287   61354 main.go:141] libmachine: () Calling .GetMachineName
	I0912 23:07:24.140441   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetState
	I0912 23:07:24.141751   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .DriverName
	I0912 23:07:24.141942   61354 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0912 23:07:24.141957   61354 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0912 23:07:24.141977   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHHostname
	I0912 23:07:24.144033   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:07:24.144415   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:fd:fb", ip: ""} in network mk-default-k8s-diff-port-702201: {Iface:virbr1 ExpiryTime:2024-09-13 00:02:09 +0000 UTC Type:0 Mac:52:54:00:b4:fd:fb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:default-k8s-diff-port-702201 Clientid:01:52:54:00:b4:fd:fb}
	I0912 23:07:24.144563   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHPort
	I0912 23:07:24.144623   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | domain default-k8s-diff-port-702201 has defined IP address 192.168.39.214 and MAC address 52:54:00:b4:fd:fb in network mk-default-k8s-diff-port-702201
	I0912 23:07:24.144723   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHKeyPath
	I0912 23:07:24.145002   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .GetSSHUsername
	I0912 23:07:24.145132   61354 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/default-k8s-diff-port-702201/id_rsa Username:docker}
	I0912 23:07:24.279582   61354 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0912 23:07:24.294072   61354 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-702201" to be "Ready" ...
	I0912 23:07:24.304565   61354 node_ready.go:49] node "default-k8s-diff-port-702201" has status "Ready":"True"
	I0912 23:07:24.304588   61354 node_ready.go:38] duration metric: took 10.479351ms for node "default-k8s-diff-port-702201" to be "Ready" ...
	I0912 23:07:24.304599   61354 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 23:07:24.310618   61354 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-702201" in "kube-system" namespace to be "Ready" ...
	I0912 23:07:24.359086   61354 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0912 23:07:24.390490   61354 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0912 23:07:24.409964   61354 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0912 23:07:24.409990   61354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0912 23:07:24.445852   61354 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0912 23:07:24.445880   61354 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0912 23:07:24.502567   61354 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0912 23:07:24.502591   61354 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0912 23:07:24.578857   61354 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0912 23:07:25.348387   61354 main.go:141] libmachine: Making call to close driver server
	I0912 23:07:25.348415   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .Close
	I0912 23:07:25.348715   61354 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:07:25.348732   61354 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:07:25.348740   61354 main.go:141] libmachine: Making call to close driver server
	I0912 23:07:25.348748   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .Close
	I0912 23:07:25.348766   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | Closing plugin on server side
	I0912 23:07:25.348869   61354 main.go:141] libmachine: Making call to close driver server
	I0912 23:07:25.348880   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .Close
	I0912 23:07:25.349007   61354 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:07:25.349022   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | Closing plugin on server side
	I0912 23:07:25.349026   61354 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:07:25.349181   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | Closing plugin on server side
	I0912 23:07:25.349209   61354 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:07:25.349216   61354 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:07:25.349224   61354 main.go:141] libmachine: Making call to close driver server
	I0912 23:07:25.349231   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .Close
	I0912 23:07:25.349497   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | Closing plugin on server side
	I0912 23:07:25.349513   61354 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:07:25.349520   61354 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:07:25.377320   61354 main.go:141] libmachine: Making call to close driver server
	I0912 23:07:25.377345   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .Close
	I0912 23:07:25.377662   61354 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:07:25.377683   61354 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:07:25.377685   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) DBG | Closing plugin on server side
	I0912 23:07:25.851960   61354 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.273059994s)
	I0912 23:07:25.852019   61354 main.go:141] libmachine: Making call to close driver server
	I0912 23:07:25.852037   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .Close
	I0912 23:07:25.852373   61354 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:07:25.852398   61354 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:07:25.852408   61354 main.go:141] libmachine: Making call to close driver server
	I0912 23:07:25.852417   61354 main.go:141] libmachine: (default-k8s-diff-port-702201) Calling .Close
	I0912 23:07:25.852671   61354 main.go:141] libmachine: Successfully made call to close driver server
	I0912 23:07:25.852690   61354 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 23:07:25.852701   61354 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-702201"
	I0912 23:07:25.854523   61354 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0912 23:07:25.855764   61354 addons.go:510] duration metric: took 1.775274823s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0912 23:07:26.343219   61354 pod_ready.go:103] pod "etcd-default-k8s-diff-port-702201" in "kube-system" namespace has status "Ready":"False"
	I0912 23:07:26.817338   61354 pod_ready.go:93] pod "etcd-default-k8s-diff-port-702201" in "kube-system" namespace has status "Ready":"True"
	I0912 23:07:26.817361   61354 pod_ready.go:82] duration metric: took 2.506720235s for pod "etcd-default-k8s-diff-port-702201" in "kube-system" namespace to be "Ready" ...
	I0912 23:07:26.817371   61354 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-702201" in "kube-system" namespace to be "Ready" ...
	I0912 23:07:28.823968   61354 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-702201" in "kube-system" namespace has status "Ready":"False"
	I0912 23:07:31.324504   61354 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-702201" in "kube-system" namespace has status "Ready":"False"
	I0912 23:07:33.824198   61354 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-702201" in "kube-system" namespace has status "Ready":"True"
	I0912 23:07:33.824218   61354 pod_ready.go:82] duration metric: took 7.006841754s for pod "kube-apiserver-default-k8s-diff-port-702201" in "kube-system" namespace to be "Ready" ...
	I0912 23:07:33.824228   61354 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-702201" in "kube-system" namespace to be "Ready" ...
	I0912 23:07:33.829882   61354 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-702201" in "kube-system" namespace has status "Ready":"True"
	I0912 23:07:33.829903   61354 pod_ready.go:82] duration metric: took 5.668963ms for pod "kube-controller-manager-default-k8s-diff-port-702201" in "kube-system" namespace to be "Ready" ...
	I0912 23:07:33.829912   61354 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-702201" in "kube-system" namespace to be "Ready" ...
	I0912 23:07:33.834773   61354 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-702201" in "kube-system" namespace has status "Ready":"True"
	I0912 23:07:33.834796   61354 pod_ready.go:82] duration metric: took 4.8776ms for pod "kube-scheduler-default-k8s-diff-port-702201" in "kube-system" namespace to be "Ready" ...
	I0912 23:07:33.834805   61354 pod_ready.go:39] duration metric: took 9.530195098s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 23:07:33.834819   61354 api_server.go:52] waiting for apiserver process to appear ...
	I0912 23:07:33.834864   61354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:07:33.850650   61354 api_server.go:72] duration metric: took 9.770155376s to wait for apiserver process to appear ...
	I0912 23:07:33.850671   61354 api_server.go:88] waiting for apiserver healthz status ...
	I0912 23:07:33.850686   61354 api_server.go:253] Checking apiserver healthz at https://192.168.39.214:8444/healthz ...
	I0912 23:07:33.855112   61354 api_server.go:279] https://192.168.39.214:8444/healthz returned 200:
	ok
	I0912 23:07:33.856195   61354 api_server.go:141] control plane version: v1.31.1
	I0912 23:07:33.856213   61354 api_server.go:131] duration metric: took 5.535983ms to wait for apiserver health ...
	I0912 23:07:33.856220   61354 system_pods.go:43] waiting for kube-system pods to appear ...
	I0912 23:07:33.861385   61354 system_pods.go:59] 9 kube-system pods found
	I0912 23:07:33.861415   61354 system_pods.go:61] "coredns-7c65d6cfc9-f5spz" [6a0f69e9-66eb-4e59-a173-1d6f638e2211] Running
	I0912 23:07:33.861422   61354 system_pods.go:61] "coredns-7c65d6cfc9-qhbgf" [0af4199f-b09c-4ab8-8170-b8941d3ece7a] Running
	I0912 23:07:33.861429   61354 system_pods.go:61] "etcd-default-k8s-diff-port-702201" [d8d2e9bb-c8de-4aac-9373-ac9b6d3ec96a] Running
	I0912 23:07:33.861435   61354 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-702201" [7c26cd67-e192-4e8c-a3e1-e7e76a87fae4] Running
	I0912 23:07:33.861440   61354 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-702201" [53553f06-02d5-4603-8418-6bf2ff7b6a25] Running
	I0912 23:07:33.861451   61354 system_pods.go:61] "kube-proxy-mv8ws" [51cb20c3-8445-4ce9-8484-5138f3d0ed57] Running
	I0912 23:07:33.861457   61354 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-702201" [cc25c635-37f2-4186-b5ea-958e95fc4ab2] Running
	I0912 23:07:33.861466   61354 system_pods.go:61] "metrics-server-6867b74b74-w2dvn" [778a4742-5b80-4485-956e-8f169e6dcf8f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0912 23:07:33.861476   61354 system_pods.go:61] "storage-provisioner" [66bc6f77-b774-4478-80d0-a1027802e179] Running
	I0912 23:07:33.861486   61354 system_pods.go:74] duration metric: took 5.260046ms to wait for pod list to return data ...
	I0912 23:07:33.861497   61354 default_sa.go:34] waiting for default service account to be created ...
	I0912 23:07:33.864254   61354 default_sa.go:45] found service account: "default"
	I0912 23:07:33.864272   61354 default_sa.go:55] duration metric: took 2.766344ms for default service account to be created ...
	I0912 23:07:33.864280   61354 system_pods.go:116] waiting for k8s-apps to be running ...
	I0912 23:07:33.869281   61354 system_pods.go:86] 9 kube-system pods found
	I0912 23:07:33.869310   61354 system_pods.go:89] "coredns-7c65d6cfc9-f5spz" [6a0f69e9-66eb-4e59-a173-1d6f638e2211] Running
	I0912 23:07:33.869315   61354 system_pods.go:89] "coredns-7c65d6cfc9-qhbgf" [0af4199f-b09c-4ab8-8170-b8941d3ece7a] Running
	I0912 23:07:33.869320   61354 system_pods.go:89] "etcd-default-k8s-diff-port-702201" [d8d2e9bb-c8de-4aac-9373-ac9b6d3ec96a] Running
	I0912 23:07:33.869324   61354 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-702201" [7c26cd67-e192-4e8c-a3e1-e7e76a87fae4] Running
	I0912 23:07:33.869328   61354 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-702201" [53553f06-02d5-4603-8418-6bf2ff7b6a25] Running
	I0912 23:07:33.869332   61354 system_pods.go:89] "kube-proxy-mv8ws" [51cb20c3-8445-4ce9-8484-5138f3d0ed57] Running
	I0912 23:07:33.869335   61354 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-702201" [cc25c635-37f2-4186-b5ea-958e95fc4ab2] Running
	I0912 23:07:33.869341   61354 system_pods.go:89] "metrics-server-6867b74b74-w2dvn" [778a4742-5b80-4485-956e-8f169e6dcf8f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0912 23:07:33.869349   61354 system_pods.go:89] "storage-provisioner" [66bc6f77-b774-4478-80d0-a1027802e179] Running
	I0912 23:07:33.869362   61354 system_pods.go:126] duration metric: took 5.073128ms to wait for k8s-apps to be running ...
	I0912 23:07:33.869371   61354 system_svc.go:44] waiting for kubelet service to be running ....
	I0912 23:07:33.869410   61354 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 23:07:33.885244   61354 system_svc.go:56] duration metric: took 15.863852ms WaitForService to wait for kubelet
	I0912 23:07:33.885284   61354 kubeadm.go:582] duration metric: took 9.804792247s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 23:07:33.885302   61354 node_conditions.go:102] verifying NodePressure condition ...
	I0912 23:07:33.889009   61354 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0912 23:07:33.889041   61354 node_conditions.go:123] node cpu capacity is 2
	I0912 23:07:33.889054   61354 node_conditions.go:105] duration metric: took 3.746289ms to run NodePressure ...
	I0912 23:07:33.889069   61354 start.go:241] waiting for startup goroutines ...
	I0912 23:07:33.889079   61354 start.go:246] waiting for cluster config update ...
	I0912 23:07:33.889092   61354 start.go:255] writing updated cluster config ...
	I0912 23:07:33.889427   61354 ssh_runner.go:195] Run: rm -f paused
	I0912 23:07:33.940577   61354 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0912 23:07:33.942471   61354 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-702201" cluster and "default" namespace by default
	I0912 23:07:47.603025   62386 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0912 23:07:47.603235   62386 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0912 23:07:47.604779   62386 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0912 23:07:47.604883   62386 kubeadm.go:310] [preflight] Running pre-flight checks
	I0912 23:07:47.605084   62386 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0912 23:07:47.605337   62386 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0912 23:07:47.605566   62386 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0912 23:07:47.605831   62386 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0912 23:07:47.607788   62386 out.go:235]   - Generating certificates and keys ...
	I0912 23:07:47.607900   62386 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0912 23:07:47.608013   62386 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0912 23:07:47.608164   62386 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0912 23:07:47.608343   62386 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0912 23:07:47.608510   62386 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0912 23:07:47.608593   62386 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0912 23:07:47.608669   62386 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0912 23:07:47.608742   62386 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0912 23:07:47.608833   62386 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0912 23:07:47.608899   62386 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0912 23:07:47.608932   62386 kubeadm.go:310] [certs] Using the existing "sa" key
	I0912 23:07:47.608991   62386 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0912 23:07:47.609042   62386 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0912 23:07:47.609118   62386 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0912 23:07:47.609216   62386 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0912 23:07:47.609310   62386 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0912 23:07:47.609448   62386 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0912 23:07:47.609540   62386 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0912 23:07:47.609604   62386 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0912 23:07:47.609731   62386 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0912 23:07:47.611516   62386 out.go:235]   - Booting up control plane ...
	I0912 23:07:47.611622   62386 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0912 23:07:47.611724   62386 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0912 23:07:47.611811   62386 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0912 23:07:47.611912   62386 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0912 23:07:47.612092   62386 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0912 23:07:47.612156   62386 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0912 23:07:47.612234   62386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0912 23:07:47.612485   62386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0912 23:07:47.612557   62386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0912 23:07:47.612746   62386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0912 23:07:47.612836   62386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0912 23:07:47.613060   62386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0912 23:07:47.613145   62386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0912 23:07:47.613347   62386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0912 23:07:47.613406   62386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0912 23:07:47.613573   62386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0912 23:07:47.613583   62386 kubeadm.go:310] 
	I0912 23:07:47.613646   62386 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0912 23:07:47.613700   62386 kubeadm.go:310] 		timed out waiting for the condition
	I0912 23:07:47.613712   62386 kubeadm.go:310] 
	I0912 23:07:47.613756   62386 kubeadm.go:310] 	This error is likely caused by:
	I0912 23:07:47.613804   62386 kubeadm.go:310] 		- The kubelet is not running
	I0912 23:07:47.613912   62386 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0912 23:07:47.613924   62386 kubeadm.go:310] 
	I0912 23:07:47.614027   62386 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0912 23:07:47.614062   62386 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0912 23:07:47.614110   62386 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0912 23:07:47.614123   62386 kubeadm.go:310] 
	I0912 23:07:47.614256   62386 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0912 23:07:47.614381   62386 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0912 23:07:47.614393   62386 kubeadm.go:310] 
	I0912 23:07:47.614480   62386 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0912 23:07:47.614626   62386 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0912 23:07:47.614724   62386 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0912 23:07:47.614825   62386 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0912 23:07:47.614854   62386 kubeadm.go:310] 
	W0912 23:07:47.614957   62386 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0912 23:07:47.615000   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0912 23:07:48.085695   62386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 23:07:48.100416   62386 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0912 23:07:48.109607   62386 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0912 23:07:48.109635   62386 kubeadm.go:157] found existing configuration files:
	
	I0912 23:07:48.109686   62386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0912 23:07:48.118174   62386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0912 23:07:48.118235   62386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0912 23:07:48.127100   62386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0912 23:07:48.135945   62386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0912 23:07:48.136006   62386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0912 23:07:48.145057   62386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0912 23:07:48.153832   62386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0912 23:07:48.153899   62386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0912 23:07:48.163261   62386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0912 23:07:48.172155   62386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0912 23:07:48.172208   62386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0912 23:07:48.181592   62386 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0912 23:07:48.253671   62386 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0912 23:07:48.253728   62386 kubeadm.go:310] [preflight] Running pre-flight checks
	I0912 23:07:48.394463   62386 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0912 23:07:48.394622   62386 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0912 23:07:48.394773   62386 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0912 23:07:48.581336   62386 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0912 23:07:48.583286   62386 out.go:235]   - Generating certificates and keys ...
	I0912 23:07:48.583391   62386 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0912 23:07:48.583461   62386 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0912 23:07:48.583576   62386 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0912 23:07:48.583668   62386 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0912 23:07:48.583751   62386 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0912 23:07:48.583830   62386 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0912 23:07:48.583935   62386 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0912 23:07:48.584060   62386 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0912 23:07:48.584176   62386 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0912 23:07:48.584291   62386 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0912 23:07:48.584349   62386 kubeadm.go:310] [certs] Using the existing "sa" key
	I0912 23:07:48.584433   62386 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0912 23:07:48.823726   62386 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0912 23:07:49.148359   62386 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0912 23:07:49.679842   62386 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0912 23:07:50.116403   62386 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0912 23:07:50.137409   62386 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0912 23:07:50.137512   62386 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0912 23:07:50.137586   62386 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0912 23:07:50.279387   62386 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0912 23:07:50.281202   62386 out.go:235]   - Booting up control plane ...
	I0912 23:07:50.281311   62386 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0912 23:07:50.284914   62386 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0912 23:07:50.285938   62386 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0912 23:07:50.286646   62386 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0912 23:07:50.288744   62386 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0912 23:08:30.291301   62386 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0912 23:08:30.291387   62386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0912 23:08:30.291586   62386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0912 23:08:35.292084   62386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0912 23:08:35.292299   62386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0912 23:08:45.293141   62386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0912 23:08:45.293363   62386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0912 23:09:05.293977   62386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0912 23:09:05.294218   62386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0912 23:09:45.292498   62386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0912 23:09:45.292713   62386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0912 23:09:45.292752   62386 kubeadm.go:310] 
	I0912 23:09:45.292839   62386 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0912 23:09:45.292884   62386 kubeadm.go:310] 		timed out waiting for the condition
	I0912 23:09:45.292892   62386 kubeadm.go:310] 
	I0912 23:09:45.292944   62386 kubeadm.go:310] 	This error is likely caused by:
	I0912 23:09:45.292998   62386 kubeadm.go:310] 		- The kubelet is not running
	I0912 23:09:45.293153   62386 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0912 23:09:45.293165   62386 kubeadm.go:310] 
	I0912 23:09:45.293277   62386 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0912 23:09:45.293333   62386 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0912 23:09:45.293361   62386 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0912 23:09:45.293378   62386 kubeadm.go:310] 
	I0912 23:09:45.293528   62386 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0912 23:09:45.293668   62386 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0912 23:09:45.293679   62386 kubeadm.go:310] 
	I0912 23:09:45.293840   62386 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0912 23:09:45.293962   62386 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0912 23:09:45.294033   62386 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0912 23:09:45.294142   62386 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0912 23:09:45.294155   62386 kubeadm.go:310] 
	I0912 23:09:45.294801   62386 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0912 23:09:45.294914   62386 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0912 23:09:45.295004   62386 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0912 23:09:45.295097   62386 kubeadm.go:394] duration metric: took 7m57.408601522s to StartCluster
	I0912 23:09:45.295168   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:09:45.295233   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:09:45.336726   62386 cri.go:89] found id: ""
	I0912 23:09:45.336767   62386 logs.go:276] 0 containers: []
	W0912 23:09:45.336777   62386 logs.go:278] No container was found matching "kube-apiserver"
	I0912 23:09:45.336785   62386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 23:09:45.336847   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:09:45.374528   62386 cri.go:89] found id: ""
	I0912 23:09:45.374555   62386 logs.go:276] 0 containers: []
	W0912 23:09:45.374576   62386 logs.go:278] No container was found matching "etcd"
	I0912 23:09:45.374584   62386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 23:09:45.374649   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:09:45.409321   62386 cri.go:89] found id: ""
	I0912 23:09:45.409462   62386 logs.go:276] 0 containers: []
	W0912 23:09:45.409497   62386 logs.go:278] No container was found matching "coredns"
	I0912 23:09:45.409508   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:09:45.409582   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:09:45.442204   62386 cri.go:89] found id: ""
	I0912 23:09:45.442228   62386 logs.go:276] 0 containers: []
	W0912 23:09:45.442238   62386 logs.go:278] No container was found matching "kube-scheduler"
	I0912 23:09:45.442279   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:09:45.442339   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:09:45.478874   62386 cri.go:89] found id: ""
	I0912 23:09:45.478897   62386 logs.go:276] 0 containers: []
	W0912 23:09:45.478904   62386 logs.go:278] No container was found matching "kube-proxy"
	I0912 23:09:45.478909   62386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:09:45.478961   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:09:45.520162   62386 cri.go:89] found id: ""
	I0912 23:09:45.520191   62386 logs.go:276] 0 containers: []
	W0912 23:09:45.520199   62386 logs.go:278] No container was found matching "kube-controller-manager"
	I0912 23:09:45.520205   62386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 23:09:45.520251   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:09:45.551580   62386 cri.go:89] found id: ""
	I0912 23:09:45.551611   62386 logs.go:276] 0 containers: []
	W0912 23:09:45.551622   62386 logs.go:278] No container was found matching "kindnet"
	I0912 23:09:45.551629   62386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:09:45.551693   62386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:09:45.585468   62386 cri.go:89] found id: ""
	I0912 23:09:45.585498   62386 logs.go:276] 0 containers: []
	W0912 23:09:45.585505   62386 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0912 23:09:45.585514   62386 logs.go:123] Gathering logs for kubelet ...
	I0912 23:09:45.585525   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 23:09:45.640731   62386 logs.go:123] Gathering logs for dmesg ...
	I0912 23:09:45.640782   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:09:45.656797   62386 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:09:45.656833   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 23:09:45.735064   62386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 23:09:45.735083   62386 logs.go:123] Gathering logs for CRI-O ...
	I0912 23:09:45.735100   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 23:09:45.848695   62386 logs.go:123] Gathering logs for container status ...
	I0912 23:09:45.848739   62386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0912 23:09:45.907495   62386 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0912 23:09:45.907561   62386 out.go:270] * 
	W0912 23:09:45.907628   62386 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0912 23:09:45.907646   62386 out.go:270] * 
	W0912 23:09:45.908494   62386 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0912 23:09:45.911502   62386 out.go:201] 
	W0912 23:09:45.912387   62386 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0912 23:09:45.912424   62386 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0912 23:09:45.912442   62386 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0912 23:09:45.913632   62386 out.go:201] 
	
	
	==> CRI-O <==
	Sep 12 23:22:07 old-k8s-version-642238 crio[632]: time="2024-09-12 23:22:07.992483519Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726183327992414516,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0bd2850f-c50f-4d66-b8e9-119867c9b246 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 23:22:07 old-k8s-version-642238 crio[632]: time="2024-09-12 23:22:07.993080943Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c732e7cc-9611-4ef6-8015-84b3e563d205 name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 23:22:07 old-k8s-version-642238 crio[632]: time="2024-09-12 23:22:07.993155398Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c732e7cc-9611-4ef6-8015-84b3e563d205 name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 23:22:07 old-k8s-version-642238 crio[632]: time="2024-09-12 23:22:07.993226053Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=c732e7cc-9611-4ef6-8015-84b3e563d205 name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 23:22:08 old-k8s-version-642238 crio[632]: time="2024-09-12 23:22:08.028930729Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c1bee20f-aea7-4efd-9444-6b59fb08fd6c name=/runtime.v1.RuntimeService/Version
	Sep 12 23:22:08 old-k8s-version-642238 crio[632]: time="2024-09-12 23:22:08.029022325Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c1bee20f-aea7-4efd-9444-6b59fb08fd6c name=/runtime.v1.RuntimeService/Version
	Sep 12 23:22:08 old-k8s-version-642238 crio[632]: time="2024-09-12 23:22:08.030261170Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1d0dd8f5-e6c8-4449-bf24-e2366fe529b0 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 23:22:08 old-k8s-version-642238 crio[632]: time="2024-09-12 23:22:08.030675082Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726183328030651877,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1d0dd8f5-e6c8-4449-bf24-e2366fe529b0 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 23:22:08 old-k8s-version-642238 crio[632]: time="2024-09-12 23:22:08.031249508Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=227b4f1e-16f6-43bc-bc17-a8dfb3f3899d name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 23:22:08 old-k8s-version-642238 crio[632]: time="2024-09-12 23:22:08.031300928Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=227b4f1e-16f6-43bc-bc17-a8dfb3f3899d name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 23:22:08 old-k8s-version-642238 crio[632]: time="2024-09-12 23:22:08.031335476Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=227b4f1e-16f6-43bc-bc17-a8dfb3f3899d name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 23:22:08 old-k8s-version-642238 crio[632]: time="2024-09-12 23:22:08.063040650Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a3ff4fe4-c788-430b-b8ef-a02c010ed006 name=/runtime.v1.RuntimeService/Version
	Sep 12 23:22:08 old-k8s-version-642238 crio[632]: time="2024-09-12 23:22:08.063128786Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a3ff4fe4-c788-430b-b8ef-a02c010ed006 name=/runtime.v1.RuntimeService/Version
	Sep 12 23:22:08 old-k8s-version-642238 crio[632]: time="2024-09-12 23:22:08.064548220Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=be20ff41-40bf-45c5-be6a-6f2b9909d838 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 23:22:08 old-k8s-version-642238 crio[632]: time="2024-09-12 23:22:08.064989294Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726183328064961101,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=be20ff41-40bf-45c5-be6a-6f2b9909d838 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 23:22:08 old-k8s-version-642238 crio[632]: time="2024-09-12 23:22:08.065569348Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=253435bb-9f07-490b-9846-cf0d664ef277 name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 23:22:08 old-k8s-version-642238 crio[632]: time="2024-09-12 23:22:08.065625165Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=253435bb-9f07-490b-9846-cf0d664ef277 name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 23:22:08 old-k8s-version-642238 crio[632]: time="2024-09-12 23:22:08.065667798Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=253435bb-9f07-490b-9846-cf0d664ef277 name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 23:22:08 old-k8s-version-642238 crio[632]: time="2024-09-12 23:22:08.095963597Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=938a4d74-45b4-4fa3-9f8b-f74215d24e05 name=/runtime.v1.RuntimeService/Version
	Sep 12 23:22:08 old-k8s-version-642238 crio[632]: time="2024-09-12 23:22:08.096070994Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=938a4d74-45b4-4fa3-9f8b-f74215d24e05 name=/runtime.v1.RuntimeService/Version
	Sep 12 23:22:08 old-k8s-version-642238 crio[632]: time="2024-09-12 23:22:08.097057549Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3b008f1a-a68c-4da2-89ba-ea7f52feb387 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 23:22:08 old-k8s-version-642238 crio[632]: time="2024-09-12 23:22:08.097556754Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726183328097522612,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3b008f1a-a68c-4da2-89ba-ea7f52feb387 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 12 23:22:08 old-k8s-version-642238 crio[632]: time="2024-09-12 23:22:08.098104140Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=643031b4-41b7-4323-8804-49d57db6c0fe name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 23:22:08 old-k8s-version-642238 crio[632]: time="2024-09-12 23:22:08.098230423Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=643031b4-41b7-4323-8804-49d57db6c0fe name=/runtime.v1.RuntimeService/ListContainers
	Sep 12 23:22:08 old-k8s-version-642238 crio[632]: time="2024-09-12 23:22:08.098265517Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=643031b4-41b7-4323-8804-49d57db6c0fe name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Sep12 23:01] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050669] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039909] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.881907] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.909528] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.539678] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.094180] systemd-fstab-generator[560]: Ignoring "noauto" option for root device
	[  +0.073198] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.070849] systemd-fstab-generator[572]: Ignoring "noauto" option for root device
	[  +0.223496] systemd-fstab-generator[586]: Ignoring "noauto" option for root device
	[  +0.134982] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.261562] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +6.482703] systemd-fstab-generator[882]: Ignoring "noauto" option for root device
	[  +0.067645] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.600190] systemd-fstab-generator[1006]: Ignoring "noauto" option for root device
	[Sep12 23:02] kauditd_printk_skb: 46 callbacks suppressed
	[Sep12 23:05] systemd-fstab-generator[5025]: Ignoring "noauto" option for root device
	[Sep12 23:07] systemd-fstab-generator[5303]: Ignoring "noauto" option for root device
	[  +0.064469] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 23:22:08 up 20 min,  0 users,  load average: 0.00, 0.00, 0.01
	Linux old-k8s-version-642238 5.10.207 #1 SMP Thu Sep 12 19:03:33 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Sep 12 23:22:06 old-k8s-version-642238 kubelet[6889]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:138 +0x185
	Sep 12 23:22:06 old-k8s-version-642238 kubelet[6889]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run.func1()
	Sep 12 23:22:06 old-k8s-version-642238 kubelet[6889]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:222 +0x70
	Sep 12 23:22:06 old-k8s-version-642238 kubelet[6889]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc0005ddef0)
	Sep 12 23:22:06 old-k8s-version-642238 kubelet[6889]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
	Sep 12 23:22:06 old-k8s-version-642238 kubelet[6889]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0006d7ef0, 0x4f0ac20, 0xc000d62af0, 0x1, 0xc0000a60c0)
	Sep 12 23:22:06 old-k8s-version-642238 kubelet[6889]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xad
	Sep 12 23:22:06 old-k8s-version-642238 kubelet[6889]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc0009842a0, 0xc0000a60c0)
	Sep 12 23:22:06 old-k8s-version-642238 kubelet[6889]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Sep 12 23:22:06 old-k8s-version-642238 kubelet[6889]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Sep 12 23:22:06 old-k8s-version-642238 kubelet[6889]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Sep 12 23:22:06 old-k8s-version-642238 kubelet[6889]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000c84da0, 0xc000c878c0)
	Sep 12 23:22:06 old-k8s-version-642238 kubelet[6889]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Sep 12 23:22:06 old-k8s-version-642238 kubelet[6889]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Sep 12 23:22:06 old-k8s-version-642238 kubelet[6889]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Sep 12 23:22:06 old-k8s-version-642238 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Sep 12 23:22:06 old-k8s-version-642238 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Sep 12 23:22:07 old-k8s-version-642238 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 149.
	Sep 12 23:22:07 old-k8s-version-642238 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Sep 12 23:22:07 old-k8s-version-642238 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Sep 12 23:22:07 old-k8s-version-642238 kubelet[6917]: I0912 23:22:07.679959    6917 server.go:416] Version: v1.20.0
	Sep 12 23:22:07 old-k8s-version-642238 kubelet[6917]: I0912 23:22:07.680566    6917 server.go:837] Client rotation is on, will bootstrap in background
	Sep 12 23:22:07 old-k8s-version-642238 kubelet[6917]: I0912 23:22:07.683728    6917 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Sep 12 23:22:07 old-k8s-version-642238 kubelet[6917]: I0912 23:22:07.684972    6917 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Sep 12 23:22:07 old-k8s-version-642238 kubelet[6917]: W0912 23:22:07.684983    6917 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-642238 -n old-k8s-version-642238
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-642238 -n old-k8s-version-642238: exit status 2 (223.324875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-642238" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (196.71s)
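
The suggestion in the captured output above (pass --extra-config=kubelet.cgroup-driver=systemd) together with the kubelet warning "Cannot detect current cgroup on cgroup v2" points at a kubelet cgroup-driver mismatch on this v1.20.0 profile. A minimal sketch of how that advice could be followed by hand, assuming the profile name old-k8s-version-642238 and the kvm2/crio settings taken from the log, and assuming the host really does use systemd cgroups; the flag and commands below are only the ones the log itself suggests, not a verified fix:

	# Retry the failing profile with the kubelet cgroup driver named in the suggestion
	minikube start -p old-k8s-version-642238 --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd

	# If the control plane still does not come up, check kubelet health inside the node
	minikube ssh -p old-k8s-version-642238 "sudo systemctl status kubelet"
	minikube ssh -p old-k8s-version-642238 "sudo journalctl -xeu kubelet | tail -n 50"

	# List any control-plane containers CRI-O managed to start (same crictl call the kubeadm output recommends)
	minikube ssh -p old-k8s-version-642238 \
	  "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a"

If the retried start succeeds, the kubelet healthz probe on localhost:10248 that repeatedly failed in the log above should begin answering and kubeadm's wait-control-plane phase should complete.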

                                                
                                    

Test pass (254/320)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 36.59
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.14
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.1/json-events 16.14
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.06
18 TestDownloadOnly/v1.31.1/DeleteAll 0.13
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.12
21 TestBinaryMirror 0.59
22 TestOffline 82.4
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 139.43
31 TestAddons/serial/GCPAuth/Namespaces 0.13
35 TestAddons/parallel/InspektorGadget 11.74
37 TestAddons/parallel/HelmTiller 16.05
39 TestAddons/parallel/CSI 46.82
40 TestAddons/parallel/Headlamp 20.88
41 TestAddons/parallel/CloudSpanner 6.56
42 TestAddons/parallel/LocalPath 63.07
43 TestAddons/parallel/NvidiaDevicePlugin 6.48
44 TestAddons/parallel/Yakd 10.71
45 TestAddons/StoppedEnableDisable 7.55
46 TestCertOptions 96.21
47 TestCertExpiration 265.75
49 TestForceSystemdFlag 44.47
50 TestForceSystemdEnv 63.46
52 TestKVMDriverInstallOrUpdate 4.26
56 TestErrorSpam/setup 38.33
57 TestErrorSpam/start 0.33
58 TestErrorSpam/status 0.74
59 TestErrorSpam/pause 1.55
60 TestErrorSpam/unpause 1.61
61 TestErrorSpam/stop 5.14
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 86.53
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 60.21
68 TestFunctional/serial/KubeContext 0.04
69 TestFunctional/serial/KubectlGetPods 0.06
72 TestFunctional/serial/CacheCmd/cache/add_remote 3.62
73 TestFunctional/serial/CacheCmd/cache/add_local 2.09
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
75 TestFunctional/serial/CacheCmd/cache/list 0.04
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.21
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.72
78 TestFunctional/serial/CacheCmd/cache/delete 0.09
79 TestFunctional/serial/MinikubeKubectlCmd 0.1
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
81 TestFunctional/serial/ExtraConfig 276.91
82 TestFunctional/serial/ComponentHealth 0.07
83 TestFunctional/serial/LogsCmd 1.09
84 TestFunctional/serial/LogsFileCmd 1.13
85 TestFunctional/serial/InvalidService 4.01
87 TestFunctional/parallel/ConfigCmd 0.32
88 TestFunctional/parallel/DashboardCmd 28.19
89 TestFunctional/parallel/DryRun 0.27
90 TestFunctional/parallel/InternationalLanguage 0.14
91 TestFunctional/parallel/StatusCmd 0.9
95 TestFunctional/parallel/ServiceCmdConnect 10.82
96 TestFunctional/parallel/AddonsCmd 0.13
97 TestFunctional/parallel/PersistentVolumeClaim 47.76
99 TestFunctional/parallel/SSHCmd 0.42
100 TestFunctional/parallel/CpCmd 1.28
101 TestFunctional/parallel/MySQL 22.1
102 TestFunctional/parallel/FileSync 0.22
103 TestFunctional/parallel/CertSync 1.32
107 TestFunctional/parallel/NodeLabels 0.06
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.42
111 TestFunctional/parallel/License 0.62
112 TestFunctional/parallel/Version/short 0.05
113 TestFunctional/parallel/Version/components 0.94
114 TestFunctional/parallel/ImageCommands/ImageListShort 0.37
115 TestFunctional/parallel/ImageCommands/ImageListTable 0.19
116 TestFunctional/parallel/ImageCommands/ImageListJson 0.26
117 TestFunctional/parallel/ImageCommands/ImageListYaml 0.28
118 TestFunctional/parallel/ImageCommands/ImageBuild 4.71
119 TestFunctional/parallel/ImageCommands/Setup 1.75
120 TestFunctional/parallel/ServiceCmd/DeployApp 12.16
130 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.35
131 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
132 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
133 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
134 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.86
135 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.72
136 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.51
137 TestFunctional/parallel/ImageCommands/ImageRemove 0.5
138 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.3
139 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 3.47
140 TestFunctional/parallel/ServiceCmd/List 0.3
141 TestFunctional/parallel/ServiceCmd/JSONOutput 0.35
142 TestFunctional/parallel/ServiceCmd/HTTPS 0.37
143 TestFunctional/parallel/ServiceCmd/Format 0.4
144 TestFunctional/parallel/ServiceCmd/URL 0.36
145 TestFunctional/parallel/ProfileCmd/profile_not_create 0.29
146 TestFunctional/parallel/MountCmd/any-port 19.41
147 TestFunctional/parallel/ProfileCmd/profile_list 0.28
148 TestFunctional/parallel/ProfileCmd/profile_json_output 0.25
149 TestFunctional/parallel/MountCmd/specific-port 2.08
150 TestFunctional/parallel/MountCmd/VerifyCleanup 1.7
151 TestFunctional/delete_echo-server_images 0.04
152 TestFunctional/delete_my-image_image 0.01
153 TestFunctional/delete_minikube_cached_images 0.02
157 TestMultiControlPlane/serial/StartCluster 194.03
158 TestMultiControlPlane/serial/DeployApp 6.76
159 TestMultiControlPlane/serial/PingHostFromPods 1.21
160 TestMultiControlPlane/serial/AddWorkerNode 54.93
161 TestMultiControlPlane/serial/NodeLabels 0.06
162 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.54
163 TestMultiControlPlane/serial/CopyFile 12.68
165 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.48
167 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.39
169 TestMultiControlPlane/serial/DeleteSecondaryNode 16.71
170 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.36
172 TestMultiControlPlane/serial/RestartCluster 379.96
173 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.38
174 TestMultiControlPlane/serial/AddSecondaryNode 74.83
175 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.52
179 TestJSONOutput/start/Command 48.17
180 TestJSONOutput/start/Audit 0
182 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
183 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
185 TestJSONOutput/pause/Command 0.68
186 TestJSONOutput/pause/Audit 0
188 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/unpause/Command 0.59
192 TestJSONOutput/unpause/Audit 0
194 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/stop/Command 6.67
198 TestJSONOutput/stop/Audit 0
200 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
202 TestErrorJSONOutput 0.19
207 TestMainNoArgs 0.04
208 TestMinikubeProfile 88.71
211 TestMountStart/serial/StartWithMountFirst 27.93
212 TestMountStart/serial/VerifyMountFirst 0.37
213 TestMountStart/serial/StartWithMountSecond 27.88
214 TestMountStart/serial/VerifyMountSecond 0.36
215 TestMountStart/serial/DeleteFirst 0.7
216 TestMountStart/serial/VerifyMountPostDelete 0.36
217 TestMountStart/serial/Stop 1.27
218 TestMountStart/serial/RestartStopped 23.27
219 TestMountStart/serial/VerifyMountPostStop 0.36
222 TestMultiNode/serial/FreshStart2Nodes 112.78
223 TestMultiNode/serial/DeployApp2Nodes 5.71
224 TestMultiNode/serial/PingHostFrom2Pods 0.74
225 TestMultiNode/serial/AddNode 47.29
226 TestMultiNode/serial/MultiNodeLabels 0.06
227 TestMultiNode/serial/ProfileList 0.22
228 TestMultiNode/serial/CopyFile 7.1
229 TestMultiNode/serial/StopNode 2.21
230 TestMultiNode/serial/StartAfterStop 38.89
232 TestMultiNode/serial/DeleteNode 2
234 TestMultiNode/serial/RestartMultiNode 177.03
235 TestMultiNode/serial/ValidateNameConflict 42.83
242 TestScheduledStopUnix 110.18
246 TestRunningBinaryUpgrade 210.64
250 TestStoppedBinaryUpgrade/Setup 2.89
251 TestStoppedBinaryUpgrade/Upgrade 170.73
260 TestPause/serial/Start 72.62
261 TestPause/serial/SecondStartNoReconfiguration 38.5
262 TestStoppedBinaryUpgrade/MinikubeLogs 0.94
264 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
265 TestNoKubernetes/serial/StartWithK8s 45.72
266 TestPause/serial/Pause 0.72
267 TestPause/serial/VerifyStatus 0.25
268 TestPause/serial/Unpause 0.69
269 TestPause/serial/PauseAgain 0.88
270 TestPause/serial/DeletePaused 1.04
271 TestPause/serial/VerifyDeletedResources 3.21
279 TestNetworkPlugins/group/false 5.69
283 TestNoKubernetes/serial/StartWithStopK8s 50.83
284 TestNoKubernetes/serial/Start 44.2
285 TestNoKubernetes/serial/VerifyK8sNotRunning 0.2
286 TestNoKubernetes/serial/ProfileList 6.52
287 TestNoKubernetes/serial/Stop 1.3
288 TestNoKubernetes/serial/StartNoArgs 58.14
291 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.19
293 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 83.11
295 TestStartStop/group/embed-certs/serial/FirstStart 116.78
296 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 13.31
297 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.01
300 TestStartStop/group/newest-cni/serial/FirstStart 45.03
301 TestStartStop/group/embed-certs/serial/DeployApp 10.28
302 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.01
304 TestStartStop/group/newest-cni/serial/DeployApp 0
305 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.02
306 TestStartStop/group/newest-cni/serial/Stop 2.29
307 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.17
308 TestStartStop/group/newest-cni/serial/SecondStart 36.53
309 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
310 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
311 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.27
312 TestStartStop/group/newest-cni/serial/Pause 2.29
314 TestStartStop/group/no-preload/serial/FirstStart 100.01
316 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 671.97
320 TestStartStop/group/embed-certs/serial/SecondStart 522.74
321 TestStartStop/group/no-preload/serial/DeployApp 10.28
322 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.95
324 TestStartStop/group/old-k8s-version/serial/Stop 3.28
325 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.17
328 TestStartStop/group/no-preload/serial/SecondStart 423.22
337 TestNetworkPlugins/group/auto/Start 52.62
338 TestNetworkPlugins/group/kindnet/Start 84.5
339 TestNetworkPlugins/group/calico/Start 106.03
340 TestNetworkPlugins/group/auto/KubeletFlags 0.2
341 TestNetworkPlugins/group/auto/NetCatPod 13.23
342 TestNetworkPlugins/group/auto/DNS 0.17
343 TestNetworkPlugins/group/auto/Localhost 0.13
344 TestNetworkPlugins/group/auto/HairPin 0.14
345 TestNetworkPlugins/group/custom-flannel/Start 72.89
346 TestNetworkPlugins/group/kindnet/ControllerPod 6
347 TestNetworkPlugins/group/kindnet/KubeletFlags 0.21
348 TestNetworkPlugins/group/kindnet/NetCatPod 11.23
349 TestNetworkPlugins/group/kindnet/DNS 0.18
350 TestNetworkPlugins/group/kindnet/Localhost 0.14
351 TestNetworkPlugins/group/kindnet/HairPin 0.17
352 TestNetworkPlugins/group/flannel/Start 64.81
353 TestNetworkPlugins/group/calico/ControllerPod 6.01
354 TestNetworkPlugins/group/calico/KubeletFlags 0.2
355 TestNetworkPlugins/group/calico/NetCatPod 11.25
356 TestNetworkPlugins/group/calico/DNS 0.15
357 TestNetworkPlugins/group/calico/Localhost 0.13
358 TestNetworkPlugins/group/calico/HairPin 0.14
359 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.21
360 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.26
361 TestNetworkPlugins/group/custom-flannel/DNS 0.18
362 TestNetworkPlugins/group/custom-flannel/Localhost 0.15
363 TestNetworkPlugins/group/custom-flannel/HairPin 0.16
364 TestNetworkPlugins/group/enable-default-cni/Start 94.73
365 TestNetworkPlugins/group/bridge/Start 101.23
366 TestNetworkPlugins/group/flannel/ControllerPod 6
367 TestNetworkPlugins/group/flannel/KubeletFlags 0.2
368 TestNetworkPlugins/group/flannel/NetCatPod 11.21
369 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.27
370 TestStartStop/group/no-preload/serial/Pause 3.27
371 TestNetworkPlugins/group/flannel/DNS 0.16
372 TestNetworkPlugins/group/flannel/Localhost 0.13
373 TestNetworkPlugins/group/flannel/HairPin 0.15
374 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.2
375 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.23
376 TestNetworkPlugins/group/enable-default-cni/DNS 0.16
377 TestNetworkPlugins/group/enable-default-cni/Localhost 0.11
378 TestNetworkPlugins/group/enable-default-cni/HairPin 0.12
379 TestNetworkPlugins/group/bridge/KubeletFlags 0.21
380 TestNetworkPlugins/group/bridge/NetCatPod 11.22
381 TestNetworkPlugins/group/bridge/DNS 0.14
382 TestNetworkPlugins/group/bridge/Localhost 0.13
383 TestNetworkPlugins/group/bridge/HairPin 0.13
x
+
TestDownloadOnly/v1.20.0/json-events (36.59s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-618378 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-618378 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (36.584749318s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (36.59s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-618378
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-618378: exit status 85 (57.744425ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-618378 | jenkins | v1.34.0 | 12 Sep 24 21:28 UTC |          |
	|         | -p download-only-618378        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/12 21:28:53
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0912 21:28:53.226858   13095 out.go:345] Setting OutFile to fd 1 ...
	I0912 21:28:53.227114   13095 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 21:28:53.227128   13095 out.go:358] Setting ErrFile to fd 2...
	I0912 21:28:53.227133   13095 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 21:28:53.227375   13095 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-5891/.minikube/bin
	W0912 21:28:53.227531   13095 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19616-5891/.minikube/config/config.json: open /home/jenkins/minikube-integration/19616-5891/.minikube/config/config.json: no such file or directory
	I0912 21:28:53.228133   13095 out.go:352] Setting JSON to true
	I0912 21:28:53.229047   13095 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":675,"bootTime":1726175858,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0912 21:28:53.229102   13095 start.go:139] virtualization: kvm guest
	I0912 21:28:53.231565   13095 out.go:97] [download-only-618378] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0912 21:28:53.231695   13095 notify.go:220] Checking for updates...
	W0912 21:28:53.231743   13095 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19616-5891/.minikube/cache/preloaded-tarball: no such file or directory
	I0912 21:28:53.233226   13095 out.go:169] MINIKUBE_LOCATION=19616
	I0912 21:28:53.234868   13095 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 21:28:53.236610   13095 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19616-5891/kubeconfig
	I0912 21:28:53.238470   13095 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19616-5891/.minikube
	I0912 21:28:53.240233   13095 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0912 21:28:53.243302   13095 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0912 21:28:53.243614   13095 driver.go:394] Setting default libvirt URI to qemu:///system
	I0912 21:28:53.350136   13095 out.go:97] Using the kvm2 driver based on user configuration
	I0912 21:28:53.350174   13095 start.go:297] selected driver: kvm2
	I0912 21:28:53.350180   13095 start.go:901] validating driver "kvm2" against <nil>
	I0912 21:28:53.350577   13095 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 21:28:53.350727   13095 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19616-5891/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0912 21:28:53.365926   13095 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0912 21:28:53.365975   13095 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0912 21:28:53.366473   13095 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0912 21:28:53.366637   13095 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0912 21:28:53.366696   13095 cni.go:84] Creating CNI manager for ""
	I0912 21:28:53.366709   13095 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0912 21:28:53.366718   13095 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0912 21:28:53.366760   13095 start.go:340] cluster config:
	{Name:download-only-618378 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-618378 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 21:28:53.366924   13095 iso.go:125] acquiring lock: {Name:mk3ec3c4afd4210b7425f6425f55e7f581d9a5a9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 21:28:53.369035   13095 out.go:97] Downloading VM boot image ...
	I0912 21:28:53.369077   13095 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19616-5891/.minikube/cache/iso/amd64/minikube-v1.34.0-1726156389-19616-amd64.iso
	I0912 21:29:11.973803   13095 out.go:97] Starting "download-only-618378" primary control-plane node in "download-only-618378" cluster
	I0912 21:29:11.973829   13095 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0912 21:29:12.069853   13095 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0912 21:29:12.069883   13095 cache.go:56] Caching tarball of preloaded images
	I0912 21:29:12.070059   13095 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0912 21:29:12.071569   13095 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0912 21:29:12.071605   13095 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0912 21:29:12.617433   13095 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19616-5891/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0912 21:29:27.916929   13095 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0912 21:29:27.917041   13095 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19616-5891/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-618378 host does not exist
	  To start a cluster, run: "minikube start -p download-only-618378"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-618378
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.31.1/json-events (16.14s)

=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-976166 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-976166 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (16.141674305s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (16.14s)

TestDownloadOnly/v1.31.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.1/preload-exists
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-976166
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-976166: exit status 85 (55.537346ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-618378 | jenkins | v1.34.0 | 12 Sep 24 21:28 UTC |                     |
	|         | -p download-only-618378        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 12 Sep 24 21:29 UTC | 12 Sep 24 21:29 UTC |
	| delete  | -p download-only-618378        | download-only-618378 | jenkins | v1.34.0 | 12 Sep 24 21:29 UTC | 12 Sep 24 21:29 UTC |
	| start   | -o=json --download-only        | download-only-976166 | jenkins | v1.34.0 | 12 Sep 24 21:29 UTC |                     |
	|         | -p download-only-976166        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/12 21:29:30
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0912 21:29:30.141921   13385 out.go:345] Setting OutFile to fd 1 ...
	I0912 21:29:30.142162   13385 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 21:29:30.142171   13385 out.go:358] Setting ErrFile to fd 2...
	I0912 21:29:30.142175   13385 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 21:29:30.142360   13385 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-5891/.minikube/bin
	I0912 21:29:30.142913   13385 out.go:352] Setting JSON to true
	I0912 21:29:30.143732   13385 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":712,"bootTime":1726175858,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0912 21:29:30.143791   13385 start.go:139] virtualization: kvm guest
	I0912 21:29:30.145600   13385 out.go:97] [download-only-976166] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0912 21:29:30.145758   13385 notify.go:220] Checking for updates...
	I0912 21:29:30.147197   13385 out.go:169] MINIKUBE_LOCATION=19616
	I0912 21:29:30.148997   13385 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 21:29:30.150342   13385 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19616-5891/kubeconfig
	I0912 21:29:30.151557   13385 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19616-5891/.minikube
	I0912 21:29:30.152788   13385 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0912 21:29:30.155251   13385 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0912 21:29:30.155474   13385 driver.go:394] Setting default libvirt URI to qemu:///system
	I0912 21:29:30.189636   13385 out.go:97] Using the kvm2 driver based on user configuration
	I0912 21:29:30.189678   13385 start.go:297] selected driver: kvm2
	I0912 21:29:30.189690   13385 start.go:901] validating driver "kvm2" against <nil>
	I0912 21:29:30.190004   13385 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 21:29:30.190074   13385 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19616-5891/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0912 21:29:30.205587   13385 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0912 21:29:30.205662   13385 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0912 21:29:30.206125   13385 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0912 21:29:30.206297   13385 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0912 21:29:30.206390   13385 cni.go:84] Creating CNI manager for ""
	I0912 21:29:30.206406   13385 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0912 21:29:30.206417   13385 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0912 21:29:30.206480   13385 start.go:340] cluster config:
	{Name:download-only-976166 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-976166 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 21:29:30.206592   13385 iso.go:125] acquiring lock: {Name:mk3ec3c4afd4210b7425f6425f55e7f581d9a5a9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 21:29:30.208284   13385 out.go:97] Starting "download-only-976166" primary control-plane node in "download-only-976166" cluster
	I0912 21:29:30.208313   13385 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0912 21:29:30.383990   13385 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0912 21:29:30.384027   13385 cache.go:56] Caching tarball of preloaded images
	I0912 21:29:30.384204   13385 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0912 21:29:30.385762   13385 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0912 21:29:30.385789   13385 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 ...
	I0912 21:29:30.939442   13385 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:aa79045e4550b9510ee496fee0d50abb -> /home/jenkins/minikube-integration/19616-5891/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0912 21:29:44.575980   13385 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 ...
	I0912 21:29:44.576077   13385 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19616-5891/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-976166 host does not exist
	  To start a cluster, run: "minikube start -p download-only-976166"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

TestDownloadOnly/v1.31.1/DeleteAll (0.13s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.13s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-976166
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

TestBinaryMirror (0.59s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-318498 --alsologtostderr --binary-mirror http://127.0.0.1:39999 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-318498" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-318498
--- PASS: TestBinaryMirror (0.59s)

TestOffline (82.4s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-640332 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-640332 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m21.515663194s)
helpers_test.go:175: Cleaning up "offline-crio-640332" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-640332
--- PASS: TestOffline (82.40s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-694635
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-694635: exit status 85 (46.737787ms)

-- stdout --
	* Profile "addons-694635" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-694635"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-694635
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-694635: exit status 85 (47.441226ms)

-- stdout --
	* Profile "addons-694635" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-694635"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (139.43s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-694635 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-694635 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m19.434397463s)
--- PASS: TestAddons/Setup (139.43s)

TestAddons/serial/GCPAuth/Namespaces (0.13s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-694635 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-694635 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.13s)

TestAddons/parallel/InspektorGadget (11.74s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-qsm4r" [52bd39fd-3980-4aad-adbf-1b702ca88ea1] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.005311222s
addons_test.go:851: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-694635
addons_test.go:851: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-694635: (5.736647349s)
--- PASS: TestAddons/parallel/InspektorGadget (11.74s)

TestAddons/parallel/HelmTiller (16.05s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 3.44471ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-b48cc5f79-p44jv" [493da69b-8cdb-4ada-9f27-2c322311853b] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.004688016s
addons_test.go:475: (dbg) Run:  kubectl --context addons-694635 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-694635 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (10.295896216s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-694635 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (16.05s)

TestAddons/parallel/CSI (46.82s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 6.648603ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-694635 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-694635 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-694635 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-694635 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-694635 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-694635 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [9067dbc9-a44a-4c2a-a3c5-e5d0b0f4d2e6] Pending
helpers_test.go:344: "task-pv-pod" [9067dbc9-a44a-4c2a-a3c5-e5d0b0f4d2e6] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [9067dbc9-a44a-4c2a-a3c5-e5d0b0f4d2e6] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.005700024s
addons_test.go:590: (dbg) Run:  kubectl --context addons-694635 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-694635 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-694635 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-694635 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-694635 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-694635 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-694635 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-694635 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-694635 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-694635 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-694635 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-694635 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-694635 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-694635 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-694635 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-694635 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [d3ab9526-f86e-4689-a902-fe163bee32e1] Pending
helpers_test.go:344: "task-pv-pod-restore" [d3ab9526-f86e-4689-a902-fe163bee32e1] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [d3ab9526-f86e-4689-a902-fe163bee32e1] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 11.004064835s
addons_test.go:632: (dbg) Run:  kubectl --context addons-694635 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-694635 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-694635 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-694635 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-amd64 -p addons-694635 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.865526212s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-amd64 -p addons-694635 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:648: (dbg) Done: out/minikube-linux-amd64 -p addons-694635 addons disable volumesnapshots --alsologtostderr -v=1: (1.037458235s)
--- PASS: TestAddons/parallel/CSI (46.82s)

TestAddons/parallel/Headlamp (20.88s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-694635 --alsologtostderr -v=1
addons_test.go:830: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-694635 --alsologtostderr -v=1: (1.016048629s)
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-gbwsp" [b6b64083-50c5-47d4-b545-951a5c96e064] Pending
helpers_test.go:344: "headlamp-57fb76fcdb-gbwsp" [b6b64083-50c5-47d4-b545-951a5c96e064] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-gbwsp" [b6b64083-50c5-47d4-b545-951a5c96e064] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 14.004225161s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 -p addons-694635 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-amd64 -p addons-694635 addons disable headlamp --alsologtostderr -v=1: (5.859213691s)
--- PASS: TestAddons/parallel/Headlamp (20.88s)

TestAddons/parallel/CloudSpanner (6.56s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-jx7rg" [928a6031-dd0b-45cd-9f56-1233a79e5488] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.004188192s
addons_test.go:870: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-694635
--- PASS: TestAddons/parallel/CloudSpanner (6.56s)

TestAddons/parallel/LocalPath (63.07s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-694635 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-694635 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-694635 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-694635 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-694635 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-694635 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-694635 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-694635 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-694635 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [d04e8b66-a70e-4773-8d5c-d899091a5e16] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [d04e8b66-a70e-4773-8d5c-d899091a5e16] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [d04e8b66-a70e-4773-8d5c-d899091a5e16] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 12.014961605s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-694635 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-amd64 -p addons-694635 ssh "cat /opt/local-path-provisioner/pvc-ce6ed7db-1ee2-4cee-8aae-8a13248846f5_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-694635 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-694635 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 -p addons-694635 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-linux-amd64 -p addons-694635 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (44.18155975s)
--- PASS: TestAddons/parallel/LocalPath (63.07s)

TestAddons/parallel/NvidiaDevicePlugin (6.48s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-n59wh" [2647ba3c-226b-4e7f-bbb9-442fbceab2f4] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004169213s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-694635
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.48s)

TestAddons/parallel/Yakd (10.71s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-ngc54" [41a5d296-f98d-4501-8817-bc887fb663a0] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004447724s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-amd64 -p addons-694635 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-amd64 -p addons-694635 addons disable yakd --alsologtostderr -v=1: (5.705650921s)
--- PASS: TestAddons/parallel/Yakd (10.71s)

TestAddons/StoppedEnableDisable (7.55s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-694635
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-694635: (7.287757156s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-694635
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-694635
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-694635
--- PASS: TestAddons/StoppedEnableDisable (7.55s)

TestCertOptions (96.21s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-689966 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-689966 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m34.786640523s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-689966 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-689966 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-689966 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-689966" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-689966
--- PASS: TestCertOptions (96.21s)

TestCertExpiration (265.75s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-408779 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
E0912 22:50:05.704164   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/functional-657409/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-408779 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (55.981051965s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-408779 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-408779 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (28.772677576s)
helpers_test.go:175: Cleaning up "cert-expiration-408779" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-408779
--- PASS: TestCertExpiration (265.75s)

TestForceSystemdFlag (44.47s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-042278 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-042278 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (43.299502272s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-042278 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-042278" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-042278
--- PASS: TestForceSystemdFlag (44.47s)

TestForceSystemdEnv (63.46s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-633513 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-633513 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m2.447011584s)
helpers_test.go:175: Cleaning up "force-systemd-env-633513" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-633513
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-633513: (1.007717598s)
--- PASS: TestForceSystemdEnv (63.46s)

TestKVMDriverInstallOrUpdate (4.26s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (4.26s)

TestErrorSpam/setup (38.33s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-432987 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-432987 --driver=kvm2  --container-runtime=crio
E0912 21:47:07.200702   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/client.crt: no such file or directory" logger="UnhandledError"
E0912 21:47:07.207538   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/client.crt: no such file or directory" logger="UnhandledError"
E0912 21:47:07.218945   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/client.crt: no such file or directory" logger="UnhandledError"
E0912 21:47:07.240412   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/client.crt: no such file or directory" logger="UnhandledError"
E0912 21:47:07.281831   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/client.crt: no such file or directory" logger="UnhandledError"
E0912 21:47:07.363298   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/client.crt: no such file or directory" logger="UnhandledError"
E0912 21:47:07.524879   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/client.crt: no such file or directory" logger="UnhandledError"
E0912 21:47:07.846601   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/client.crt: no such file or directory" logger="UnhandledError"
E0912 21:47:08.488723   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/client.crt: no such file or directory" logger="UnhandledError"
E0912 21:47:09.770827   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/client.crt: no such file or directory" logger="UnhandledError"
E0912 21:47:12.333894   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/client.crt: no such file or directory" logger="UnhandledError"
E0912 21:47:17.455832   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/client.crt: no such file or directory" logger="UnhandledError"
E0912 21:47:27.698060   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-432987 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-432987 --driver=kvm2  --container-runtime=crio: (38.326286135s)
--- PASS: TestErrorSpam/setup (38.33s)

TestErrorSpam/start (0.33s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-432987 --log_dir /tmp/nospam-432987 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-432987 --log_dir /tmp/nospam-432987 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-432987 --log_dir /tmp/nospam-432987 start --dry-run
--- PASS: TestErrorSpam/start (0.33s)

TestErrorSpam/status (0.74s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-432987 --log_dir /tmp/nospam-432987 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-432987 --log_dir /tmp/nospam-432987 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-432987 --log_dir /tmp/nospam-432987 status
--- PASS: TestErrorSpam/status (0.74s)

TestErrorSpam/pause (1.55s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-432987 --log_dir /tmp/nospam-432987 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-432987 --log_dir /tmp/nospam-432987 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-432987 --log_dir /tmp/nospam-432987 pause
--- PASS: TestErrorSpam/pause (1.55s)

TestErrorSpam/unpause (1.61s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-432987 --log_dir /tmp/nospam-432987 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-432987 --log_dir /tmp/nospam-432987 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-432987 --log_dir /tmp/nospam-432987 unpause
--- PASS: TestErrorSpam/unpause (1.61s)

TestErrorSpam/stop (5.14s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-432987 --log_dir /tmp/nospam-432987 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-432987 --log_dir /tmp/nospam-432987 stop: (1.608341986s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-432987 --log_dir /tmp/nospam-432987 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-432987 --log_dir /tmp/nospam-432987 stop: (1.514845102s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-432987 --log_dir /tmp/nospam-432987 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-432987 --log_dir /tmp/nospam-432987 stop: (2.020746413s)
--- PASS: TestErrorSpam/stop (5.14s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19616-5891/.minikube/files/etc/test/nested/copy/13083/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (86.53s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-657409 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E0912 21:47:48.179474   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/client.crt: no such file or directory" logger="UnhandledError"
E0912 21:48:29.142215   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-657409 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m26.525077622s)
--- PASS: TestFunctional/serial/StartWithProxy (86.53s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (60.21s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-657409 --alsologtostderr -v=8
E0912 21:49:51.064297   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-657409 --alsologtostderr -v=8: (1m0.207544849s)
functional_test.go:663: soft start took 1m0.208400765s for "functional-657409" cluster.
--- PASS: TestFunctional/serial/SoftStart (60.21s)

TestFunctional/serial/KubeContext (0.04s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.06s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-657409 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.62s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-657409 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-657409 cache add registry.k8s.io/pause:3.1: (1.158803659s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-657409 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-657409 cache add registry.k8s.io/pause:3.3: (1.282193002s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-657409 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-657409 cache add registry.k8s.io/pause:latest: (1.175880501s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.62s)

TestFunctional/serial/CacheCmd/cache/add_local (2.09s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-657409 /tmp/TestFunctionalserialCacheCmdcacheadd_local631681848/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-657409 cache add minikube-local-cache-test:functional-657409
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-657409 cache add minikube-local-cache-test:functional-657409: (1.752446918s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-657409 cache delete minikube-local-cache-test:functional-657409
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-657409
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.09s)
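Note: for anyone replaying this flow by hand, the sequence below is a minimal sketch of what add_local drives, using the same commands logged above; the build context is a per-run temp directory, shown here as a placeholder.

    # build a throwaway image against the host Docker daemon (context dir is a per-run temp path)
    docker build -t minikube-local-cache-test:functional-657409 <temp-build-context>
    # load it into minikube's image cache, then clean up both sides
    out/minikube-linux-amd64 -p functional-657409 cache add minikube-local-cache-test:functional-657409
    out/minikube-linux-amd64 -p functional-657409 cache delete minikube-local-cache-test:functional-657409
    docker rmi minikube-local-cache-test:functional-657409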

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-657409 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.72s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-657409 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-657409 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-657409 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (201.233532ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-657409 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-amd64 -p functional-657409 cache reload: (1.047483844s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-657409 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.72s)
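Note: the exit status 1 above is the expected intermediate state. A condensed sketch of the reload cycle this test verifies, using the commands from the log (comments are editorial):

    # remove the cached image inside the node, then confirm it is gone
    out/minikube-linux-amd64 -p functional-657409 ssh sudo crictl rmi registry.k8s.io/pause:latest
    out/minikube-linux-amd64 -p functional-657409 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exits 1: image no longer present
    # reload minikube's cache back into the node and re-check
    out/minikube-linux-amd64 -p functional-657409 cache reload
    out/minikube-linux-amd64 -p functional-657409 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again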

TestFunctional/serial/CacheCmd/cache/delete (0.09s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

TestFunctional/serial/MinikubeKubectlCmd (0.1s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-657409 kubectl -- --context functional-657409 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-657409 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (276.91s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-657409 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0912 21:52:07.201163   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/client.crt: no such file or directory" logger="UnhandledError"
E0912 21:52:34.906195   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-657409 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (4m36.907410043s)
functional_test.go:761: restart took 4m36.907529347s for "functional-657409" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (276.91s)

TestFunctional/serial/ComponentHealth (0.07s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-657409 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.09s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-657409 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-657409 logs: (1.091447429s)
--- PASS: TestFunctional/serial/LogsCmd (1.09s)

TestFunctional/serial/LogsFileCmd (1.13s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-657409 logs --file /tmp/TestFunctionalserialLogsFileCmd3469298082/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-657409 logs --file /tmp/TestFunctionalserialLogsFileCmd3469298082/001/logs.txt: (1.124866844s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.13s)

TestFunctional/serial/InvalidService (4.01s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-657409 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-657409
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-657409: exit status 115 (275.462911ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.239:32483 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-657409 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.01s)
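Note: the exit status 115 above is the pass condition here, not a failure. Per the stderr, invalid-svc has no running backing pod, so `minikube service` is expected to abort with SVC_UNREACHABLE. The manifest contents are not shown in the log, so only the commands are sketched:

    kubectl --context functional-657409 apply -f testdata/invalidsvc.yaml
    out/minikube-linux-amd64 service invalid-svc -p functional-657409    # expected: exit 115, SVC_UNREACHABLE
    kubectl --context functional-657409 delete -f testdata/invalidsvc.yaml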

TestFunctional/parallel/ConfigCmd (0.32s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-657409 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-657409 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-657409 config get cpus: exit status 14 (52.385723ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-657409 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-657409 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-657409 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-657409 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-657409 config get cpus: exit status 14 (51.15152ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.32s)
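Note: the two "exit status 14" results above are the expected responses to `config get` on an unset key. The full cycle the test walks through, reconstructed from the commands logged:

    out/minikube-linux-amd64 -p functional-657409 config unset cpus
    out/minikube-linux-amd64 -p functional-657409 config get cpus      # exit 14: key not found
    out/minikube-linux-amd64 -p functional-657409 config set cpus 2
    out/minikube-linux-amd64 -p functional-657409 config get cpus      # prints 2
    out/minikube-linux-amd64 -p functional-657409 config unset cpus
    out/minikube-linux-amd64 -p functional-657409 config get cpus      # exit 14 again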

TestFunctional/parallel/DashboardCmd (28.19s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-657409 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-657409 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 24682: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (28.19s)

TestFunctional/parallel/DryRun (0.27s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-657409 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-657409 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (131.509355ms)

                                                
                                                
-- stdout --
	* [functional-657409] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19616
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19616-5891/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19616-5891/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0912 21:55:20.343116   24462 out.go:345] Setting OutFile to fd 1 ...
	I0912 21:55:20.343364   24462 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 21:55:20.343373   24462 out.go:358] Setting ErrFile to fd 2...
	I0912 21:55:20.343378   24462 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 21:55:20.343550   24462 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-5891/.minikube/bin
	I0912 21:55:20.344064   24462 out.go:352] Setting JSON to false
	I0912 21:55:20.344974   24462 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2262,"bootTime":1726175858,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0912 21:55:20.345032   24462 start.go:139] virtualization: kvm guest
	I0912 21:55:20.347322   24462 out.go:177] * [functional-657409] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0912 21:55:20.348695   24462 out.go:177]   - MINIKUBE_LOCATION=19616
	I0912 21:55:20.348701   24462 notify.go:220] Checking for updates...
	I0912 21:55:20.351748   24462 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 21:55:20.353108   24462 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19616-5891/kubeconfig
	I0912 21:55:20.354162   24462 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19616-5891/.minikube
	I0912 21:55:20.355268   24462 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0912 21:55:20.356362   24462 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 21:55:20.358071   24462 config.go:182] Loaded profile config "functional-657409": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 21:55:20.358706   24462 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:55:20.358796   24462 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:55:20.375477   24462 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40877
	I0912 21:55:20.375853   24462 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:55:20.376318   24462 main.go:141] libmachine: Using API Version  1
	I0912 21:55:20.376339   24462 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:55:20.376709   24462 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:55:20.376881   24462 main.go:141] libmachine: (functional-657409) Calling .DriverName
	I0912 21:55:20.377129   24462 driver.go:394] Setting default libvirt URI to qemu:///system
	I0912 21:55:20.377434   24462 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:55:20.377478   24462 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:55:20.393392   24462 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34037
	I0912 21:55:20.393872   24462 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:55:20.394531   24462 main.go:141] libmachine: Using API Version  1
	I0912 21:55:20.394551   24462 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:55:20.395208   24462 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:55:20.395411   24462 main.go:141] libmachine: (functional-657409) Calling .DriverName
	I0912 21:55:20.427664   24462 out.go:177] * Using the kvm2 driver based on existing profile
	I0912 21:55:20.428761   24462 start.go:297] selected driver: kvm2
	I0912 21:55:20.428779   24462 start.go:901] validating driver "kvm2" against &{Name:functional-657409 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:functional-657409 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.239 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 21:55:20.428905   24462 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 21:55:20.431225   24462 out.go:201] 
	W0912 21:55:20.432391   24462 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0912 21:55:20.433471   24462 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-657409 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.27s)
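Note: both invocations above only exercise start-time validation. The undersized --memory request is rejected with RSRC_INSUFFICIENT_REQ_MEMORY (exit 23) before any VM work starts, while the follow-up dry run with the existing profile's settings succeeds. The two calls, copied from the log:

    out/minikube-linux-amd64 start -p functional-657409 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 --container-runtime=crio   # exit 23
    out/minikube-linux-amd64 start -p functional-657409 --dry-run --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio             # passes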

TestFunctional/parallel/InternationalLanguage (0.14s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-657409 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-657409 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (137.524154ms)

                                                
                                                
-- stdout --
	* [functional-657409] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19616
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19616-5891/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19616-5891/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0912 21:55:20.613691   24558 out.go:345] Setting OutFile to fd 1 ...
	I0912 21:55:20.613790   24558 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 21:55:20.613797   24558 out.go:358] Setting ErrFile to fd 2...
	I0912 21:55:20.613802   24558 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 21:55:20.614074   24558 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-5891/.minikube/bin
	I0912 21:55:20.614563   24558 out.go:352] Setting JSON to false
	I0912 21:55:20.615746   24558 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2263,"bootTime":1726175858,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0912 21:55:20.615815   24558 start.go:139] virtualization: kvm guest
	I0912 21:55:20.618288   24558 out.go:177] * [functional-657409] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I0912 21:55:20.619760   24558 out.go:177]   - MINIKUBE_LOCATION=19616
	I0912 21:55:20.619766   24558 notify.go:220] Checking for updates...
	I0912 21:55:20.621939   24558 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 21:55:20.622920   24558 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19616-5891/kubeconfig
	I0912 21:55:20.623836   24558 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19616-5891/.minikube
	I0912 21:55:20.624716   24558 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0912 21:55:20.625637   24558 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 21:55:20.626999   24558 config.go:182] Loaded profile config "functional-657409": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 21:55:20.627459   24558 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:55:20.627501   24558 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:55:20.643116   24558 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36721
	I0912 21:55:20.643605   24558 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:55:20.644088   24558 main.go:141] libmachine: Using API Version  1
	I0912 21:55:20.644113   24558 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:55:20.644549   24558 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:55:20.644767   24558 main.go:141] libmachine: (functional-657409) Calling .DriverName
	I0912 21:55:20.645032   24558 driver.go:394] Setting default libvirt URI to qemu:///system
	I0912 21:55:20.645389   24558 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 21:55:20.645482   24558 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 21:55:20.662629   24558 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43415
	I0912 21:55:20.663082   24558 main.go:141] libmachine: () Calling .GetVersion
	I0912 21:55:20.663651   24558 main.go:141] libmachine: Using API Version  1
	I0912 21:55:20.663674   24558 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 21:55:20.663962   24558 main.go:141] libmachine: () Calling .GetMachineName
	I0912 21:55:20.664166   24558 main.go:141] libmachine: (functional-657409) Calling .DriverName
	I0912 21:55:20.699977   24558 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0912 21:55:20.701582   24558 start.go:297] selected driver: kvm2
	I0912 21:55:20.701608   24558 start.go:901] validating driver "kvm2" against &{Name:functional-657409 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:functional-657409 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.239 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 21:55:20.701759   24558 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 21:55:20.704221   24558 out.go:201] 
	W0912 21:55:20.705576   24558 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0912 21:55:20.706681   24558 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.14s)

TestFunctional/parallel/StatusCmd (0.9s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-657409 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-657409 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-657409 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.90s)

TestFunctional/parallel/ServiceCmdConnect (10.82s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-657409 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-657409 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-d7smw" [cc69b232-c555-4533-b8c7-98561cc0c72d] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-d7smw" [cc69b232-c555-4533-b8c7-98561cc0c72d] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.003208895s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-657409 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.239:32601
functional_test.go:1675: http://192.168.39.239:32601: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-67bdd5bbb4-d7smw

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.239:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.239:32601
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.82s)
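Note: reconstructed from the commands above, the flow is: deploy an echoserver, expose it as a NodePort service, ask minikube for the node URL, then fetch it. The test performs the HTTP request in-process; the curl call below is only an illustrative stand-in.

    kubectl --context functional-657409 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
    kubectl --context functional-657409 expose deployment hello-node-connect --type=NodePort --port=8080
    out/minikube-linux-amd64 -p functional-657409 service hello-node-connect --url   # printed http://192.168.39.239:32601 in this run
    curl http://192.168.39.239:32601/                                                # stand-in for the test's HTTP check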

TestFunctional/parallel/AddonsCmd (0.13s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-657409 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-657409 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

TestFunctional/parallel/PersistentVolumeClaim (47.76s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [7714ae98-d8b2-443a-aa06-69de40d79d6a] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.005227285s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-657409 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-657409 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-657409 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-657409 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [0237afec-b0cd-4b65-894c-5ff91d2a5c0a] Pending
helpers_test.go:344: "sp-pod" [0237afec-b0cd-4b65-894c-5ff91d2a5c0a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [0237afec-b0cd-4b65-894c-5ff91d2a5c0a] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.003470385s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-657409 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-657409 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-657409 delete -f testdata/storage-provisioner/pod.yaml: (2.011517287s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-657409 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [5a7e2039-8620-4515-8951-14f09d968808] Pending
helpers_test.go:344: "sp-pod" [5a7e2039-8620-4515-8951-14f09d968808] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [5a7e2039-8620-4515-8951-14f09d968808] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 26.005014512s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-657409 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (47.76s)
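Note: the point of the two sp-pod generations above is persistence across pod recreation: a file written through the claim by the first pod must still exist for the second. Condensed from the logged commands:

    kubectl --context functional-657409 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-657409 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-657409 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-657409 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-657409 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-657409 exec sp-pod -- ls /tmp/mount   # foo must still be listed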

TestFunctional/parallel/SSHCmd (0.42s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-657409 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-657409 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.42s)

TestFunctional/parallel/CpCmd (1.28s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-657409 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-657409 ssh -n functional-657409 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-657409 cp functional-657409:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd465620882/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-657409 ssh -n functional-657409 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-657409 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-657409 ssh -n functional-657409 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.28s)
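Note: three copy directions are exercised above: a host file into the VM, the VM file back out to a host temp dir, and a host file into a not-yet-existing VM path, each verified with a `sudo cat` over ssh. In outline (the host temp path is shortened to a placeholder):

    out/minikube-linux-amd64 -p functional-657409 cp testdata/cp-test.txt /home/docker/cp-test.txt
    out/minikube-linux-amd64 -p functional-657409 cp functional-657409:/home/docker/cp-test.txt <host-temp-dir>/cp-test.txt
    out/minikube-linux-amd64 -p functional-657409 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
    out/minikube-linux-amd64 -p functional-657409 ssh -n functional-657409 "sudo cat /tmp/does/not/exist/cp-test.txt"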

TestFunctional/parallel/MySQL (22.1s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-657409 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-4z6h8" [d8a0f995-39b1-4c78-a6bf-ca6fa017b55f] Pending
helpers_test.go:344: "mysql-6cdb49bbb-4z6h8" [d8a0f995-39b1-4c78-a6bf-ca6fa017b55f] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-4z6h8" [d8a0f995-39b1-4c78-a6bf-ca6fa017b55f] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 20.004358819s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-657409 exec mysql-6cdb49bbb-4z6h8 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-657409 exec mysql-6cdb49bbb-4z6h8 -- mysql -ppassword -e "show databases;": exit status 1 (160.325898ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-657409 exec mysql-6cdb49bbb-4z6h8 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (22.10s)
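Note: the first exec above fails with ERROR 2002 because mysqld is still creating its socket when the pod first reports Running; the test simply retries and the second attempt succeeds. A hand-run equivalent might look like the following (the wait and retry loop are illustrative additions, not part of the test):

    kubectl --context functional-657409 replace --force -f testdata/mysql.yaml
    kubectl --context functional-657409 wait --for=condition=Ready pod -l app=mysql --timeout=600s
    POD=$(kubectl --context functional-657409 get pod -l app=mysql -o jsonpath='{.items[0].metadata.name}')
    # retry until mysqld has finished initialising its socket
    until kubectl --context functional-657409 exec "$POD" -- mysql -ppassword -e "show databases;"; do sleep 2; done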

TestFunctional/parallel/FileSync (0.22s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/13083/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-657409 ssh "sudo cat /etc/test/nested/copy/13083/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.22s)

TestFunctional/parallel/CertSync (1.32s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/13083.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-657409 ssh "sudo cat /etc/ssl/certs/13083.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/13083.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-657409 ssh "sudo cat /usr/share/ca-certificates/13083.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-657409 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/130832.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-657409 ssh "sudo cat /etc/ssl/certs/130832.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/130832.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-657409 ssh "sudo cat /usr/share/ca-certificates/130832.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-657409 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.32s)
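The hash-style names checked above (51391683.0, 3ec20f2e.0) are OpenSSL subject-hash aliases for the synced test certificates, which is why the test verifies them alongside the .pem copies. A hedged way to confirm the pairing by hand, assuming openssl is available inside the guest:

minikube -p functional-657409 ssh "openssl x509 -noout -hash -in /etc/ssl/certs/13083.pem"    # expected: 51391683
minikube -p functional-657409 ssh "openssl x509 -noout -hash -in /etc/ssl/certs/130832.pem"   # expected: 3ec20f2e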

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-657409 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)
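The go-template above prints only the label keys of the first node; an equivalent quick check outside the test harness is:

kubectl --context functional-657409 get nodes --show-labels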

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-657409 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-657409 ssh "sudo systemctl is-active docker": exit status 1 (216.484419ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-657409 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-657409 ssh "sudo systemctl is-active containerd": exit status 1 (208.118584ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.42s)
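The non-zero exits above are the expected result: "systemctl is-active" prints the unit state and exits with a non-zero status (3 for "inactive", as shown in the ssh error) when the unit is not running, so on a CRI-O cluster both docker and containerd should fail this check. A sketch of the same probe, with "minikube" standing in for the binary under test (out/minikube-linux-amd64):

minikube -p functional-657409 ssh "sudo systemctl is-active docker"       # inactive
minikube -p functional-657409 ssh "sudo systemctl is-active containerd"   # inactive
minikube -p functional-657409 ssh "sudo systemctl is-active crio"         # active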

                                                
                                    
TestFunctional/parallel/License (0.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.62s)

                                                
                                    
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-657409 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
TestFunctional/parallel/Version/components (0.94s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-657409 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.94s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-657409 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-657409 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-657409
localhost/kicbase/echo-server:functional-657409
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20240813-c6f155d6
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-657409 image ls --format short --alsologtostderr:
I0912 21:55:41.993971   25247 out.go:345] Setting OutFile to fd 1 ...
I0912 21:55:41.994233   25247 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0912 21:55:41.994244   25247 out.go:358] Setting ErrFile to fd 2...
I0912 21:55:41.994251   25247 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0912 21:55:41.994443   25247 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-5891/.minikube/bin
I0912 21:55:41.995006   25247 config.go:182] Loaded profile config "functional-657409": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0912 21:55:41.995122   25247 config.go:182] Loaded profile config "functional-657409": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0912 21:55:41.995501   25247 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0912 21:55:41.995556   25247 main.go:141] libmachine: Launching plugin server for driver kvm2
I0912 21:55:42.011200   25247 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36861
I0912 21:55:42.011645   25247 main.go:141] libmachine: () Calling .GetVersion
I0912 21:55:42.012255   25247 main.go:141] libmachine: Using API Version  1
I0912 21:55:42.012281   25247 main.go:141] libmachine: () Calling .SetConfigRaw
I0912 21:55:42.012658   25247 main.go:141] libmachine: () Calling .GetMachineName
I0912 21:55:42.012863   25247 main.go:141] libmachine: (functional-657409) Calling .GetState
I0912 21:55:42.014776   25247 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0912 21:55:42.014815   25247 main.go:141] libmachine: Launching plugin server for driver kvm2
I0912 21:55:42.030769   25247 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41249
I0912 21:55:42.031248   25247 main.go:141] libmachine: () Calling .GetVersion
I0912 21:55:42.031783   25247 main.go:141] libmachine: Using API Version  1
I0912 21:55:42.031808   25247 main.go:141] libmachine: () Calling .SetConfigRaw
I0912 21:55:42.032090   25247 main.go:141] libmachine: () Calling .GetMachineName
I0912 21:55:42.032272   25247 main.go:141] libmachine: (functional-657409) Calling .DriverName
I0912 21:55:42.032547   25247 ssh_runner.go:195] Run: systemctl --version
I0912 21:55:42.032585   25247 main.go:141] libmachine: (functional-657409) Calling .GetSSHHostname
I0912 21:55:42.035707   25247 main.go:141] libmachine: (functional-657409) DBG | domain functional-657409 has defined MAC address 52:54:00:38:3d:48 in network mk-functional-657409
I0912 21:55:42.036117   25247 main.go:141] libmachine: (functional-657409) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:3d:48", ip: ""} in network mk-functional-657409: {Iface:virbr1 ExpiryTime:2024-09-12 22:48:01 +0000 UTC Type:0 Mac:52:54:00:38:3d:48 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:functional-657409 Clientid:01:52:54:00:38:3d:48}
I0912 21:55:42.036148   25247 main.go:141] libmachine: (functional-657409) DBG | domain functional-657409 has defined IP address 192.168.39.239 and MAC address 52:54:00:38:3d:48 in network mk-functional-657409
I0912 21:55:42.036295   25247 main.go:141] libmachine: (functional-657409) Calling .GetSSHPort
I0912 21:55:42.036510   25247 main.go:141] libmachine: (functional-657409) Calling .GetSSHKeyPath
I0912 21:55:42.036660   25247 main.go:141] libmachine: (functional-657409) Calling .GetSSHUsername
I0912 21:55:42.036832   25247 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/functional-657409/id_rsa Username:docker}
I0912 21:55:42.151606   25247 ssh_runner.go:195] Run: sudo crictl images --output json
I0912 21:55:42.320395   25247 main.go:141] libmachine: Making call to close driver server
I0912 21:55:42.320410   25247 main.go:141] libmachine: (functional-657409) Calling .Close
I0912 21:55:42.320689   25247 main.go:141] libmachine: (functional-657409) DBG | Closing plugin on server side
I0912 21:55:42.320735   25247 main.go:141] libmachine: Successfully made call to close driver server
I0912 21:55:42.320743   25247 main.go:141] libmachine: Making call to close connection to plugin binary
I0912 21:55:42.320756   25247 main.go:141] libmachine: Making call to close driver server
I0912 21:55:42.320764   25247 main.go:141] libmachine: (functional-657409) Calling .Close
I0912 21:55:42.321066   25247 main.go:141] libmachine: (functional-657409) DBG | Closing plugin on server side
I0912 21:55:42.321085   25247 main.go:141] libmachine: Successfully made call to close driver server
I0912 21:55:42.321116   25247 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.37s)
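As the stderr shows, "image ls" is backed by "sudo crictl images --output json" inside the guest, so the same list can be inspected directly:

minikube -p functional-657409 ssh "sudo crictl images"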

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-657409 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-657409 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
| registry.k8s.io/kube-apiserver          | v1.31.1            | 6bab7719df100 | 95.2MB |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/library/nginx                 | latest             | 39286ab8a5e14 | 192MB  |
| localhost/minikube-local-cache-test     | functional-657409  | b480b9e76883b | 3.33kB |
| registry.k8s.io/etcd                    | 3.5.15-0           | 2e96e5913fc06 | 149MB  |
| registry.k8s.io/kube-scheduler          | v1.31.1            | 9aa1fad941575 | 68.4MB |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| localhost/kicbase/echo-server           | functional-657409  | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/kube-controller-manager | v1.31.1            | 175ffd71cce3d | 89.4MB |
| registry.k8s.io/kube-proxy              | v1.31.1            | 60c005f310ff3 | 92.7MB |
| docker.io/kindest/kindnetd              | v20240813-c6f155d6 | 12968670680f4 | 87.2MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-657409 image ls --format table --alsologtostderr:
I0912 21:55:43.229887   25472 out.go:345] Setting OutFile to fd 1 ...
I0912 21:55:43.229992   25472 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0912 21:55:43.229997   25472 out.go:358] Setting ErrFile to fd 2...
I0912 21:55:43.230002   25472 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0912 21:55:43.230187   25472 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-5891/.minikube/bin
I0912 21:55:43.230729   25472 config.go:182] Loaded profile config "functional-657409": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0912 21:55:43.230820   25472 config.go:182] Loaded profile config "functional-657409": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0912 21:55:43.231164   25472 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0912 21:55:43.231211   25472 main.go:141] libmachine: Launching plugin server for driver kvm2
I0912 21:55:43.246107   25472 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39625
I0912 21:55:43.246564   25472 main.go:141] libmachine: () Calling .GetVersion
I0912 21:55:43.247111   25472 main.go:141] libmachine: Using API Version  1
I0912 21:55:43.247137   25472 main.go:141] libmachine: () Calling .SetConfigRaw
I0912 21:55:43.247421   25472 main.go:141] libmachine: () Calling .GetMachineName
I0912 21:55:43.247587   25472 main.go:141] libmachine: (functional-657409) Calling .GetState
I0912 21:55:43.249263   25472 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0912 21:55:43.249311   25472 main.go:141] libmachine: Launching plugin server for driver kvm2
I0912 21:55:43.263887   25472 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35587
I0912 21:55:43.264338   25472 main.go:141] libmachine: () Calling .GetVersion
I0912 21:55:43.264759   25472 main.go:141] libmachine: Using API Version  1
I0912 21:55:43.264780   25472 main.go:141] libmachine: () Calling .SetConfigRaw
I0912 21:55:43.265099   25472 main.go:141] libmachine: () Calling .GetMachineName
I0912 21:55:43.265271   25472 main.go:141] libmachine: (functional-657409) Calling .DriverName
I0912 21:55:43.265482   25472 ssh_runner.go:195] Run: systemctl --version
I0912 21:55:43.265510   25472 main.go:141] libmachine: (functional-657409) Calling .GetSSHHostname
I0912 21:55:43.268100   25472 main.go:141] libmachine: (functional-657409) DBG | domain functional-657409 has defined MAC address 52:54:00:38:3d:48 in network mk-functional-657409
I0912 21:55:43.268431   25472 main.go:141] libmachine: (functional-657409) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:3d:48", ip: ""} in network mk-functional-657409: {Iface:virbr1 ExpiryTime:2024-09-12 22:48:01 +0000 UTC Type:0 Mac:52:54:00:38:3d:48 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:functional-657409 Clientid:01:52:54:00:38:3d:48}
I0912 21:55:43.268461   25472 main.go:141] libmachine: (functional-657409) DBG | domain functional-657409 has defined IP address 192.168.39.239 and MAC address 52:54:00:38:3d:48 in network mk-functional-657409
I0912 21:55:43.268558   25472 main.go:141] libmachine: (functional-657409) Calling .GetSSHPort
I0912 21:55:43.268735   25472 main.go:141] libmachine: (functional-657409) Calling .GetSSHKeyPath
I0912 21:55:43.268863   25472 main.go:141] libmachine: (functional-657409) Calling .GetSSHUsername
I0912 21:55:43.269081   25472 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/functional-657409/id_rsa Username:docker}
I0912 21:55:43.343821   25472 ssh_runner.go:195] Run: sudo crictl images --output json
I0912 21:55:43.379595   25472 main.go:141] libmachine: Making call to close driver server
I0912 21:55:43.379615   25472 main.go:141] libmachine: (functional-657409) Calling .Close
I0912 21:55:43.379876   25472 main.go:141] libmachine: Successfully made call to close driver server
I0912 21:55:43.379898   25472 main.go:141] libmachine: Making call to close connection to plugin binary
I0912 21:55:43.379913   25472 main.go:141] libmachine: (functional-657409) DBG | Closing plugin on server side
I0912 21:55:43.379919   25472 main.go:141] libmachine: Making call to close driver server
I0912 21:55:43.380019   25472 main.go:141] libmachine: (functional-657409) Calling .Close
I0912 21:55:43.380339   25472 main.go:141] libmachine: Successfully made call to close driver server
I0912 21:55:43.380366   25472 main.go:141] libmachine: Making call to close connection to plugin binary
I0912 21:55:43.380342   25472 main.go:141] libmachine: (functional-657409) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.19s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-657409 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-657409 image ls --format json --alsologtostderr:
[{"id":"39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3","repoDigests":["docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3","docker.io/library/nginx@sha256:88a0a069d5e9865fcaaf8c1e53ba6bf3d8d987b0fdc5e0135fec8ce8567d673e"],"repoTags":["docker.io/library/nginx:latest"],"size":"191853369"},{"id":"b480b9e76883b2edced2d125c9bd41f997265d1f338349c193657333331fbff0","repoDigests":["localhost/minikube-local-cache-test@sha256:bb1def27b55c2dc7e20118f9edd10be481c63f21a5d793ce0df5ca9218c482e6"],"repoTags":["localhost/minikube-local-cache-test:functional-657409"],"size":"3330"},{"id":"60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561","repoDigests":["registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44","registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"92733849"},{"id":"873ed75102791e
5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{
"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":["registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d","registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"149009664"},{"id":"6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee","repoDigests":["registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771","registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"95237600"},{"id":"175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1","registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a
63748"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"89437508"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f","repoDigests":["docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b","docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"87190579"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e","registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["regist
ry.k8s.io/coredns/coredns:v1.11.3"],"size":"63273227"},{"id":"9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b","repoDigests":["registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0","registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"68420934"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":
"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-657409"],"size":"4943877"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pa
use@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-657409 image ls --format json --alsologtostderr:
I0912 21:55:42.971257   25449 out.go:345] Setting OutFile to fd 1 ...
I0912 21:55:42.971389   25449 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0912 21:55:42.971402   25449 out.go:358] Setting ErrFile to fd 2...
I0912 21:55:42.971408   25449 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0912 21:55:42.971581   25449 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-5891/.minikube/bin
I0912 21:55:42.972085   25449 config.go:182] Loaded profile config "functional-657409": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0912 21:55:42.972176   25449 config.go:182] Loaded profile config "functional-657409": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0912 21:55:42.972547   25449 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0912 21:55:42.972593   25449 main.go:141] libmachine: Launching plugin server for driver kvm2
I0912 21:55:42.987861   25449 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34263
I0912 21:55:42.988296   25449 main.go:141] libmachine: () Calling .GetVersion
I0912 21:55:42.988823   25449 main.go:141] libmachine: Using API Version  1
I0912 21:55:42.988858   25449 main.go:141] libmachine: () Calling .SetConfigRaw
I0912 21:55:42.989325   25449 main.go:141] libmachine: () Calling .GetMachineName
I0912 21:55:42.989565   25449 main.go:141] libmachine: (functional-657409) Calling .GetState
I0912 21:55:42.991640   25449 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0912 21:55:42.991693   25449 main.go:141] libmachine: Launching plugin server for driver kvm2
I0912 21:55:43.008736   25449 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34859
I0912 21:55:43.009132   25449 main.go:141] libmachine: () Calling .GetVersion
I0912 21:55:43.009671   25449 main.go:141] libmachine: Using API Version  1
I0912 21:55:43.009694   25449 main.go:141] libmachine: () Calling .SetConfigRaw
I0912 21:55:43.010048   25449 main.go:141] libmachine: () Calling .GetMachineName
I0912 21:55:43.010258   25449 main.go:141] libmachine: (functional-657409) Calling .DriverName
I0912 21:55:43.010470   25449 ssh_runner.go:195] Run: systemctl --version
I0912 21:55:43.010497   25449 main.go:141] libmachine: (functional-657409) Calling .GetSSHHostname
I0912 21:55:43.013622   25449 main.go:141] libmachine: (functional-657409) DBG | domain functional-657409 has defined MAC address 52:54:00:38:3d:48 in network mk-functional-657409
I0912 21:55:43.014036   25449 main.go:141] libmachine: (functional-657409) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:3d:48", ip: ""} in network mk-functional-657409: {Iface:virbr1 ExpiryTime:2024-09-12 22:48:01 +0000 UTC Type:0 Mac:52:54:00:38:3d:48 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:functional-657409 Clientid:01:52:54:00:38:3d:48}
I0912 21:55:43.014058   25449 main.go:141] libmachine: (functional-657409) DBG | domain functional-657409 has defined IP address 192.168.39.239 and MAC address 52:54:00:38:3d:48 in network mk-functional-657409
I0912 21:55:43.014212   25449 main.go:141] libmachine: (functional-657409) Calling .GetSSHPort
I0912 21:55:43.014373   25449 main.go:141] libmachine: (functional-657409) Calling .GetSSHKeyPath
I0912 21:55:43.014524   25449 main.go:141] libmachine: (functional-657409) Calling .GetSSHUsername
I0912 21:55:43.014640   25449 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/functional-657409/id_rsa Username:docker}
I0912 21:55:43.118732   25449 ssh_runner.go:195] Run: sudo crictl images --output json
I0912 21:55:43.185335   25449 main.go:141] libmachine: Making call to close driver server
I0912 21:55:43.185347   25449 main.go:141] libmachine: (functional-657409) Calling .Close
I0912 21:55:43.185584   25449 main.go:141] libmachine: Successfully made call to close driver server
I0912 21:55:43.185600   25449 main.go:141] libmachine: Making call to close connection to plugin binary
I0912 21:55:43.185633   25449 main.go:141] libmachine: Making call to close driver server
I0912 21:55:43.185643   25449 main.go:141] libmachine: (functional-657409) Calling .Close
I0912 21:55:43.185847   25449 main.go:141] libmachine: Successfully made call to close driver server
I0912 21:55:43.185865   25449 main.go:141] libmachine: Making call to close connection to plugin binary
I0912 21:55:43.185930   25449 main.go:141] libmachine: (functional-657409) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-657409 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-657409 image ls --format yaml --alsologtostderr:
- id: 60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44
- registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "92733849"
- id: 9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0
- registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "68420934"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1
- registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "89437508"
- id: b480b9e76883b2edced2d125c9bd41f997265d1f338349c193657333331fbff0
repoDigests:
- localhost/minikube-local-cache-test@sha256:bb1def27b55c2dc7e20118f9edd10be481c63f21a5d793ce0df5ca9218c482e6
repoTags:
- localhost/minikube-local-cache-test:functional-657409
size: "3330"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests:
- registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "149009664"
- id: 12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f
repoDigests:
- docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "87190579"
- id: 39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3
repoDigests:
- docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3
- docker.io/library/nginx@sha256:88a0a069d5e9865fcaaf8c1e53ba6bf3d8d987b0fdc5e0135fec8ce8567d673e
repoTags:
- docker.io/library/nginx:latest
size: "191853369"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771
- registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "95237600"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-657409
size: "4943877"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-657409 image ls --format yaml --alsologtostderr:
I0912 21:55:42.370387   25283 out.go:345] Setting OutFile to fd 1 ...
I0912 21:55:42.370522   25283 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0912 21:55:42.370531   25283 out.go:358] Setting ErrFile to fd 2...
I0912 21:55:42.370538   25283 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0912 21:55:42.370740   25283 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-5891/.minikube/bin
I0912 21:55:42.371308   25283 config.go:182] Loaded profile config "functional-657409": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0912 21:55:42.371447   25283 config.go:182] Loaded profile config "functional-657409": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0912 21:55:42.371879   25283 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0912 21:55:42.371932   25283 main.go:141] libmachine: Launching plugin server for driver kvm2
I0912 21:55:42.387299   25283 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35743
I0912 21:55:42.388004   25283 main.go:141] libmachine: () Calling .GetVersion
I0912 21:55:42.388621   25283 main.go:141] libmachine: Using API Version  1
I0912 21:55:42.388649   25283 main.go:141] libmachine: () Calling .SetConfigRaw
I0912 21:55:42.389012   25283 main.go:141] libmachine: () Calling .GetMachineName
I0912 21:55:42.389273   25283 main.go:141] libmachine: (functional-657409) Calling .GetState
I0912 21:55:42.391436   25283 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0912 21:55:42.391484   25283 main.go:141] libmachine: Launching plugin server for driver kvm2
I0912 21:55:42.408894   25283 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36419
I0912 21:55:42.409404   25283 main.go:141] libmachine: () Calling .GetVersion
I0912 21:55:42.409901   25283 main.go:141] libmachine: Using API Version  1
I0912 21:55:42.409921   25283 main.go:141] libmachine: () Calling .SetConfigRaw
I0912 21:55:42.410226   25283 main.go:141] libmachine: () Calling .GetMachineName
I0912 21:55:42.410525   25283 main.go:141] libmachine: (functional-657409) Calling .DriverName
I0912 21:55:42.410741   25283 ssh_runner.go:195] Run: systemctl --version
I0912 21:55:42.410778   25283 main.go:141] libmachine: (functional-657409) Calling .GetSSHHostname
I0912 21:55:42.413434   25283 main.go:141] libmachine: (functional-657409) DBG | domain functional-657409 has defined MAC address 52:54:00:38:3d:48 in network mk-functional-657409
I0912 21:55:42.413959   25283 main.go:141] libmachine: (functional-657409) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:3d:48", ip: ""} in network mk-functional-657409: {Iface:virbr1 ExpiryTime:2024-09-12 22:48:01 +0000 UTC Type:0 Mac:52:54:00:38:3d:48 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:functional-657409 Clientid:01:52:54:00:38:3d:48}
I0912 21:55:42.413990   25283 main.go:141] libmachine: (functional-657409) DBG | domain functional-657409 has defined IP address 192.168.39.239 and MAC address 52:54:00:38:3d:48 in network mk-functional-657409
I0912 21:55:42.414107   25283 main.go:141] libmachine: (functional-657409) Calling .GetSSHPort
I0912 21:55:42.414260   25283 main.go:141] libmachine: (functional-657409) Calling .GetSSHKeyPath
I0912 21:55:42.414409   25283 main.go:141] libmachine: (functional-657409) Calling .GetSSHUsername
I0912 21:55:42.414556   25283 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/functional-657409/id_rsa Username:docker}
I0912 21:55:42.547147   25283 ssh_runner.go:195] Run: sudo crictl images --output json
I0912 21:55:42.597298   25283 main.go:141] libmachine: Making call to close driver server
I0912 21:55:42.597308   25283 main.go:141] libmachine: (functional-657409) Calling .Close
I0912 21:55:42.597676   25283 main.go:141] libmachine: Successfully made call to close driver server
I0912 21:55:42.597664   25283 main.go:141] libmachine: (functional-657409) DBG | Closing plugin on server side
I0912 21:55:42.597690   25283 main.go:141] libmachine: Making call to close connection to plugin binary
I0912 21:55:42.597742   25283 main.go:141] libmachine: Making call to close driver server
I0912 21:55:42.597751   25283 main.go:141] libmachine: (functional-657409) Calling .Close
I0912 21:55:42.597935   25283 main.go:141] libmachine: Successfully made call to close driver server
I0912 21:55:42.597960   25283 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (4.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-657409 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-657409 ssh pgrep buildkitd: exit status 1 (192.68861ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-657409 image build -t localhost/my-image:functional-657409 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-657409 image build -t localhost/my-image:functional-657409 testdata/build --alsologtostderr: (4.29823833s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-657409 image build -t localhost/my-image:functional-657409 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 3017a840787
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-657409
--> 5d6de0e2e15
Successfully tagged localhost/my-image:functional-657409
5d6de0e2e15a04ab89e6774dc9ec4f29d0bf3f0e017285a20739ee4311f40f52
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-657409 image build -t localhost/my-image:functional-657409 testdata/build --alsologtostderr:
I0912 21:55:42.838741   25413 out.go:345] Setting OutFile to fd 1 ...
I0912 21:55:42.839001   25413 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0912 21:55:42.839016   25413 out.go:358] Setting ErrFile to fd 2...
I0912 21:55:42.839022   25413 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0912 21:55:42.839517   25413 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-5891/.minikube/bin
I0912 21:55:42.840397   25413 config.go:182] Loaded profile config "functional-657409": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0912 21:55:42.840928   25413 config.go:182] Loaded profile config "functional-657409": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0912 21:55:42.841386   25413 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0912 21:55:42.841441   25413 main.go:141] libmachine: Launching plugin server for driver kvm2
I0912 21:55:42.856766   25413 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43003
I0912 21:55:42.857234   25413 main.go:141] libmachine: () Calling .GetVersion
I0912 21:55:42.857822   25413 main.go:141] libmachine: Using API Version  1
I0912 21:55:42.857842   25413 main.go:141] libmachine: () Calling .SetConfigRaw
I0912 21:55:42.858187   25413 main.go:141] libmachine: () Calling .GetMachineName
I0912 21:55:42.858350   25413 main.go:141] libmachine: (functional-657409) Calling .GetState
I0912 21:55:42.860084   25413 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0912 21:55:42.860124   25413 main.go:141] libmachine: Launching plugin server for driver kvm2
I0912 21:55:42.875444   25413 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46659
I0912 21:55:42.875835   25413 main.go:141] libmachine: () Calling .GetVersion
I0912 21:55:42.876397   25413 main.go:141] libmachine: Using API Version  1
I0912 21:55:42.876419   25413 main.go:141] libmachine: () Calling .SetConfigRaw
I0912 21:55:42.876761   25413 main.go:141] libmachine: () Calling .GetMachineName
I0912 21:55:42.876938   25413 main.go:141] libmachine: (functional-657409) Calling .DriverName
I0912 21:55:42.877161   25413 ssh_runner.go:195] Run: systemctl --version
I0912 21:55:42.877196   25413 main.go:141] libmachine: (functional-657409) Calling .GetSSHHostname
I0912 21:55:42.880601   25413 main.go:141] libmachine: (functional-657409) DBG | domain functional-657409 has defined MAC address 52:54:00:38:3d:48 in network mk-functional-657409
I0912 21:55:42.881106   25413 main.go:141] libmachine: (functional-657409) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:3d:48", ip: ""} in network mk-functional-657409: {Iface:virbr1 ExpiryTime:2024-09-12 22:48:01 +0000 UTC Type:0 Mac:52:54:00:38:3d:48 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:functional-657409 Clientid:01:52:54:00:38:3d:48}
I0912 21:55:42.881139   25413 main.go:141] libmachine: (functional-657409) DBG | domain functional-657409 has defined IP address 192.168.39.239 and MAC address 52:54:00:38:3d:48 in network mk-functional-657409
I0912 21:55:42.881480   25413 main.go:141] libmachine: (functional-657409) Calling .GetSSHPort
I0912 21:55:42.881690   25413 main.go:141] libmachine: (functional-657409) Calling .GetSSHKeyPath
I0912 21:55:42.881855   25413 main.go:141] libmachine: (functional-657409) Calling .GetSSHUsername
I0912 21:55:42.882070   25413 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/functional-657409/id_rsa Username:docker}
I0912 21:55:42.969043   25413 build_images.go:161] Building image from path: /tmp/build.1563057856.tar
I0912 21:55:42.969099   25413 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0912 21:55:42.980208   25413 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1563057856.tar
I0912 21:55:42.984227   25413 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1563057856.tar: stat -c "%s %y" /var/lib/minikube/build/build.1563057856.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1563057856.tar': No such file or directory
I0912 21:55:42.984264   25413 ssh_runner.go:362] scp /tmp/build.1563057856.tar --> /var/lib/minikube/build/build.1563057856.tar (3072 bytes)
I0912 21:55:43.010776   25413 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1563057856
I0912 21:55:43.042620   25413 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1563057856 -xf /var/lib/minikube/build/build.1563057856.tar
I0912 21:55:43.062525   25413 crio.go:315] Building image: /var/lib/minikube/build/build.1563057856
I0912 21:55:43.062593   25413 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-657409 /var/lib/minikube/build/build.1563057856 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0912 21:55:47.033800   25413 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-657409 /var/lib/minikube/build/build.1563057856 --cgroup-manager=cgroupfs: (3.971183229s)
I0912 21:55:47.033869   25413 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1563057856
I0912 21:55:47.049185   25413 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1563057856.tar
I0912 21:55:47.089820   25413 build_images.go:217] Built localhost/my-image:functional-657409 from /tmp/build.1563057856.tar
I0912 21:55:47.089867   25413 build_images.go:133] succeeded building to: functional-657409
I0912 21:55:47.089874   25413 build_images.go:134] failed building to: 
I0912 21:55:47.089949   25413 main.go:141] libmachine: Making call to close driver server
I0912 21:55:47.089965   25413 main.go:141] libmachine: (functional-657409) Calling .Close
I0912 21:55:47.090255   25413 main.go:141] libmachine: Successfully made call to close driver server
I0912 21:55:47.090273   25413 main.go:141] libmachine: Making call to close connection to plugin binary
I0912 21:55:47.090307   25413 main.go:141] libmachine: (functional-657409) DBG | Closing plugin on server side
I0912 21:55:47.090375   25413 main.go:141] libmachine: Making call to close driver server
I0912 21:55:47.090389   25413 main.go:141] libmachine: (functional-657409) Calling .Close
I0912 21:55:47.090646   25413 main.go:141] libmachine: Successfully made call to close driver server
I0912 21:55:47.090661   25413 main.go:141] libmachine: (functional-657409) DBG | Closing plugin on server side
I0912 21:55:47.090663   25413 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-657409 image ls
2024/09/12 21:55:48 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.71s)
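A hedged reconstruction of the build context implied by the STEP lines above (the actual files under testdata/build in the minikube repo may differ):

# Dockerfile (reconstructed from the logged build steps):
#   FROM gcr.io/k8s-minikube/busybox
#   RUN true
#   ADD content.txt /
# The build itself is delegated to "sudo podman build" inside the guest, as the stderr shows:
minikube -p functional-657409 image build -t localhost/my-image:functional-657409 testdata/build
minikube -p functional-657409 image ls | grep my-image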

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.724208989s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-657409
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.75s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (12.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-657409 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-657409 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-nvzjm" [ac8d5150-3263-42c5-b50b-5adb13feb4f5] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-nvzjm" [ac8d5150-3263-42c5-b50b-5adb13feb4f5] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 12.003102009s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (12.16s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-657409 image load --daemon kicbase/echo-server:functional-657409 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-657409 image load --daemon kicbase/echo-server:functional-657409 --alsologtostderr: (3.097766468s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-657409 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.35s)
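A hedged sketch of the workflow these ImageCommands tests exercise, with the image names taken from the Setup step above: tag an image in the local Docker daemon, load it into the cluster's CRI-O image store, and confirm the runtime can see it.

docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-657409
minikube -p functional-657409 image load --daemon kicbase/echo-server:functional-657409
minikube -p functional-657409 image ls | grep echo-server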

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-657409 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-657409 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-657409 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-657409 image load --daemon kicbase/echo-server:functional-657409 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-657409 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.86s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-657409
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-657409 image load --daemon kicbase/echo-server:functional-657409 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-657409 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.72s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-657409 image save kicbase/echo-server:functional-657409 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.51s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-657409 image rm kicbase/echo-server:functional-657409 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-657409 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.50s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-657409 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-657409 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.30s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (3.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-657409
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-657409 image save --daemon kicbase/echo-server:functional-657409 --alsologtostderr
functional_test.go:424: (dbg) Done: out/minikube-linux-amd64 -p functional-657409 image save --daemon kicbase/echo-server:functional-657409 --alsologtostderr: (3.429815286s)
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-657409
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (3.47s)
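Taken together, the ImageCommands subtests above walk the full image lifecycle: load from the docker daemon, save to a tarball, remove, reload from the tarball, and save back to the daemon. Condensed into one hand-runnable sequence, roughly, using the tag from this run but a placeholder tarball path:

    # Load the daemon-tagged image into the cluster runtime, save it out, remove it, and reload it.
    out/minikube-linux-amd64 -p functional-657409 image load --daemon kicbase/echo-server:functional-657409
    out/minikube-linux-amd64 -p functional-657409 image save kicbase/echo-server:functional-657409 /tmp/echo-server-save.tar
    out/minikube-linux-amd64 -p functional-657409 image rm kicbase/echo-server:functional-657409
    out/minikube-linux-amd64 -p functional-657409 image load /tmp/echo-server-save.tar
    out/minikube-linux-amd64 -p functional-657409 image save --daemon kicbase/echo-server:functional-657409
    out/minikube-linux-amd64 -p functional-657409 image ls    # run after each step to confirm the image list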

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-657409 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.30s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-657409 service list -o json
functional_test.go:1494: Took "353.705568ms" to run "out/minikube-linux-amd64 -p functional-657409 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.35s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-657409 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.239:30922
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-657409 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.40s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-657409 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.239:30922
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.36s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.29s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (19.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-657409 /tmp/TestFunctionalparallelMountCmdany-port3322342480/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1726178119736514989" to /tmp/TestFunctionalparallelMountCmdany-port3322342480/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1726178119736514989" to /tmp/TestFunctionalparallelMountCmdany-port3322342480/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1726178119736514989" to /tmp/TestFunctionalparallelMountCmdany-port3322342480/001/test-1726178119736514989
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-657409 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-657409 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (196.956431ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-657409 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-657409 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 12 21:55 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 12 21:55 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 12 21:55 test-1726178119736514989
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-657409 ssh cat /mount-9p/test-1726178119736514989
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-657409 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [86c852f3-d561-467e-8964-921a8eae64c8] Pending
helpers_test.go:344: "busybox-mount" [86c852f3-d561-467e-8964-921a8eae64c8] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [86c852f3-d561-467e-8964-921a8eae64c8] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [86c852f3-d561-467e-8964-921a8eae64c8] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 17.012312354s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-657409 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-657409 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-657409 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-657409 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-657409 /tmp/TestFunctionalparallelMountCmdany-port3322342480/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (19.41s)
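For anyone replaying this mount check by hand, the steps above reduce to a short loop; a minimal sketch, with a placeholder profile name and host directory (the test retries the findmnt probe because the mount takes a moment to appear, hence the first non-zero exit above):

    # Start a 9p mount in the background, then verify it from inside the guest.
    out/minikube-linux-amd64 mount -p <profile> /tmp/mount-demo:/mount-9p &
    MOUNT_PID=$!
    out/minikube-linux-amd64 -p <profile> ssh "findmnt -T /mount-9p | grep 9p"   # should report a 9p filesystem
    out/minikube-linux-amd64 -p <profile> ssh -- ls -la /mount-9p                # files written on the host are visible here
    # Tear down: force-unmount inside the guest, then stop the background mount process.
    out/minikube-linux-amd64 -p <profile> ssh "sudo umount -f /mount-9p"
    kill "$MOUNT_PID"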

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "234.672163ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "41.464613ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.28s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "205.353299ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "44.38008ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.25s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-657409 /tmp/TestFunctionalparallelMountCmdspecific-port2166276236/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-657409 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-657409 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (244.032097ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-657409 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-657409 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-657409 /tmp/TestFunctionalparallelMountCmdspecific-port2166276236/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-657409 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-657409 ssh "sudo umount -f /mount-9p": exit status 1 (242.13954ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-657409 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-657409 /tmp/TestFunctionalparallelMountCmdspecific-port2166276236/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.08s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-657409 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4234949297/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-657409 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4234949297/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-657409 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4234949297/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-657409 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-657409 ssh "findmnt -T" /mount1: exit status 1 (299.86322ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-657409 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-657409 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-657409 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-657409 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-657409 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4234949297/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-657409 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4234949297/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-657409 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4234949297/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.70s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-657409
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-657409
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-657409
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (194.03s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-475401 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0912 21:57:07.199170   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-475401 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m13.361101403s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-475401 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (194.03s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (6.76s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-475401 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-475401 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-475401 -- rollout status deployment/busybox: (4.62801073s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-475401 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-475401 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-475401 -- exec busybox-7dff88458-gb2hg -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-475401 -- exec busybox-7dff88458-l2hdm -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-475401 -- exec busybox-7dff88458-t7gjx -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-475401 -- exec busybox-7dff88458-gb2hg -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-475401 -- exec busybox-7dff88458-l2hdm -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-475401 -- exec busybox-7dff88458-t7gjx -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-475401 -- exec busybox-7dff88458-gb2hg -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-475401 -- exec busybox-7dff88458-l2hdm -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-475401 -- exec busybox-7dff88458-t7gjx -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.76s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.21s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-475401 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-475401 -- exec busybox-7dff88458-gb2hg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-475401 -- exec busybox-7dff88458-gb2hg -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-475401 -- exec busybox-7dff88458-l2hdm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-475401 -- exec busybox-7dff88458-l2hdm -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-475401 -- exec busybox-7dff88458-t7gjx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-475401 -- exec busybox-7dff88458-t7gjx -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.21s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (54.93s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-475401 -v=7 --alsologtostderr
E0912 22:00:05.703541   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/functional-657409/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:00:05.709962   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/functional-657409/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:00:05.722097   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/functional-657409/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:00:05.743376   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/functional-657409/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:00:05.785334   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/functional-657409/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:00:05.866811   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/functional-657409/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:00:06.028127   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/functional-657409/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:00:06.349958   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/functional-657409/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:00:06.991696   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/functional-657409/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:00:08.273025   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/functional-657409/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:00:10.835253   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/functional-657409/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-475401 -v=7 --alsologtostderr: (54.130924022s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-475401 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (54.93s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-475401 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.54s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.54s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (12.68s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-475401 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-475401 cp testdata/cp-test.txt ha-475401:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-475401 ssh -n ha-475401 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-475401 cp ha-475401:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1750943762/001/cp-test_ha-475401.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-475401 ssh -n ha-475401 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-475401 cp ha-475401:/home/docker/cp-test.txt ha-475401-m02:/home/docker/cp-test_ha-475401_ha-475401-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-475401 ssh -n ha-475401 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-475401 ssh -n ha-475401-m02 "sudo cat /home/docker/cp-test_ha-475401_ha-475401-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-475401 cp ha-475401:/home/docker/cp-test.txt ha-475401-m03:/home/docker/cp-test_ha-475401_ha-475401-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-475401 ssh -n ha-475401 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-475401 ssh -n ha-475401-m03 "sudo cat /home/docker/cp-test_ha-475401_ha-475401-m03.txt"
E0912 22:00:15.957128   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/functional-657409/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-475401 cp ha-475401:/home/docker/cp-test.txt ha-475401-m04:/home/docker/cp-test_ha-475401_ha-475401-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-475401 ssh -n ha-475401 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-475401 ssh -n ha-475401-m04 "sudo cat /home/docker/cp-test_ha-475401_ha-475401-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-475401 cp testdata/cp-test.txt ha-475401-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-475401 ssh -n ha-475401-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-475401 cp ha-475401-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1750943762/001/cp-test_ha-475401-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-475401 ssh -n ha-475401-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-475401 cp ha-475401-m02:/home/docker/cp-test.txt ha-475401:/home/docker/cp-test_ha-475401-m02_ha-475401.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-475401 ssh -n ha-475401-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-475401 ssh -n ha-475401 "sudo cat /home/docker/cp-test_ha-475401-m02_ha-475401.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-475401 cp ha-475401-m02:/home/docker/cp-test.txt ha-475401-m03:/home/docker/cp-test_ha-475401-m02_ha-475401-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-475401 ssh -n ha-475401-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-475401 ssh -n ha-475401-m03 "sudo cat /home/docker/cp-test_ha-475401-m02_ha-475401-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-475401 cp ha-475401-m02:/home/docker/cp-test.txt ha-475401-m04:/home/docker/cp-test_ha-475401-m02_ha-475401-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-475401 ssh -n ha-475401-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-475401 ssh -n ha-475401-m04 "sudo cat /home/docker/cp-test_ha-475401-m02_ha-475401-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-475401 cp testdata/cp-test.txt ha-475401-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-475401 ssh -n ha-475401-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-475401 cp ha-475401-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1750943762/001/cp-test_ha-475401-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-475401 ssh -n ha-475401-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-475401 cp ha-475401-m03:/home/docker/cp-test.txt ha-475401:/home/docker/cp-test_ha-475401-m03_ha-475401.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-475401 ssh -n ha-475401-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-475401 ssh -n ha-475401 "sudo cat /home/docker/cp-test_ha-475401-m03_ha-475401.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-475401 cp ha-475401-m03:/home/docker/cp-test.txt ha-475401-m02:/home/docker/cp-test_ha-475401-m03_ha-475401-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-475401 ssh -n ha-475401-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-475401 ssh -n ha-475401-m02 "sudo cat /home/docker/cp-test_ha-475401-m03_ha-475401-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-475401 cp ha-475401-m03:/home/docker/cp-test.txt ha-475401-m04:/home/docker/cp-test_ha-475401-m03_ha-475401-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-475401 ssh -n ha-475401-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-475401 ssh -n ha-475401-m04 "sudo cat /home/docker/cp-test_ha-475401-m03_ha-475401-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-475401 cp testdata/cp-test.txt ha-475401-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-475401 ssh -n ha-475401-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-475401 cp ha-475401-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1750943762/001/cp-test_ha-475401-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-475401 ssh -n ha-475401-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-475401 cp ha-475401-m04:/home/docker/cp-test.txt ha-475401:/home/docker/cp-test_ha-475401-m04_ha-475401.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-475401 ssh -n ha-475401-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-475401 ssh -n ha-475401 "sudo cat /home/docker/cp-test_ha-475401-m04_ha-475401.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-475401 cp ha-475401-m04:/home/docker/cp-test.txt ha-475401-m02:/home/docker/cp-test_ha-475401-m04_ha-475401-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-475401 ssh -n ha-475401-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-475401 ssh -n ha-475401-m02 "sudo cat /home/docker/cp-test_ha-475401-m04_ha-475401-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-475401 cp ha-475401-m04:/home/docker/cp-test.txt ha-475401-m03:/home/docker/cp-test_ha-475401-m04_ha-475401-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-475401 ssh -n ha-475401-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-475401 ssh -n ha-475401-m03 "sudo cat /home/docker/cp-test_ha-475401-m04_ha-475401-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.68s)
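The long run of invocations above repeats one round-trip pattern for every pair of nodes; condensed, using the profile and node names from this run, it is roughly:

    # Copy a file from the host into one node, read it back, then copy it across to a sibling node.
    out/minikube-linux-amd64 -p ha-475401 cp testdata/cp-test.txt ha-475401-m02:/home/docker/cp-test.txt
    out/minikube-linux-amd64 -p ha-475401 ssh -n ha-475401-m02 "sudo cat /home/docker/cp-test.txt"
    out/minikube-linux-amd64 -p ha-475401 cp ha-475401-m02:/home/docker/cp-test.txt ha-475401-m03:/home/docker/cp-test_ha-475401-m02_ha-475401-m03.txt
    out/minikube-linux-amd64 -p ha-475401 ssh -n ha-475401-m03 "sudo cat /home/docker/cp-test_ha-475401-m02_ha-475401-m03.txt"

Each "sudo cat" is simply how the helper confirms the copied file arrived with the expected contents.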

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.48s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E0912 22:02:49.566924   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/functional-657409/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.481728508s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.48s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.39s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.39s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (16.71s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-475401 node delete m03 -v=7 --alsologtostderr
E0912 22:10:05.704038   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/functional-657409/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-475401 node delete m03 -v=7 --alsologtostderr: (15.981770533s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-475401 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (16.71s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.36s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.36s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (379.96s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-475401 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0912 22:15:05.703685   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/functional-657409/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:16:28.770706   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/functional-657409/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:17:07.200089   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-475401 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (6m19.166658187s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-475401 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (379.96s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.38s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (74.83s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-475401 --control-plane -v=7 --alsologtostderr
E0912 22:20:05.704557   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/functional-657409/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:20:10.269601   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-475401 --control-plane -v=7 --alsologtostderr: (1m14.007803473s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-475401 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (74.83s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.52s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.52s)

                                                
                                    
TestJSONOutput/start/Command (48.17s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-369645 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-369645 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (48.174402809s)
--- PASS: TestJSONOutput/start/Command (48.17s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.68s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-369645 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.68s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.59s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-369645 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.59s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (6.67s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-369645 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-369645 --output=json --user=testUser: (6.672403086s)
--- PASS: TestJSONOutput/stop/Command (6.67s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.19s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-120400 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-120400 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (61.063434ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"cdc06f31-497b-446e-b248-75cd134ae637","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-120400] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"a6cfaa9b-447d-4827-bc52-4235ee4499ed","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19616"}}
	{"specversion":"1.0","id":"df3c7e3c-1174-4752-b5c6-631fed27f6c8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e23586d2-d363-46e4-8305-c980ef089e17","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19616-5891/kubeconfig"}}
	{"specversion":"1.0","id":"07f9a77d-7f25-4377-be4b-c8582ce352f3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19616-5891/.minikube"}}
	{"specversion":"1.0","id":"d315f99d-e066-4c53-98f8-895971a1571a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"aa3c3b36-dbcb-4150-91ba-6c4d113bda87","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"6b3e5958-2780-4516-a137-b2c3e53d80ec","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-120400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-120400
--- PASS: TestErrorJSONOutput (0.19s)
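The stdout captured above is minikube's --output=json stream: one CloudEvents-style JSON object per line, with the failure carried in an io.k8s.sigs.minikube.error event. As a minimal illustration (not minikube's own types), the following Go sketch decodes one of the lines shown above using only the fields visible in this log:

package main

import (
	"encoding/json"
	"fmt"
)

// cloudEvent mirrors only the fields visible in the log lines above;
// it is an illustrative struct, not a type taken from minikube.
type cloudEvent struct {
	SpecVersion     string            `json:"specversion"`
	ID              string            `json:"id"`
	Source          string            `json:"source"`
	Type            string            `json:"type"`
	DataContentType string            `json:"datacontenttype"`
	Data            map[string]string `json:"data"`
}

func main() {
	// The final event from the stdout block above, abridged to the data fields used here.
	line := `{"specversion":"1.0","id":"6b3e5958-2780-4516-a137-b2c3e53d80ec","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"exitcode":"56","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS"}}`

	var ev cloudEvent
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		panic(err)
	}
	// Prints: io.k8s.sigs.minikube.error DRV_UNSUPPORTED_OS exit=56
	fmt.Println(ev.Type, ev.Data["name"], "exit="+ev.Data["exitcode"])
}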

                                                
                                    
x
+
TestMainNoArgs (0.04s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
x
+
TestMinikubeProfile (88.71s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-371512 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-371512 --driver=kvm2  --container-runtime=crio: (42.921070392s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-374956 --driver=kvm2  --container-runtime=crio
E0912 22:22:07.200088   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-374956 --driver=kvm2  --container-runtime=crio: (43.428928114s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-371512
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-374956
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-374956" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-374956
helpers_test.go:175: Cleaning up "first-371512" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-371512
--- PASS: TestMinikubeProfile (88.71s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (27.93s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-768237 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-768237 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.934220731s)
--- PASS: TestMountStart/serial/StartWithMountFirst (27.93s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-768237 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-768237 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.37s)
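The two ssh commands above verify the host mount from inside the guest: ls confirms /minikube-host is reachable, and mount | grep 9p confirms a 9p filesystem is actually mounted. A rough Go sketch of the same check, shelling out to the minikube binary (binary path and profile name copied from this log; this is an illustration, not the test's real helper):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// has9pMount runs `<binary> -p <profile> ssh -- mount` and reports whether
// any mounted filesystem is of type 9p, mirroring the grep check above.
func has9pMount(binary, profile string) (bool, error) {
	out, err := exec.Command(binary, "-p", profile, "ssh", "--", "mount").CombinedOutput()
	if err != nil {
		return false, fmt.Errorf("ssh mount failed: %v\n%s", err, out)
	}
	return strings.Contains(string(out), "9p"), nil
}

func main() {
	ok, err := has9pMount("out/minikube-linux-amd64", "mount-start-1-768237")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("9p mount present:", ok)
}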

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (27.88s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-784708 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-784708 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.878816719s)
--- PASS: TestMountStart/serial/StartWithMountSecond (27.88s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-784708 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-784708 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.36s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (0.7s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-768237 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.70s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-784708 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-784708 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.36s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.27s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-784708
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-784708: (1.271682364s)
--- PASS: TestMountStart/serial/Stop (1.27s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (23.27s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-784708
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-784708: (22.271421467s)
--- PASS: TestMountStart/serial/RestartStopped (23.27s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-784708 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-784708 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.36s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (112.78s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-768483 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0912 22:25:05.704490   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/functional-657409/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-768483 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m52.38732759s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-768483 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (112.78s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (5.71s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-768483 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-768483 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-768483 -- rollout status deployment/busybox: (4.322940646s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-768483 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-768483 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-768483 -- exec busybox-7dff88458-2jcd4 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-768483 -- exec busybox-7dff88458-p7bhb -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-768483 -- exec busybox-7dff88458-2jcd4 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-768483 -- exec busybox-7dff88458-p7bhb -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-768483 -- exec busybox-7dff88458-2jcd4 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-768483 -- exec busybox-7dff88458-p7bhb -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.71s)

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.74s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-768483 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-768483 -- exec busybox-7dff88458-2jcd4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-768483 -- exec busybox-7dff88458-2jcd4 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-768483 -- exec busybox-7dff88458-p7bhb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-768483 -- exec busybox-7dff88458-p7bhb -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.74s)

                                                
                                    
x
+
TestMultiNode/serial/AddNode (47.29s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-768483 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-768483 -v 3 --alsologtostderr: (46.729066187s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-768483 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (47.29s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-768483 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.22s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (7.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-768483 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-768483 cp testdata/cp-test.txt multinode-768483:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-768483 ssh -n multinode-768483 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-768483 cp multinode-768483:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3696931795/001/cp-test_multinode-768483.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-768483 ssh -n multinode-768483 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-768483 cp multinode-768483:/home/docker/cp-test.txt multinode-768483-m02:/home/docker/cp-test_multinode-768483_multinode-768483-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-768483 ssh -n multinode-768483 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-768483 ssh -n multinode-768483-m02 "sudo cat /home/docker/cp-test_multinode-768483_multinode-768483-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-768483 cp multinode-768483:/home/docker/cp-test.txt multinode-768483-m03:/home/docker/cp-test_multinode-768483_multinode-768483-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-768483 ssh -n multinode-768483 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-768483 ssh -n multinode-768483-m03 "sudo cat /home/docker/cp-test_multinode-768483_multinode-768483-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-768483 cp testdata/cp-test.txt multinode-768483-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-768483 ssh -n multinode-768483-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-768483 cp multinode-768483-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3696931795/001/cp-test_multinode-768483-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-768483 ssh -n multinode-768483-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-768483 cp multinode-768483-m02:/home/docker/cp-test.txt multinode-768483:/home/docker/cp-test_multinode-768483-m02_multinode-768483.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-768483 ssh -n multinode-768483-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-768483 ssh -n multinode-768483 "sudo cat /home/docker/cp-test_multinode-768483-m02_multinode-768483.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-768483 cp multinode-768483-m02:/home/docker/cp-test.txt multinode-768483-m03:/home/docker/cp-test_multinode-768483-m02_multinode-768483-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-768483 ssh -n multinode-768483-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-768483 ssh -n multinode-768483-m03 "sudo cat /home/docker/cp-test_multinode-768483-m02_multinode-768483-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-768483 cp testdata/cp-test.txt multinode-768483-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-768483 ssh -n multinode-768483-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-768483 cp multinode-768483-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3696931795/001/cp-test_multinode-768483-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-768483 ssh -n multinode-768483-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-768483 cp multinode-768483-m03:/home/docker/cp-test.txt multinode-768483:/home/docker/cp-test_multinode-768483-m03_multinode-768483.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-768483 ssh -n multinode-768483-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-768483 ssh -n multinode-768483 "sudo cat /home/docker/cp-test_multinode-768483-m03_multinode-768483.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-768483 cp multinode-768483-m03:/home/docker/cp-test.txt multinode-768483-m02:/home/docker/cp-test_multinode-768483-m03_multinode-768483-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-768483 ssh -n multinode-768483-m03 "sudo cat /home/docker/cp-test.txt"
E0912 22:27:07.199627   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-768483 ssh -n multinode-768483-m02 "sudo cat /home/docker/cp-test_multinode-768483-m03_multinode-768483-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.10s)
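Each cp above is followed by an ssh ... sudo cat read-back, so the test is effectively doing copy-and-verify round trips between the control plane and the worker nodes. A hedged Go sketch of a single such round trip (binary path, profile, node name and guest path copied from the log; the comparison logic is illustrative):

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// run invokes the minikube binary from this log with the given arguments
// and returns its combined output, wrapping any non-zero exit in an error.
func run(args ...string) ([]byte, error) {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	if err != nil {
		return nil, fmt.Errorf("%v: %v\n%s", args, err, out)
	}
	return out, nil
}

func main() {
	want, err := os.ReadFile("testdata/cp-test.txt")
	if err != nil {
		panic(err)
	}
	// Copy the file onto the worker node, then read it back over SSH.
	if _, err := run("-p", "multinode-768483", "cp", "testdata/cp-test.txt",
		"multinode-768483-m02:/home/docker/cp-test.txt"); err != nil {
		panic(err)
	}
	got, err := run("-p", "multinode-768483", "ssh", "-n", "multinode-768483-m02",
		"sudo cat /home/docker/cp-test.txt")
	if err != nil {
		panic(err)
	}
	fmt.Println("round trip ok:", bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)))
}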

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-768483 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-768483 node stop m03: (1.380647757s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-768483 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-768483 status: exit status 7 (420.05963ms)

                                                
                                                
-- stdout --
	multinode-768483
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-768483-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-768483-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-768483 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-768483 status --alsologtostderr: exit status 7 (411.245237ms)

                                                
                                                
-- stdout --
	multinode-768483
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-768483-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-768483-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0912 22:27:09.310239   43234 out.go:345] Setting OutFile to fd 1 ...
	I0912 22:27:09.310492   43234 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 22:27:09.310501   43234 out.go:358] Setting ErrFile to fd 2...
	I0912 22:27:09.310505   43234 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 22:27:09.310684   43234 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-5891/.minikube/bin
	I0912 22:27:09.310838   43234 out.go:352] Setting JSON to false
	I0912 22:27:09.310866   43234 mustload.go:65] Loading cluster: multinode-768483
	I0912 22:27:09.310972   43234 notify.go:220] Checking for updates...
	I0912 22:27:09.311203   43234 config.go:182] Loaded profile config "multinode-768483": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 22:27:09.311219   43234 status.go:255] checking status of multinode-768483 ...
	I0912 22:27:09.311610   43234 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:27:09.311649   43234 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:27:09.330193   43234 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36537
	I0912 22:27:09.330812   43234 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:27:09.331455   43234 main.go:141] libmachine: Using API Version  1
	I0912 22:27:09.331494   43234 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:27:09.331813   43234 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:27:09.332004   43234 main.go:141] libmachine: (multinode-768483) Calling .GetState
	I0912 22:27:09.333978   43234 status.go:330] multinode-768483 host status = "Running" (err=<nil>)
	I0912 22:27:09.333998   43234 host.go:66] Checking if "multinode-768483" exists ...
	I0912 22:27:09.334321   43234 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:27:09.334364   43234 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:27:09.350210   43234 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45539
	I0912 22:27:09.350641   43234 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:27:09.351100   43234 main.go:141] libmachine: Using API Version  1
	I0912 22:27:09.351122   43234 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:27:09.351439   43234 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:27:09.351628   43234 main.go:141] libmachine: (multinode-768483) Calling .GetIP
	I0912 22:27:09.354123   43234 main.go:141] libmachine: (multinode-768483) DBG | domain multinode-768483 has defined MAC address 52:54:00:e5:c3:ae in network mk-multinode-768483
	I0912 22:27:09.354627   43234 main.go:141] libmachine: (multinode-768483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:c3:ae", ip: ""} in network mk-multinode-768483: {Iface:virbr1 ExpiryTime:2024-09-12 23:24:27 +0000 UTC Type:0 Mac:52:54:00:e5:c3:ae Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:multinode-768483 Clientid:01:52:54:00:e5:c3:ae}
	I0912 22:27:09.354655   43234 main.go:141] libmachine: (multinode-768483) DBG | domain multinode-768483 has defined IP address 192.168.39.28 and MAC address 52:54:00:e5:c3:ae in network mk-multinode-768483
	I0912 22:27:09.354768   43234 host.go:66] Checking if "multinode-768483" exists ...
	I0912 22:27:09.355039   43234 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:27:09.355076   43234 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:27:09.370192   43234 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37911
	I0912 22:27:09.370749   43234 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:27:09.371228   43234 main.go:141] libmachine: Using API Version  1
	I0912 22:27:09.371247   43234 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:27:09.371608   43234 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:27:09.371820   43234 main.go:141] libmachine: (multinode-768483) Calling .DriverName
	I0912 22:27:09.372049   43234 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0912 22:27:09.372074   43234 main.go:141] libmachine: (multinode-768483) Calling .GetSSHHostname
	I0912 22:27:09.374933   43234 main.go:141] libmachine: (multinode-768483) DBG | domain multinode-768483 has defined MAC address 52:54:00:e5:c3:ae in network mk-multinode-768483
	I0912 22:27:09.375396   43234 main.go:141] libmachine: (multinode-768483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:c3:ae", ip: ""} in network mk-multinode-768483: {Iface:virbr1 ExpiryTime:2024-09-12 23:24:27 +0000 UTC Type:0 Mac:52:54:00:e5:c3:ae Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:multinode-768483 Clientid:01:52:54:00:e5:c3:ae}
	I0912 22:27:09.375434   43234 main.go:141] libmachine: (multinode-768483) DBG | domain multinode-768483 has defined IP address 192.168.39.28 and MAC address 52:54:00:e5:c3:ae in network mk-multinode-768483
	I0912 22:27:09.375572   43234 main.go:141] libmachine: (multinode-768483) Calling .GetSSHPort
	I0912 22:27:09.375739   43234 main.go:141] libmachine: (multinode-768483) Calling .GetSSHKeyPath
	I0912 22:27:09.375882   43234 main.go:141] libmachine: (multinode-768483) Calling .GetSSHUsername
	I0912 22:27:09.376028   43234 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/multinode-768483/id_rsa Username:docker}
	I0912 22:27:09.460325   43234 ssh_runner.go:195] Run: systemctl --version
	I0912 22:27:09.466236   43234 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 22:27:09.480704   43234 kubeconfig.go:125] found "multinode-768483" server: "https://192.168.39.28:8443"
	I0912 22:27:09.480737   43234 api_server.go:166] Checking apiserver status ...
	I0912 22:27:09.480769   43234 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 22:27:09.493499   43234 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1102/cgroup
	W0912 22:27:09.502687   43234 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1102/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0912 22:27:09.502747   43234 ssh_runner.go:195] Run: ls
	I0912 22:27:09.507297   43234 api_server.go:253] Checking apiserver healthz at https://192.168.39.28:8443/healthz ...
	I0912 22:27:09.511342   43234 api_server.go:279] https://192.168.39.28:8443/healthz returned 200:
	ok
	I0912 22:27:09.511367   43234 status.go:422] multinode-768483 apiserver status = Running (err=<nil>)
	I0912 22:27:09.511379   43234 status.go:257] multinode-768483 status: &{Name:multinode-768483 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0912 22:27:09.511401   43234 status.go:255] checking status of multinode-768483-m02 ...
	I0912 22:27:09.511702   43234 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:27:09.511739   43234 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:27:09.526980   43234 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34359
	I0912 22:27:09.527456   43234 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:27:09.527919   43234 main.go:141] libmachine: Using API Version  1
	I0912 22:27:09.527938   43234 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:27:09.528262   43234 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:27:09.528464   43234 main.go:141] libmachine: (multinode-768483-m02) Calling .GetState
	I0912 22:27:09.530148   43234 status.go:330] multinode-768483-m02 host status = "Running" (err=<nil>)
	I0912 22:27:09.530167   43234 host.go:66] Checking if "multinode-768483-m02" exists ...
	I0912 22:27:09.530543   43234 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:27:09.530601   43234 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:27:09.545341   43234 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41833
	I0912 22:27:09.545711   43234 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:27:09.546089   43234 main.go:141] libmachine: Using API Version  1
	I0912 22:27:09.546131   43234 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:27:09.546427   43234 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:27:09.546631   43234 main.go:141] libmachine: (multinode-768483-m02) Calling .GetIP
	I0912 22:27:09.549321   43234 main.go:141] libmachine: (multinode-768483-m02) DBG | domain multinode-768483-m02 has defined MAC address 52:54:00:81:b8:ed in network mk-multinode-768483
	I0912 22:27:09.549733   43234 main.go:141] libmachine: (multinode-768483-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:b8:ed", ip: ""} in network mk-multinode-768483: {Iface:virbr1 ExpiryTime:2024-09-12 23:25:26 +0000 UTC Type:0 Mac:52:54:00:81:b8:ed Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:multinode-768483-m02 Clientid:01:52:54:00:81:b8:ed}
	I0912 22:27:09.549772   43234 main.go:141] libmachine: (multinode-768483-m02) DBG | domain multinode-768483-m02 has defined IP address 192.168.39.230 and MAC address 52:54:00:81:b8:ed in network mk-multinode-768483
	I0912 22:27:09.549872   43234 host.go:66] Checking if "multinode-768483-m02" exists ...
	I0912 22:27:09.550272   43234 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:27:09.550323   43234 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:27:09.565027   43234 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45625
	I0912 22:27:09.565566   43234 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:27:09.566064   43234 main.go:141] libmachine: Using API Version  1
	I0912 22:27:09.566085   43234 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:27:09.566379   43234 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:27:09.566572   43234 main.go:141] libmachine: (multinode-768483-m02) Calling .DriverName
	I0912 22:27:09.566780   43234 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0912 22:27:09.566802   43234 main.go:141] libmachine: (multinode-768483-m02) Calling .GetSSHHostname
	I0912 22:27:09.569282   43234 main.go:141] libmachine: (multinode-768483-m02) DBG | domain multinode-768483-m02 has defined MAC address 52:54:00:81:b8:ed in network mk-multinode-768483
	I0912 22:27:09.569694   43234 main.go:141] libmachine: (multinode-768483-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:b8:ed", ip: ""} in network mk-multinode-768483: {Iface:virbr1 ExpiryTime:2024-09-12 23:25:26 +0000 UTC Type:0 Mac:52:54:00:81:b8:ed Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:multinode-768483-m02 Clientid:01:52:54:00:81:b8:ed}
	I0912 22:27:09.569713   43234 main.go:141] libmachine: (multinode-768483-m02) DBG | domain multinode-768483-m02 has defined IP address 192.168.39.230 and MAC address 52:54:00:81:b8:ed in network mk-multinode-768483
	I0912 22:27:09.569836   43234 main.go:141] libmachine: (multinode-768483-m02) Calling .GetSSHPort
	I0912 22:27:09.570016   43234 main.go:141] libmachine: (multinode-768483-m02) Calling .GetSSHKeyPath
	I0912 22:27:09.570251   43234 main.go:141] libmachine: (multinode-768483-m02) Calling .GetSSHUsername
	I0912 22:27:09.570415   43234 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19616-5891/.minikube/machines/multinode-768483-m02/id_rsa Username:docker}
	I0912 22:27:09.648222   43234 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 22:27:09.661198   43234 status.go:257] multinode-768483-m02 status: &{Name:multinode-768483-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0912 22:27:09.661241   43234 status.go:255] checking status of multinode-768483-m03 ...
	I0912 22:27:09.661563   43234 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0912 22:27:09.661604   43234 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 22:27:09.676765   43234 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40383
	I0912 22:27:09.677171   43234 main.go:141] libmachine: () Calling .GetVersion
	I0912 22:27:09.677660   43234 main.go:141] libmachine: Using API Version  1
	I0912 22:27:09.677681   43234 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 22:27:09.678001   43234 main.go:141] libmachine: () Calling .GetMachineName
	I0912 22:27:09.678196   43234 main.go:141] libmachine: (multinode-768483-m03) Calling .GetState
	I0912 22:27:09.679924   43234 status.go:330] multinode-768483-m03 host status = "Stopped" (err=<nil>)
	I0912 22:27:09.679939   43234 status.go:343] host is not running, skipping remaining checks
	I0912 22:27:09.679947   43234 status.go:257] multinode-768483-m03 status: &{Name:multinode-768483-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.21s)
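Both status invocations above return exit status 7 while still printing per-node details, which is how the test distinguishes "a node is stopped" from a hard failure. A hedged Go sketch of that interpretation (binary path and profile copied from the log; treating code 7 as "partially stopped" follows the behaviour shown in this run, not a documented contract):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "multinode-768483", "status")
	out, err := cmd.CombinedOutput()

	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("all nodes running")
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 7:
		// Status still prints per-node details on code 7 (see the stdout above).
		fmt.Printf("some nodes stopped:\n%s", out)
	default:
		fmt.Println("status failed:", err)
	}
}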

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (38.89s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-768483 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-768483 node start m03 -v=7 --alsologtostderr: (38.27953785s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-768483 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (38.89s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (2s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-768483 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-768483 node delete m03: (1.479088825s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-768483 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.00s)

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (177.03s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-768483 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0912 22:36:50.271181   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:37:07.199705   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-768483 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m56.522410009s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-768483 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (177.03s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (42.83s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-768483
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-768483-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-768483-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (60.869562ms)

                                                
                                                
-- stdout --
	* [multinode-768483-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19616
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19616-5891/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19616-5891/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-768483-m02' is duplicated with machine name 'multinode-768483-m02' in profile 'multinode-768483'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-768483-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-768483-m03 --driver=kvm2  --container-runtime=crio: (41.531181448s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-768483
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-768483: exit status 80 (200.614948ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-768483 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-768483-m03 already exists in multinode-768483-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-768483-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (42.83s)

                                                
                                    
x
+
TestScheduledStopUnix (110.18s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-595344 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-595344 --memory=2048 --driver=kvm2  --container-runtime=crio: (38.59828377s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-595344 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-595344 -n scheduled-stop-595344
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-595344 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-595344 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-595344 -n scheduled-stop-595344
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-595344
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-595344 --schedule 15s
E0912 22:45:05.704055   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/functional-657409/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-595344
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-595344: exit status 7 (61.412978ms)

                                                
                                                
-- stdout --
	scheduled-stop-595344
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-595344 -n scheduled-stop-595344
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-595344 -n scheduled-stop-595344: exit status 7 (61.766298ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-595344" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-595344
--- PASS: TestScheduledStopUnix (110.18s)

                                                
                                    
x
+
TestRunningBinaryUpgrade (210.64s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2311020733 start -p running-upgrade-753974 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2311020733 start -p running-upgrade-753974 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m20.972840733s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-753974 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-753974 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m5.595947173s)
helpers_test.go:175: Cleaning up "running-upgrade-753974" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-753974
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-753974: (1.522598065s)
--- PASS: TestRunningBinaryUpgrade (210.64s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (2.89s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.89s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (170.73s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3411802434 start -p stopped-upgrade-645192 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E0912 22:47:07.199652   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3411802434 start -p stopped-upgrade-645192 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m39.516214632s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3411802434 -p stopped-upgrade-645192 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3411802434 -p stopped-upgrade-645192 stop: (1.630747688s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-645192 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-645192 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m9.582162232s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (170.73s)

                                                
                                    
x
+
TestPause/serial/Start (72.62s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-531966 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-531966 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m12.623153054s)
--- PASS: TestPause/serial/Start (72.62s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (38.5s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-531966 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-531966 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (38.479496866s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (38.50s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.94s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-645192
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.94s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-204793 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-204793 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (70.035468ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-204793] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19616
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19616-5891/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19616-5891/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (45.72s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-204793 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-204793 --driver=kvm2  --container-runtime=crio: (45.454051856s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-204793 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (45.72s)

                                                
                                    
x
+
TestPause/serial/Pause (0.72s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-531966 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.72s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.25s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-531966 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-531966 --output=json --layout=cluster: exit status 2 (247.889382ms)

                                                
                                                
-- stdout --
	{"Name":"pause-531966","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-531966","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.25s)
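The --output=json --layout=cluster document above encodes state as HTTP-like status codes (418 Paused, 200 OK, 405 Stopped) for the cluster, each node, and each component, and the command itself exits 2 because the cluster is paused. A Go sketch decoding just the fields visible in that line (illustrative structs, not minikube's own schema types; the sample JSON is the log line trimmed to those fields):

package main

import (
	"encoding/json"
	"fmt"
)

type component struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
}

type node struct {
	Name       string               `json:"Name"`
	StatusCode int                  `json:"StatusCode"`
	StatusName string               `json:"StatusName"`
	Components map[string]component `json:"Components"`
}

type clusterStatus struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
	Nodes      []node `json:"Nodes"`
}

func main() {
	// The stdout line above, trimmed to the fields decoded here.
	raw := `{"Name":"pause-531966","StatusCode":418,"StatusName":"Paused","Nodes":[{"Name":"pause-531966","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}`

	var cs clusterStatus
	if err := json.Unmarshal([]byte(raw), &cs); err != nil {
		panic(err)
	}
	// Prints: pause-531966 Paused apiserver=Paused kubelet=Stopped
	fmt.Println(cs.Name, cs.StatusName,
		"apiserver="+cs.Nodes[0].Components["apiserver"].StatusName,
		"kubelet="+cs.Nodes[0].Components["kubelet"].StatusName)
}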

                                                
                                    
x
+
TestPause/serial/Unpause (0.69s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-531966 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.69s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (0.88s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-531966 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.88s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (1.04s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-531966 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-531966 --alsologtostderr -v=5: (1.041061769s)
--- PASS: TestPause/serial/DeletePaused (1.04s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (3.21s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.208808256s)
--- PASS: TestPause/serial/VerifyDeletedResources (3.21s)
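This step passes when the deleted profile no longer appears in the JSON profile listing. A hedged way to express the same check on the command line; the .valid[].Name path and the use of jq are assumptions about the listing format, not taken from the test code:

	# List remaining profiles and confirm pause-531966 is gone; jq is assumed to be available.
	out/minikube-linux-amd64 profile list --output json \
	  | jq -r '.valid[].Name' \
	  | grep -qx pause-531966 && echo "profile still present" || echo "profile deleted"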

                                                
                                    
x
+
TestNetworkPlugins/group/false (5.69s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-938961 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-938961 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (1.998960505s)

                                                
                                                
-- stdout --
	* [false-938961] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19616
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19616-5891/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19616-5891/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0912 22:49:17.337160   53675 out.go:345] Setting OutFile to fd 1 ...
	I0912 22:49:17.337306   53675 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 22:49:17.337319   53675 out.go:358] Setting ErrFile to fd 2...
	I0912 22:49:17.337325   53675 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 22:49:17.337635   53675 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-5891/.minikube/bin
	I0912 22:49:17.338482   53675 out.go:352] Setting JSON to false
	I0912 22:49:17.339855   53675 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5499,"bootTime":1726175858,"procs":221,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0912 22:49:17.339960   53675 start.go:139] virtualization: kvm guest
	I0912 22:49:17.422194   53675 out.go:177] * [false-938961] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0912 22:49:17.552194   53675 notify.go:220] Checking for updates...
	I0912 22:49:17.733303   53675 out.go:177]   - MINIKUBE_LOCATION=19616
	I0912 22:49:17.860744   53675 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 22:49:17.982038   53675 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19616-5891/kubeconfig
	I0912 22:49:18.114882   53675 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19616-5891/.minikube
	I0912 22:49:18.274025   53675 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0912 22:49:18.550459   53675 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 22:49:18.657725   53675 config.go:182] Loaded profile config "NoKubernetes-204793": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0912 22:49:18.657894   53675 config.go:182] Loaded profile config "kubernetes-upgrade-848420": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0912 22:49:18.658015   53675 config.go:182] Loaded profile config "running-upgrade-753974": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0912 22:49:18.658117   53675 driver.go:394] Setting default libvirt URI to qemu:///system
	I0912 22:49:18.774605   53675 out.go:177] * Using the kvm2 driver based on user configuration
	I0912 22:49:18.861201   53675 start.go:297] selected driver: kvm2
	I0912 22:49:18.861245   53675 start.go:901] validating driver "kvm2" against <nil>
	I0912 22:49:18.861264   53675 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 22:49:18.956318   53675 out.go:201] 
	W0912 22:49:19.061798   53675 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0912 22:49:19.165571   53675 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-938961 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-938961

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-938961

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-938961

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-938961

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-938961

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-938961

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-938961

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-938961

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-938961

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-938961

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-938961" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-938961"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-938961" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-938961"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-938961" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-938961"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-938961

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-938961" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-938961"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-938961" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-938961"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-938961" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-938961" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-938961" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-938961" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-938961" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-938961" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-938961" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-938961" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-938961" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-938961"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-938961" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-938961"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-938961" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-938961"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-938961" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-938961"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-938961" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-938961"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-938961" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-938961" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-938961" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-938961" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-938961"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-938961" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-938961"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-938961" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-938961"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-938961" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-938961"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-938961" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-938961"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-938961

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-938961" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-938961"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-938961" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-938961"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-938961" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-938961"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-938961" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-938961"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-938961" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-938961"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-938961" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-938961"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-938961" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-938961"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-938961" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-938961"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-938961" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-938961"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-938961" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-938961"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-938961" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-938961"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-938961" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-938961"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-938961" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-938961"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-938961" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-938961"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-938961" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-938961"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-938961" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-938961"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-938961" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-938961"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-938961" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-938961"

                                                
                                                
----------------------- debugLogs end: false-938961 [took: 3.535719333s] --------------------------------
helpers_test.go:175: Cleaning up "false-938961" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-938961
--- PASS: TestNetworkPlugins/group/false (5.69s)
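The non-zero exit above is the point of this test: with --container-runtime=crio, minikube rejects --cni=false and exits with status 14 (MK_USAGE), so the "false" plugin group passes by observing that rejection, and the profile is never created, which is why every entry in the debug dump reports a missing context. For contrast, a hedged sketch of a start invocation that would satisfy the CNI requirement; the bridge choice is illustrative and is not what the suite runs:

	# crio requires a CNI plugin; "bridge" is one built-in option ("auto" lets minikube choose).
	out/minikube-linux-amd64 start -p false-938961 --memory=2048 --cni=bridge --driver=kvm2 --container-runtime=crio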

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (50.83s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-204793 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0912 22:49:48.774489   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/functional-657409/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-204793 --no-kubernetes --driver=kvm2  --container-runtime=crio: (49.522461935s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-204793 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-204793 status -o json: exit status 2 (234.989695ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-204793","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-204793
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-204793: (1.069892172s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (50.83s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (44.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-204793 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-204793 --no-kubernetes --driver=kvm2  --container-runtime=crio: (44.20412798s)
--- PASS: TestNoKubernetes/serial/Start (44.20s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-204793 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-204793 "sudo systemctl is-active --quiet service kubelet": exit status 1 (196.367495ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.20s)
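The "Process exited with status 3" in stderr is the remote exit code of systemctl is-active, which returns 0 only when the unit is active, so a non-zero result is exactly what a --no-kubernetes profile should produce. A hedged way to see the state by name rather than by exit code (illustrative, not part of the test):

	# Print the unit state instead of relying on the exit code; expect "inactive" on a --no-kubernetes node.
	out/minikube-linux-amd64 ssh -p NoKubernetes-204793 "sudo systemctl is-active kubelet || true"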

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (6.52s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (3.45071598s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (3.07128762s)
--- PASS: TestNoKubernetes/serial/ProfileList (6.52s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-204793
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-204793: (1.303531724s)
--- PASS: TestNoKubernetes/serial/Stop (1.30s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (58.14s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-204793 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-204793 --driver=kvm2  --container-runtime=crio: (58.140231602s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (58.14s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-204793 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-204793 "sudo systemctl is-active --quiet service kubelet": exit status 1 (185.234927ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (83.11s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-702201 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-702201 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (1m23.114172976s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (83.11s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (116.78s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-378112 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
E0912 22:53:30.272878   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-378112 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (1m56.775554213s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (116.78s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (13.31s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-702201 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [75003da8-0a8c-4bb7-81ff-b28a3f686b98] Pending
helpers_test.go:344: "busybox" [75003da8-0a8c-4bb7-81ff-b28a3f686b98] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [75003da8-0a8c-4bb7-81ff-b28a3f686b98] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 13.003398717s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-702201 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (13.31s)
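The helper polls for pods carrying the integration-test=busybox label until the pod reports Running, then runs ulimit -n inside it. A hedged equivalent using kubectl directly; the wait invocation is an illustrative stand-in for the helper's polling, not what helpers_test.go executes:

	# Wait for the busybox test pod to become Ready, then read the open-file limit the test inspects.
	kubectl --context default-k8s-diff-port-702201 wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m
	kubectl --context default-k8s-diff-port-702201 exec busybox -- /bin/sh -c "ulimit -n"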

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-702201 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-702201 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.01s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (45.03s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-837491 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-837491 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (45.034548621s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (45.03s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (10.28s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-378112 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [68c26c3e-1c5b-4b9c-8316-020988da7706] Pending
helpers_test.go:344: "busybox" [68c26c3e-1c5b-4b9c-8316-020988da7706] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [68c26c3e-1c5b-4b9c-8316-020988da7706] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.004857523s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-378112 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.28s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-378112 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-378112 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.01s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.02s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-837491 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-837491 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.014872662s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.02s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (2.29s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-837491 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-837491 --alsologtostderr -v=3: (2.287422584s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.29s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-837491 -n newest-cni-837491
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-837491 -n newest-cni-837491: exit status 7 (60.020629ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-837491 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)
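Because the profile was stopped in the previous step, minikube status prints "Stopped" and exits with status 7, which the test explicitly tolerates ("may be ok") before enabling the dashboard addon against the stopped profile; the addon change presumably takes effect when SecondStart brings the cluster back up. A hedged sketch of the same two steps in a script:

	# Status on a stopped profile exits non-zero; swallow it so the script can continue to the addon step.
	out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-837491 || echo "host stopped (expected here)"
	out/minikube-linux-amd64 addons enable dashboard -p newest-cni-837491 --images=MetricsScraper=registry.k8s.io/echoserver:1.4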

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (36.53s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-837491 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-837491 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (36.275770318s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-837491 -n newest-cni-837491
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (36.53s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-837491 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.29s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-837491 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-837491 -n newest-cni-837491
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-837491 -n newest-cni-837491: exit status 2 (224.763973ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-837491 -n newest-cni-837491
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-837491 -n newest-cni-837491: exit status 2 (229.065946ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-837491 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-837491 -n newest-cni-837491
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-837491 -n newest-cni-837491
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.29s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (100.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-380092 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-380092 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (1m40.01273776s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (100.01s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (671.97s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-702201 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-702201 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (11m11.721068497s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-702201 -n default-k8s-diff-port-702201
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (671.97s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (522.74s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-378112 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-378112 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (8m42.478911941s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-378112 -n embed-certs-378112
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (522.74s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (10.28s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-380092 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [3d6e0a88-c74b-4cce-b218-5f7cdb45fc70] Pending
helpers_test.go:344: "busybox" [3d6e0a88-c74b-4cce-b218-5f7cdb45fc70] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [3d6e0a88-c74b-4cce-b218-5f7cdb45fc70] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.004339582s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-380092 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.28s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.95s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-380092 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-380092 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.95s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (3.28s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-642238 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-642238 --alsologtostderr -v=3: (3.280824429s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.28s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-642238 -n old-k8s-version-642238
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-642238 -n old-k8s-version-642238: exit status 7 (62.91146ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-642238 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (423.22s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-380092 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
E0912 23:02:07.199199   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/addons-694635/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:05:05.703548   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/functional-657409/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-380092 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (7m2.945541891s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-380092 -n no-preload-380092
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (423.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (52.62s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-938961 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-938961 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (52.622971424s)
--- PASS: TestNetworkPlugins/group/auto/Start (52.62s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (84.5s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-938961 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-938961 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m24.497650279s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (84.50s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (106.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-938961 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-938961 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m46.028735274s)
--- PASS: TestNetworkPlugins/group/calico/Start (106.03s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-938961 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (13.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-938961 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-4d9vd" [87b53cb9-2f42-4c09-934e-0e16b5123cdb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0912 23:23:08.778122   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/functional-657409/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-4d9vd" [87b53cb9-2f42-4c09-934e-0e16b5123cdb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 13.004458276s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (13.23s)
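The NetCatPod step in each plugin group follows the same pattern: apply the netcat test deployment into the default namespace, then poll until a pod labeled app=netcat reports Running. A minimal sketch of reproducing that check by hand against the same profile, assuming the repository's testdata/netcat-deployment.yaml is available locally and using kubectl wait in place of the test's own polling helper:

  kubectl --context auto-938961 replace --force -f testdata/netcat-deployment.yaml
  # block until the netcat pod is Ready (the test allows up to 15m)
  kubectl --context auto-938961 wait --for=condition=Ready pod -l app=netcat --timeout=15m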

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-938961 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-938961 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-938961 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)
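The DNS, Localhost and HairPin steps exercise three paths from inside the netcat pod: cluster DNS resolution, a loopback connection to the pod's own port, and a hairpin connection back through the netcat service name from the same manifest. A sketch of the three probes exactly as the log shows them for this profile:

  # cluster DNS: resolve the kubernetes.default service
  kubectl --context auto-938961 exec deployment/netcat -- nslookup kubernetes.default
  # localhost: connect to port 8080 on the pod itself
  kubectl --context auto-938961 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
  # hairpin: connect back to the pod through its own service
  kubectl --context auto-938961 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"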

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (72.89s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-938961 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
E0912 23:23:36.394674   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/default-k8s-diff-port-702201/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:23:36.401047   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/default-k8s-diff-port-702201/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:23:36.412415   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/default-k8s-diff-port-702201/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:23:36.434063   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/default-k8s-diff-port-702201/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:23:36.475588   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/default-k8s-diff-port-702201/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:23:36.557309   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/default-k8s-diff-port-702201/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:23:36.719518   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/default-k8s-diff-port-702201/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:23:37.041462   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/default-k8s-diff-port-702201/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:23:37.683259   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/default-k8s-diff-port-702201/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:23:38.964721   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/default-k8s-diff-port-702201/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-938961 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m12.886846993s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (72.89s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-q2zj5" [322562e3-2bfe-4c8c-9887-64e94b4db9ee] Running
E0912 23:23:41.526384   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/default-k8s-diff-port-702201/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:23:46.648030   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/default-k8s-diff-port-702201/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003195568s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-938961 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (11.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-938961 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-bvnlb" [19ef2d00-810d-4852-861d-d49fb33fff11] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-bvnlb" [19ef2d00-810d-4852-861d-d49fb33fff11] Running
E0912 23:23:56.889766   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/default-k8s-diff-port-702201/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004006616s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-938961 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-938961 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-938961 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (64.81s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-938961 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
E0912 23:24:17.371175   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/default-k8s-diff-port-702201/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-938961 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m4.809715076s)
--- PASS: TestNetworkPlugins/group/flannel/Start (64.81s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-khtf9" [b269d928-b6a5-42c2-8718-5b1c241fcd66] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005100219s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-938961 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (11.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-938961 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-5p7lt" [7979448f-a5d4-470a-9246-66552c5e486d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-5p7lt" [7979448f-a5d4-470a-9246-66552c5e486d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.004083601s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-938961 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-938961 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-938961 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-938961 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (12.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-938961 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-x6hv7" [68cc3029-f8d0-4cdc-8e4f-066468699769] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0912 23:24:49.487201   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:24:49.493714   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:24:49.505392   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:24:49.526821   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:24:49.568326   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:24:49.651212   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:24:49.814758   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:24:50.137082   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:24:50.779017   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-x6hv7" [68cc3029-f8d0-4cdc-8e4f-066468699769] Running
E0912 23:24:52.060927   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:24:54.623181   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.004920959s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-938961 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-938961 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-938961 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (94.73s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-938961 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
E0912 23:25:05.704503   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/functional-657409/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:25:09.986894   13083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/old-k8s-version-642238/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-938961 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m34.72812831s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (94.73s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (101.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-938961 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-938961 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m41.229805076s)
--- PASS: TestNetworkPlugins/group/bridge/Start (101.23s)
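Each plugin group boots its own profile, and only the CNI selection differs between the start invocations; the memory size, KVM driver, CRI-O runtime and 15m readiness wait are shared. A condensed sketch of the seven variants used in this run (flag order differs from the literal log lines; the COMMON variable is just a shorthand introduced here):

  MINIKUBE="out/minikube-linux-amd64"
  COMMON="--memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 --container-runtime=crio"
  $MINIKUBE start -p auto-938961               $COMMON                                   # no --cni flag: auto selection
  $MINIKUBE start -p kindnet-938961            $COMMON --cni=kindnet
  $MINIKUBE start -p calico-938961             $COMMON --cni=calico
  $MINIKUBE start -p custom-flannel-938961     $COMMON --cni=testdata/kube-flannel.yaml  # custom manifest
  $MINIKUBE start -p flannel-938961            $COMMON --cni=flannel
  $MINIKUBE start -p enable-default-cni-938961 $COMMON --enable-default-cni=true
  $MINIKUBE start -p bridge-938961             $COMMON --cni=bridge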

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-2qk6g" [4ef752fd-107b-4a10-97a3-5bb598ef0c17] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003506376s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.00s)
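The ControllerPod step only confirms that the plugin's own daemon pod is healthy before the connectivity checks run; each group waits on the label and namespace its CNI uses. A hedged equivalent of those checks with plain kubectl, using the selectors that appear earlier in this run (app=kindnet and k8s-app=calico-node in kube-system, app=flannel in kube-flannel):

  kubectl --context kindnet-938961 get pods -n kube-system  -l app=kindnet
  kubectl --context calico-938961  get pods -n kube-system  -l k8s-app=calico-node
  kubectl --context flannel-938961 get pods -n kube-flannel -l app=flannel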

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-938961 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (11.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-938961 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-2svgj" [2623a87d-d5db-4784-8142-f694c10ce98f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-2svgj" [2623a87d-d5db-4784-8142-f694c10ce98f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.005074768s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.21s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-380092 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (3.27s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-380092 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-380092 -n no-preload-380092
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-380092 -n no-preload-380092: exit status 2 (264.709827ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-380092 -n no-preload-380092
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-380092 -n no-preload-380092: exit status 2 (265.992875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-380092 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 unpause -p no-preload-380092 --alsologtostderr -v=1: (1.042218635s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-380092 -n no-preload-380092
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-380092 -n no-preload-380092
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.27s)
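The Pause step verifies the pause/unpause round trip: pause the profile, confirm the API server reports Paused and the kubelet reports Stopped (minikube status exits 2 for a paused cluster, which the test treats as acceptable), then unpause and re-check. A sketch of that sequence with the same binary and profile as the log above:

  out/minikube-linux-amd64 pause -p no-preload-380092 --alsologtostderr -v=1
  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-380092 -n no-preload-380092   # "Paused", exit status 2
  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-380092 -n no-preload-380092     # "Stopped", exit status 2
  out/minikube-linux-amd64 unpause -p no-preload-380092 --alsologtostderr -v=1
  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-380092 -n no-preload-380092   # re-check after unpause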

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-938961 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-938961 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-938961 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-938961 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-938961 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-k2xlr" [64649b40-ce4a-4a8e-bc40-1ed7f0cf7209] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-k2xlr" [64649b40-ce4a-4a8e-bc40-1ed7f0cf7209] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.004322169s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-938961 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-938961 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-938961 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-938961 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (11.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-938961 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-glnqf" [859dbdae-78e3-4406-91c9-99eca6407882] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-glnqf" [859dbdae-78e3-4406-91c9-99eca6407882] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.00403752s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-938961 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-938961 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-938961 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                    

Test skip (37/320)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.31.1/cached-images 0
15 TestDownloadOnly/v1.31.1/binaries 0
16 TestDownloadOnly/v1.31.1/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0
38 TestAddons/parallel/Olm 0
48 TestDockerFlags 0
51 TestDockerEnvContainerd 0
53 TestHyperKitDriverInstallOrUpdate 0
54 TestHyperkitDriverSkipUpgrade 0
105 TestFunctional/parallel/DockerEnv 0
106 TestFunctional/parallel/PodmanEnv 0
122 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
123 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
124 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
125 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
126 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
127 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
128 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
129 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
154 TestGvisorAddon 0
176 TestImageBuild 0
203 TestKicCustomNetwork 0
204 TestKicExistingNetwork 0
205 TestKicCustomSubnet 0
206 TestKicStaticIP 0
238 TestChangeNoneUser 0
241 TestScheduledStopWindows 0
243 TestSkaffold 0
245 TestInsufficientStorage 0
249 TestMissingContainerUpgrade 0
257 TestStartStop/group/disable-driver-mounts 0.14
274 TestNetworkPlugins/group/kubenet 3.01
282 TestNetworkPlugins/group/cilium 3.61
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:879: skipping: crio not supported
--- SKIP: TestAddons/serial/Volcano (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.14s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-457722" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-457722
--- SKIP: TestStartStop/group/disable-driver-mounts (0.14s)
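Even though this sub-test is skipped on KVM, the group setup leaves a stub profile behind, which the helper deletes before returning. A minimal sketch of doing the same cleanup by hand, assuming the profile name from this run:

  out/minikube-linux-amd64 profile list                              # confirm the stub profile exists
  out/minikube-linux-amd64 delete -p disable-driver-mounts-457722    # remove it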

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-938961 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-938961

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-938961

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-938961

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-938961

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-938961

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-938961

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-938961

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-938961

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-938961

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-938961

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-938961" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-938961"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-938961" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-938961"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-938961" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-938961"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-938961

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-938961" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-938961"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-938961" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-938961"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-938961" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-938961" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-938961" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-938961" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-938961" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-938961" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-938961" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-938961" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-938961" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-938961"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-938961" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-938961"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-938961" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-938961"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-938961" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-938961"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-938961" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-938961"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-938961" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-938961" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-938961" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-938961" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-938961"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-938961" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-938961"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-938961" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-938961"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-938961" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-938961"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-938961" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-938961"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19616-5891/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 12 Sep 2024 22:48:36 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.72.52:8443
  name: running-upgrade-753974
contexts:
- context:
    cluster: running-upgrade-753974
    extensions:
    - extension:
        last-update: Thu, 12 Sep 2024 22:48:36 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: running-upgrade-753974
  name: running-upgrade-753974
current-context: ""
kind: Config
preferences: {}
users:
- name: running-upgrade-753974
  user:
    client-certificate: /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/running-upgrade-753974/client.crt
    client-key: /home/jenkins/minikube-integration/19616-5891/.minikube/profiles/running-upgrade-753974/client.key
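Note: the kubeconfig above defines only the running-upgrade-753974 context and leaves current-context set to "", which is why every kubectl-based collector in this debug pass fails with "context was not found" or "does not exist" for kubenet-938961. A minimal way to confirm this against the same kubeconfig (plain kubectl commands, not part of the test harness):

  kubectl config get-contexts                    # lists only running-upgrade-753974
  kubectl config current-context                 # errors: current-context is not set
  kubectl --context kubenet-938961 get pods -A   # error: context "kubenet-938961" does not exist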

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-938961

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-938961" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-938961"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-938961" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-938961"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-938961" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-938961"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-938961" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-938961"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-938961" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-938961"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-938961" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-938961"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-938961" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-938961"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-938961" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-938961"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-938961" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-938961"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-938961" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-938961"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-938961" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-938961"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-938961" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-938961"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-938961" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-938961"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-938961" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-938961"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-938961" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-938961"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-938961" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-938961"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-938961" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-938961"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-938961" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-938961"

                                                
                                                
----------------------- debugLogs end: kubenet-938961 [took: 2.857215096s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-938961" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-938961
--- SKIP: TestNetworkPlugins/group/kubenet (3.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.61s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-938961 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-938961

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-938961

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-938961

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-938961

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-938961

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-938961

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-938961

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-938961

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-938961

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-938961

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-938961" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-938961"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-938961" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-938961"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-938961" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-938961"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-938961

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-938961" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-938961"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-938961" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-938961"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-938961" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-938961" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-938961" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-938961" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-938961" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-938961" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-938961" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-938961" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-938961" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-938961"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-938961" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-938961"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-938961" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-938961"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-938961" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-938961"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-938961" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-938961"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-938961

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-938961

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-938961" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-938961" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-938961

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-938961

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-938961" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-938961" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-938961" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-938961" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-938961" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-938961" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-938961"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-938961" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-938961"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-938961" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-938961"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-938961" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-938961"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-938961" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-938961"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
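Note: this is an empty kubeconfig (all top-level keys null), consistent with the cilium-938961 profile never having been started before debugLogs ran. The log's own suggestion would create the missing profile; the CNI flag below is illustrative only and is not taken from the log:

  minikube start -p cilium-938961 --cni=cilium   # --cni=cilium is an assumption for a cilium profile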

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-938961

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-938961" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-938961"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-938961" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-938961"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-938961" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-938961"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-938961" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-938961"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-938961" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-938961"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-938961" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-938961"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-938961" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-938961"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-938961" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-938961"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-938961" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-938961"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-938961" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-938961"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-938961" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-938961"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-938961" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-938961"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-938961" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-938961"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-938961" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-938961"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-938961" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-938961"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-938961" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-938961"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-938961" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-938961"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-938961" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-938961"

                                                
                                                
----------------------- debugLogs end: cilium-938961 [took: 3.469261184s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-938961" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-938961
--- SKIP: TestNetworkPlugins/group/cilium (3.61s)

                                                
                                    